Law & Internet Cultures :: Kathy Bowrey
:: Chapter One: Defining internet law
This chapter uses the example of blacklists to show how seemingly voluntary, technical decisions, disseminated through a decentralized system, nonetheless have real practical consequences for internet users. It explores how technical questions of system maintenance are more than “just technical”, introducing questions of the potential for “legal” notions of responsibility and accountability for such decisions.
It is common for discussions of decentralized architecture to fall into the trap of assuming that because no one party is responsible for a centre from which all else flows, there is no power behind that structure to analyse. This chapter does not look for conspiracies and the unacknowledged work of Dr Evil. Instead it describes the diffusion of power, and the complex of actors implicated in its manifestation. I also discuss the potential sites where State power and traditional forms of legislative intervention occur. The space for global governance, through organizations like ICANN and WSIS, is also considered.
Ultimately internet governance is defined in terms of private power exercised through technical decision making, but with an emphasis on how this intersects with government policy, legislation and the courts. However while avenues for formal accountability are noted, the intention is not for code or law to take centre stage. Rather the text emphasizes the importance of understanding the interaction of various actors, their activities, reactions, and ambitions in structuring the internet and associated technologies. It is this complex of factors that is described as affecting possibilities for action and for intervention.
DISCUSSION POINTS
In relation to the internet the possibility of law, law making, and legal authority is often treated with outright scepticism or as of marginal influence on conduct. This class looks at why this is so. It considers three influential ideas about internet regulation. Firstly, there is the idea of the internet as a space beyond regulation. Secondly, there is the idea of co-regulation, where government establishes a regulatory body and principles, to be administered in a spirit of self-regulation. Here there is an expectation of weak government control and faith in global, free market approaches. Thirdly, there is the idea of regulation by code. We conclude with a discussion of the role of global norms in regulating the internet.
The internet as “BEYOND REGULATION”
As a technological phenomenon the internet is often described as a decentralised network, designed to prevent network dependency on any one node or site, so that the system is not vulnerable to shut down or switching off. Similarly there is no reliance on any particular data path. Understanding why this communicative freedom was considered important requires attention to internet history and the specific technical and communications problems that were identified and resolved. These early decisions about the design of internet space emphasised the importance of maintaining information flows and efficient communications. The content of the communications was not considered a regulatory problem of concern to the network designers.
An excellent starting point is the one-page summary at Living Internet. See also Wikipedia. You should at least skim read this information. Note the social idealism linked to communications technology here. Note also the important role played by protocols in internet communications from the start, in particular TCP/IP (Transmission Control Protocol / Internet Protocol). TCP/IP is built into the UNIX operating system and is the de facto standard for transmitting data over networks. See Wikipedia: TCP/IP
What is a protocol?
Protocols are standards that are established by engineering decisions about system design. See Wikipedia: Communications Protocols
As this entry describes it: "Systems engineering principles have been applied to create a set of common network protocol design principles. These principles include effectiveness, reliability, and resiliency."
Throughout the 1980s and early 1990s the decentralised nature of the design that facilitated efficient information flows, and the technical emphasis on protocol decisions made by informal groups of technically skilled individuals, led to the presumption that with the internet, communication was "free", meaning it was a network designed to facilitate free flows of information - which might also include freedom from the jurisdictional reach of law, courts and governments. There is a political dimension to this.
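In practice a protocol is nothing more than an agreed message format that any conforming implementation can produce and parse. A minimal sketch in Python illustrates the point using the standard HTTP/1.1 request line format; the helper function names are ours, chosen for illustration:

```python
# A protocol is an agreed message format. HTTP/1.1 specifies that a
# request begins with a line of the form: METHOD SP PATH SP VERSION.
# Any client and server that follow the same rule can interoperate,
# regardless of who wrote the software - which is the regulatory
# force of a technical standard.

def build_request_line(method: str, path: str) -> str:
    """Compose an HTTP/1.1 request line in the agreed format."""
    return f"{method} {path} HTTP/1.1\r\n"

def parse_request_line(line: str) -> dict:
    """A conforming peer recovers the fields from the same format."""
    method, path, version = line.strip().split(" ")
    return {"method": method, "path": path, "version": version}

line = build_request_line("GET", "/index.html")
print(parse_request_line(line))
# {'method': 'GET', 'path': '/index.html', 'version': 'HTTP/1.1'}
```

Neither side needs permission from the other, or from any authority, to interoperate: conformity with the published format is the whole "law" in play.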
Cyberlibertarianism
This idea is best exemplified in the popular work of John Perry Barlow. See Wikipedia: John Perry Barlow
His writings, "A Declaration of the Independence of Cyberspace" and "The Economy of Ideas", defiantly embrace the political and economic freedom of cyberspace, characterising it as a freedom from government, from legislation, from the reach of the courts. Here citizenry presumes a wired individual, whose identity is expressed through participating in life online. This culture is discussed further in Chapter Two.
"Techno-hippie capitalism" was of interest to some lawyers. The affront to legal authority implicit in the engineering design posed a real challenge for understanding questions of governance. On a practical score, it was intellectual property law that seemed most obviously problematic. Trade mark and copyright law, designed for territorial application and associated with tangible goods, seemed particularly threatened by digital communications. Legal writers such as David Johnson & David Post expressed the challenge for law this way:
"Global computer-based communications cut across territorial borders, creating a new realm of human activity and undermining the feasibility--and legitimacy--of applying laws based on geographic boundaries. While these electronic communications play havoc with geographic boundaries, a new boundary, made up of the screens and passwords that separate the virtual world from the "real world" of atoms, emerges. This new boundary defines a distinct Cyberspace that needs and can create new law and legal institutions of its own. Territorially-based law-making and law-enforcing authorities find this new environment deeply threatening. But established territorial authorities may yet learn to defer to the self-regulatory efforts of Cyberspace participants who care most deeply about this new digital trade in ideas, information, and services." Law and Borders: The Rise of Law in Cyberspace
The free market/self-regulation advocated here is critiqued by writers such as Richard Barbrook & Andy Cameron who, in 1995, wrote:
"The Californian Ideology is a mix of cybernetics, free market economics, and counter-culture libertarianism and is promulgated by magazines such as Wired and Mondo 2000 as well as the books of Stewart Brand, Douglas Rushkoff, Kevin Kelly and many others. The new faith has been embraced by computer nerds, slacker students, thirty-something capitalists, hip academics, futurist bureaucrats and even the President of the USA himself. As usual, Europeans have not been slow to copy the latest fashion from America. While a recent EU report recommended adopting the Californian free enterprise model to build the 'infobahn', cutting-edge artists and academics have been championing the 'post-human' philosophy developed by the West Coast's Extropian cult. With no obvious opponents, the global dominance of the Californian Ideology appears to be complete."
See "Californian Ideology: critique of West Coast cyberlibertarianism"
To understand the idea of the internet as a space beyond legal regulation we need to appreciate the presumptions of the original network design, and the way particular political and economic expectations were mapped onto that in the early-mid 90s.
- What are the regulatory presumptions of free information flows?
- Why did early 90s cyberlibertarians anticipate weak government intervention?
- Does technology express political values?
CO-REGULATION
However there was one obvious site of internet regulation where the US government was clearly involved. One of the earliest regulatory sites to receive considerable legal academic attention was ICANN. Read ICANNWatch, ICANN for Beginners
See also
- ICANN
- Caslon Analytics Profile: ICANN and the UDRP
- auDA
- Caslon Analytics Profile: auDA and dot-au space
- Living History: Domain Name RFCs
While DNS administration was initially considered one amongst many technical aspects of the communications system, this area was carved out early as one requiring particular bureaucratic co-ordination and policy development. The transition from a voluntary organization to US government involvement, and the development of a corporate structure to manage world domain name rights, brought to the fore questions of representation for this administration.
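Why DNS administration attracts governance questions becomes clearer once you see that the namespace is a delegation hierarchy: whoever administers a zone controls what may exist beneath it. A minimal sketch (the function name and example domain are ours):

```python
# A domain name is read right to left: the root zone delegates "au" to a
# national registry (auDA), which delegates "com.au", and so on down.
# Control over any zone in the chain is control over everything below it,
# which is why "merely technical" DNS administration is also governance.

def delegation_chain(domain: str) -> list:
    """List the zones consulted, from the root down, to reach a name."""
    labels = domain.rstrip(".").split(".")
    chain = ["."]  # resolution always begins at the root zone
    for i in range(len(labels) - 1, -1, -1):
        chain.append(".".join(labels[i:]))
    return chain

print(delegation_chain("www.example.com.au"))
# ['.', 'au', 'com.au', 'example.com.au', 'www.example.com.au']
```

Every link in that chain is a point where an administrator - ICANN for the root, auDA for .au - can set conditions on registration and use.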
- Is an IP address only a technical matter?
- Why does the DNS system lead to government involvement in internet regulation?
- How do you explain the corporate models for ICANN and auDA?
- How does the DNS system demonstrate the relevance of national laws to internet law making?
- Who should be responsible for global policy making about the DNS system? See for example
Dan Hunter, ICANN and the Concept of Democratic Deficit
- To what extent is the call for self regulation a consequence of:
- the technical nature of the decision making?
- problems of accountability for policy making and impacts?
- What happens when governments want strong oversight on internet traffic and access to content?
CODE AS LAW
Lawrence Lessig argues that it is code - the software and hardware that makes each cyber-space the way it is - that does the regulating. He says "the central regulatory challenge in the context of cyber-space is how to make sense of this effect of code". See Lawrence Lessig, "The Law of the Horse: What Cyberlaw Might Teach" 113 Harv. L. Rev. 501 (1999). And, Code and Other Laws of Cyberspace (Basic Books, 1999).
But there are many different kinds of code, and there is so much of it. What code is the most important to study in terms of effects?
So far you should have some understanding of the role played by technical protocols and internet standards, particularly TCP/IP, in the development of the internet. However there are now many internet standards, formalised through the process of developing RFCs.
What is an RFC?
See
- The Request for Comments (RFCs) Homepage
- RFC History
- RFC 1000: The Request for Comments Reference Guide
What began as a system for documenting technical practices ended up having significant official status, once bodies like the IETF, IESG and ISOC created a framework for formally authorizing the setting of standards. The role of these bodies will be more closely considered in Chapter Three.
- The Internet Engineering Task Force
- Internet Engineering Steering Group (IESG)
- Internet Society: About ISOC
How is an RFC different to a law?
- in genesis
- in application
- in impact
- Does technical rule making require lawyers?
- How do you understand the dispute between rfc-ignorant.org and Australian privacy law?
- What caused it? List all the dimensions to this dispute.
- Who can resolve such a problem?
- Is it possible to resolve the matter at the local level, in accordance with national laws?
- Is this an example of excessive US authority, power and control over global internet usage and design? Why/Why not?
Here the non-obligatory status of technical standards, and the lack of centralised authority responsible for the way technical decisions are implemented and enforced, does make timely resolution of the 'technical' problem of being blocked somewhat intractable. The problem suggests the need for global co-ordination of national legal policy and standard setting. This is now being pursued at ICANN. See ICANN: Whois Privacy Page
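The blocking at issue here operates through DNS itself. A mail server checks a sender against a DNS blacklist (DNSBL) by reversing the octets of the sender's IP address and prepending them to the list's zone name; if that name resolves, the address is "listed" and mail is typically refused. A minimal sketch of the query construction (the zone name in the example is a placeholder, not a real blacklist):

```python
# A DNSBL is consulted through an ordinary DNS lookup. No central body
# orders anyone blocked: each mail operator independently chooses which
# lists to trust, which is what makes a listing decision so hard to
# contest or undo through any single authority.

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the DNS name a mail server looks up to check a blacklist."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    return f"{reversed_octets}.{zone}"

# 'dnsbl.example.org' is an illustrative placeholder zone.
print(dnsbl_query_name("203.0.113.7", "dnsbl.example.org"))
# 7.113.0.203.dnsbl.example.org
```

The diffusion of responsibility is visible in the mechanics: the list publisher only publishes DNS records, and each of the thousands of mail operators querying the list makes its own "voluntary" decision to refuse mail - yet the practical effect on the listed party is a block it cannot appeal to anyone in particular.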
- At what stage should technical standard setting incorporate sensitivities to national legal priorities?
- Is global legal policy making the answer?
- Where should the normative content of global standards come from?
- How do new norms develop?
LAW AS NORMS
The Gutnick dispute is a good example of a clash between internet norms of free communications and free speech, rooted in internet history, and legal expectations of responsibility for statements causing harm to reputation, grounded in Victorian defamation law.
- What are the respective values and expectations of law as exemplified in the High Court decision and the example of media reaction to it?
- Dow Jones & Company Inc. v Gutnick [2002] HCA 56
- SriMedia, Dow Jones v Gutnick: Australian Court on a very slippery slope to totalitarianism?
- Why was it possible to resolve Mr Gutnick’s complaint in Australia, but not the problem of blocking of domains listed by rfc-ignorant?
- What is the proper way for resolving such cross jurisdictional disputes?
- Should laws designed for the non-digital age accommodate the expectation of communicative freedom?
- Whose freedom is this?
CONCLUSION
This class should have introduced you to thinking about the complexity of the legal environment pertaining to the internet. Understanding the role of non-legal, technical decision makers is considered most important. We also touched on the question of appreciating the cultural values that inform technical decision making.
If much of internet regulation is technical in nature, and private in character, how are we to study this power? As lawyers we are used to legislatively and judicially defined legal categories or legal objects. But here this eludes us. Formal law may have a role to play, but it mainly intersects with these "non-legal" technical forms of regulation. We need to develop different methodological tools to address this kind of power. This is the focus of our discussion in Chapter Two.