
Internet Safety for Children: International Experience and Challenges for Russia

2012 Moscow

Luke Fenwick, A. Kiselev, D. Anosova and D. Kot. Edited by S. Greene and E. Trushina.

Internet Safety for Children: International Experience and Challenges for Russia [Electronic resource] / edited by S. Greene and E. Trushina. Moscow: Center for the Study of New Media & Society, New Economic School, 2012.

Legislators, public activists and internet-sector specialists have been dealing with the sensitive issue of creating a safe internet for children since the moment the World Wide Web made it possible to transfer content and images. This report is aimed at highlighting the most salient international lessons for interested stakeholders in Russia. The report does not question the necessity of finding an effective solution to this delicate issue in Russia.

Center for the Study of New Media & Society New Economic School Suite 1918, Nakhimovskii Prospekt 47, 117418, Moscow, Russia Phone: +7 495 956 9508 newmedia@nes.ru http://www.newmediacenter.ru http://www.nes.ru

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/ or send a letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041, USA.

Contents

Introduction
Review of Legislation
Economic Aspects of Filtering
Filtering and Blacklists in the UK, Germany and France
Labeling and Content-Rating Systems
Self-Regulation and List-Filtering
Filter Systems and Problems Relating to Civil Liberties
Conclusions and Recommendations


Introduction

1 November 2012 saw the entry into force in the Russian Federation of Federal Law No 139-FZ of 28 July 2012, On Amendments to the Federal Law on Protecting Children from Information Harmful to their Health and Development. 1 As its title indicates, the new legislation amended Federal Law No 436-FZ of 29 December 2010 2 and several other existing laws. It allows for the creation of a blacklist of internet sites promoting child pornography, use of illicit drugs, and suicide, and empowers the Federal Service for Supervision of Communications, Information Technologies and Mass Communications (Roskomnadzor, a state body under the jurisdiction of the Federal Ministry of Communications and Mass Media) to block such sites without a court order. The internet hosts a wide spectrum of material that is potentially harmful to minors, ranging from the legal (such as pornography) to the illegal (such as depictions of sexual activity by or against children, commonly known as child pornography).3 Almost all states criminalize the production, dissemination and possession of child pornography, while other content remains legally accessible to users.4 Several countries legislate against other forms of obscene or inappropriate material. At least in the USA, research suggests that children today are safer when browsing the internet than they were at the turn of the century.5 Even so, children remain at risk.6 Numerous studies confirm the extent of their exposure to potentially harmful internet content. Valcke et al. (2008) found that 40.7% of 10-12 year olds in Flanders (out of a sample of 1,700 respondents aged 7-12) had been exposed to pornography, violence, racism or hate-speech online.7 In another study, 57% of 1,511 9-18 year olds in the UK who used the internet at least once a week said they had viewed pornography.8 And, according to a survey conducted by Yahoo! in August 2011, many children and teenagers expressed concern about online safety. On a scale of 1 to 5 (where 5 indicated extremely concerned/think about a lot and 1 indicated not at all concerned/do not think about at all), 36% of 10-12 year olds reported a score of 4 or 5, compared with 45% of 13-17 year olds.9 The Norton Online Family Report of 2010, based on a survey of 7,000 adults and 2,800 children aged 8-17 in 14 countries, concluded that some two-thirds of children had had a negative experience online.10 Perhaps the most exhaustive and widest-ranging study is the 2010 European

Union (EU) Kids Online conducted in 25 European countries among 25,142 children and teenagers aged 9-16 who use the internet. This found that 14% of all respondents had seen some sexual imagery online, while 12% of 11-16 year olds had seen hate sites, 10% pro-anorexia sites, 7% self-harm sites, 7% drug forums and 5% suicide sites.11 In all, 21% of all 11-16 year olds interviewed said they had viewed at least one of these categories.12 There are both legislative and non-legislative means of addressing child safety online. There is, unfortunately, little literature on the efficacy of these tools to protect children online, and research in developing countries is minimal.13 A report published in 2011 by the Family Online Safety Institute and Global Resource and Information Directory noted that: Overall the lack of research, both qualitative and quantitative, is the single biggest barrier to developing a better understanding of how technology impacts childrens lives both positively and negatively.14 In order to cast light on the implications of Russias new legislation, the present study explores world legislative approaches to strengthening childrens safety online, focusing on the laws of 12 countries in 5 continents. These include both established and emerging economies: Brazil, Canada, China, France, Germany, India, Indonesia, Japan, South Africa, Turkey, the UK and the USA. Given the difficulty of outlawing all forms of potentially harmful content, various means of self-regulating, co-regulating and digital literacy have been developed to promote child safety. This study outlines four that are of particular relevance to Russia: industrial self-regulation and codes of ethical conduct; awareness and education programs; hotlines for reporting allegedly illegal content; and technical measures such as filters and content-rating.


Review of Legislation

This chapter summarises the most noteworthy and innovative legislative initiatives undertaken by leading world nations, including the so-called developing economies and the BRICS (Brazil, Russia, India, China and South Africa). Table 1 provides a concise overview. Three points stand out. First, the main threat perceived by lawmakers tends to be the production and distribution of pornography that includes or portrays minors, and other related illegal actions that concern them. All the countries investigated criminalize the production and online dissemination of child pornography for commercial purposes. There is plenty of other content that may be declared offensive or obscene where minors are concerned, but which is not, with certain exceptions, regulated by law. Second, legislation and law-enforcement relating to the defence of minors on the internet, including matters relating to child pornography, are grounded in almost all countries in existing offline legislation and practice. It is rare for laws to be adopted that relate exclusively to the internet, and those that are tend to be technical and procedural in nature. Third, a country's approach to children's online security tends to coincide with its general tradition of information regulation. Where the state has historically tended to take a non-interventionist approach (as in Brazil, France, Germany, South Africa, the US and the UK), lawmakers rarely go further than defending children from direct threats of a sexual nature, while other categories of content remain unregulated. Where state intervention and censorship have deeper roots, and where the state has traditionally sought to restrict the information space in accordance with an official ideology (as, for example, in China, Indonesia, Turkey and parts of India), we find more assertive efforts to regulate child security online.

| Jurisdiction | Child pornography and material showing sexual exploitation of minors | Other content of a sexual character | Content showing violence | Other unacceptable content |
|---|---|---|---|---|
| European Union | Member-states are required to remove such content and may block its distribution. | Not regulated | Not regulated | Not regulated |
| France | It is a criminal offence to produce, store or distribute such material. Internet service providers (ISPs) must inform users of the availability of filters and alert police to the presence of illegal content. | Not regulated | Not regulated | Not regulated |
| Germany | It is a criminal offence to produce, store or distribute such material. The Commission for the Protection of Youth in the Media (KJM) investigates and prosecutes producers and distributors of such material. Compulsory filtering has not been enforced, and an earlier law on it has been repealed. | The KJM investigates and prosecutes producers and distributors who knowingly supply pornography to children. | The KJM investigates and prosecutes producers and distributors who knowingly supply children with content glorifying violence. | The KJM investigates and prosecutes producers and distributors who knowingly supply children with content inciting hatred or violence, or denying the crimes of the Nazi regime. |
| UK | It is a criminal offence to produce, store or distribute such material. | Not regulated | Not regulated | Not regulated |
| USA | It is a criminal offence to produce, store or distribute such material. ISPs are required to alert the authorities to the presence of illegal content. | Institutions in receipt of federal funding (schools, libraries etc.) are required to filter for pornographic content. | Not regulated | Not regulated |
| Canada | It is a criminal offence to produce, store, distribute, or deliberately access such material. | Not regulated | Not regulated | Not regulated |
| Japan | It is a criminal offence to produce and distribute such material. | Not regulated | Not regulated | Proof of age is required to use online dating sites and similar services. |
| Brazil | It is a criminal offence to produce, store or distribute such material. | Not regulated | Not regulated | Not regulated |
| India | It is a criminal offence to produce, store or distribute such material. | The police are empowered to investigate, confiscate and remove obscene material. | Not regulated | The police are empowered to investigate, confiscate and remove content that represents a threat to national security or public order. |
| Indonesia | It is a criminal offence to produce, store or distribute such material. | There is a wide-ranging ban on obscene material. | Not regulated | Not regulated |
| South Africa | It is a criminal offence to produce, store or distribute such material. | The deliberate distribution of pornographic material to minors is forbidden. | Not regulated | Not regulated |
| Turkey | It is a criminal offence to produce, store or distribute such material. The courts may order the removal of illegal content. | The courts may order the removal of sexual or obscene content. | The courts may order the removal of violent content. | The courts may order the removal of content that advertises gambling games, promotes suicide or narcotics, or criticises Atatürk. |
| China | It is a criminal offence to produce, store or distribute such material. | It is a criminal offence knowingly to produce, store or distribute pornographic material. | It is a criminal offence knowingly to distribute to minors material displaying violence. | It is a criminal offence knowingly to distribute to minors material showing terrorism or gambling games. |

Table 1. Overview of International Legislation to Protect Children on the Internet


Example 1: China – A System of Total Filtering


In China, the internet is governed by a maze of legislation. Article 11 of the Law of the Peoples Republic of China on the Protection of Minors of 1992 (amended in 2006) charges parents and guardians with ensuring that those in their care (under 18 years of age) do not engage in harmful practices including becoming addicted to the internet.15 Article 33 also charges the state with ensuring that young people do not become so addicted. For its part, the state undertakes to foster the development of technologies and services that serve this goal and are conducive to the healthy growth of minors. Article 34 criminalizes the selling, renting and/or dissemination to minors of material that is pernicious to minors. This includes audio-visual products, electronic publications and network information concerning pornography, violence, murder, terrorism [and] gambling. Article 31 stipulates that public internet services should not only be accessible to minors, but also provide safe and sound services for their online activities. Article 36 states that commercial internet services that offer material inappropriate for minors may not be set up near primary or secondary schools, and that these businesses must post over-18 notices and turn minors away. In March 2010, the Chinese Ministry of Culture decreed that online games must not contain material that damages the physical or mental health of minors, or that encourages activities against public morals and laws.16 The Chinese authorities campaign strongly against pornography in general, and the law applies to adults and minors alike. Even so, there is a particular sensitivity to the corrupting influence of pornographic material on children. In September 2004, the Supreme Peoples Court and Supreme Peoples Procuracy issued the Interpretation of Several Issues on the Specific Application of Law in the Handling of Criminal Cases about Producing, Reproducing, Publishing, Selling and Disseminating Pornographic Electronic Information via the Internet, Mobile Communication Terminals and Sound Message Stations.17 This applied the provisions of the Penal Code to the production, reproduction, publication and selling or dissemination of pornography on the internet for profit. Article 3 criminalized the production, reproduction, publication and selling or dissemination of pornography on the internet even if not carried out for profit (though the quantity of material qualifying for a criminal charge was higher than that prescribed for vendors seeking a profit). Article 6 envisioned a harsher punishment than those carried by the Penal Code for individuals who engage in the above-listed activities while either involving minors in the process or selling or distributing pornography to them. In September 2005, the State Council Information Office and the Ministry of the Information Industry passed the Provisions on the Administration of Internet News Information Services.18 Article 19 prohibits 11 categories of content including, in point 6, spreading obscenity, pornography, gambling, violence, terror or abetting the commission of a crime. Otherwise, pornography is regulated by Articles 363-367 of Chinas 1979 Penal Code (as revised in 1997). According to Article 152, persons who smuggle pornographic materials for profit and/or dissemination face a fine and/or a prison sentence of 3-10 years. Extenuating circumstances are laid out in Articles 152

and 153. Apart from smuggling, the production, duplication, publication, sale and dissemination of pornographic materials for profit is a crime under Article 363, and guilty parties face imprisonment of up to 3 years. Article 364 goes further and, in serious circumstances, criminalizes not-for-profit dissemination of pornography. The public display of pornography is also illegal under Article 364, while harsher punishments await offenders who produce or duplicate audio-visual pornographic materials and organise their viewing. As mentioned above, the dissemination of pornographic materials to minors carries a harsher sentence.19

In June 2004, and with the approval of the Chinese Government, the Internet Society of China (ISC) set up the China Internet Illegal Information Reporting Center (CIIRC). The Center's objectives include protecting the public, minors in particular, from the influence of illegal and harmful information. The Center identifies its main focus as materials harmful to the healthy growth of minors, such as obscenity and pornography, violent games, terrorism, abetting the commission of crimes, the propagation of racial hatred, libel and insult, [and] the violation of others' rights and of intellectual property rights. Private citizens may file complaints with the CIIRC by means of the website www.net.china.cn. Over 30,000 complaints are reported to have been filed with the CIIRC since its inception.20 A White Paper published by the Chinese Government in June 2010 made reference to other reporting organizations, including the Network Crimes Reporting Website, the Harmful and Spam Internet Information Reporting and Reception Center, [and] the Pornography Crackdown and Press and Publication Copyright Joint Reporting Center.21 The White Paper also noted the Government's promotion of internet tools and state-funded education modules designed to heighten internet safety for minors, an example of the latter being the Mothers' Education Program.22


Example 2: Turkey – A Slippery Slope


In April 2011, Turkey attempted to launch a filtering system based on identifying key words. Turkey's main internet regulator, the Turkish Telecommunications Directorate (TİB),23 was charged with enforcing Law No 5651 on Regulating Broadcasting in the Internet and Fighting against Crimes Committed through Internet Broadcasting.24 This required internet service providers (ISPs) to block websites identified through a search for 138 key words, which included Adrianne, free, fat and pic; the Turkish words for forbidden, skirt, animals and sister-in-law; and the German word Verbot (ban).25 The Information and Communication Technologies Authority (BTK), which is linked to the prime minister's office, scheduled the introduction of the new filtering system for 22 August 2011 as part of the Procedures and Principles Regarding Safe Internet Use. The system was connected to the Regulation Regarding Consumer Rights in the Electronic Communications Sector, published on 28 July 2010, which laid down parameters for producer/operator and consumer interaction and for the safe use of the internet (which is enshrined as a consumer right).26 As a consumer-protection regulation, the system obliged internet users to submit to content-filtering before they could access the internet. Originally, the system was intended to have four different settings: family, children, domestic and standard. However, in response to civil society and media protests, the BTK postponed introduction of the Procedures, and a new version of the Regulation appeared on 16 September. This decreed that the filtering system would be voluntary (that is, internet users would not be required to install the filtering software on their computers), and that there would be two rather than four versions of the software: one for adults (who may access any sites as long as they are not on a blacklist) and another for children (who may only access sites approved by the BTK). The system finally came into force on 22 November 2011. The greatest problem with the voluntary label was that it meant that ISPs who chose not to offer the filtering program could find themselves in defiance of the BTK and the state. There was, as a result, considerable pressure on all ISPs to offer the voluntary filters to users. Most disturbing was the fact that internet users were not offered a third option: internet access without filtering.27
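As a rough illustration of how such keyword-based URL filtering operates, the sketch below matches requested URLs against a blocklist of terms; the keyword list is hypothetical, not the actual 138-word list used by the TİB.

```python
# Illustrative sketch of keyword-based URL filtering of the kind described
# above. The keyword list is hypothetical, not the actual 138-word TIB list.
from urllib.parse import unquote

BLOCKED_KEYWORDS = {"verbot", "forbidden", "example-keyword"}  # hypothetical

def is_blocked(url: str) -> bool:
    """Return True if any blocked keyword appears anywhere in the URL."""
    text = unquote(url).lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)

# Crude substring matching over-blocks: this page mentions "verboten" in its
# path and is filtered even though its content may be perfectly legal.
print(is_blocked("http://history.example.com/verboten-books"))  # True
```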


Example 3: The EU – Cooperation between Jurisdictions


European Council Decision 2000/375/JHA of 29 May 200028 and Framework Decision 2004/68/JHA29 of 22 December 2003 were adopted with the intention of countering the sexual exploitation of children and child pornography online. Both aimed to encourage cooperation between EU member-states, while 2004/68/ JHA spelled out the minimum national legislation required for all EU members. A proposal to repeal 2004/68/JHA was put before the European Parliament in March 2010.30 A compromise was agreed in June 2011 and approved by the Parliament on 27 October 2011.31 While it remains for member-states to ratify this Directive and adopt its resolutions, it is intended to supersede and strengthen the regulations of 2004/68/JHA by facilitating the prevention and prosecution of child abuse and increasing the protection of victims. This would include awareness-raising campaigns, capacity building through the training of professionals (Article 19), and an obligation on member-states to remove any and all child pornography hosted within their respective territories and to collaborate with other states where and when such content is hosted internationally (Article 21). On content-blocking, Article 21 states that: Member States may take measures to block access to webpages containing or disseminating child pornography towards the Internet users in their territory. These measures must be set by transparent procedures and provide adequate safeguards, in particular to ensure that the restriction is limited to what is necessary and proportionate, and that users are informed of the reason for the restriction. These safeguards shall also include the possibility of judicial redress.32 It is important to note that Sections 44-46 of the European Parliaments E-Commerce Directive 2000/31/EC provide for ISP liability only in circumstances where a provider is cognisant of transmitting illegal content.33


Filter Systems: Specifics and Problems


Introduction
Many countries seek to restrict minors' access to online material that is not considered desirable for them to view. Access may be restricted by law, by self-regulation on the part of ISPs, or by society. Initiatives may take the form of either automated or manual filtering. This chapter describes the technical aspects of filtering in general, and then goes into greater detail concerning automated filtering. The difference between automated and manual filtering hinges on when the decision is taken either to show or to block a certain piece of content. With manual filtering (also known as the list method), the decision is taken early in the process and involves human participation. This may be done in several ways. A blacklist of forbidden sites may be created and/or a whitelist of approved sites. It is also possible to create a system of indicators according to which every site is rated for its suitability for access, usually pegged to the age of the viewer. In these instances, the system is reactive, blocking access to sites already deemed to contain bad content. Automated filter systems may also act proactively, blocking the user's access to undesirable content by means of linguistic analysis of texts or algorithmic analysis of statistics and dynamic images. Filtering occurs in response to every unique interrogation of the data (that is, the content). The interrogated content is first worked over by the filter and then is found to be either accessible or not. A major difference between the various approaches to filtering consists not in the system but in how it is applied. The filter may be imposed by the state or made obligatory at the level of either the main provider or the individual user. Much depends on where the choice is made, including the transparency and accountability of the system, its security, and on whose shoulders the burden of responsibility falls (that is, on the provider or on the user). As we will show in this and the following chapters, these considerations can have direct consequences as regards the protection of children, freedom of speech, and economic efficiency.
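A minimal sketch of the list method described above, using a hypothetical blacklist, whitelist and age-rating table (the entries and threshold values are invented for illustration):

```python
# Minimal sketch of manual ("list method") filtering: a blacklist, a whitelist
# and an age-rating table, all consulted before content is shown. Entries are
# hypothetical.
BLACKLIST = {"bad.example.org"}
WHITELIST = {"kids.example.org"}
AGE_RATINGS = {"news.example.org": 12, "forum.example.org": 16}  # minimum age

def allow(domain: str, viewer_age: int) -> bool:
    if domain in BLACKLIST:          # explicitly forbidden
        return False
    if domain in WHITELIST:          # explicitly approved
        return True
    # Fall back to the age rating; unrated sites are treated cautiously here,
    # although a real deployment must choose its own default (block or allow).
    return viewer_age >= AGE_RATINGS.get(domain, 18)

print(allow("kids.example.org", 9))    # True
print(allow("forum.example.org", 14))  # False
```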

Examples of Use and Technical Specifics


There are three main ways of filtering content, regardless of whether it is carried out manually or automatically: by Internet Protocol (IP) address, by Domain Name System (DNS) and by Uniform Resource Locator (URL). Blocking of IP addresses effectively prevents access to the intended target and may be implemented by ISPs at or near international gateways. This is normally the easiest and cheapest method of filtering, as it requires little additional expertise and hardware. Filtering of IP addresses may however lead to over-blocking, since it may well block access to other web-pages hosted on the targeted server. With DNS filtering, which is the commonest form, an ISP may configure its DNS server in such a way that it does not divulge the IP address to a user when requested.35 India prefers DNS filtering to IP blocking, primarily in order to restrict access to content that poses a risk to national security (e.g., inflammatory sectarian material). In such cases, the ISP Bharti issues the invalid IP address 0.0.0.0.36 A significant problem with DNS filtering is however that all pages associated with a blocked domain will be censored, not just the URL that carries illegal content. DNS filtering is also prone to under-blocking: it will not block a URL that contains an IP address, even though that URL itself may contain illegal content.37 Even so, the DNS method is not as crude as IP filtering (given that one server may hold several domains).38 Turkey and Brazil, among other countries, have in the past used DNS filtering to block YouTube domains.39 As mentioned above, Turkey also uses keyword filtering according to the URL method. The Child Sexual Abuse Anti-Distribution Filters (CSAADF) filter-system blocks the DNS cache and is used in Denmark, Finland, Italy, Malta, Norway, Sweden, Switzerland and New Zealand. In countries that use CSAADF, over-blocking tends to be viewed not so much as a problem as an advantage, since it is seen as a deterrent to domain owners.40 As a method, URL filtering avoids both under- and over-blocking, since it denies access to a single offending webpage. China, for example, uses URL filtering (as well as IP blocking), and its system analyses certain words contained in URLs.41 China also possesses packet-inspection (hash-value) technology that can analyse content and scan images.42 But the URL method also has drawbacks: it is becoming increasingly difficult to implement as the number of users grows, which is a particular problem for developing countries; it is inefficient in proceeding against certain types of content, such as pornography; and building a blacklist of URLs is labour-intensive. Lastly, as with the IP and DNS methods, tech-savvy users may easily circumvent filters blocking URLs.43 In fact, commentators agree that users may easily circumvent filtering mechanisms by using such means as anonymizing technology, proxy servers (a recursive transformer in the case of DNS filtering, or an encrypted proxy for URL filtering), or peer-to-peer file-sharing.44 This applies even to sophisticated filtering systems.45 From the point of view of content-providers, too, IP, DNS and URL filtering may be ineffectual. For example, CIRCAMP (the COSPOL Internet-Related Child Abusive Material Project46) and the UK-originated Cleanfeed47 block locations rather than content. Content-providers may avoid the block simply by moving material to another IP address, domain and/or URL. This is not a problem faced by the hash-value blocking system (or deep-packet inspection) used by Facebook and AOL, which analyses content regardless of location.48 Hash-value technology is the most expensive means of filtering and is rated the most effective, but even so it is not considered 100% effective.49
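The practical difference between DNS-level and URL-level blocking can be seen in a short sketch; the domain names and blocklists below are hypothetical, and the 0.0.0.0 sinkhole answer mirrors the Bharti example cited above:

```python
# Illustrative sketch: DNS-level blocking answers a *domain* lookup with an
# invalid address, so every page on that domain disappears (over-blocking);
# URL-level blocking targets a single offending page. Lists are hypothetical.
DNS_BLACKLIST = {"blocked-domain.example"}
URL_BLACKLIST = {"http://mixed-content.example/illegal-page"}

def resolve(domain: str) -> str:
    """DNS filtering: return a sinkhole address for blacklisted domains."""
    if domain in DNS_BLACKLIST:
        return "0.0.0.0"          # invalid address, as in the Bharti example
    return "203.0.113.10"         # placeholder for the genuine DNS answer

def url_allowed(url: str) -> bool:
    """URL filtering: block only the exact offending page."""
    return url not in URL_BLACKLIST

# DNS filtering censors the whole domain, legal pages included:
print(resolve("blocked-domain.example"))                          # 0.0.0.0
# URL filtering leaves the rest of the same site reachable:
print(url_allowed("http://mixed-content.example/legal-page"))     # True
print(url_allowed("http://mixed-content.example/illegal-page"))   # False
```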

Use of Filter Systems at Non-State Level

Specific concerns pertain to non-state systems and to the relationship between filtering and human rights. While national law-enforcement agencies oversee the CIRCAMP DNS filtering-system in much of Europe, Cleanfeed is overseen by a private company. This creates concerns about legitimacy, transparency and accountability.50 In the UK, as elsewhere in Europe and North America, there is no legislative foundation upon which filtering takes place, only strong policy and informal bonds; some see this as the state's abdicating its responsibilities.51 Neither Cleanfeed nor CIRCAMP is a law-enforcement agency, nor is there a transparent complaints procedure.52 The blocking relationship therefore exists only between the end-user and the ISP; there is virtually no place for constitutional and public law.53 It follows that decisions on filtering, which some see as amounting to privatized censorship, are opaque to the public and are not subject to legislative or judicial decisions.54 And, while it may be understandable that no list of blacklisted sites is published in the UK (on the grounds that this would provide free publicity for the most dangerous sites), the individuals who compiled the list are not identified either.55

Filtering at the Level of the End-User

Filtering systems at the level of the end-user are also ineffectual.56 Studies in 2001-3 found that such filtering software managed to block between 10% and 30% of inappropriate content.57 A US study in 2005-6 of the search indices of Google and MSN, and of 685 of the most popular search results on AOL, Yahoo! and MSN, used 15 permutations of content-filters and filter settings. It found that, regardless of which search-engine was used, a filter would block an average of 22.1 clean webpages for every adult page blocked on Google and 23 for every adult page on MSN. As for the most popular search results, 1.1 clean pages were blocked for every correctly identified adult page. On standard searches, this figure rose to 7.6 clean pages blocked for every adult one.58 In its report for 2010, the German organization Jugendschutz im Internet found that 1 in 5 sites blocked by voluntary filtering programs contained no harmful content. The only satisfactory (befriedigend) result was that the filters blocked almost all pornographic or sexual content. Otherwise, they allowed every second problematic site. In fact, the report concluded, there have been no appreciable improvements [in the effectiveness of filtering technology] in the last 5 years.59 Nor is this all. The Turkish scholar Yaman Akdeniz notes that many end-user filters impose the standards of software developers rather than allowing parents the freedom to choose. Moreover, filtering's functionality is limited, since it cannot regulate communication environments such as chat-rooms, peer-to-peer networks, file-transfer protocol (FTP) servers and voice over internet protocol (VoIP).60
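The over- and under-blocking figures quoted above reduce to simple ratios; the sketch below shows the calculation on hypothetical evaluation counts rather than the actual study data:

```python
# Sketch: computing over- and under-blocking rates for a filter from counts
# obtained in an evaluation run. The numbers below are hypothetical.
def over_blocking_rate(clean_blocked: int, adult_blocked: int) -> float:
    """Clean pages wrongly blocked per correctly blocked adult page."""
    return clean_blocked / adult_blocked if adult_blocked else float("inf")

def under_blocking_rate(adult_allowed: int, adult_total: int) -> float:
    """Share of adult pages the filter failed to block."""
    return adult_allowed / adult_total if adult_total else 0.0

# Hypothetical run: 221 clean pages blocked alongside 10 adult pages,
# while 50 of 100 adult pages slipped through.
print(over_blocking_rate(221, 10))   # 22.1 clean pages per adult page
print(under_blocking_rate(50, 100))  # 0.5, i.e. every second problem site allowed
```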

To conclude, a body of literature suggests that the social cost of censorware is higher than the benefit.61 Preferable to the installation of filtering or surveillance technology that undermines the trust between parent and child is passive parental monitoring that promotes dialogue and affirms children's autonomy. Such an approach better promotes the child's cognitive and emotional development.62

Economic Aspects of Filtering


Use of an internet filter can have a serious impact on the speed of internet connections. Research in Australia indicates that the process of censoring information can slow internet access by at least 20%,63 leading to a reduction in productivity in companies whose activity is in any way connected with the internet. Costs incurred as a result of the installation of a filtering system, such as hiring call-center staff to handle complaints, are likely to be passed on to the consumer. Loss of speed of internet access will reduce the productivity of small and medium-sized firms because it means they will be slower to do business online. Such a decrease in speed threatens to wipe out whatever progress such firms had earlier made as a result of increasing their operating speed by going online. In Australia, experts calculated that an increase in the speed of access by means of broadband could significantly increase gross national product.64 It seems logical to assume that a fall in speed would have the opposite effect, creating, if not direct losses, then quite concrete missed benefits. Along with companies, consumers of internet services are among the main groups likely to be affected as a result of implementation of Russian Federal Law No 139-FZ. We begin by considering internet development in Russia. The Boston Consulting Group (BCG) e-Intensity Index measures and compares different levels of internet use based on 3 indicators: internet availability; volumes of sales carried out over the internet; and user activity.65 According to these indicators, Russia lags behind the members of the Organization for Economic Cooperation and Development (OECD), but is one of the leaders among the actively developing economies.66 It is of course important to note that, within Russia, regional distinctions are substantial.

Moscow and St Petersburg outstrip all of Russia's other regions. This was the conclusion reached by Russia's Public Opinion Foundation (FOM). Its Internet World research project of 2011 subdivided Russia's regions into 4 groups according to level of internet penetration and potential for growth (see Figure 1). The first group included 15 regions, headed by Moscow and St Petersburg; the second was made up of 16 regions where there was a relatively high level of internet penetration but its potential was under-exploited; the third consisted of 10 regions characterised by weak internet penetration but high user activity; while the fourth consisted of some 30 regions with weak penetration and predominantly passive users.67 The authors of the project estimated that, rated according to monthly share of internet audience, there is a difference of more than two times between leader regions and outsider regions (67%-68% in Moscow and St Petersburg compared with 29% in Tyva or 34% in Mordovia).68 They found that, while the average Russian indicator of internet penetration at that time was 48%,69 internet use was growing strongly. They noted for example that data for the second quarter of 2011 put the total number of users aged over 18 at about 55 million, an increase of 7% over the first quarter of the same year.70

Figure 1. Internet Penetration in Russia's Regions


Source: Internet World poll conducted by MegaFOM, June 2011

(Monthly user figures for Nenets Autonomous District, Republic of Ingushetia, Chechen Republic and Chukotka Autonomous District are estimated from a regression model).


Being not only consumers but also producers of content in the internet environment, frequent users are likely to be the first to feel the impact of the new law. As the BCG points out, internet users play an important role, creating content in, for example, social networks and blogs.71 To sum up: a likely consequence of the introduction of Russian Federal Law No 139-FZ for the ordinary Russian internet user will be higher tariffs for internet services. This is because the obligation to introduce a filter will increase the costs of internet providers, who are likely to pass these costs on to those who use their services. If the increase proves substantial, small internet providers may find themselves forced out of business. Not only will this reduce competition on the internet; the virtual monopoly that larger providers will find themselves enjoying may encourage them to raise tariffs yet further. An increase of even 100 roubles a month could lead to overall losses to users of as much as 5.5 billion roubles.72 The regions most heavily hit by increased tariffs would be those with the most active users (groups 1 and 3 in Figure 1 above). Higher tariffs might in turn provoke a fall in the overall number of internet users. Moreover, the introduction of a filter may lead not only to higher prices for internet services, but also to a fall in their quality. Lower internet speeds, restrictions on range of choice and information search, false filter operations (according to some estimates, the number of false operations for the most efficient filter systems averages 10,000 sites per million, that is, 1%;73 see Figure 2) and possible restrictions on freedom of speech may be some of the negative consequences for the internet industry and its users.

Figure 2. Results of a Comparison of Various Filter Systems
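The 5.5 billion rouble estimate quoted above is consistent with a back-of-the-envelope calculation over the roughly 55 million adult users cited earlier; the sketch below merely reproduces that arithmetic and assumes the whole user base pays the higher tariff:

```python
# Back-of-the-envelope check of the monthly loss estimate quoted above.
users = 55_000_000          # approximate number of Russian internet users (18+)
tariff_increase_rub = 100   # assumed rise in the monthly tariff, in roubles

monthly_loss_rub = users * tariff_increase_rub
print(f"{monthly_loss_rub:,} roubles per month")  # 5,500,000,000 roubles
```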

A further threat might be posed to the security of bank data transferred over the internet, since such information would be accessible to the filter system. This in turn could lead to a significant decline both in internet sales and in the use of online banking services. Whereas online purchases and bank operations usually involve lower transaction costs, a fall in data security is likely to provoke a corresponding fall in efficiency savings.

Moreover, the introduction of filtering will increase the risk of system failure. In order to filter data, it is necessary to centralise the flow of information. At present, the world is observing the opposite trend, that is, the decentralization of information streams with the aim of reducing the danger of system breakdown and its attendant costs.74 But a failure in the filter system could lead to heavy losses, since it could impact a significant portion of this flow. As well as the negative effects mentioned above, however, consumers will also experience positive results from the introduction of Russia's new legislation. This relates first and foremost to those members of the population at whom the law is aimed: children, teenagers and parents. The benefits they will receive are not material and therefore cannot be easily evaluated, but they include protection of the mental and moral health of children and teenagers, reduction in parental anxiety, and freedom for young people to use the internet unsupervised since, with a filter in place, the need for direct parental supervision disappears. As a result, internet possibilities in the education sphere can be expanded. As for financial considerations, introduction of a universal filter will cut the expenses of those parents who until now have used special parental-control filters, since from now on these costs will be equally distributed between internet users and ISPs. The new law will also affect the interests of the state. First, implementation of the measures mandated by it will require considerable budgetary outlay. Secondly, increased charges for internet services will lead to a fall in the number of users, which in turn will mean a reduction in internet penetration. Internet penetration is one of the indicators that the OECD uses to calculate its e-readiness index, which assesses a country's readiness to make the transition to an information society, toward which many states currently aspire.75 Introduction of a filter system will lead to a decrease in internet speeds, which will in turn lower the quality of internet communication. According to data for the second quarter of 2010, Russia ranked 27th out of the 50 countries with the largest internet communities. The average speed of the internet in Russia is 2.6 Mbit/s compared with an average speed of 1.8 Mbit/s for the world in general (see Figure 3). Meanwhile, some 80% of users have access with speeds ranging from 256 Kbit/s to 5 Mbit/s.76 Experts have identified a positive correlation between the quality of
communications, the quantity of innovation, and the rate of economic growth.77 The BCG has, for example, argued that internet access contributes as much as $19.3bn to the Russian economy, or 1.6% of GDP. Moreover, experts believe that this could substantially increase if infrastructure, logistics and legislation were developed over a 3-4 year period. According to the best-case scenario, Russia's internet economy could by 2015 constitute 3.7% of GDP (or 5% of GNP if the oil and gas component were excluded). This comes close to the level of developed economies.78 A decrease in internet speeds would, by contrast, threaten to reduce Russia's competitiveness and investment appeal. For an investor considering whether to engage in direct or portfolio investment in the Russian economy, a decrease in internet speeds could significantly increase transaction costs and make Russia a less attractive place for investment. Finally, it should be noted that installation of a filter system could lead to a fall in freedom of speech in Russia. As President Medvedev argued, the internet has an important role in the development of democracy in Russia.79 Restricting freedom of speech on the internet could have a negative impact on Russia's international rating, which could, in turn, harm the country's investment climate.


Figure 3. Average connection speeds for 50 leading countries on the internet


Filtering and Blacklists in the UK, Germany and France


The UK, Germany and France are three examples of countries where internet content may be added to a blacklist and then submitted to filtering. But while in the UK the manufacture, storage, distribution and showing of images of child pornography are prohibited under the 1978 Protection of Children Act, the 1988 Criminal Justice Act and the 2003 Sexual Offences Act, ISPs are under no legal obligation to install a filter system to protect minors on the internet. In line with the recommendations of the 2008 Byron Review regarding self-regulatory measures for increasing internet security, preference is instead given to self-regulation. In fact, the majority of providers have installed filters to block content containing child pornography. British Telecom, for example, uses the Cleanfeed system and acts in accord with the advice of the Internet Watch Foundation (IWF), which maintains and updates a blacklist of sites (according to their URLs) and conveys the relevant information to ISPs. This means that, while Cleanfeed does not itself actively filter content, it does block content blacklisted by the IWF. All major ISPs in the UK now filter the sites listed by the IWF. In 2010, the IWF had identified 16,739 potentially criminal URLs containing child sexual abuse out of a total of 43,190 reports processed.80 Freedom House estimates that the IWF list contains 500-800 live URLs at any one time, while the OpenNet Initiative puts the figure at 800-1,200.81 The IWF maintains a hotline that allows users to report undesirable onlinecontent, which may then be added to the blacklist. Once a site has been flagged up, all aspects of the image and content are reviewed, as are the risks inherent in blacklisting. The process is described here: http://www.iwf.org.uk/members/memberpolicies/url-list/iwf-url-list-policy-and-procedures. URL-addresses that are located in international space (and that cannot, therefore, be removed at source) and that have been flagged up for blacklisting, go to the committee of the IWF Council for consideration. The committee, which consists of at least two independent members for every representative of the industry, decides whether or not the URL-address should be added to the blacklist. There is an appeals process but, once a local site has been blacklisted, the IWF notifies ISPs and law-enforcement agencies; at this point ISPs are required to remove the content from their networks. As for illegal content stored in international space, there is also a blacklist of sites whose content ISPs are required to block. The IWF alerts the various internet protection organizations and hotlines in the same jurisdiction so that they can notify lawenforcement agencies of the existence of suspicious content. This is consistent with the principles of the International Association of Internet Hotlines (INHOPE) to

remove the source. The IWF is not dependent on the UK Government but relies for funding on the European Union (EU) and partners such as ISPs, mobile operators, content- and hosting-providers, and filter and web-search companies. As for content, almost everything that is filtered by Cleanfeed involves an image of child sexual abuse. However, the UK regulatory authority Ofcom has explored the idea of extending Cleanfeeds remit to include violations of authors copyright. In December 2010, for example, the Motion Picture Association filed an injunction against British Telecom (BT) to force it to block a file-sharing website, and in October 2011 the UK High Court ordered BT to block the site. In Germany, several attempts have been to blacklist sites and block access to their content. The Inter-State Treaty on the Protection of Minors (Jugendmedienschutz-Staatsvertrag JMStV) came into force on 1 April 2003. Section 4 prohibits content that incites racial hatred, glorifies violence or war, denies the atrocities committed under National Socialism, displays sexual abuse, violence or bestiality, and encourages murder or genocide. Section 5 addresses content that, while it may not be illegal, may be harmful to minors. The Commission for the Protection of Youth in the Media (Kommission fr Jugendmedienschutz KJM) has central oversight over implementation of the JMStV, and may impose penalties that are, in turn, enforceable at state level. It may recommend blocking sites, though the final decision rests with the Federal Department for Media Harmful to Young Persons (Bundesprfstelle fr jugendgefhrdende Medien BPjM). German contentproviders found to be in violation of the JMStV will be prosecuted, while the sites will be blacklisted in accordance with Article 5, Section 24 of the 2003 Youth Protection Law (Jugendschutzgesetz JuSchG). In June 2009, the German Parliament adopted a Law on Preventing Access to Child Pornography over Communications Networks (Gesetz zur Erschwerung des Zugangs zu kinderpornographischen Inhalten in Kommunikationsnetzen ZugErschwG). It required the Federal Criminal Office (Bundeskriminalamt BKA) to draw up a blacklist of domain names, IP addresses and URLs. The Office should then inform content- and hosting-providers about the blacklisting, while ISPs with more than 10,000 customers should block access to the listed domains, IP-addresses and URL-addresses, and re-direct users to a Stop sign. An anonymous count of users attempting to access the blocked sites was to be kept and sent to the ISPs. Responsibility to decide which sites should be blocked rested with a council consisting of five experts meeting 4 times a year. The ZugErschwG was politically controversial. While it entered into force on 23 February 2010, the Interior Ministry instructed the BKA not to begin to compile a blacklist. The law was repealed in December 2011 and removed from the statute book in January 2012. Despite the repeal of ZugErschwG, providers must still, in accordance with the JMStV, withdraw material depicting the sexual abuse of children when prompted to do so by hotlines that compile lists of URL-addresses similar


to those put together in the UK by the IWF. These hotlines are the Voluntary Self-Regulation of Multimedia Service-Providers association (Freiwillige Selbstkontrolle Multimedia-Diensteanbieter FSM) and Jugendschutz (Youth Protection), both of which belong to INHOPE. Analysts conduct an initial evaluation of the content before sending the case to the appropriate law-enforcement body and, if the content is locally placed, informing the hosting-provider. If the content is hosted in another country, the hotline will inform either the relevant INHOPE organization or, if INHOPE is not represented in the country in question, Interpol. Finally, a filter system issued by the BPjM may be applied at the level of the end-user and will block blacklisted sites; it is used by information and communications technology (ICT) companies belonging to the FSM. As for France, it has neither a legally enforced filter system nor one that is voluntarily administered. But Article 6 of Part 1 of Law No 2004-575 of June 2004 on upholding trust in the digital economy requires all information-providers to inform subscribers of the presence of program-filters which may be applied on demand to any account. Providers are also obliged to inform the law-enforcement agencies of any illegal content that is placed on their servers. The Association of Suppliers of Access and Internet Services (Association des Fournisseurs d'Accès et de Services Internet AFA), established in 1997, represents the entire French industry. In 1998, AFA set up the Point de Contact (Point of Contact) information service to counter images of child abuse and incitement to racial hatred. In June 2004, AFA signed a Charter to campaign more effectively against such material. With this in mind, the industry established a hotline, www.pointdecontact.net, to improve access to information about online safety for minors. It registers public complaints and responds to them, passes the relevant information on to the law-enforcement agencies, and supports the concept of voluntary filtering. In France, as in Germany and the UK, a hotline registers complaints and alerts ISPs and law-enforcement agencies to URLs containing illegal material. If the content is locally accessible, ISPs remove it from the server; if it is located in international space, they block the URL and send its coordinates to INHOPE members in the given country or to Interpol.
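The notification-and-removal workflow shared by the three countries can be summarised as a small decision routine; the sketch below is a simplified model of the process described above, with invented names and categories:

```python
# Simplified model of the hotline "notice and takedown" workflow described
# above: review a complaint, then either remove at source (domestic hosting)
# or blacklist the URL and pass it on (foreign hosting). Names are illustrative.
from dataclasses import dataclass

@dataclass
class Complaint:
    url: str
    hosted_domestically: bool
    assessed_illegal: bool   # outcome of the analysts' review

def handle(complaint: Complaint) -> list[str]:
    actions = []
    if not complaint.assessed_illegal:
        return ["no action"]
    actions.append("notify law-enforcement agencies")
    if complaint.hosted_domestically:
        actions.append("ask hosting provider / ISP to remove content at source")
    else:
        actions.append("add URL to blacklist distributed to ISPs")
        actions.append("forward report to INHOPE partner or Interpol")
    return actions

print(handle(Complaint("http://example.org/report-1", False, True)))
```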


System Similarities and Differences


As seen above, blacklist and filter procedures in the UK, Germany and France are broadly similar. The three countries share a common procedure for notification and deletion which is overseen by NGOs and executed by the industry. This includes hotlines, analysis of complaints and their onward transmission to ISPs and/or law-enforcement agencies. If the material is local, providers seek to remove it at source. At the same time, there are significant differences. In the first place, filtering in


the UK and Germany is aimed at images of child pornography, whereas in France providers filter content harmful to human dignity, which includes images of child pornography, accessible content harmful to minors, and incitement to racial hatred, crime, terrorism and suicide. Secondly, the German Government sought at one time to establish a filter system and blacklists drawn up by the BKA. The UK and France have not adopted such legislation. But, while Germany adopted ZugErschwG in 2009, it was condemned by many as unconstitutional and was eventually repealed in 2011. Self-regulation remains the rule in Western Europe, and not only ISPs but many other internet and communications technology (ICT) companies have access to blacklists compiled by means of hotlines and, as a result, block content. For example, Vodafone offers a filter that will manage content and block access to 18+ content on Vodafone Live! (Mobile Internet) and that works with internet filtering to block access to sites rated 18+. This technology blocks sites from the blacklist compiled by IWF.


Problems with Blacklists and Filter Systems


There are concerns about the effectiveness of blacklists and filter systems. Of course, the effectiveness of a filter depends on the purpose of each system. URL filters operating in the UK, Germany and France seem successful in blocking random access to suspicious sites with alleged images of child pornography. While the filters are effective in blocking inadvertent access, however, they appear ineffective against those who are determined to access to illegal content. In 2010, for example, a researcher in Sweden, which has a similar system to that in France and Germany, interviewed 15 convicted sex- offenders and found that most, if not all of them, admitted to being able to circumvent the filtering system, though they agreed that the system could block casual access. After all, Cleanfeed and the German and French filter systems are blocking sources, not the content itself. A producer of such content can avoid blocking by moving it to another IP-address, domain and/or URL, while tech-savvy users are able to bypass the filter process. Other technical and practical drawbacks include the fact that, as the number of internet users grows, blocking a URL becomes increasingly difficult. A filter system may not be efficient in tackling certain types of content. And building a blacklist of URLs is labour-intensive. As for filtering in general, there are concerns regarding the relationship between private systems and human rights. The UKs IWF is an independent NGO, while Cleanfeed is an industrial initiative, and neither was set up as a result of state initiative. As a result, questions arise about the systems legitimacy, transparency and accountability. Instead of legally-enforced relations, there is merely a strong policy and informal relations. Some argue that all this really shows is the states abdication of responsibility toward the internet. The blocking relationship exists only between the end-user and the ISP, leaving little room for the law. And that is not all. Cleanfeed is not subject to judicial review, and its procedures are not transparent. It


follows that, for some, decisions on filtering are nothing more than private censorship that is neither transparent nor subject to legal and judicial control. Finally, neither the blacklists themselves nor the identities of those who compile them are published. Claims that filter systems violate the civil rights of users are well documented. One example occurred in the UK, where in 2008 the IWF blacklisted a Wikipedia page devoted to a German heavy-metal group because it displayed the cover of the groups album depicting a naked child. Eventually, the IWF responded to public condemnation and removed the page from the blacklist. The EU, which in the early 2000s flirted with the idea of blocking offending sites, has since concluded that filtering and/or blocking are unreliable ways of counteracting illegal internet content, and provide minors with only a minimum of safety. In its Recommendation CM/ Rec(2009)5 (On measures to protect children against harmful content and behaviour and to promote their active participation in the new information and communications environment), the Committee of Ministers of the Council of Europe in July 2009 noted that blocking websites in an effort to ensure online safety for minors is a violation of Article 10 of the European Convention on Human Rights (ECHR). As mentioned above, the German Parliament eventually rejected ZugErschwG as unconstitutional, and voiced doubts about the legitimacy and transparency of censorship in general. Even so, blacklisting and blocking have some advantages. The clearest and most socially beneficial result in the UK, Germany and France is a fall in the number of cases of both child pornography and minors accidental access to such images. Filtering URLs also makes it possible to avoid the problem of over-blocking which often affects, for example, DNS-blocking, since it blocks access to content that, while not illegal, is in the same domain as other offending content. (Similarly, IP-blocking can result in censorship of completely legal content).


| | UK | Germany | France |
|---|---|---|---|
| Is there a legally-based filter system or blacklist? | No | Not now, but there was a failed attempt to introduce such a law (ZugErschwG) | No |
| Does the state regulate filtering? | No. The Internet Watch Foundation (IWF) compiles a blacklist. Filtering is carried out by ISPs (BT and Cleanfeed). | No. Filtering is carried out by ISPs in accordance with the recommendations of hotlines (Eco, Jugendschutz, FSM). | No. Filtering is carried out by ISPs in accordance with the recommendations of hotlines (Point de Contact). |
| What sort of content is blocked? | Images of child pornography. In one case, violation of authors' rights. | Images of child pornography. | Content harmful to human dignity, which includes images of child pornography, accessible content harmful to minors, and also incitement to racial hatred, crime, terrorism and suicide. |
| How are blacklists compiled? | Hotlines register complaints about harmful content, which are then reviewed. If the committee of the IWF Council upholds a complaint, the content is blacklisted and removed. The IWF informs ISPs and the law-enforcement agencies. | Hotlines register complaints about harmful content. These are reviewed by informed specialists. ISPs and the law-enforcement agencies are informed of the specialists' recommendations. | Hotlines register complaints about harmful content. These are reviewed by informed specialists. ISPs and the law-enforcement agencies are notified of the specialists' recommendations. |
| How is internet content blocked? | URL filtering | URL filtering | URL filtering |
| What are the main objections to the system? | The blocking mechanism is inefficient. The decision-making process lacks legitimacy. The decisions of the IWF lack accountability and transparency. There is a threat to free speech. | The blocking mechanism is inefficient. The hotlines lack legitimacy, accountability and transparency. There is a threat to free speech. | The blocking mechanism is inefficient. The decisions of Point de Contact lack legitimacy, accountability and transparency. There is a threat to free speech. |

Table 2. Summary of UK, German and French Practice

Labeling and Content-Rating Systems


Despite the widespread use of labeling and ratings to curtail children's access to undesirable content in other media, such as movies, music and computer games, attempts to apply similar methods to the internet have been few and largely ineffective. Some legislative initiatives in this area, including Germany's relatively extensive experiment with labelling, have later been rejected. More successful attempts have been made by means of agreements on self-regulation within the industry itself. The idea of labeling internet content is not new. In the USA, the industry-led World Wide Web Consortium (W3C) responded to the 1996 Communications Decency Act (CDA) by launching the Platform for Internet Content Selection (PICS).82 The project was specifically designed to pre-empt state regulation and censorship. PICS used two methods to limit access to indecent content such as violence, pornography and sex. First, every website was required to self-rate its own content using metadata elaborated by either the RSACi rating system (developed by the Recreational Software Advisory Council [RSAC]) or the SafeSurf rating system.83 If a site contained violence or pornography, for instance, content-providers were required to mark it accordingly. A filter would then block content rated harmful. Secondly, third parties could also label web-content. This meant that users had some discretion and were not solely reliant on the websites' own labelling. Microsystems launched the first PICS server in February 1996, and Netscape and Microsoft pledged to install PICS in their browsers soon after. The RSACi system was installed on Internet Explorer browser versions 3-6 and on Netscape 4.5.84 However, the failure of the CDA, declared unconstitutional by the US Supreme Court in 1997, brought PICS down too. PICS was widely condemned as censorship: websites, it was argued, should not be compelled to label their content. At the time, the incentives for search-engine companies were insufficient to encourage them to develop products that would restrict search-results to PICS-labeled sites. As a result, not enough sites labeled content (120,000 sites according to one report), software for third-party labeling never materialized, and the most efficient labeling method (placing the label in the HTTP header rather than in the HTML of the web page) was rarely used, since only a few companies had developed the necessary software. According to one evaluation: once the Supreme Court found the CDA unconstitutional, the development of

software for PICS was essentially stopped. The consequent lack of support from commercial filtering firms, the W3C's members, and other children's groups led to the abandonment of PICS.85

In Europe, the EU Safer Internet Programme and Germany's Bertelsmann Foundation provided funding for the Internet Content Rating Association (ICRA), which was supposed to use the PICS technology to develop a system to replace the RSACi. Three versions were developed, in 2000, 2005 and 2008.86 The technology aimed to evaluate content neutrally and objectively using binary descriptors, for example bare breasts or absence of bare breasts. Providers were to classify their product in accordance with these descriptors; filters installed in browsers would then read the labels and could be configured by parents either manually or according to filtering templates developed by third parties. In theory, information would flow from the producer to the end-user without intermediary censorship (except in cases where parents themselves intervened to protect their children's safety).87 The project failed. According to one of its developers, there were several reasons for this. The main reason was that ICRA technicians ran up against the fact that, as with the PICS technology, content-providers refused to label their websites. A critical mass of sites was never reached, with fewer than 10,000 being labeled. Even companies represented on the ICRA board did not label their sites. Another problem was whether or not to block the unlabeled sites. Google did develop a customized safe search that provided results only from labeled sites. But in addition to the fact that only a small number of sites were labeled, no authority was tasked with monitoring compliance by those sites that were not. This proved crucially important given the vagueness of terms: for example, how little clothing constitutes nudity? Furthermore, the browsers never upgraded the content advisor (from the RSACi model), nor did parents configure the browser to read the ICRA labels. And, with only two exceptions, third parties did not create filtering templates. In many respects, the problem resembled that faced by PICS: companies were not sufficiently incentivized to develop and apply the technology; the disincentives, on the other hand, were even more compelling. If, for example, a company designed a template that did not work 100% effectively (if it allowed through even one undesirable page), the brand could suffer a loss of credibility. There were also problems with the filter in the browser: no company would develop one voluntarily. In sum, technical problems plagued the under-financed ICRA filter, and many hard-drives had to be re-formatted after installing the software. Even after additional funding was injected into the ICRAplus program in 2005-6, the software continued to suffer from technical problems, and the project was abandoned at the end of 2008.88

Several more content-labelling technologies followed the PICS, RSACi and ICRA programs. W3C's Protocol for Web Description Resources (POWDER) is the successor to PICS, in comparison to which POWDER features a range of technical developments.89 In August 2009, after the POWDER working group had ceased its activity, W3C recommended the technology. POWDER was designed not to filter harmful content, but to analyse pages and resources in accordance with commercial standards, medical accuracy, and licensing and copyright information.90 The EU's project Quatro Plus builds on POWDER's technology to adapt labelling with the aim of securing consumer rights. This hinges on collaboration with the tried and tested technology of i-Sieve, including a filter system that processes language and photos by means of hash technology.91 However, the remit of Quatro Plus extends well beyond child safety to a wide range of end-users, including labelling authorities, content-providers, domain experts, awareness-raising organizations, and group or individual users.92 In the US, the adult industry applies a Restricted to Adults (RTA) tag adopted by the Association of Sites Advocating Child Protection (ASACP). The tag must be displayed on the web-pages of ASACP members that carry commercial pornographic content. At least in theory, filters recognize the tag and block sites bearing it. This technology appears to enjoy some success.93

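A minimal sketch may help to make the self-labeling model described above concrete. The descriptor names, label format and templates below are invented for illustration and deliberately simplified; they do not reproduce the actual PICS or ICRA vocabularies, and the unresolved question of what to do with unlabeled pages is reduced to a single policy flag.

```python
# Simplified sketch of provider self-labeling plus third-party templates.
# Descriptor names, label format and templates are invented for illustration.

# 1. The content-provider attaches machine-readable descriptors to a page,
#    for example as metadata served with the document.
page_label = {"nudity": True, "violence": False, "gambling": False}

# 2. A third party (or a parent) supplies a filtering template listing the
#    descriptors that are unacceptable for a given audience.
child_template = {"nudity", "violence", "gambling"}
teen_template = {"gambling"}

BLOCK_UNLABELED = True  # the policy question the ICRA project never resolved

def allowed(label, template):
    """Return True if the page may be shown under the given template."""
    if label is None:
        return not BLOCK_UNLABELED
    return not any(label.get(descriptor, False) for descriptor in template)

print(allowed(page_label, child_template))  # False: the page declares nudity
print(allowed(page_label, teen_template))   # True
print(allowed(None, child_template))        # False: unlabeled page is blocked
```

The sketch also shows where the incentive problem bites: the scheme only works if content-providers supply the label in the first place and if someone maintains the templates.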
Japan provides another example. There, the Internet Association of Japan (IAJ) and the Ministry of Internal Affairs and Communications have agreed that filtering of online content should not be state-mandated but should take place at the behest of private users. As in France and Germany, the IAJ provides content-filters to users on request, while a number of Japanese ISPs provide filtering technology to consumers.94 The filtering software may apply the SafetyOnline2 rating system developed by the IAJ, which assesses internet content on a scale of 0-4. The score (where 0 is considered safe) evaluates material on the basis of nudity, sex, violence and use of swear words.95 Commercial partners have come on board. NTT Communications announced in May 2009 that it was introducing ONC Kids Care for the use of subscribers. This program allows parents or guardians to set parameters regarding internet content and use. It is a chargeable service.96 Partner-company NTT Resonant offers Kids-goo, an educational web-portal designed for children and offering a filtered web-search program.97

At first sight, labelling and rating systems have much to recommend them. They offer the vision of an information highway that allows data to proceed unmediated from the source (content-provider) to the end-user, except where the end-user chooses to apply a filter. As the failures of PICS and ICRA show, however, content-labeling brings with it a host of problems. Labeling can work only if there is a critical mass of sites, and here the greatest problem is persuading content-providers to label their sites. This became especially apparent when doing so provided business with no financial incentive and may even have represented a disincentive. The RTA tag in the US and the commercial systems operating in Japan, on the other hand, are relatively small-scale undertakings and offer participants commercial incentives. A lack of incentive is not, however, the only problem for mass-use rating and labeling systems. As the ICRA project showed, adults were unwilling to spend the time required to configure the filter and expected others to ensure the online safety of their children. Any solution deploying labeling and content-rating systems needs, it seems, to instil in parents a sense of responsibility that transcends the technical complexity of installation and set-up.98
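For comparison with the binary-descriptor sketch above, a graded check of the kind described for SafetyOnline2 can also be outlined in a few lines. The category names follow the text; the scores, limits and checking function are invented for the example and are not taken from the IAJ's actual implementation.

```python
# Illustrative check of a page's 0-4 category scores against guardian-set limits.
# Scores, limits and the function itself are invented for the example.

def within_limits(scores, limits):
    """True if every category score is at or below the guardian-set limit."""
    return all(scores.get(category, 0) <= limit
               for category, limit in limits.items())

page_scores = {"nudity": 2, "sex": 0, "violence": 1, "swearing": 3}

# A guardian might tolerate mild violence and language but no nudity at all.
parental_limits = {"nudity": 0, "sex": 0, "violence": 2, "swearing": 2}

print(within_limits(page_scores, parental_limits))  # False: nudity and swearing exceed the limits
```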


Self-Regulation and List-Filtering


Introduction
In many countries, ISPs and other ICT companies undertake to remove child pornography when notified of its presence by users, hotlines and/or law-enforcement agencies. Users may also independently regulate such content by voluntarily installing a filter-system. These systems can be set up in accordance with the wishes of the consumer and may, for example, block pornographic websites. While ISPs in France and Germany are not legally required to monitor content passing through or posted on their networks, a number of providers do offer filtering services to subscribers. In November 2005, France's AFA reached an agreement with the Government and undertook to offer a filtering service free of charge to subscribers.99 The software offers three profiles: child (enfant), where users may visit only authorized sites (liste blanche); youth (adolescent), where users may access all sites except those on a blacklist generated by a program that analyzes both French and English words; and adult (adulte), where the entire internet is accessible. Parents and guardians may also restrict the amount of time children spend on the internet.100 Politicians recognize that national legislation cannot adequately regulate the transnational character of the internet.101 There is therefore a preference, at least in Europe and the US, for self-regulation.102 The PointSmart Task Force in the USA noted in July 2009 that voluntary activity strongly supported by industry is likely to be significantly more effective than legislated or mandated solutions, and that light-touch regulation in this area is the superior approach for encouraging resource-rich companies to design progressive and innovative solutions, both now and in the future.103
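The three-profile approach just described can be illustrated schematically. The site lists, keywords and decision logic below are invented; they are intended only as a sketch of the enfant/adolescent/adulte model, not as a description of the AFA software itself.

```python
# Minimal sketch of a profile-based client-side filter. Lists, keywords and
# logic are invented for illustration.

WHITELIST = {"kids.example", "school.example"}   # liste blanche for children
BLACKLIST = {"adult.example"}                    # blacklist for teenagers
BLOCKED_KEYWORDS = {"porn", "porno"}             # simple French/English word analysis

def allow(profile, host, page_text):
    if profile == "enfant":
        # Children may visit only explicitly authorized sites.
        return host in WHITELIST
    if profile == "adolescent":
        # Teenagers may visit anything not blacklisted or keyword-flagged.
        if host in BLACKLIST:
            return False
        return not any(word in page_text.lower() for word in BLOCKED_KEYWORDS)
    return True  # "adulte": the entire internet is accessible

print(allow("enfant", "news.example", "daily headlines"))      # False: not whitelisted
print(allow("adolescent", "news.example", "daily headlines"))  # True
print(allow("adulte", "adult.example", "explicit content"))    # True
```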

6.1 EU Initiatives
In its Recommendation 98/560/EC of September 1998, On the protection of minors and human dignity, the Council of the EU called for the establishment of national frameworks and codes of conduct within member-states and for industry self-regulation initiatives to promote awareness of the dangers present on the internet.104 That document was updated in December 2006 by Recommendation 2006/952/EC, which called for greater awareness among parents and teachers of both the potential and the dangers of the internet; for the establishment of a code of conduct for use by companies, professionals, intermediaries and users; for the introduction of a quality label for ISPs indicating their adherence to the said code of conduct; and for the establishment of effective reporting tools for users. It also mooted the prospect of ISPs developing filtering systems.105 In addition to these recommendations, the European Parliament has ratified three Action Plans to improve child safety online: the Safer Internet Action Plan (SIAP), which ran from 1999 to 2004; the Safer Internet plus program of 2004-08; and the Safer Internet program of 2009-13.106 SIAP sought, first, the establishment of hotlines; second, awareness-raising; third, filtering, labeling and rating of content; and, fourth, industry self-regulation and codes of conduct.107 The first two plans saw the creation of a network of hotlines throughout Europe for public reporting of alleged illegal content. An annual Safer Internet Day was inaugurated; held in February, this is organized at national level by participating organizations (members of the Europe-wide InSafe network, which is dedicated to awareness-raising). An assessment of the efficacy of filtering programs was also carried out, and support provided for industry self-regulatory initiatives.108 The most recent Safer Internet Program, dated 16 December 2008, declared similar objectives: increasing public awareness; developing contact points where members of the public can report allegedly illegal content, especially concerning the sexual abuse of children, grooming and cyberbullying; facilitating self-regulatory initiatives; involving children in developing a safer online environment; and conducting research into trends and developments in ICT that may affect children's lives.109 The European Commission's Digital Agenda for Europe of August 2010 outlined a set of goals designed to develop ICT in the EU.110 Among these was a Trust and Security program, three objectives of which were particularly relevant to children's online safety. Action 36 defines the responsibility of member-states to support reporting of illegal online content and awareness campaigns on online safety for children, while Action 40 asserts that member-states should set up hotlines for reporting harmful content.111 The Safer Internet program has now established centers in member-states. These centers, which form part of the INHOPE network, work to raise awareness; to educate children, parents and educators in safe internet use; and, through their hotlines, to combat illegal online material.112


Action 37 of the Digital Agenda aims to encourage self-regulation in the use of online services.113 So far, the most prominent result has been the establishment, in December 2011, of the Coalition to Make the Internet a Better Place for Kids. This is a voluntary initiative, and signatory-companies include Apple, BSkyB, BT, Dailymotion, Deutsche Telekom, Facebook, France Telecom-Orange, Google, Hyves, KPN, Liberty Global, LG Electronics, Mediaset, Microsoft, Netlog, Nintendo, Nokia, Opera Software, Research In Motion (RIM), RTL Group, Samsung, Sulake, Telefonica, TeliaSonera, Telenor Group, Tuenti, Vivendi and Vodafone. Signatories undertook to adopt five measures in 2012: the introduction of simple tools for users to report harmful content; the availability of age-appropriate privacy settings; the broader adoption of content-classification for video-games; greater availability and use of parental controls; and the effective blocking of material showing violence against children. The coalition set up working-groups to study each of these five issues and undertook to work both with the European Commission and with interested parties and stakeholders individually. It also promised to carry out a progress review in summer 2012.114 The European Commission continues to push for self-regulation in Web 2.0 applications. Its Safer Social Networking Principles for the EU of February 2009 aimed at maximising the positive experience of children who use social networking technologies or who frequent social networking sites, while minimizing the potential risks to them.115 The principles include awareness-raising through educational programs aimed at users, teachers, parents and guardians; age-appropriate control mechanisms; empowering users; the development of technologies and software to ensure privacy; easy-to-use tools for reporting potentially harmful content; effective mechanisms for detecting and blocking harmful content; clear and accessible user access to privacy settings; and self-assessment procedures. Signatories include Google, Facebook, Bebo, DailyMotion, MySpace and Yahoo Europe.116 The May and August 2011 evaluations of compliance with the Safer Social Networking Principles revealed some encouraging results. The sites tested in May included Arto, Bebo, Facebook, Giovani, Hyves, IRC-Galleria, Myspace, NaszaKlasa, Netlog, One, Rate, SchuelerVZ (Vznet Netzwerke), Tuenti and Zap. Tested in August were Dailymotion, Habbo Hotel, Skyrock, Stardoll, Windows Live, Xbox Live (the online gaming platform and the console), Flickr, Yahoo! Pulse, and YouTube. The tests showed that, in all, compliance was highest with Principle One (raising awareness), and that the security information supplied by each provider during the August assessment was on the whole age-appropriate, easy to understand and easy to find.117 In August, seven services were assessed as very satisfactory for Principle Two (age-appropriate settings). Two others were assessed as rather satisfactory. In general, Principle Three (empowering users) reported the lowest levels of compliance: in May, six services were rated unsatisfactory, five rather satisfactory and only three very satisfactory. In August, however, five sites were rated very satisfactory and four rather satisfactory. In August, eight out of nine providers were rated very satisfactory for Principles

Four, Eight and Nine (relating to reporting mechanisms and provider-responses), while the remaining provider was rated rather satisfactory. In the May assessment of Principle Four, however, only three providers were rated very satisfactory, while ten were rated rather satisfactory and one unsatisfactory. No service was rated very satisfactory in May on all the principles assessed in its self-declaration, but two managed this in August. Overall, however, there were few unsatisfactory ratings.

6.2 Self-Regulatory Organizations and Codes of Conduct

Within the EU, civil society organizations and enterprises cooperate to promote self-regulation among interested parties. Codes of conduct have also been developed as self-regulatory authorities have been encouraged to formulate the conditions of service and the obligations of service-providers, content-providers and end-users. Such Codes are important from the human rights perspective since they tell end-users what they need to know about what companies will and will not do in response to the demands of the state.118 In 2001, the UK Home Office set up a Task Force on Child Protection on the Internet. Consisting of representatives of industry, child-welfare bodies, Government and law-enforcement agencies, it advises companies, parents and children on best practice, aiming at raising awareness of the dangers of the internet.119 Earlier, in 1999, 11 NGOs working to advance children's interests established the Children's Charities Coalition on Internet Safety (CHIS). Today the CHIS works at a policy level with stakeholders, including members of the ICT industry, to promote changes and improvements in child safety online. It has, for example, worked closely with the UK Council for Child Internet Safety (UKCCIS) to implement the recommendations of the 2008 Byron Review.120 The UKCCIS comprises organizations and representatives from government, industry, law-enforcement, academia and civil society.121 In January 2004, a number of phone companies, including Orange, O2, T-Mobile, Virgin Mobile and Vodafone, signed a Code of Practice for the Self-Regulation of New Forms of Content on Mobile Phones. This sought to restrict minors' access to illegal content by means of monitoring and content controls. A review carried out in 2008 and based on interviews with operators and stakeholders concluded that the Code was effective in regulating access to illegal material and a good example of industry self-regulation.122 In 2006, the UK Child Exploitation and Online Protection Centre (CEOP) was established. This initiative also brought together representatives of the Government, NGOs, law-enforcement agencies and the ICT industry.123 It aims to create a repository of knowledge (e.g., copies of blacklists) on which all stakeholders can draw so as to make the internet safer for minors. The Centre also runs an information site for children and parents, while its Intelligence and Operations Department assists in the tracking and capture of criminals.124 On 28 October 2011, four ISPs (BT, Sky,
TalkTalk and Virgin) launched a new Code of Practice to protect minors online. According to the guidelines, new customers will have to choose whether or not to activate parental controls, while existing customers will be invited to use the new service.125 In Germany, several organizations pursue self-regulation initiatives and consult the Commission for the Protection of Youth in the Media (Kommission für Jugendmedienschutz, KJM) regarding best practice among ICT companies. The KJM has recently allied with the organizations Freiwillige Selbstkontrolle der Filmwirtschaft (Voluntary Self-Regulation of the Movie Industry, FSK) and Unterhaltungs-Software Selbstkontrolle (Self-Regulation of Entertainment Software, USK) to agree on best practice among representatives of the film and computer-games industries, respectively.126 Even more significant, perhaps, is Freiwillige Selbstkontrolle Multimedia-Diensteanbieter (Voluntary Self-Regulation of Multimedia Service-Providers, FSM), founded in 1997 with the goal of preventing the proliferation of content that is illegal and/or harmful to minors. It advises online groups and companies, including ISPs, on compliance with the 2003 Inter-State Treaty on the Protection of Minors (Jugendmedienschutz-Staatsvertrag, JMStV). Google is a member, and searches on Google.de and Google.com may accordingly yield filtered results. Under the oversight of the KJM, the FSM may prosecute members and adopt various punitive measures.127 In the United States, the Internet Service Provider Association (ISPA) reached an agreement in 2006 with www.cybertipline.com whereby ISPs undertook to report instances of potentially illegal content stored on their networks. The PROTECT Our Children Act of 2008 has since made the reporting of child sexual-abuse images mandatory for ISPs. The ISPA makes no reference to content that may be harmful to minors in its founding principles, but it does stress that ISPs are not liable for content created, transmitted and kept by users.128 In Canada, the Ministry of Public Safety and Emergency Preparedness released its five-year National Strategy to Protect Children from Sexual Exploitation on the Internet in May 2004.129 This was renewed in February 2009. Three federal bodies manage the Strategy: the Ministry of Public Safety and Emergency Preparedness, Industry Canada (SchoolNet program) and the Royal Canadian Mounted Police. The Strategy aims to increase cooperation between law-enforcement bodies throughout Canada, to facilitate public reporting and awareness, and to create an alliance and consensus spanning the state and private sectors. Much effort has been made to spread information and increase cooperation between stakeholders, as well as to raise awareness among state, private and law-enforcement institutions. The National Child Exploitation Coordination Centre (NCECC) and A Canada Fit for Children seek, for example, to forge cooperation and buy-in from law-enforcement and civil-society actors. The NCECC runs an internet course on child exploitation that trains law-enforcement officers to investigate and track crimes carried out on the internet.130


The Internet Association of Japan (IAJ) is a self-regulated industry organization comprising 277 members. It was set up in 2001 with the support of the Ministry of Internal Affairs and Communications (MIC) and the Ministry of Economy, Trade and Industry (METI). The IAJ provides a code of conduct and various resources for children, parents and guardians designed to promote pro-social surfing.131 In Brazil, a multi-stakeholder approach is seen as essential to effective self-regulation. Representatives of the Government, industry, civil society and academia sit on the Brazilian Internet Steering Committee (CGI.br), formed in 1995 to advise on internet governance. Policies are debated and resolutions decided within this forum. The CGI.br sponsors organizations such as the NGO SaferNet Brasil and internet-awareness initiatives such as Brazil's annual Safer Internet Day.132 Further to this, the Federal Public Prosecutor's Office has a Task Force to Combat Cyber Crimes in São Paulo that investigates and prosecutes cases of hate crime and images of child abuse on the internet.133 This Task Force collaborates with the CGI.br, law-enforcement agencies, the legislature and NGOs in order to create consensus and a cohesive legal, technological and operational infrastructure in response to child-safety concerns. In 2005, the Task Force reached an agreement with a number of Brazilian ISPs whereby the service-providers undertook to assist state prosecutors in collecting evidence in potentially criminal cases. In July 2008, the Task Force signed an agreement with Google Brazil and SaferNet. Google declared its intention of developing filtering technologies for its social networking site Orkut; it also pledged that it would, either upon judicial orders or in some instances on its own initiative, hand over all the information in its possession concerning specific users (Google retains such information for 180 days); similarly, it undertook to remove any allegedly illegal content identified by SaferNet or other organizations and to develop a filter to remove images of child abuse.134 The Brazilian Association of Internet Service Providers (ABRANET) released a code of conduct in April 2007, the Código de Auto-Regulamentação de Operadores de Rede e de Serviços Internet (Self-Regulatory Code of Operators of Network and Internet Services). According to Chapter II, signatory ISPs undertake to host neither child pornography nor content inciting violence or prejudice on grounds of origin, race, ethnicity, gender, sexual orientation, skin colour, age or religious belief. ISPs also pledge to provide transparent services and to refrain from exploiting minors' inexperience. Under Chapter III, signatories pledge not only to take down illegal content when notified but also to monitor for content that might prove harmful, especially to minors. ISPs must also notify the authorities of child-abuse images on their servers, and inform adults and guardians about available technology for filtering out content that is inappropriate for minors.135 In India, the Internet Service Providers Association of India (ISPAI) is a voluntary initiative set up by ISPs in 1998 to promote the internet for the benefit of all. Its code of conduct specifies that its members are obliged to observe Indian laws and will not knowingly permit any user or fellow-member to engage in any illegal activity. They will also not openly encourage anything that is in any way unlawful (Section 5.1).136


The Indonesian Internet Service Providers Association (IISPA) was created in 1996 to develop the ICT market in Indonesia.137 In late 2010 and early 2011, the IISPA collaborated with civil society organizations and media in an Initiative for Content and Community Collaboration. This initiative developed out of concern over increasing government regulation of online activity and content, and was designed to create a forum on freedom of expression and internet censorship. Planned outputs include a code of ethics, a website portal and workshops.138

In South Africa, Point 17 of the code of conduct of the Internet Service Providers Association (SAISPA) stipulates that: [M]embers must take reasonable steps to ensure that they do not offer paid content subscription services to minors without written permission from a parent or guardian. Point 18 states: [M]embers must provide customers with internet access with information about procedures and software applications which can be used to assist in the control and monitoring of minors' access to internet content. This requirement does not apply to corporate customers where no minors have internet access.139 Section 24 of the code of conduct goes on to state that, if a member becomes aware of the presence of illegal content on its server, it must terminate the relevant customer's service and report the case to the relevant law-enforcement agency within a reasonable period of time. This conforms to Section 77(1) of South Africa's Electronic Communications and Transactions Act (ECT) of 2002, which states that users may request ISPs to take down unlawful content. SAISPA offers a service for this purpose.140 All SAISPA members undertake to abide by the ECT. Section 78 of the ECT notes that ISPs are not obliged actively to monitor their servers. They also have immunity relating to illegal content stored on their servers until such time as its illegality is established.141 In China, by contrast, it is unclear where the state's influence ends and industry self-regulation begins. It is perhaps more accurate to write of joint-regulation. For instance, the Internet Society of China (ISC) was created in 2001 with the support of the Chinese Government, which appears to retain a significant stake in the ISC through various ministries. Officially at least, the ISC comprises over 170 companies, including telecoms businesses, ISPs, hardware suppliers and academics. It seeks to foster industrial self-discipline by promoting the development of the internet by means of political consultations and popular-awareness campaigns.142 One ISC document governing participating ICT businesses is the Public Pledge of Self-Regulation and Professional Ethics for the Chinese Internet Industry of March 2004.143 According to Article 9, participating ISPs are bound to:

Refrain from producing, posting or disseminating pernicious information that may jeopardize state security and disrupt social stability, contravene laws and regulations, and spread superstition and obscenity. Monitor the conformity with the law of information published by users on websites, and promptly remove harmful information.


Article 10 stipulates that ISPs undertake to monitor both domestic and foreign websites, and to block access to URLs that disseminate harmful information. 144 The ISC has also published Provisions of Self-Regulation on Not Spreading Pornographic and Other Harmful Information by Means of Internet Websites, a Public Pledge of Self-Regulation on [] Malicious Software, a Public Pledge of Self-Regulation on Blog Services, a Public Pledge of Self-Regulation Regarding [] Internet Viruses, and a Declaration of Self-Regulation on Copyright Protection [in] Chinas Internet Industry. 145


Filter Systems and Problems Regarding Civil Liberties


Allegations that filter systems violate users' civil liberties have been meticulously documented by commentators.146 They are based mainly on the charge that filtering violates international human rights legislation, notably Articles 19 and 20 of the International Covenant on Civil and Political Rights (ICCPR)147 and Article 19 of the Universal Declaration of Human Rights (UDHR), which states that: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."148

Article 10 of the European Convention on Human Rights (ECHR) also affirms the right to freedom of expression without interference by public authority and regardless of frontiers. However, the second paragraph of Article 10 goes on to qualify that right by stating that "The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society."149 While there is near-consensus on the desirability of blocking child pornography, there are, as we have seen, different cultural nuances among the European states. For example, Germany's Jugendschutz.net recommends blocking right-wing extremist content.150 Despite enthusiasm for the idea of blocking in the early 2000s, in the EU filtering and/or blocking is today considered a dubious method of countering illegal online content, and an even less effective way of defending minors. Recommendation CM/Rec(2008)6 of the Committee of Ministers to Member States on Measures to Promote the Respect for Freedom of Expression and Information with Regard to Internet Filters noted that "the voluntary and responsible use of internet filters can promote confidence and security on the Internet for users, in particular children and young

people, while [we are] also aware that the use of such filters can impact on the right to freedom of expression and information, as protected by Article 10 of the ECHR."151 States should be aware that denying access to public information by means of general blocking or filtering systems may constitute a violation of Article 10 of the ECHR, unless there is an established consultative process to ensure compliance with the provisions of the Article's second paragraph. In protecting minors from harmful content, adults' access should not be blocked.152 Recommendation CM/Rec(2009)5 of the Committee of Ministers to Member States on Measures to Protect Children Against Harmful Content and Behaviour and to Promote Their Active Participation in the New Information and Communications Environment of July 2009 again noted that blocking websites in order to provide online safety for minors runs the risk of contravening Article 10 of the ECHR.153 Article 19.3 of the ICCPR allows for the imposition of restrictions on the right to freedom of expression when, and only when, these are provided by law and are necessary either for respect of the rights or reputations of others, or for the protection of national security or of public order, or of public health or morals.154 Interpretation of these provisions may be considered subjective and, in some countries, one may speak of a public mandate to block certain types of content.155 The blocking of pornography in Indonesia is one example, while in Turkey the hotline has received thousands of complaints about contraventions of its Law No 5651. It is however necessary to be very cautious about both examples, especially in light of recent civil society uproar over threats by the Indonesian Government to introduce statutory blocking, and the massive protests that took place in Istanbul in 2011 against Turkey's new filtering regime.156 The situations in Indonesia and Turkey may in fact reflect the tyranny of the most active and powerful minority groups of the population, with the support of the state. Perhaps more tenable under the provisions of the ICCPR is the removal by the Indian Government of the websites of political extremists on grounds of concern for national security. But even in India, the ICT climate is uncertain, since Google, Facebook, Yahoo! and 18 other companies are currently facing a lawsuit in New Delhi that demands that they screen all content before it is posted online. This would require extensive and expensive human monitoring, which would not only affect freedom of expression but could also impede ICT growth in India.157 The use of vague terminology to describe illegal online content may also contribute to state censorship and present a legitimate human rights concern. China offers a striking example of loose definitions of what is or is not legal, where almost any content-filtering may be considered legal.158 Turkey's Law No 5651 can be widely applied, as can be seen from the number of websites blocked under its provisions. The Turkish Telecommunications Directorate (TİB) has not published statistics on the number of blocked websites since May 2009, when it revealed that 2,601 sites had been blocked under Law No 5651, an increase of 433 over May 2008.159 The Organization for Security and Co-operation in Europe (OSCE) put the number at 3,700 websites by December 2009,160 while Freedom House estimated that some


5,000 had been blocked by mid-2010.161 And, according to one independent observer (Engelliweb.com), the TİB blocked 3,027 websites with a single decision in January 2011 and a further 2,735 in April of that year. Other websites have since been added to the blacklist.162 The relationship between the number of sites blocked and child safety is unknown, and it seems possible that the numbers indicate over-blocking and an infraction of international human rights accords.163 There is clearly tension between international human rights law and the pursuit of online safety through filtering.164 Marie Eneman of the University of Gothenburg argues that a balance needs to be struck between civil liberties and the protection of children from abuse, given the right of the child not to be sexually exploited or abused.165 This position is almost universally accepted.166 There is no reason why this equilibrium cannot be maintained if states and ICT companies seeking to filter content seek legal advice on compliance with international human rights law and seek to foster openness, transparency and accountability. One example of a collaborative endeavour among companies such as Google, Microsoft and Yahoo!, NGOs and academics is the Global Network Initiative (GNI). Established in 2008, GNI seeks to evaluate human-rights risks in the ICT sector while protecting and advancing freedom of expression and privacy.167 Dawn Nunziato of George Washington University has identified three steps to legitimize any form of filtering. First, there must be clear and precise definitions of the speech to be regulated; second, filtering must operate in a transparent manner, with users and content-providers kept informed about which pages have been blocked and why; third, the operators of the filtering system must provide users and/or content-providers with the opportunity to appeal against any blocking decisions to a judicial body.168 As Thomas McIntyre has argued: "There is a very real risk that by promoting blocking, the constitutional values associated with freedom of expression and privacy of communications may be sacrificed, and worse, may be sacrificed for systems which are ineffective for their stated goal."169 Other drawbacks have also been ascribed to filtering systems. There is, for example, a possibility of mission creep whereby, once blocking of child-abuse images is up and running, censorship will extend to other content, such as suspected copyright violations, racism and other harmful material. In the US and Canada, ISPs are required to investigate (but not to monitor for) cases of suspected child pornography.170


Conclusions and Recommendations


No-one disputes the need to make the internet safer for children, protecting society's most vulnerable members from threats to their physical and mental health. Ever since the internet made it possible quickly to exchange text and images, lawmakers, civil society activists and representatives of the internet industry have been concerned about how to provide effective protection. Over the past 20 years, many attempts have been made to enhance children's safety with the help of legislation, self-regulatory initiatives and educational programs. Some of these efforts have been quite successful; many others, for various reasons, have failed. While not questioning the need to find effective solutions in Russia itself, the present report aims to help Russian stakeholders draw useful conclusions from the quagmire of international experience.

Five Conclusions from International Experience


1. Many studies conclude that national legislation alone cannot effectively regulate the internet. The internet is a decentralized and international system, outside the confines of any one state. Governments are well advised, therefore, to shape their policy in line with international standards and conventions.171 Russia has yet to ratify the Optional Protocol to the UN Convention on the Rights of the Child on the Sale of Children, Child Prostitution and Child Pornography of 2000, while Article 242 of the Russian Penal Code does not provide adequate protection for minors.172 In fact, the possession of child pornography (without intent to disseminate, advertise or trade) remains legal. 2. The failures of PICS in North America and of ICRA in Europe highlight some of the difficulties Russia may face in implementing its own rating and labeling system. Technicalities aside, a successful ratings system has at least three prerequisites: adequate funding, dynamic supervision, and a way of incentivizing content-providers to label their sites. Russian Federal Law No 139-FZ promises to satisfy the first and second of these demands. But the third and arguably most important of these prerequisites could misfire, as experience in North America and Europe suggests. If compulsion, and the corresponding threat of sanctions, are all that is relied upon, this may both undermine civil liberties and have a serious impact on the economy. Innovation is likely to prove particularly vulnerable.173 The Russian Association of Electronic Communications (RAEK), which represents more than 60 ICT companies,

has already flagged up potential problems in connection with the new system of ratings and labeling. RAEK has drawn attention in particular to the technical impossibility of labeling all content in Russia (let alone abroad), the inability of ISPs to moderate content dynamically and, if content is subject to monitoring before being uploaded, the threat of censorship.174

3. Federal Law No 139-FZ paves the way for filtering at ISP-level.175 As we have seen, no filtering technique is completely effective and all may be circumvented. The most effective technique, it seems, is deep-packet filtering, but this is also the most expensive. In Russia, the Moscow City Telephone Network has announced that it will introduce deep-packet filtering in schools.176 But there are further costs to ISP-level filtering. The most obvious is the potential violation of international human rights conventions, notably regarding freedom of expression. While the filtering of child pornography is not generally seen as transgressing freedom of expression, the state-mandated blocking of other forms of content must satisfy stringent criteria.177 It is doubtful whether there is a necessity for filtering in the Russian case (in line with the wording of Paragraph 3 of Article 19 of the International Covenant on Civil and Political Rights), while the broad definition of harmful content contained in Law No 139-FZ could potentially place Russia in the same class as filtering regimes such as China and Turkey. The vague definitions in Law No 139-FZ are problematic and include the classifications vulgar language and denigrating family values. To use an example from English: is the word darn vulgar? Or damn? Some people may believe so, others not.

4. The costs of filtering are not restricted to setting the system up and using the relevant software and computing power, though the latter are not cheap. We estimate that, in Russia, installing and beginning to run an automated filter system could cost as much as $500 million. Without additional investment in network infrastructure, such a system is likely to reduce the speed of data-transfer by as much as 20%; this in turn could result in a fall in GDP of 2-5%. Manual filtering entails lower direct and indirect costs, but even these cannot be ignored. A blocking system for IP addresses is cheaper to set up and to manage (about $600,000 for the first year and $300,000 a year thereafter for each major ISP), but essentially it shifts the financial burden onto the owners of unblocked sites, which are estimated to outnumber blocked sites by a ratio of 1,000 to one.178 Blocking by URL address is more accurate but technically more difficult, while the creation and running of such a system for ISPs is about 10 times more expensive than a system that blocks IP addresses (a simple projection based on these figures appears below).179

5. Manual filtering (or listing) is effective at blocking the sites actually listed, but does little for the overall protection of children. The best-known such system, that run by the UK's Internet Watch Foundation, receives, according to its annual report, some 40,000 blocking requests a year. For the past four years, it has blocked some 1,500 domains annually, which suggests that the scale of the problem is not decreasing.
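For orientation, the figures quoted in point 4 can be turned into a rough multi-year projection. The calculation below simply applies the report's own estimates (about $600,000 in the first year and $300,000 a year thereafter per major ISP for IP-address blocking, with URL blocking taken as roughly ten times more expensive); the assumption of five major ISPs is ours and purely illustrative.

```python
# Back-of-the-envelope projection using the estimates quoted in point 4 above.
# The number of major ISPs is an illustrative assumption, not a report figure.

def ip_blocking_cost(years, isps=1):
    # ~$600,000 in year one, ~$300,000 per subsequent year, per major ISP.
    return isps * (600_000 + 300_000 * (years - 1))

def url_blocking_cost(years, isps=1):
    # URL blocking taken as roughly ten times the cost of IP blocking.
    return 10 * ip_blocking_cost(years, isps)

for years in (1, 3, 5):
    print(f"{years} yr, 5 major ISPs: "
          f"IP about ${ip_blocking_cost(years, 5):,.0f}, "
          f"URL about ${url_blocking_cost(years, 5):,.0f}")
```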


Importance of Digital Literacy


As shown by studies throughout the world, the protection of minors on the internet is best ensured not by concentrating power in the hands of government censors or other state-mandated authorities, but by collective responsibility. In other words, responsibility should be jointly shared by all stakeholders.180 In her 2008 review Safer Children in a Digital World, British psychologist Tanya Byron recommended forging a "culture of responsibility", a collaborative approach where stakeholders would work together to help children keep themselves safe, to help parents to keep their children safe, and to help one another support children and parents in this task. This would be facilitated by a national strategy for child internet safety involving better self-regulation and better provision of information and education for children and families.181 Byron's 2010 follow-up progress review found collaborative endeavors in the UK under the aegis of the UK Council for Child Internet Safety (UKCCIS) to be on the right track. Awareness-raising and education campaigns to increase digital literacy, in particular, had taken an all-important first step, though there was much more to be done.182 Reports published in the US in 2008 and 2009 also promoted a collective approach. The PointSmart.ClickSafe Task Force and the Internet Safety Technical Task Force (ISTTF) included representatives of industry, civil society, education, public health, academia and the technology sector. The ISTTF stressed the importance of collaboration and innovation: "Members of the internet community, including social network sites, should continue to develop and incorporate a range of technologies as part of their strategy to protect minors from harm online. They should consult closely with child safety experts, mental health experts, technologists, public policy advocates, law enforcement, and one another as they do so. But they should not overly rely upon any single technology or group of technologies as the primary solution to protecting minors online. Just as there is no single solution to protecting minors online, any technological approach must be appropriately tailored for the context in which it operates, given the wide range of services on the internet. Parents, teachers, mentors, social services, law enforcement, and minors themselves all have crucial roles to play in ensuring online safety for all minors, and no one's responsibility to help solve the problem should be undervalued or abdicated."183 There is no substitute for the promotion of digital literacy among children, since it will help them to deal more effectively with online risk. In August 2011, President Medvedev railed against "stupid prohibitions" on the internet in favour of educating children in online safety.184 Schools should take responsibility for the education and support of children and for enhancing digital literacy.185 Parents should develop their own skills and engage with their children on- and offline, while embracing the range of internet control mechanisms available to them.186 Yaman Akdeniz advocates an increased role for parents, educators and other figures of authority in instructing minors in the benefits and dangers of the internet. In his view, adults should depend neither on technology, filters in particular, nor on industry self-regulation.187


Akdeniz writes: "Governments and regulators should invest more in educational and awareness campaigns rather than promoting ineffective rating and filtering tools which only create a false sense of security for parents and teachers, while children quickly manage to find any loopholes. The advice to be given to concerned users, and especially parents, would be to educate your children rather than placing your trust in technology or in an industry that believes it can do a better job of protecting children than parents. The message is to be responsible parents, not censors."188 Civil society organizations, for their part, should work with schools, parents and society at large to increase computer literacy.189 There are resources for this project in Russia, as a number of organizations and companies are seeking to encourage schools and parents to get involved in children's online safety. Google offers parents tips about online safety; the Kids Online project offers online advice and counselling to parents and children; while other projects such as the Russian Children's Foundation and Smeshariki aim to increase digital literacy and foster awareness of the risks of the internet.190 NGOs should collaborate with academics and government researchers to address the considerable dearth of research. We need to know more about children's needs and the best methods of mitigating the consequences of upsetting and harmful experiences online.191


Self-Regulation: Advantages and Disadvantages


Industry self-regulation is, as we have seen, widely practiced around the globe, and it assumes a pivotal role in any multi-stakeholder approach to child safety online.192 It may assume various forms, including codes of conduct, hotlines, filters and classification systems. In Russia, the RAEK advocates self-regulation.193 Igor Shchegolev, former Minister for Telecommunications and Mass Media, has also expressed a preference for internet self-regulation.194 The foundations exist. There are, for instance, self-regulatory agreements and at least two hotlines for Russian users to file complaints about illegal internet content (http://old.saferunet.ru/hotline/content.php/ and http://hotline.friendlyrunet.ru/?l=ru), though the former offers a particularly broad definition of illegal content that extends even further than the categories set forth in Russian law.195 There are, nonetheless, issues with industry self-regulation, and it may be that there is also a need for nationwide principles and even a revision of legal concepts regarding self-regulation.196 Codes of conduct, to take one example, will not work unless they are accompanied by reporting and monitoring protocols to assess compliance.197 Herein lie potential problems of accountability and enforcement. Even if reporting is efficient and thorough, there is often little tangible punishment for companies that fail to deliver on principles laid out in self-regulatory codes or


agreements.198 Another potential complication is that ICT businesses are bound to consider the interests of shareholders who invest for profit. In these circumstances, human rights issues may be pushed to one side.199 Privatized filtering systems, for example, may present a problem if ISPs over-block simply to avoid legal liability.200 This has been the case in Turkey and Indonesia, as we have seen. In China, ISPs (Google included) censor content.201 If companies comply with state demands, civil liberties will suffer. Again, ISPs have signed a pledge with the Chinese Government in order to secure market access.202 There is a need for corporate responsibility. The disadvantages of self-regulation can be offset by transparency and accountability. Google's semi-annual Transparency Report is an example of such responsibility. In it, the company publishes information about demands it has received from governments to remove specific pieces of content, and whether or not it has met them.203 The industry's activities should be transparent, and companies should use agreements and tools such as the EU's Safer Social Networking Principles to develop age-appropriate privacy settings, easy-to-use parental tools, safety controls and reporting mechanisms. As in Germany, Google Russia and Yandex are offering a safe (family) search facility that filters results; Google already offers this all over the world. The YouTube portal also has a safe search engine in Russia, as in other countries.204 Eventually, if ICT companies comply with the principles set out in codes such as the Global Network Initiative (GNI), countries that engage in filtering may find themselves beginning to suffer in terms of slower development and loss of foreign investment. Meanwhile, and despite the fact that the GNI is hampered by low participation, the demands of the Chinese Government may be seen almost as a crude form of economic protectionism.205 The state continues to play a prominent role in any attempts at self-regulation. The EU Kids Online survey recommends that, while industry self-regulation is the key to internet safety, the state should also play a strong controlling role in order to ensure that self-regulation is inclusive, effective and accountable. These concerns have been addressed above. In addition, the state should not be content solely to guide industry self-regulation: governments should promote online safety by providing youth- and child-welfare services with information, encouragement, resources and recognition.206 In such awareness-raising efforts, the EU Kids Online report urges all stakeholders to pay attention to the opinion of children themselves, especially those from vulnerable groups.


Next steps
The Russian Government has taken a piecemeal approach to regulating the internet. Law No 139-FZ approaches harmful internet content in a way that may potentially undermine ICT development, civil liberties and computer literacy. International best practice suggests, rather, that what is needed is a holistic approach to child safety that builds on the participation of all stakeholders, starting with state institutions but including industry, NGOs, parents and, last but not least, children themselves. There is also a clear need for a framework law on the internet that would define the principles underlying the state's role in relation to the electronic media. These should include the supremacy of the constitutional and international principles of freedom of expression and access to information; they should also codify the principle of internet neutrality. Finally, they should adhere to the principle of the "sharpest blade", meaning that any decision by a state or government body to block access to an internet resource must be finely tuned so as to do as little collateral damage as possible.


Notes

1 The law can be found at http://www.rg.ru/2012/07/30/zakon-dok.html (accessed 20 Nov 2012).
2 See http://graph.document.kremlin.ru/page.aspx?1;1538732 or http://www.rg.ru/2010/12/31/detiinform-dok.html (accessed 28 Aug 2012).
3 The European Commission's Communication on Illegal and Harmful Content of October 1996 distinguished between children accessing pornographic content for adults, and adults accessing pornography about children, while noting that both require nuanced legal and technical responses; Decision No 276/1999/EC of the European Parliament and of the Council Adopting a Multiannual Community Action Plan on Promoting Safer Use of the Internet by Combating Illegal and Harmful Content on Global Networks (accessed 20 Nov 2012 at http://merlin.obs.coe.int/iris/1996/10/article3.en.html). See also: C. Walker and Y. Akdeniz, The Governance of the Internet in Europe with Special Reference to Illegal and Harmful Content, Criminal Law Review, December 1998, pp. 5-19.
4 Y. Akdeniz, Who Watches the Watchmen? The Role of Filtering Software in Internet Content Regulation, in OSCE Representative on Freedom of the Media (ed.), The Media Freedom Internet Cookbook (Vienna, 2004), pp. 101-25; J. Iannotta, Nontechnical strategies to reduce children's exposure to inappropriate material on the internet. Summary of a Workshop (Washington, DC, 2001); M. Valcke, B. De Wever, H. Van Keer and T. Schellens, Long-term study of safe Internet use of young children, Computers and Education, 57 (2011), p. 1294.
5 D. Boyd and H. Jenkins, MySpace and Deleting Online Predators Act (DOPA), MIT Tech Talk, May 2006 (accessed 5 Jan 2012 at http://www.danah.org/papers/MySpaceDOPA.html); J. Wolak, D. Finkelhor, K. Mitchell, M. Ybarra, Online Predators and Their Victims, American Psychologist, 63/2 (2008), pp. 111-28; US Department of Health and Human Services, Fourth National Incidence Study of Child Abuse and Neglect (NIS-4): Report to Congress, 2010 (accessed 8 Jan 2012 at http://www.acf.hhs.gov/programs/opre/abuse_neglect/natl_incid/nis4_report_congress_full_pdf_jan2010.pdf); S. Wastler, The harm in sexting? Analyzing the constitutionality of child pornography statutes that prohibit the voluntary production, possession, and dissemination of sexually explicit images by teenagers, Harvard Journal of Law and Gender, 33/2 (2010), pp. 687-702 (accessed 8 Jan 2012 at http://www.law.harvard.edu/students/orgs/jlg/vol332/687-702.pdf).
6 F. Lazarinis, Online risks obstructing safe internet access for students, The Electronic Library, 28/1 (2010), pp. 157-70 (especially pp. 160-4); U. Gasser, C. Maclay and J. Palfrey, Working Towards a Deeper Understanding of Digital Safety for Children and Young People in Developing Nations, Berkman Center for Internet and Society at Harvard University/UNICEF, 16 June 2010, pp. 16-25 (accessed 16 Dec 2011 at http://cyber.law.harvard.edu/sites/cyber.law.harvard.edu/files/Gasser_Maclay_Palfrey_Digital_Safety_Developing_Nations_Jun2010.pdf).
7 M. Valcke, T. Schellens, H. Van Keer and M. Gerarts, Primary school children's safe and unsafe use of the Internet at home and at school, Computers in Human Behavior, 23 (2007), p. 2844.
8 S. Livingstone and M. Bober, UK Children Go Online: Surveying the Experiences of Young People and Their Parents, London, 2005 (accessed 19 Nov 2012 at http://www.lse.ac.uk/collections/children-go-online/UKCGO_Final_report.pdf). See also: S. Livingstone, M. Bober and E.J. Helsper, Internet Literacy Among Children and Young People: Findings From the UK Children Go Online Project, London, 2005 (accessed 27 Dec 2011 at http://eprints.lse.ac.uk/397/1/UKCGOonlineLiteracy.pdf). Livingstone, Bober and Helsper (op. cit.) note that many of their respondents (1,257 individuals aged 9-19 in the UK) said they had viewed pornographic material deliberately (p. 21).
9 Yahoo! 2011 Online Safety Survey, October 2011 (accessed 28 Dec 2011 at http://epsolution.zenfs.com/wpprod/14/2011/10/Yahoo-2011-Online-Safety-Report_short-version.pdf).
10 Norton Online Family Report, 2010, pp. 3, 8 (accessed 2 Dec 2011 at http://us.norton.com/theme.jsp?themeid=norton_online_family_report&inid=us_hho_downloads_home_link_nofreport). Negative experiences include: the respondent's computer becoming infected with a virus; offline meetings; viewing pornographic images; and a stranger trying to befriend the respondent through a social network.
11 S. Livingstone, L. Haddon, A. Görzig and K. Olafsson, EU Kids Online, September 2011, p. 22 (accessed 5 Dec 2011 at www2.lse.ac.uk/media@lse/.../EU_Kids_FinalReport_Sept11.pdf).
12 Ibid., pp. 28-9.
13 M. Valcke et al. (op. cit., 2007), p. 2839. See also Valcke et al. (op. cit., 2011), p. 1298.
14 State of Online Safety Report, Family Online Safety Institute and Global Resource and Information Directory, 2011, p. 56 (accessed 3 Nov 2012 at http://www.fosi.org/images/stories/resources/State-of-Online-Safety-Report2011-Edition.pdf).
15 http://www.npc.gov.cn/englishnpc/Law/2007-12/12/content_1383869.htm and http://www.china.org.cn/english/government/207411.htm (accessed 8 Dec 2011).
16 L. Yuxiao, The course of building the legal system for the internet in China, China.org.cn, 8 November 2010 (accessed 8 Dec 2011 at http://www.china.org.cn/business/2010internetforum/2010-11/08/content_21296546.htm).
17 http://www.icclr.law.ubc.ca/china_ccprcp/files/Presentations%20and%20Publications/47%20The%20Current%20Situation%20of%20Cybercrime%20in%20China_English.pdf (accessed 8 Dec 2011).
18 http://www.cecc.gov/pages/virtualAcad/index.phpd?showsingle=24396 (accessed 19 Nov 2012).
19 http://www.npc.gov.cn/englishnpc/Law/2007-12/13/content_1384075.htm (accessed 8 Dec 2011).
20 http://ciirc.china.cn/about/txt/2006-06/08/content_124005.htm (accessed 8 Dec 2011).
21 State Council Information Office, The Internet in China, 8 June 2010, http://www.china.org.cn/government/whitepaper/node_7093508.htm (accessed 8 Dec 2011). See in particular: Basic Principles and Practices of Internet Administration, http://www.china.org.cn/government/whitepaper/2010-06/08/content_20207983.htm (accessed 8 Dec 2011).
22 Ibid.
23 On the TIB, see OpenNet Initiative, Turkey 2010, pp. 344-5 (accessed 11 Oct 2011 at http://opennet.net/sites/opennet.net/files/ONI_Turkey_2010.pdf); Y. Akdeniz, Report of the OSCE Representative on Freedom of the Media on Turkey and Internet Censorship, 2010, pp. 9-10 (accessed 27 Sep 2011 at www.osce.org/fom/41091).
24 http://www.wipo.int/wipolex/en/details.jsp?id=11035 (accessed 19 Nov 2012).
25 Reporters Without Borders, Online censorship now bordering on the ridiculous in Turkey, 29 April 2011 (accessed 11 Oct 2011 at http://en.rsf.org/turkey-online-censorship-now-bordering-on-29-04-2011,40194.html); Internet filtering and changes to press law further limit media freedom in Turkey, says OSCE media freedom representative, 17 May 2011 (accessed 27 Sep 2011 at http://www.osce.org/fom/77587); M. Celikkafa and S.A. Diehn, Internet filter in Turkey sparks fears of censorship, Deutsche Welle, 21 November 2011; S. Ayhan, There is no internet censorship; however one million websites are banned, Milliyet, 23 May 2011 (accessed 22 Nov 2011 at http://cyberlaw.org.uk/category/turkey/).
26 Erdem & Erdem, Key notes on the legal developments of July 2010 (accessed 22 Nov 2011 at http://www.erdem-erdem.av.tr/erdem-erdem.php?katid=12110&id=14492&main_kat=14499).
27 Turkish Weekly, Internet in Turkey: Safe or Censored? (accessed 5 Jan 2012 at http://www.turkishweekly.net/news/129108/internet-in-turkey-safe-or-censored-.html).
28 http://ec.europa.eu/anti-trafficking/download.action;jsessionid=GN13Qr8NhB4GR6QHGJ20j7Pd1MfJpsJ0lvRyxdHvgnTTsh6psCmg!1807294630?nodeId=4399df87-9d24-41de-adae-a9d186d07e19&fileName=Decision+2000_375+on+child+pornography_en.pdf&fileType=pdf (accessed 20 Nov 2012).
29 http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2004:013:0044:0048:EN:PDF (accessed 20 Nov 2012).
30 http://www.ispai.ie/docs/cepdir.pdf (accessed 6 Dec 2011). On the final compromise text, see http://www.statewatch.org/news/2011/jun/eu-council-sexual-exploitation-1st-reading-analysis-11987-11.pdf (accessed 6 Dec 2011).
31 http://europa.eu/rapid/press-release_IP-11-1255_en.htm (accessed 6 Dec 2011).
32 http://www.statewatch.org/news/2011/jun/eu-council-sexual-exploitation-1st-reading-analysis-11987-11.pdf (accessed 6 Dec 2011). Murdoch & Anderson, p. 66.
33 http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2000:178:0001:0001:EN:PDF (accessed 6 Dec 2011).
34 S.J. Murdoch and R. Anderson, Tools and Technology of Internet Filtering, in R. Deibert et al. (eds), Access Denied: The Practice and Policy of Global Internet Filtering (Cambridge, MA: MIT Press, 2008), p. 59ff; V. Varadharajan, Internet Filtering Issues and Challenges, IEEE Security and Privacy (July/August 2010), pp. 62-3.
35 J. Zittrain and J. Palfrey, Reluctant Gatekeepers: Corporate Ethics on a Filtered Internet, in R. Deibert et al. (eds), Access Denied, p. 37 (accessed 20 Nov 2012 at http://access.opennet.net/wp-content/uploads/2011/12/accessdenied-chapter-5.pdf); Murdoch and Anderson, op. cit., p. 60ff; Varadharajan, op. cit., p. 63.
36 R. Faris and N. Villeneuve, Measuring Global Internet Filtering, in R. Deibert et al. (eds), Access Denied, p. 14.
37 Varadharajan, op. cit., p. 63.
38 Murdoch and Anderson, op. cit., p. 66.
39 J. Palfrey, Four phases of Internet Regulation, 2010, pp. 10-1 (accessed 4 Dec 2011 at http://www.law.harvard.edu/faculty/faculty-workshops/palfrey.faculty.workshop.summer.2010.pdf); J. Varon Ferraz, C. Affonso Sousa, B. Magrani, W. Britto, Content Filtering in Latin America: Reasons and Impacts on Freedom of Expression, working paper manuscript, 2011, pp. 2, 45, 47-9.
40 T.J. McIntyre, Child abuse images and Cleanfeeds: assessing Internet blocking systems, p. 8 (accessed 3 Jan 2012 at http://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID1893667_code767856.pdf?abstractid=1893667&mirid=1); Y. Akdeniz, To Block or Not to Block: European Approaches to Content Regulation, and Implications for Freedom of Expression, Computer Law and Security Review, 26/3 (2010), p. 270.
41 Faris and Villeneuve, op. cit., p. 15; Zittrain and Palfrey, op. cit., pp. 36-7, 48; S. Murdoch, R. Clayton and R.N.M. Watson, Ignoring the Great Firewall of China, I/S: A Journal of Law and Policy for the Information Society, 3/2 (2007), pp. 272-96; Varadharajan, op. cit., p. 63.
42 Ibid., pp. 63-4.
43 Palfrey (op. cit., 2010), p. 11; Zittrain and Palfrey, op. cit., pp. 33-4; Murdoch and Anderson, op. cit., p. 68.
44 Ibid., pp. 67-8; Esselaar, What ISPs can do about undesirable content, Internet Service Providers Association, South Africa (ISPA), May 2008, p. 17ff (accessed 8 Nov 2011 at http://old.ispa.org.za/files/ISP_undesirable_content.pdf); Akdeniz (op. cit., 2004), p. 114; Akdeniz (op. cit., 2010b), pp. 263, 269; Palfrey (op. cit., 2010), pp. 11-2; McIntyre, op. cit., p. 20; M. Eneman, The New Face of Child Pornography, in A. Murray and M. Klang (eds), Human Rights in the Digital Age (London, 2005), pp. 231-2; Zittrain and Palfrey, op. cit., p. 34; Ofcom, Site Blocking to reduce online copyright infringement, 27 May 2011, pp. 5-6, 8 (accessed 12 Dec 2011 at http://stakeholders.ofcom.org.uk/binaries/internet/site-blocking.pdf).
45 Murray, op. cit., 2007.
46 For information, see http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+WQ+E-20108802+0+DOC+XML+V0//EN (accessed 20 Nov 2012).
47 For information, see http://wiki.openrightsgroup.org/wiki/Cleanfeed (accessed 20 Nov 2012). Cleanfeed uses hybrid technology that checks the destination port/IP address; if this is found to be suspicious, the system will direct the traffic to a web-proxy that analyses HTTP requests. It suffers less from over-blocking because the proxy can be as selective as required (Varadharajan, op. cit., p. 64).
48 McIntyre, op. cit., p. 12; Murdoch and Anderson, op. cit., p. 68.
49 Ofcom, op. cit., p. 39ff.
50 D. Bambauer, Cybersieves, Duke Law Journal, 59/3 (2009), p. 477; D.C. Nunziato, How (not) to censor: Procedural first amendment values and Internet censorship worldwide, Georgetown Journal of International Law, 42 (2011), pp. 1136-41, 1149, 1154, 1156; S. Stalla-Bourdillon, Chilling ISPs - When private regulators act without adequate public framework, Computer Law and Security Review, 26 (2010), p. 290. However, as Faris, Wang and Palfrey note, there is an inherent trade-off to transparent filtering: Transparent government regulation that follows well-defined legal procedures is offset by reduced effectiveness in preventing users from being aware of and accessing sensitive content online. R. Faris, S. Wang and J. Palfrey, Censorship 2.0, Innovations, Spring 2008, p. 165 (quotation: p. 177).
51 C. Marsden, Net Neutrality: Towards a Co-Regulatory Solution, London, 2010 (accessed 12 Dec 2011 at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1533428&download=yes); C. Ahlert, C. Marsden and C. Yung, How Liberty Disappeared from Cyberspace: The Mystery Shopper Tests Internet Content Self-Regulation, 2004, p. 2 (accessed 11 Dec 2011 at http://pcmlp.socleg.ox.ac.uk/sites/pcmlp.socleg.ox.ac.uk/files/liberty.pdf).
52 McIntyre, op. cit., p. 15.
53 There is a trade-off here, however. Two researchers note that if blocking were enshrined by law, it would become more, rather than less, pervasive: ibid., p. 14; M.L. Mueller, Networks and States: The Global Politics of Internet Governance (Cambridge, MA: MIT Press, 2010).
54 Akdeniz (op. cit., 2010b), pp. 265-6; Nunziato, op. cit., p. 1155; Stalla-Bourdillon, op. cit., p. 290; R. Deibert and N. Villeneuve, Firewalls and Power: An Overview of Global State Censorship of the Internet, in A. Murray and M. Klang (eds), Human Rights in the Digital Age, pp. 123-4.
55 McIntyre, op. cit., pp. 13-4.
56 Akdeniz (op. cit., 2010b), p. 270.
57 C. Hunter, Internet Filter Effectiveness: Testing Over- and Underinclusive Blocking Decisions of Four Popular Filters, Social Science Computer Review, 18/2 (2000), pp. 214-22; K. Mitchell, D. Finkelhor and J. Wolak, The Exposure of Youth to Unwanted Sexual Material on the Internet: A National Survey of Risk, Impact and Prevention, Youth and Society, 34/3 (2003), pp. 330-58; P. Greenfield, P. Rickwood and H. Tran, Effectiveness of Internet Filtering Software Products, CSIRO Mathematical and Information Sciences, 2001 (accessed 7 Jan 2012 at http://pandora.nla.gov.au/pan/53049/20051005-0000/www.aba.gov.au/newspubs/documents/filtereffectiveness.pdf).
58 P.B. Stark, The Effectiveness of Internet Content Filters, 2007, pp. 12-4 (accessed 9 Jan 2012 at http://statistics.berkeley.edu/~stark/Preprints/filter07.pdf).
59 Jugendschutz.net, Jugendschutz im Internet. Ergebnisse der Recherchen und Kontrollen. Bericht 2010, p. 15 (accessed 30 Nov 2011 at http://www.jugendschutz.net/pdf/bericht2010.pdf).
60 Akdeniz (op. cit., 2004), pp. 112-3; Akdeniz (op. cit., 2010b), p. 270; Faris, Wang and Palfrey, op. cit., p. 165.
61 M. Kerr and H. Stattin, What Parents Know, How They Know it, and Several Forms of Adolescent Adjustment: Further Support for a Reinterpretation of Monitoring, Developmental Psychology, 36/3 (2000), pp. 366-80; T. Rooney, Trusting children: how do surveillance technologies alter a child's experience of trust, risk and responsibility, Surveillance and Society, 7, 3/4 (2010), pp. 344-55 (accessed 10 Jan 2012 at http://surveillance-and-society.org/ojs/index.php/journal/article/viewfile/trust/trust); G. Marx and V. Steeves, From the beginning: children as subjects and agents of surveillance, Surveillance and Society, 7, 3/4 (2010), pp. 192-230.
62 J. Nolan, K. Raynes-Goldie and M. McBride, The Stranger Danger: Exploring Surveillance, Autonomy, and Privacy in Children's Use of Social Media, Canadian Children Journal, 36/2 (2011), pp. 24-32 (especially pp. 29-31).
63 http://www.acma.gov.au/webwr/_assets/main/lib310554/isp-level_internet_content_filtering_trial-report.pdf (accessed 27 Aug 2012).
64 http://www.inquisitr.com/17448/the-economic-cost-of-internet-censorship-in-australia/ (accessed 20 Nov 2012).
65 https://www.bcgperspectives.com/content/interactive/telecommunications_media_entertainment_bcg_e_intensity_index/ (accessed 20 Nov 2012).
66 BCG, Russia Online, pp. 15-9 (accessed 20 Nov 2012 at http://www.bcg.ru/).
67 http://fom.ru/blogs/10119#link1 (accessed 27 Aug 2012).
68 Ibid.
69 Ibid.
70 Ibid.
71 Ibid.
72 For a comparable calculation for Australia, see http://www.inquisitr.com/17448/the-economic-cost-of-internet-censorship-in-australia/ (accessed 27 Aug 2012).
73 http://www.abc.net.au/news/2008-10-24/the-highprice-of-internet-filtering/552148 (accessed 20 Nov 2012).
74 http://techwiredau.com/2008/10/interview-withmark-newton-of-internode-re-australian-internet-filter/ (accessed 27 Aug 2012).
75 http://otago.ourarchive.ac.nz/bitstream/handle/10523/1173/Peter_Brooking_Dissertation.pdf?sequence=4 (accessed 27 Aug 2012).
76 http://royal.pingdom.com/2010/11/12/real-connection-speeds-for-internet-users-across-the-world/ (accessed 27 Aug 2012).
77 First will be last, The Economist, 26 September 2002; The growth machine, The Economist, 16 May 2002; Knowledge is power, The Economist, 21 September 2000; Catch the wave, The Economist, 18 February 1999.
78 BCG, op. cit., pp. 1-2.
79 Speech to members of the United Russia Party, 28 May 2010, at http://ria.ru/society/20100528/239678643 (accessed 27 Aug 2012).
80 http://www.iwf.org.uk/resources/trends (accessed 30 Nov 2011).
81 OpenNet Initiative, United Kingdom, pp. 359-60 (accessed 30 Nov 2011).
82 http://www.w3.org/PICS/ (accessed 2 Jan 2012).
83 On the RSACi, see Recreational Software Advisory Council Launches Objective, Content-Labelling Advisory System for the Internet, 28 February 1996 (accessed 4 Jan 2012 at http://www.w3.org/PICS/960228/RSACi.html).
84 P. Archer, ICRAfail. A Lesson for the Future, p. 4 (accessed 4 Jan 2012 at http://philarcher.org/icra/ICRAfail.pdf).
85 J.P. Kesan and R.C. Shah, Shaping Code, pp. 28-34 (quotation: p. 34) (accessed 4 Jan 2012 at http://www.hks.harvard.edu/m-rcbg/Conferences/rpp_rulemaking/Kesan_Shaping_code.pdf).
86 See http://ec.europa.eu/information_society/activities/sip/projects/completed/filtering_content_labelling/filtering/icrasafe/index_en.htm; and the final report http://ec.europa.eu/information_society/activities/sip/archived/docs/pdf/projects/is_r_3a.pdf (accessed 30 Nov 2011).
87 Archer, op. cit., p. 4.
88 Ibid.
89 For these, see http://www.w3.org/2009/08/pics_superseded.html; on POWDER in general, see http://www.w3.org/TR/powder-dr/ (accessed 4 Jan 2012).
90 See: http://i-sieve.com/powder/; http://philarcher.org/powder/ (accessed 4 Jan 2012).
91 http://www.quatro-project.org/home; http://i-sieve.com/contentfiltering/index.php (accessed 1 Jan 2012); European Commission, Background Report on Cross Media Rating and Classification, and Age Verification Solutions, Safer Internet Forum, 25-6 September 2008, p. 14 (accessed 4 Jan 2012 at http://ec.europa.eu/information_society/activities/sip/docs/pub_consult_age_rating_sns/summaryreport.pdf).
92 http://www.quatro-project.org/scope (accessed 4 Jan 2012).
93 http://www.rtalabel.org/ (accessed 4 Jan 2012); Archer, op. cit., p. 4.
94 S.S. Lim, Regulatory Initiatives for Managing Online Risks and Opportunities for Youths - the East Asian Experience, manuscript, 2011. See also technology from CA http://www.ca-store.jp/Default.aspx?sc_lang=ja-JP (accessed 7 Dec 2011); and NetSTAR http://www.netstarinc.com/eng/index.html (accessed 7 Dec 2011).
95 http://www.nmda.or.jp/enc/rating/rating_standard.html (accessed 7 Dec 2011); Lim, op. cit., 2011.
96 The website is http://kidscare.ocn.ne.jp/ (accessed 7 Dec 2011); see also http://www.japantoday.com/category/technology/view/ocn-provides-children-with-saferinternet and http://www.ntt.com/csr_e/report2011/data/st_ict.html (accessed 7 Dec 2011). NTT Com also runs the educational programs com KIDS and LoiLo.
97 http://www.ntt.com/csr_e/report2011/data/st_ict.html (accessed 7 Dec 2011).
98 Cf. Archer, op. cit., p. 22.
99 See http://www.pointdecontact.net/protectiondelenfance.aspx#controle (accessed 5 Dec 2011).
100 See http://www.pointdecontact.net/protectiondelenfance.aspx#controle (accessed 5 Dec 2011).
101 Akdeniz (op. cit., 2010b), p. 262; Valcke et al. (op. cit., 2011), p. 1296; S. Kierkegaard, Cybering, online grooming and age-play, Computer Law and Security Report, 24/1 (2008), p. 41.
102 Akdeniz (op. cit., 2004), pp. 118-9; Stalla-Bourdillon, op. cit., p. 290.
103 PointSmart.ClickSafe: Task Force Recommendations for Best Practices for Online Safety and Literacy, July 2009, pointsmart.org, p. ii (accessed 28 Dec 2011 at http://www.pointsmartreport.org/PointSmartReport.pdf).
104 http://europa.eu/legislation_summaries/audiovisual_and_media/l24030b_en.htm (accessed 6 Dec 2011).
105 http://europa.eu/legislation_summaries/audiovisual_and_media/l24030a_en.htm; http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2006:378:0072:0077:EN:PDF (accessed 6 Dec 2011).
106 For Decision No 276/1999/EC that gave SIAP impetus, see http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31999D0276:EN:HTML. On the Safer Internet plus program: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32005D0854:EN:NOT. On the Safer Internet program: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2008:348:0118:0127:EN:PDF (accessed 6 Dec 2011).
107 http://ec.europa.eu/information_society/activities/sip/docs/prog_evaluation/comm_final_eval_siap_en.pdf (accessed 6 Dec 2011).
108 European Commission, Proposal for a Decision of the European Parliament and of the Council Establishing a Multiannual Community Programme on Protecting Children Using the Internet and Other Communications Technologies, 27 February 2008, p. 3 (accessed 6 Dec 2011 at http://ec.europa.eu/information_society/activities/sip/docs/prog_2009_2013/decision_en.pdf); E. Peterson and G.W. Ulferts, Government Control of Communications Technology, International Business and Economics Research Journal, 8/9 (2009), p. 21.
109 http://ec.europa.eu/information_society/activities/sip/policy/programme/current_prog/index_en.htm (accessed 6 Dec 2011).
110 European Commission, A Digital Agenda for Europe, 26 August 2010 (accessed 6 Dec 2011 at http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2010:0245:FIN:EN.PDF).
111 Action 36: http://ec.europa.eu/information_society/newsroom/cf/fiche-dae.cfm?action_id=194&pillar_id=45&action=Action%2036%3A%20Support%20reporting%20of%20illegal%20content%20online2%20and%20awareness%20campaigns%20on%20online%20safety%20for%20; Action 40: http://ec.europa.eu/information_society/newsroom/cf/fiche-dae.cfm?action_id=198 (accessed 6 Dec 2011 and 20 Nov 2012).
112 Ibid. See also the Safer Internet site: http://www.saferinternet.org/web/guest/about-us (accessed 6 Dec 2011).
113 Action 37: http://ec.europa.eu/information_society/newsroom/cf/fiche-dae.cfm?action_id=195&pillar_id=45&action=Action%2037%3A%20Foster%20selfregulation%20in%20the%20use%20online%20services (accessed 6 Dec 2011).
114 Coalition to Make Internet a Safer Place for Children, Statement of Purpose of the Coalition, 1 December 2011 (accessed 29 Dec 2011 at http://ec.europa.eu/information_society/activities/sip/docs/ceo_coalition_statement.pdf).
115 On the efforts made by Facebook and MySpace to protect children from sexual predators, see: C. Chang, Internet Safety Survey: Who Will Protect the Children? Berkeley Technology Journal, 25 (2010), pp. 503-5.
116 Safer Networking Principles for the EU, 10 February 2009 (accessed 6 Dec 2011 at http://ec.europa.eu/information_society/activities/social_networking/docs/sn_principles.pdf).
117 Compliance here is defined by the extent to which the services achieved the goals to which they had committed in their self-declarations. The full results are at http://ec.europa.eu/information_society/activities/social_networking/eu_action/implementation_princip_2011/index_en.htm.
118 Zittrain and Palfrey, op. cit., p. 106.
119 http://webarchive.nationalarchives.gov.uk/20100413151441/police.homeoffice.gov.uk/operational-policing/crime-disorder/child-protection-taskforce.html (accessed 30 Nov 2011). Home Office Task Force on Child Protection on the Internet, Good practice guidance for the providers of social networking and other interactive services, 2008 (accessed 30 Nov 2011 at http://www.manchesterscb.org.uk/docs/Home%20Office%20Task%20Force%20in%20CP%20on%20the%20Internet.pdf).
120 http://www.chis.org.uk/about (accessed 30 Nov 2011).
121 http://www.education.gov.uk/ukccis/about (accessed 30 Nov 2011).
122 Office of Communications, UK code of practice for the self-regulation of new forms of content on mobiles. Review, 11 August 2008, p. 2 (accessed 30 Nov 2011 at http://stakeholders.ofcom.org.uk/binaries/research/medialiteracy/ukcode.pdf).
123 http://ceop.police.uk/ (accessed 30 Nov 2011).
124 http://www.thinkuknow.co.uk/; http://ceop.police.uk/About-Us/How-we-do-it/ (accessed 30 Nov 2011).
125 UKCCIS, ISPs Launch New Code of Practice on Parental Controls, 1 November 2011 (accessed 30 Nov 2011 at http://www.education.gov.uk/ukccis/news/a00199819/isps-launch-new-code-of-practice-on-parental-controls).
126 KJM, Zwei neue Selbstkontrollen für das Internet: KJM erkennt FSK.online und USK.online an, 19 September 2011 (accessed 29 Nov 2011 at http://www.kjm-online.de/de/pub/aktuelles/pressemitteilungen/pressemitteilungen_2011/pm_152011.cfm).
127 Freedom House, Freedom on the Net 2011: Germany, pp. 4-5 (accessed 29 Nov 2011 at http://www.freedomhouse.org/images/File/FotN/Germany2011.pdf); Faris et al. (op. cit., 2008), pp. 169-70.
128 http://www.usispa.org/about-ussipa/founding-principles (accessed 8 Dec 2011).
129 ECPAT, Global Monitoring Report on the Status of Action against Commercial Sexual Exploitation of Children. Canada, 2006, p. 13 (accessed 2 Dec 2011 at http://www.ecpat.net/A4A_2005/PDF/Americas/Global_Monitoring_Report-Canada.pdf).
130 http://www.cpc.gc.ca/en/canadian-internet-childexploitation-cicec (accessed 2 Dec 2011).
131 http://www.iajapan.org/index-en.html (accessed 10 Dec 2011).
132 http://www.cgi.br/; http://www.saferinternetday.org/web/brazil/home/-/blogs/sid-2011:-brazilian-agenda-forsid-2011;jsessionid=637ACE4BA313B029DE3DF5BEF863FA9B (accessed 19 Dec 2011).
133 http://www.prsp.mpf.gov.br/noticias-prsp/crimesciberneticos (accessed 19 Dec 2011).
134 Termo de Ajustamento de Conduta (accessed 19 Dec 2011 at http://www.prsp.mpf.gov.br/noticias-prsp/crimesciberneticos/TACgoogle.pdf).
135 http://www.abranet.org.br/; http://www.abranet.org.br/index.php?option=com_content&view=article&id=54&Itemid=60; http://www.midiaindependente.org/pt/blue/2008/07/424182.shtml (accessed 19 Dec 2011).
136 http://www.ispai.in/codeOfConduct.php (accessed 8 Dec 2011).
137 http://www.apjii.or.id/en/ (accessed 17 Dec 2011).
138 http://www.hivos.nl/esl/community/partner/50009010 (accessed 17 Dec 2011).
139 http://ispa.org.za/code-of-conduct/request-a-takedown/ (accessed 8 Dec 2011).
140 http://ispa.org.za/code-of-conduct/ (accessed 8 Dec 2011).
141 The ECT is at: http://www.info.gov.za/view/DownloadFileAction?id=68060.html (accessed 8 Dec 2011).
142 http://www.isc.org.cn/ (accessed 8 Dec 2011).
143 http://www.isc.org.cn/english/Specails/Self-regulation/listinfo-15321.html (accessed 8 Dec 2011).
143 Ibid.
144 Akdeniz (2010), p. 268.
145 http://www.gov.cn/english/2010-06/08/content_1622956_6.htm (accessed 8 Dec 2011).
146 Deibert and Villeneuve, op. cit., pp. 122-3; Eneman, op. cit., pp. 32ff, 37-9; R. Wong, Privacy: Charting its Development and Prospects, in A. Murray and M. Klang (eds), Human Rights in the Digital Age, pp. 147-61; M. Rundle and M. Birdling, Filtering and the International System: A Question of Commitment, in R. Deibert et al. (eds), Access Denied, p. 77ff; Zittrain and Palfrey, op. cit., p. 44ff.
147 http://www2.ohchr.org/english/law/crc.htm (accessed 8 Jan 2011).
148 http://www.un.org/en/documents/udhr/index.shtml#a19 (accessed 8 Jan 2011).

149 http://www.hri.org/docs/ECHR50.html
150 http://www.jugendschutz.net/rechtsextremismus/ (accessed 30 Nov 2011).
151 https://wcd.coe.int/ViewDoc.jsp?id=1266285 (accessed 8 Jan 2011).
152 Akdeniz (op. cit., 2010b), p. 268.
153 https://wcd.coe.int/ViewDoc.jsp?id=1470045&Site=CM (accessed 8 Jan 2012).
154 http://www2.ohchr.org/english/law/ccpr.htm (accessed 8 Jan 2012).
155 Zittrain and Palfrey, op. cit., p. 45.
156 For Turkey, see Reporters Without Borders, Online Censorship Now Bordering on the Ridiculous in Turkey, 29 April 2011 (accessed 11 Oct 2011 at http://en.rsf.org/turkey-online-censorship-now-bordering-on-29-04-2011,40194.html); OSCE, Internet Filtering and Changes to Press Law Further Limit Media Freedom in Turkey, 17 May 2011; M. Celikkafa and S.A. Diehn, Internet Filter in Turkey Sparks Fears of Censorship, Deutsche Welle, 21 November 2011 (accessed 22 Nov 2011 at http://www.dw-world.de/dw/article/0,,15543036,00.htm); S. Ayhan, There is no internet censorship; however one million websites are banned, Milliyet, 23 May 2011 (accessed 22 Nov 2011 at http://cyberlaw.org.uk/category/turkey).
157 S. Sengupta, India's Courts Grapple With Web Censorship, New York Times, 14 January 2012 (accessed 15 Jan 2012 at http://india.blogs.nytimes.com/2012/01/14/indias-courts-grapple-with-web-censorship/); H. Timmons, India Asks Google, Facebook to Screen User Content, New York Times, 5 December 2011 (accessed 15 Jan 2012 at http://india.blogs.nytimes.com/2011/12/05/india-asksgoogle-facebook-others-to-screen-user-content/). For criticism of India's new IT rules, see: K. Tanna, Internet Censorship in India: Is It Necessary and Does It Work? (2004) (accessed 18 Oct 2011 at http://www.ketan.net/INTERNET_CENSORSHIP_IN_INDIA.html); R. Lakshmi, India's New Internet Rules Criticized, The Washington Post, 1 August 2011 (accessed 18 Oct 2011 at http://www.washingtonpost.com/world/indias-new-internetrules-criticized/2011/07/27/gIQA1zS2mI_story.html); N. Ganapathy, New Indian Net Censorship Law Draws Criticism, Asianewsnet.net, 20 May 2011 (accessed 18 Oct 2011 at http://www.asianewsnet.net/home/news.php?id=19008&sec=1).
158 Nunziato, op. cit., pp. 1148-9.
159 Akdeniz (op. cit., 2010a), pp. 11-2.
160 Ibid., pp. 2, 13.
161 Freedom House, Freedom on the Net 2011, pp. 3-4 (accessed 27 Sep 2011 at www.freedomhouse.org/images/File/FotN/Turkey2011.pdf).
162 Correspondence with Engelliweb.com, 19 October 2011.
163 On blocking of useful sites and/or minority views, see Akdeniz (op. cit., 2004), pp. 111-2.
164 Ibid., pp. 108-9, 115-8.
165 M. Eneman, Internet Service Provider (ISP) filtering of Child-Abusive Material: A Critical Reflection of Its Effectiveness, Journal of Sexual Aggression, 16/2 (2010), pp. 224-5, 234. See also S. Kierkegaard, To Block or Not to Block - European Child Porno Law in Question, Computer Law and Security Review, 27 (2011), pp. 573-84 (especially pp. 581ff, 583-4).
166 Nunziato, op. cit., pp. 1145-6; UN Human Rights Council, 2011.
167 See the core commitments of the GNI at: http://www.globalnetworkinitiative.org/corecommitments/index.php (accessed 30 Nov 2011).
168 Nunziato, op. cit., pp. 1128-9. See also Ofcom: http://stakeholders.ofcom.org.uk/binaries/internet/site-blocking.pdf, pp. 7-8; UN Human Rights Council, 2011.
169 McIntyre, op. cit., p. 21.
170 S.R. Morrison, What the Cops Can't Do, Internet Service Providers Can: Preserving Privacy in Email Contents, SSRN eLibrary, 2011 (accessed 9 Jan 2012 at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1729000).
171 Akdeniz (op. cit., 2004), pp. 101-2; Kierkegaard, op. cit., pp. 41-2; Valcke et al. (op. cit., 2011), p. 1296.
172 http://treaties.un.org/Pages/ViewDetails.aspx?src=TREATY&mtdsg_no=IV-11c&chapter=4&lang=en; http://www2.ohchr.org/english/law/crc-sale.htm; for Article 242, see http://www.consultant.ru/popular/ukrf/10_35.html#p4224 (accessed 27 Aug 2012).
173 J.V. Ferraz et al., p. 49.
174 RAEK, Opinion on Russian Federal Law of 29 December 2010 No 436 On the Protection of Children from Information Harmful to their Health and Development, 22 September 2011 (accessed 10 Jan 2012 at http://raec.ru/times/detail/761/).
175 Selective filtering was already in operation. See R. Deibert and R. Rohozinski, Control and Subversion in Russian Cyberspace, in R. Deibert et al. (eds), Access Controlled: The Shaping of Power, Rights and Rule in Cyberspace (Cambridge, Mass., 2010), pp. 22ff, 26.
176 Prime, MGTS To Filter School Internet, 2 November 2011 (accessed 10 Jan 2012 at http://www.1prime.ru/news/pressreleases/-106/%7B06878E60-829F-4814-AE2D0BD3FB3C9FB6%7D.uif).
177 Ibid., p. 2.
178 J. Parry and M. Gorton et al., Internet content filtering. A report to DCITA, 4 April 2003, Project No CL129, Version 1.0.
179 N. Villeneuve, Why Block by IP address?, 14 February 2005 (accessed 13 Jul 2012 at http://www.nartv.org/2005/02/14/why-block-by-ip-address/).
180 See also Zittrain and Palfrey, op. cit., p. 122; UN Human Rights Council, 2011; Family Online Safety Institute (FOSI) and Global Resource and Information Directory (GRID), op. cit., pp. 3, 56; Villeneuve (op. cit., 2010), p. 66; Valcke et al. (op. cit., 2011), p. 1303; Safer Social Networking Principles for the EU, 2009, pp. 3-5.
181 T. Byron, Safer Children in a Digital World. The Report of the Byron Review, 2008, p. 206 (accessed 30 Nov 2011 at https://www.education.gov.uk/publications/standard/publicationdetail/Page1/DCSF-003342008#downloadableparts).
182 Idem, Do we have safer children in a digital world? A review of progress since the 2008 Byron Review, March 2010, p. 6 (accessed 30 Nov 2011 at http://dera.ioe.ac.uk/709/1/do%20we%20have%20safer%20children%20in%20a%20digital%20world-WEB.pdf).
183 Internet Safety Technical Task Force, Enhancing Child Safety and Online Technologies: Final Report of the Internet Safety Technical Task Force to the Multi-State Working Group on Social Networking of State Attorneys General of the United States, 31 December 2008, pp. 36, 39 (accessed 5 Dec 2011 at http://cyber.law.harvard.edu/sites/cyber.law.harvard.edu/files/ISTTF_Final_Report.pdf). See also: PointSmart.ClickSafe, p. 24ff.
184 http://www.dni.ru/tech/2011/8/31/218008.html (accessed 10 Jan 2012).
185 See also, for Ireland: D. O'Reilly and C. O'Neill, An Analysis of Irish Primary School Children's Internet Usage and the Associated Safety Implications, International Journal of Information and Communication Technology Education, 4/3 (2008), p. 47.
186 Livingstone et al. (op. cit., 2011), p. 44. See also Norton Online Family Report, 2010, p. 15.
187 Akdeniz (op. cit., 2004), p. 114.
188 Ibid., p. 120.
189 Livingstone et al. (op. cit., 2011), p. 44. Jones and Finkelhor present a number of features that characterize effective awareness campaigns in L.M. Jones and D. Finkelhor, Increasing Youth Safety and Responsible Behavior Online: Putting in Place Programs that Work, A FOSI Discussion Paper, p. 9 (accessed 15 Dec 2011 at http://www.unh.edu/ccrc/pdf/fosi_whitepaper_increasingyouthsafety_d9.pdf).
190 http://www.google.ru/familysafety/tips.html; http://detionline.com/helpline/about; http://ndfond.ru/menu/about; http://www.smeshariki.ru/ and http://www.smeshariki.ru/GameIndex.aspx (accessed 10 Jan 2012).
191 FOSI and GRID, op. cit., p. 56.
192 PointSmart.ClickSafe, op. cit., p. ii.
193 RAEK, 2011.
194 http://minsvyaz.ru/ru/news/index.php?id_4=42777 (accessed 10 Jan 2012).
195 An example of a self-regulatory agreement (on child abuse images) can be found at http://hostdeclaration.ru/; the websites for the hotlines are at http://old.saferunet.ru/hotline/content.php/ (accessed 10 Jan 2012).
196 Stalla-Bourdillon, op. cit., p. 296.
197 Zittrain and Palfrey, op. cit., pp. 120-1.
198 C.M. Maclay, Protecting Privacy and Expression Online. Can the Global Network Initiative Embrace the Character of the Net?, in R. Deibert et al. (eds), Access Denied, pp. 98-100.
199 E. Zuckerman, Intermediary Censorship, in R. Deibert et al. (eds), Access Denied, pp. 73-4, 80-3.
200 S. Kreimer, Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link, University of Pennsylvania Law Review, 155 (2006), p. 11; Ahlert et al., op. cit., pp. 7-8; J.V. Ferraz et al., op. cit., p. 49.
201 See: L.T. Musielak, Google-ing China: An ethical analysis of Google's censoring activities in the People's Republic, undated (accessed 11 Jan 2012 at http://snl.depaul.edu/writing/Googleing%20China.pdf); Faris, Wang and Palfrey, op. cit., p. 169ff; on Google's halting its cooperation with China, see M. Mellody, The Ethics of Google in China, 18 April 2010 (accessed 11 Jan 2012 at http://ethics.davidson.edu/wp-content/uploads/2010/01/Mellody-onGoogle-in-China.pdf).
202 Lim, op. cit., p. 5. This instance notwithstanding, the state is beginning to recognize the need to collaborate with multinationals: Palfrey (op. cit., 2010), p. 14ff.
203 http://www.google.com/transparencyreport/ (accessed 11 Jan 2012).
204 Google SafeSearch: http://www.google.ru/familysafety/tools.html; YouTube: http://support.google.com/youtube/bin/request.py?hl=en&contact_type=abuse; Yandex: http://family.yandex.ru/ (accessed 10 Jan 2012).
205 C.M. Maclay, Protecting Privacy and Expression Online. Can the Global Network Initiative Embrace the Character of the Net?, in R. Deibert et al. (eds), Access Denied, pp. 97-8; Faris, Wang and Palfrey, op. cit., pp. 180-1, 184.
206 Livingstone et al. (op. cit., 2011), p. 44.
