
The Power of the Algorithm:
The Politics of Search Engine Personalization

Despina Skordili

Abstract
In its early years, the internet was envisioned as a servant of the public good, asserting democratic ideals by offering unlimited access to information and the possibility to interact with it. However, a closer examination of the architecture of the web shows that it is dominated by a battle for hierarchy, intensified by market competition. This paper analyzes this phenomenon by focusing on the politics of search engines. It examines the development of search engine personalization to show that the algorithm is acquiring increasing power in shaping the architecture of the web, determining users' access to online information. Its aim is to show that personalization is indicative of the dominance of corporate interests on the web, running counter to the ideals and values of democracy.

Keywords
search engine, personalization, web politics

Introduction
The basic architecture of the web in the early 1990s was accompanied by expectations that the internet would democratize society, offering equal and unlimited access to information. The organization of information as a web of nodes in which the user can browse at will (Berners-Lee, 1990) was envisioned as the advent of a revolution that would "flatten society, unseat the elites, and usher in a kind of freewheeling global utopia" (Pariser, 2011: 3). Aspirations concerning the formation of a Habermasian public sphere seemed to be realized by the development of new communication technologies, and technology futurists looked to the internet as a carrier of those ideals, one that would foster democracy (Dean, 2003: 97).

However, a closer examination of the process of linking showed that the internet is dominated by a quest for hierarchy, in contrast to the basic, democratic architecture of the web. As new media professor Richard Rogers notes, the mapping of the web in its hyperspace period showed that the process of linking is determined by a politics of association: "[t]he important insight of the 1990s was that websites (or webmasters) hyperlink selectively as opposed to capriciously. There is a certain optionality in link-making. Making a link to another site, not making a link, or removing a link may be viewed, sociologically or politically, as acts of association, non-association or disassociation, respectively" (Rogers, 2012: 198). Soon, the dot-com hype of the late 1990s showed that the internet is a site of conflict, running counter to the aspiration that it would serve democracy. According to political science professor Jodi Dean, "[m]arket competition as public good displaces attention from the actual antagonisms, the actual conflict going on in the world in various forms and spaces. The Net is one of the spaces where this conflict rages in full-force. When we talk about the Net as a public sphere, we displace attention from this conflict" (Dean, 2003: 103).


Following the evolution of politics on the web, one can notice that a new agent soon entered the game of hyperlinking, showing the increasing power of the algorithm in the politics of association: "[s]earch engines, portals and catalogues took over the linking responsibilities making searches faster and less surprising," describes internet artist Olia Lialina (Lialina, 2005). Indexing and rating mechanisms caused a significant shift in the basic architecture of the web, as search engines seemed, on the one side, to push certain pages towards the top and, on the other, to render others almost invisible to the user. "With directories and engines, the web became a space of expert and device-authored lists, where the politics of making the list became the concern," writes Richard Rogers (Rogers, 2012: 202). As he explains, researchers soon took up "the politics of inclusion and exclusion" (Rogers, 2012: 202-203), in an attempt to examine how search engines favor some pages over others, keeping a large portion of content hidden in the dark side of the web. At the same time, web content was growing rapidly. According to network theory researcher Albert-Laszlo Barabasi, in contrast to earlier impressions that search engines contained everything that was out there on the web, research on indexing mechanisms showed that the biggest part of the web cannot be found at all through any search engine (Barabasi, 2002: 164-165). It was, therefore, becoming "more profitable to enhance the algorithm that selects the best page from the search engine's already enormous database than to go deeper into the Web" (Barabasi, 2002: 165).

One of the technologies that search engine companies developed in order to select the most suitable result for a search query is search engine personalization. Certain filter devices analyze data including our previous search history, our location and other factors, and direct us immediately to the web page with the most suitable content possible for a particular query. While promising to help us find our way in a sea of information overload, search engine personalization seems to intensify the processes of inclusion and exclusion, giving companies more control over our access to information. In his 2011 book The Filter Bubble: What the Internet is Hiding from You, internet activist Eli Pariser examines the dangers of web personalization, arguing that it threatens our privacy and anonymity (Pariser, 2011: 6), narrows down the information we come into contact with and thus prevents us from forming the new ideas that could change the world (Pariser, 2011: 15). It seems as if the search engine algorithm is gradually gaining more and more control over the architecture of the web. This article of the special issue of the Journal of Network Theory, which is preoccupied with the ways new technologies influence the transmission of ideas, focuses on the role of search engine personalization in our access to online content. It examines how this technology intensifies the politics of inclusion and exclusion of online information. It will be argued that personalization reinforces the market mechanism of the web, running counter to the values and ideas that once envisioned the internet as a servant of democracy. First, this essay gives a more detailed overview of the politics of search engines. Then, it focuses on the more recent development of search engine personalization, explaining how this technology works, as well as its implications for web politics. Special attention will be given to Eli Pariser's concept of the filter bubble.

Gatekeepers of Information
Computer scientist Tim Berners-Lee had envisioned "a world of data stored on server machines, and client processes on the same or other machines" (Berners-Lee, 1990). The World Wide Web was created between 1989 and 1991 at the CERN laboratory by Berners-Lee and other computer scientists (Ippolita Collective, 2009: 6), following his proposal for organizing information through hypertext. Soon, search engines were developed in order to facilitate indexing and information retrieval on the web. Dominant notions in popular discourse suggested that search engines could help us navigate through each and every dark corner of the web: "[c]omments like 'If you can't find it using AltaVista, it's probably not out there' or 'HotBot is the first search robot capable of indexing and searching the entire Web' were routine" (Barabasi, 2002: 162). However, it soon became clear that mapping the whole web was impossible for any search engine (Barabasi, 2002: 163). The aspirations of search engine companies to map the whole web were expressed through fierce competition to index as many web pages as possible (Barabasi, 2002: 164). Although Google did not join this competition as a commercial enterprise until the late 1990s, it managed to get ahead in the coverage game and soon came to be, and still is today, the most popular search engine. As the Ippolita Collective explain in their book The Dark Face of Google (2009), the search engine's main innovation was that it was the first to give weight to interpreting queries according to the user's needs: "it did no longer show sites according to their degree of proximity with regard to the query, but showed them in a correct order, that is conforming to the user's expectations. The first link provided should then correspond to the exact answer to the question asked, the following ones slowly receding from the core of the search question" (Ippolita Collective, 2009: 8). But where do the expectations that the internet would democratize knowledge fit into this picture?

"One price we pay for the ease of searching Google is a lack of transparency," says media scholar Marlene Manoff (Manoff, 2010: 390). However exciting it may seem for the user to be able to navigate rapidly through the web, we should not forget that the web has become a field of market competition. Search engine companies are commercial enterprises, and the demands of corporate interests can render them gatekeepers of information. Apart from the fact that the largest part of the web remains hidden in the dark, unindexed by search engines, there is also a ranking game taking place, with important consequences both for page owners and for users. Marketers realized soon enough that, in order to win this game, they needed to cooperate with the search engine companies (Rogers, 2012: 206). A whole field of optimization strategies had already developed by the mid-1990s, as a result of web page owners' attempts to rank as high as possible in search engine results: "[o]n the one end of the spectrum, practices that make reasonable use of prima facie reasonable heuristics help designers to optimize their webpages' expected ranking when they are legitimately relevant to the person searching. On the other end of the spectrum some schemes allow Web designers to manipulate, or trick, the heuristics: schemes such as relevancy (or keyword) spamming, where webpage designers trick the ranking algorithm into ranking their pages higher than they deserve to be ranked by means of keyword stuffing, invisible text, tiny text, and so forth" (Introna, 1999: 14).
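To make concrete how such schemes exploit a ranking heuristic, the following sketch shows a toy term-frequency scorer being gamed by keyword stuffing. It is only an invented illustration: the scoring function, the example pages and the query are all hypothetical, and no real search engine ranks documents this naively.

from collections import Counter

def term_frequency_score(query: str, page_text: str) -> int:
    # Count how often each query term appears in the page text.
    words = Counter(page_text.lower().split())
    return sum(words[term] for term in query.lower().split())

honest_page = "A short practical guide to growing tomatoes in a small garden"
# Keyword stuffing: the page simply repeats the query terms to inflate its score.
stuffed_page = ("tomatoes tomatoes tomatoes garden garden garden "
                "buy cheap tomatoes tomatoes garden tomatoes")

query = "growing tomatoes garden"
pages = {"honest_page": honest_page, "stuffed_page": stuffed_page}
ranking = sorted(pages, reverse=True,
                 key=lambda name: term_frequency_score(query, pages[name]))
print(ranking)  # ['stuffed_page', 'honest_page']: the stuffed page wins

Because the heuristic rewards raw keyword counts, the stuffed page outranks the genuinely relevant one, which is precisely the kind of manipulation early engines had to defend against.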

This battle for the highest ranking has important implications for the user. "[W]hat people (the seekers) are able to find on the Web determines what the Web consists of for them. And we all, individuals and institutions alike, have a great deal at stake in what the Web consists of," state Introna and Nissenbaum, who at the time were preoccupied, among other researchers, with the politics of search engines (Introna, 1999: 2). This field of research aimed at answering questions about how certain websites were pushed toward the top of the results while others were rendered almost invisible to the seeker. Optimization strategies seemed to have a strong influence on these politics of inclusion and exclusion, sinking the dark web even deeper and making it difficult for users to discover less popular, smaller websites (Rogers, 2012: 206). In contrast to early hopes that the internet would give us unlimited access to information that was previously the privilege of a few, as well as the capability to interact with it, it became apparent that "[d]igitization does not lead in any simple or straightforward way to the democratization of knowledge" (Manoff, 2010: 390). The search engine algorithm functions as one of the most important contemporary gatekeepers of information. Therefore, when we type certain words into a search engine box, are we really the ones who decide which content we stumble upon? Do the politics of inclusion and exclusion narrow our access to new ideas? Could it be said that the algorithm is acquiring more and more control over our online experience? In order to answer these questions, we need to take a closer look at the more recent developments in search engine technology and, more specifically, at personalization.

Search Engine Personalization: Tailoring Our Online Experience


The rapid increase of web content in the information age of the 2000s has intensified the information overload phenomenon that has been discussed since the early years of new media. In the Web 2.0 era, websites are designed through an "architecture of participation" (O'Reilly, 2005), using defaults that engage users in the active creation of content. This development has led to a data explosion, in which users have to constantly browse for content that is valid or suited to their interests. "We are honored to be invited by the Machine to submit our opinions and preferences," writes Geert Lovink in his book Networks Without a Cause (Lovink, 2011: 25). "There must be at least some good value content out there as a reward, after we feed the databases" (ibid.).

In the user's quest for good value content, web personalization has arisen as a technology that offers easier navigation on the web and faster access to relevant information. Web personalization can be defined as "any action that adapts the information or services provided by a Web site to the needs of a particular user or a set of users, taking advantage of the knowledge gained from the users' navigational behavior and individual interests, in combination with the content and the structure of the Web site" (Eirinaki, 2003: 1-2). For example, web data are extracted from the user's history of visited websites, from their search history or from previous online purchases. This information is then analyzed so that the website's content can be customized and structured according to the user's individual needs. Search engine personalization itself is almost as old as the World Wide Web. However, when Google announced in December 2009 that its personalized search would be expanded to signed-out users in order to provide "[them] with the most relevant results possible" (Horling, 2009), personalization became an important topic in scientific and popular discourse, raising concerns about the intrusion of corporate interests in the transmission of information on the web.
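To make the definition above more tangible, the following sketch shows, in schematic form, how a base relevance score might be re-weighted with signals mined from an individual user's history and location. It is only a hypothetical illustration: the signals, weights, domain names and profile structure are all invented here, and the actual inputs and algorithms used by Google or any other engine are not public.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    visited_domains: set = field(default_factory=set)  # browsing history
    location: str = ""                                  # coarse location signal

def personalized_score(base_relevance: float, domain: str,
                       country: str, profile: UserProfile) -> float:
    score = base_relevance
    if domain in profile.visited_domains:
        score *= 1.5   # boost sites the user already visits (invented weight)
    if country == profile.location:
        score *= 1.2   # boost geographically "close" results (invented weight)
    return score

profile = UserProfile(visited_domains={"news-a.example"}, location="NL")
results = [  # (domain, country, base relevance for the query)
    ("news-b.example", "US", 0.75),   # slightly more relevant in the abstract
    ("news-a.example", "NL", 0.70),
]
ranked = sorted(results, reverse=True,
                key=lambda r: personalized_score(r[2], r[0], r[1], profile))
print([domain for domain, _, _ in ranked])  # the already-visited site now ranks first

The point of the sketch is that two users issuing the same query can receive different orderings, which is precisely the property on which Pariser's filter bubble argument turns.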

"In theory," the Ippolita Collective explain, "filters merely serve to make the query process faster and more conform to individual requests. They are even necessary, technically speaking. Their usage, however, shows to which extent it is easy, for a party actually in position of dominance as regards to search services, to profit in a commercial sense of the data at its disposal, without much consideration to the privacy of its users" (Ippolita Collective, 2009: 53). What, then, are the implications of this technology for the democratized transmission of information on the web? How are the politics of inclusion and exclusion influenced when search engine personalization enters the ranking game?

Intensifying Access Inequality


When network theory researchers Barabasi and Bonabeau attempted to map the World Wide Web, they expected to observe a randomly distributed network, with most web pages having a similar number of links, based on their assumption that "people follow their unique interests when deciding what sites to link their Web documents to" (Barabasi, 2003: 52). Instead, they discovered that a small minority of web pages had a very large number of links, while more than 80% of web pages had very few. They realized that the web is one of many networks, from different fields, that are dominated by a very small number of nodes with a tremendous number of connections, while the vast majority of the other nodes have very few links. Such networks are characterized as scale-free, in the sense that the dominant nodes can acquire a seemingly unlimited number of connections, and they are distinguished from random networks, where the distribution of links is more democratic. In scale-free networks, the two researchers explain, new nodes tend to link to the more popular ones, overlooking the nodes with the fewest connections. This leads to a "rich get richer" phenomenon: "as new nodes appear, they tend to connect to the more connected sites, and these popular locations thus acquire more links over time than their less connected neighbors. And this 'rich get richer' process will generally favor the early nodes, which are more likely to eventually become hubs" (Barabasi, 2003: 55).
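The growth rule behind this dynamic can be illustrated with a short simulation of preferential attachment, sketched below under the simplifying assumption that every new page links to exactly one existing page; the network size and random seed are arbitrary choices for the illustration, not parameters taken from Barabasi and Bonabeau's study.

import random

random.seed(42)  # arbitrary seed so the run is reproducible

degrees = {0: 1, 1: 1}    # two seed pages linked to each other
pool = [0, 1]             # node i appears in the pool once per link it has

for new_node in range(2, 10_000):
    target = random.choice(pool)      # chosen with chance proportional to degree
    degrees[new_node] = 1
    degrees[target] += 1
    pool.extend([new_node, target])

hubs = sorted(degrees, key=degrees.get, reverse=True)[:5]
poorly_linked = sum(1 for d in degrees.values() if d <= 2)
print("top hubs (node, links):", [(n, degrees[n]) for n in hubs])
print("share of nodes with at most 2 links:",
      round(poorly_linked / len(degrees), 2))

Running the sketch, a handful of early nodes accumulate a disproportionate share of the links while the overwhelming majority end up with one or two, reproducing in miniature the heavy-tailed pattern the two researchers observed on the web.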

As far as the politics of association of search engines are concerned, the "rich get richer" phenomenon is not a plain metaphor. Although personalization is promoted as a technology that facilitates the user's navigation through the web (Horling, 2009), we should not forget that Google and the other search engine companies are guided by corporate interests. The web has developed into a constantly evolving marketplace, and marketers use web personalization to anticipate the needs of customers. At the same time, media scholars have expressed their worries about the carelessness with which we willingly share our private information online, in order for it to be commercialized by companies. "Today, young people don't mind so much that they share their friends lists, conversations, and navigational habits not only with their acquaintances but also the companies who interpret much of this data," says media theorist Trebor Scholz (Scholz, 2008). With these firms (and possibly government bodies) as daily confidantes, latent possibilities for total control have opened up. In 2011, Eli Pariser described the dangers resulting from this power we willingly hand to search engine companies, stating that "what is good for consumers is not necessarily good for citizens" (Pariser, 2011: 18). According to the concept of the filter bubble, which he analyzes in the book of the same name, the power acquired by the search engine algorithm isolates us inside a bubble, narrowing down the information we can find online, since we can only stumble upon content that agrees with the user profile created for us by the web companies. When we type certain words into a search box, the search engine cannot browse through the enormous amount of data indexed in its database quickly enough to provide us with relevant results. Filters are therefore there "to make a drastic selection of the network nodes to be looked at in order to exclude some and valorise others and their associated linkages" (Ippolita Collective, 2009: 52). And although Google does not reveal its personalization algorithm, it would not be unreasonable to assume that corporate interests play an important role in defining the information presented to us by the search engine: "one can, without falling into unjustified vilification, easily conceive that within a system already biased by approximations caused by filtering, further filters could be added to add, or maneuver into a better position of visibility, those returns that go with paying advertisements, or which carry some doctrinal message" (ibid.). It could be said that search engine personalization creates an illusion of free choice, offering us the capacity to define what kind of information is most suited to our interests, while in fact a large amount of paid advertising is presented to us as relevant results. It therefore becomes apparent that search engine personalization intensifies the "rich get richer" process of the web by sinking deeper into the dark side those pages that cannot keep up with the internet market battle.

Conclusion

The internet was once envisioned as a place for democratic dialogue, one that would realize aspirations for the formation of a public sphere. Soon, however, the intrusion of financial interests turned it into a site of conflict, dissolving the ideas that envisioned the internet as a servant of the public good. When it became apparent that search engines cannot possibly index the whole content of the web, favoring certain web pages and excluding others, web page owners, trying to get ahead in the market game, entered a battle for the highest ranking in search results. As this article has attempted to show, the development of web personalization is pushing us even further away from the ideals and values of democracy, enclosing us inside a bubble where we get the impression that the filtered content we come into contact with is all that exists out there. Although it promises to help us find our way in a sea of information overload, personalization is in fact intensifying the "rich get richer" process of web page ranking, filtering the information we come into contact with according to corporate interests. These findings add to discussions arguing that the algorithm is acquiring ever greater control in shifting the democratic architecture of the web. As Eli Pariser states in The Filter Bubble, what personalization has actually given us is "a public sphere sorted and manipulated by algorithms, fragmented by design, and hostile to dialogue" (Pariser, 2011: 164).


Bibliography

Barabasi, Albert-Laszlo. 2002. "The Fragmented Web." In Linked: The New Science of Networks, 161-178. Cambridge, Massachusetts: Perseus Publishing.

Barabasi, Albert-Laszlo, and Eric Bonabeau. 2003. "Scale-Free Networks." Scientific American 288: 50-59.

Berners-Lee, Tim, and Robert Cailliau. 1990. "WorldWideWeb: Proposal for a HyperText Project." http://www.w3.org/Proposal.html

Dean, Jodi. 2003. "Why the Net Is Not a Public Sphere." Constellations 10 (1): 95-112.

Eirinaki, Magdalini, and Michalis Vazirgiannis. 2003. "Web Mining for Web Personalization." ACM Transactions on Internet Technology 3 (1): 1-27.

Horling, Brian, and Matthew Kulick. 2009. "Personalized Search for everyone." Google Official Blog. http://googleblog.blogspot.nl/2009/12/personalized-search-for-everyone.html

Introna, Lucas, and Helen Nissenbaum. 1999. "Sustaining the Public Good Vision of the Internet: The Politics of Search Engines." Princeton University, Center for Arts and Cultural Policy Studies: 1-41. http://www.princeton.edu/~artspol/workpap/WP09%20-%20Introna%2BNissenbaum.pdf

Ippolita Collective. 2009. The Dark Face of Google. Trans. Patrice Riemens. Ippolita Net: Ippolita.

Lialina, Olia. 2005. "The Vernacular Web." Presentation at the A Decade of Web Design conference, Amsterdam. http://art.teleportacia.org/observation/vernacular/

Lovink, Geert. 2011. Networks Without a Cause: A Critique of Social Media. Cambridge, UK: Polity.

Manoff, Marlene. 2010. "Archive and Database as Metaphor: Theorizing the Historical Record." Portal: Libraries and the Academy 10 (4): 385-398.

O'Reilly, Tim. 2005. "What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software." O'Reilly Net. http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web20.html

Pariser, Eli. 2011. The Filter Bubble: What the Internet Is Hiding from You. New York: Penguin Press.

Rogers, Richard. 2012. "Mapping and the Politics of Web Space." Theory, Culture & Society 29 (4-5): 193-219.

Scholz, Trebor. 2008. "Market Ideology and the Myths of Web 2.0." First Monday 13 (3). http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/rt/printerFriendly/2138/1945

