
hannes.rollin@gmx.de

EVERYTHING IS CONNECTED: DISTRIBUTED WEALTH ON THE INTERNET


HANNES ROLLIN

Interweaving a Close Examination of the Bitcoin Digital Cash Project

Abstract. Everything is connected: not merely people and machines, but the concepts of distributed wealth as well. Digital cash, distributed storage, open source systems and online voting all depend on one another and influence each other, while none of these can be understood without basic knowledge of networks, cryptography and economy...

1. Instead of an Introduction: Does Humor Belong to Science? A number of years ago, the American musician and poet Frank Zappa publicly posed the question to himself and to the philosophically inclined fraction of his listeners: Does humor belong to music? This question, or more likely the examples which Zappa employed to deliberate his question, introduced a period of intense headache for legal censors, who were obviously not paid for their sense of irony. The question was never a regular question for Zappa, but a working hypothesis he put forward and pursued to verify, and in his relentless spirit he himself settled the issue with the seminal Bobby Brown, which was, and unambiguously so, generally thought of as funny in its total lack of self-irony. Of the lyrical ego, I mean. And now I pose the same question in the realm of serious science, where the censorship is hard-wired in the minds of the key characters. Or have you ever giggled1 reading a scientific paper? I daringly adhere to scientific methodology, but I am sick of scientific writing. When I want to tell you, formerly known as the reader, something, I will announce that with the keyword I, not, as is usually done, with a third-person reference to this author or a transference of responsibility via first person plural. Adding to this, I will of course be sure to underpin my brilliance with extensive use of Latin, French and German terms, as if there were no precise English counterparts. What I will not do, I have to apologize, but this is a matter of personal aesthetics, is write in an easy linear fashion. I do interrupt others, so I may well interrupt myself. If you come without formal academic training (in humor-eradication), you might be puzzled by the number of literature references, such as [17], scattered, or rather cluttered, throughout the text. There is a simple formula:
Date: June 27, 2011.
1 Try [82].

The more references, the better and more deeply conceived the scientific merits of the present paper are. Hence, logically, I quoted a bulk of papers without ever reading more than the abstract (which is a smart word for a banal summary, and it does not very well conceal the fact that most papers do not get very concrete after that), and often not even that. Never mind, they all do that. Quoting and citing is still considered chic in the academic world; in some artificial environments quoting rules are rather strict, as if Google had never come into existence and literature research were still conducted in dusty underground archives composed of rotten books and magazines. If you come across a [cite] tag, I merely forgot to insert a link to some scientifically empowered confirmation of an opinion of mine. Perhaps there was none.
Examples, calculations and straying elaborations I will imprison in shrunken paragraphs like this one, hoping to relax the eye and ease the flow of reading. Yes, mark this as a revolutionary moment. Not many scientists at all have tried to be easy to read, never mind accessible to a larger population. Nevertheless, I cannot completely relinquish maths and code when discussing the matters I will discuss, but you will certainly not need a Master's degree in number theory, although that may help at times. Code examples will come as ad hoc invented pseudocode, which means mostly that the simple part is on my side: I choose functions as high-level as I desire, and implementation issues I utterly ignore. Do not hack that code into an existing Visual Studio project: It might run.

It is a common feature of scientific papers that the authors get undressed and uncover their motivations, which I, try as you might, will not do. This convention is just another dogma I will not bow down to, for I had no part in creating it. The rest of the paper is structured as follows: Nay, never will I return to that kind of indulging scientificism. There are great things happening out there in the digital world, which is not so far away as to be properly called an outside opposed to our mental inside. We are digital. As world-wide primary energy production has de facto achieved its, yes, all-time maximum, bumping incomprehensibly along some plateau, and banks and governments have evolved to be recipients of severe (and well-deserved) mistrust, dragging down into the vortex the entire idea of protecting and nurturing central authorities, more and more participants of the mighty internet, from single users to giant grid hosters, come together and form new networks, Internets within the Internet, intentional communities of the digital kind, for a variety of reasons as plentiful as the variety of these networks itself. What, exactly, are those new types of networks, which are not that new anyway? Who are the driving and pushing forces behind them? Where is all that leading? And why is it that precisely the most courageous and advanced projects entail the most brutal downfall? I am talking here of the digital cash experiment Bitcoin [14, 15, 16, 17, 18], which is not perceived by many as an experiment (which it simply is), since it involves real money, and real money can, as we all have experienced too many times, turn adults, who never wanted to grow up, into really serious zeitgenossen;


anyway, Bitcoin was possibly the key trigger event for the writing of this very document, since it comprises a number of critical features, unheard of in this combination: almost total lack of central control, maximum anonymity of transactions, maximum openness of the concept, self-organization in an ur-democratic sense, yet connected to the ordinary money market and attempting to achieve world dominance as a means of payment. When pondering digital cash, you naturally stumble upon related issues such as cryptography, distributed computing, open source software, online voting and legal, eh, problems. Thus, we will elaborate all that. Yet have a glance just one layer deeper, and you become aware that technical and scientific progress cannot be abstracted from one's weltanschauung, just as your weltanschauung cannot be understood without taking into account the zeitgeist, and, better hidden but much more importantly, your menschenbild. What is, in your belief, a man or a woman capable of? How much protection does he need from himself? How well will she become what she is meant to be? What is the nature of punishment? Of the state? When it comes to money, creation and participation, you cannot separate technical discussions from morals and ethics anymore. No, you can't. In true accordance with these my words, I will now lay down my premises, such that you may understand the filter through which I perceive the world. I believe that, in principle, everyone wants to develop optimally, and, even stronger, that everyone knows in a dark and half-conscious manner where his personal path should and would go. Henceforth, I am undoubtedly part of the suspicious (suspicious to the ordinary power circles, of course) milieu where the firm conviction is held that that government is best which governs least. For if, I might redundantly add, you regard and treat people as essentially dumb and unreliable and self-destructive, you unwittingly render them thus, as every father and mother will have to confirm. Yet I am not so blind a follower of market economy as to maintain, among other credos, that success is a result of sensible selection criteria like quality, which it of course is not; simply remember the painful years of evolution of Microsoft's Windows operating system: it was market leader long before it could catch up with reliable operating systems such as Linux and Mac OS. In the real world, obviously, matters become tremendously complex, since for instance people are situated on a great variety of stages measured on the scale of humanity's development itself. Some guys are barely medieval in their attitude, while others, sometimes hilariously so, appear utterly self-controlled and conscious and could immediately populate some sophisticated Star Trek utopia. Hence the power of (distributed) intentional communities: I cannot help but believe in the power of truth, consciousness and cunning tactics. Those evil forces, which, by the way, are always within ourselves and projected or induced or attracted to the outside, can grant sorts of instant satisfaction, but they lack the power to leave you with a sense of achievement in the moment of your death.


2. Peers in a World of Crowds, Grids and Clouds There are a number of movements toward distributed computing and storage, which can, and not unexpectedly so, I presume, be divided into bottom-up and top-down movements. Top-down examples (I will compactly summarize crowdsourcing, cloud computing and grid computing) are entirely conceived and devised by large companies as another means of increasing income, viz. concentrating ever more wealth and influence in the hands of a few. Bottom-up examples, in contrast, are to be seen as spontaneous eruptions of grass-roots movements, often introduced by a handful of people acting as citizens, not as members of businesses or organizations, and if these projects hit the spirit of many in their capacity to fill a void, they are rapidly adopted by great numbers of men and women, giving the whole thing a mostly self-organizing and basically unpredictable character, as is the case with Bitcoin. Bottom-up approaches are generally implemented as peer-to-peer (P2P) networks, circumventing ordinary centralized structures, which always need some trusted third party nobody really likes to trust. Banks. Global companies. Governmental organizations. Anyhow, let us begin with commercial modes of collaboration to understand where the big boys are heading. 2.1. Crowdsourcing. Ever been in need of telephone support and surprisingly connected to a call center on another continent? Well, this is outsourcing in the digital age. The argument goes like this: When telephone calls between any two places on earth are cheap and practically without perceivable delay, call centers do not need to be located in close proximity to the customer. Just move them to the place where labor is cheapest, and better yet, leave the set-up of these call centers to the locals to further externalize costs. India is a good place, since its IT infrastructure is alright, at least in the big cities, of which there is a number, and India provides multitudes of English-speaking and digitally enabled young adults, you may say [cite]. As data transfer rates grow and bandwidth prices slip to the basement, companies also start to think about moving (manual) data processing to the folks, to whom they refer in a, I must append, derogatory manner as crowds. The same is true for professional attempts at financial exploitation of social networks: want to be a Facebook spammer? Note that crowdsourcing refers to the transfer of labor not to another company, but to masses of individuals, which reminds me somehow of the early industrial age. Back then, the foundation of our current proud (yet decaying) digital societies was built by way of extremely cheap labor under precarious circumstances. Now, exposing their view of human nature, many firms utilize crowdsourcing to get projects done that have been profoundly split into small particles to be worked on by single participants, who are neither allowed an overview of the whole nor equipped with responsibility and freedom of decision. This could be different, though. As I will explain in the section on open source,


people are indeed able to self-organize around a more or less well-defined common ground and thereby achieve rather complex goals, often without being paid. Yet increasing intrusion of businesses into the open source sector is not entirely desirable (for me), since assimilation and commercialization of the community's work is always waiting around the corner. It appears to the learned eye that crowdsourcing aims at a true globalization, nay, unification of (digital) labor markets. Of course, businesses have an interest in buying labor at the cheapest price still guaranteeing an acceptable level of quality (is that so?), and this is, on the other hand, an opportunity for information workers with low income requirements, for whatever reason, be they self-sufficient gardeners in upstate New York or Malaysian students. 2.2. Grid Computing. Grid computing is a whole different story. Computing grids are basically computer networks specifically designed and constructed for high performance (in terms of speed, reliability, security...), for a certain pre-defined task (climate simulation, physical experiments), by a certain coalition of resourceful participants, mostly governments, global companies and universities. Currently, many grids bypass the usual internet infrastructure (as is the case with CERN's Large Hadron Collider) for a number of reasons, most of which are closely related to the immense bandwidth requirements and to the poor performance of the transmission control protocol (TCP) under high performance circumstances. TCP is the basis of all ordinary internet communication. Yet, in the realm of high performance computing, the rules are different. High performance computing usually employs giant bandwidths and even more vast amounts of data: much is sent for long periods of time. This is measured via the bandwidth-delay product, which in turn imposes a lower bound on the buffer size.
Imagine two workstations in a grid connected with extravagant 100 GB/s optics, which is a realistic number, and a delay of half a second. The product is 50 GB, which is the minimum buffer required on both sides to prevent data loss.
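In my promised pseudocode, which here happens to be runnable Python, the arithmetic is this (the figures are the ones from the example above, not measurements):

    # Bandwidth-delay product: how much data is "in flight" on the link,
    # hence the minimum buffer needed to avoid stalling or losing data.
    bandwidth = 100e9          # bytes per second (the 100 GB/s optics above)
    delay = 0.5                # one-way delay in seconds
    buffer_size = bandwidth * delay
    print(buffer_size / 1e9, "GB")   # -> 50.0 GB per side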

The next problem with TCP is that ordinarily extensive CPU and memory operations are performed on both sides, therefore the probability of packet loss and time-outs is comparatively high. Finally, a few bulk servents2 share tremendous bandwidth, which is quite contrary to, say, private peer-to-peer networks. There are attempts at improved versions of TCP such as HighSpeed TCP and a couple of novel protocols [62]. A common approach to integrating the omnipresence of the internet with the enormous capabilities of a grid is a so-called three-tier architecture: An intermediate secure server leads the negotiations between a client (normally
2 Servent is an artificial word composed of server and client to clarify that an entity plays both roles. We will meet that term again in the section on peer-to-peer networks, for obvious reasons.


represented by a web browser) on one side and the supercomputers on the other side. Once, in the mid-90s, the term grid computing promised computing power for everyone, but no commercial provider has appeared to this day. If you are interested in further details, please concern yourself with recent lecture notes such as [4] and the forum [50]. There is an old-school, home-user and possibly low-budget version of grid computing, notably its older, yet smaller, brother: cluster computing [96]. A cluster is just a number of computers plugged together (via LAN, Ethernet or Internet) to act as an integrated whole, that is, as one machine. A little while ago, it appeared trendy to outperform conventional supercomputers with clusters made of second-hand workstations. There is, however, a less strict definition of grid, namely any connection of computers for the purpose of a distributed solution of tasks that might otherwise be too slow or even computationally infeasible to solve, such as SETI@home [108], IBM smallpox research [110] and GIMPS [45]. In my opinion, the term grid is ill-conceived in these instances, for they are built upon a classic client-server architecture with little to zero communication between the clients [cite]. Rather, I would propose to move SETI@home et al. to the category of crowdsourcing, since this is what happens, with only two differences: 1. Instead of human labor, these projects capture (supposedly unused) CPU labor; and 2. mostly, clients don't get paid. This is likely to change completely if people at large realize the cumulative power of their computers: The 4.5M contributors of SETI@home add up to 15 teraflops, which outruns IBM's premium supercomputer ASCI White by 25% (the Bitcoin network exceeds 120,000 teraflops; money is an incentive). Alas, whenever money is involved, people feel intrigued to cheat; to prevent this demands the smartest of researchers and developers [36]. I would not be too harsh with hackers and exploiters, for they could be regarded as well-paid colleagues whose job it is to detect weak points and thus enlarge our understanding of networks and cryptography. 2.3. Cloud Computing. Now, cloud computing has received plenty of media attention in recent years, because cloud is a cool metaphor for some remote part of the Internet you can perceive but you cannot really grasp. There is a funky lab at Berkeley, the Reliable Adaptive Distributed Systems Laboratory (RAD lab, [97]), which gets even more funky when you look at the list of its founders, including such splendid names as Google, Microsoft, IBM, Facebook, Amazon and Siemens. What is the common denominator of these companies? All of them have expanded to a ridiculous size, make ludicrous amounts of money, have incredibly over-provisioned data centers scattered all over the planet and do not know when it's enough. There you have the key enabler of the whole cloud computing hype: spare capacity of the big fish. Essentially, it's the SETI@home idea on the level of Google and Amazon. A relatively recent survey by RAD lab [5] defines cloud computing as a combination of Software as a Service (SaaS) and Utility


Computing minus private clouds (clusters, basically), and highlights the advantages both for users and cloud hosts, the users' bonuses being: (1) There are no visible hardware constraints, hence there is seemingly no need to plan ahead, which is a great buffering device for fluctuating demand: you don't have to build large data centers like Google to serve peak demand, nor do you have to disappoint potential customers who cannot be admitted in those times. (2) Cloud users do not need to commit to real hardware: They can start virtually any IT business from an iPad, so to speak, and leave computation and storage to the cloud (meaning, of course, to that data-hungry bunch of Google, Microsoft, Amazon etc.). (3) Pay as you go: You pay only what you need in terms of CPU cycles, disk storage and bandwidth, whereby efficient and concise use is encouraged. Nice. (4) Cost associativity: This means merely that 1000 hours of one machine cost pretty much the same as one hour of 1000 machines. Also nice, if you are tightly scheduled and liquid at the moment. A few cloud providers are already there and, not very surprisingly, names begin to repeat: Amazon EC2 (low-level access for the developer), Microsoft Azure (mainly .NET-bound) and Google AppEngine (caged and over-managed). RAD lab [97] presents a primitive formula that helps to decide whether to move your operations from your own data center (you got one, don't you?) to the cloud. There are a number of obstacles to overcome3, but tremendous means are directed towards tackling them. The most fun I had was when I learned that the fastest and most reliable method to transfer data terabyte-wise over large distances is FedEx. Yes. This is trivially explainable by the fact that FedEx sends 10 TB just as fast as 100 TB or 1000 TB: the more data, the more compelling the shipping option becomes. Jim Gray [49] experimented a bit and found only one disk failure in 400 trials of sending hard disks by conventional mail.
This is actually excellent. Imagine you send three identical disks via three distinct services, each with a failure probability of p = 1/400; then the probability of all three disks being corrupted amounts to p^3. Now, if you repeat this process n times, the number of total failures is binomially distributed with probability p^3 and expected value np^3. When is this expected value greater than one? Precisely if n > 1/p^3 = 6.4 x 10^7, that is, in 64 million shippings you expect one complete failure of all disks. If that is too soon for your taste, send four disks. Of course, if this becomes general practice with sensitive data, the bad guys will start disguising as UPS drivers or just buy UPS.
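The same arithmetic in runnable form (the failure rate is Gray's, the script is mine):

    # Redundant shipping: chance that all k independently shipped copies fail.
    p = 1 / 400                  # per-disk failure rate, from Gray's 400 trials
    k = 3                        # identical disks via three distinct services
    p_total = p ** k             # all copies corrupted: 1.5625e-08
    print(p_total)
    print(round(1 / p_total))    # -> 64000000 shippings per expected total loss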

To be entirely just, as I sometimes strive to be, I have to mention at least one open source cloud computing system targeted at an academic audience:
3 Availability of service, data lock-in, data confidentiality and auditability, data transfer bottlenecks, performance unpredictability, scalable storage, bugs in large distributed systems, scaling quickly, reputation fate sharing, software licensing [97].


Eucalyptus [88], which is implemented as an Infrastructure as a Service (IaaS), meaning not much more than its inherent ability to give developers complete access to virtual machines combining various, perhaps distributed, physical resources. Yes, virtual machines (VMs) will have the time of their life in the forecasted cloudy days. In the eyes of its pursuers and advocates, cloud computing is fit to lift computing to the realm of basic utilities, joining water, electricity, gas and telephony there [21], and scientists are, naturally, intrigued when considering the giant computing power available for cheap money through simple interfaces, especially since popular (expensive) software such as Matlab and Mathematica nowadays has built-in cloud capabilities [42]. We have seen a steady evolution from simplistic client-server architecture via cluster computing and grid computing to, finally, the holy grail of cloud computing, and in contrast to the evolution of species, all of these techniques co-exist, partly in awkward ways, in the very same ecosystem known as the Internet. It should be clear by now, and may even further be clarified by a direct grid vs. cloud comparison such as [41], that the good old days of personal computing power (which is, after all, empowering the single citoyen) are bound to end in favor of those silly tablets depending on remote services that come under equally silly names. Ahh. They removed the keyboard, I sometimes muse, not merely for aesthetic reasons, but to maximize the consumer-producer ratio. 2.4. Peer-to-Peer Networks. Peer-to-peer networks, as intentional connections of users, or servents, having the same rights and being both clients and servers, have been around for a long time now, their first popular field of employment being multiplayer games, one of the main drivers of innovation anyhow. The basic structure is always the same and very easy to understand: There is a logon site4, where a fresh user gets information on all or (better) a reasonable fraction of currently online users and their IP addresses. He then, leaving the logon site to its own devices, directly connects to the addresses he is equipped with and does whatever the network encourages: sharing files, anonymously trading digital cash or shooting one another (a minimal sketch of this logon bookkeeping follows the two lists below). When the user leaves the network, the logon site is usually informed and marks his status as offline (you know Skype [10]?). The advantages are instantly clear: (1) There is no single point of failure (excepting the logon site): When John's computer in San Diego crashes during a peer-to-peer gaming session, his fellows in Denmark and Canada can continue shooting each other as if nothing had happened. (2) No ordinary central repository and high performance server needed: CPU, disk and bandwidth demand of the logon server is marginal and can be handled by a cheap shared server in many cases.
4 Have many, since this is the most exposed part of any peer-to-peer network.


(3) No central authority necessary. Here it becomes interesting: Due to its distributed and inherently self-organizing nature, anyone who desires so can start a simple peer-to-peer network and manage it independently. Anyone can, more or less, depending on the originator, enter and leave the network at will. (4) You can, as is a pretty recent development, treat servers or grids as members of a peer-to-peer network. There are shy attempts at research in that direction, which goes by the name peer-to-peer computing [43], but I will not concern myself with that. As with every promising technology, we do not get away without a few serious challenges: (1) Peer-to-peer networks are able to penetrate the remotest corners of the Internet, thus an arbitrary peer-to-peer network can easily grow to a tremendous size. If communication needs or data generation of the network are high, we have a problem. (2) Lack of centralization and hierarchy implies equal distribution of power in a political sense: how can we achieve fair networks? (3) Continually entering and leaving users constantly change the network topology (who is connected to whom), and thus change content and load of the network, making routing non-trivial.
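Here is the promised sketch of the logon-site bookkeeping, in runnable Python; the class and method names are my invention, and real networks (Gnutella, Bitcoin) differ in countless details:

    import random

    class LogonSite:
        """The (dangerously central) logon site of a toy peer-to-peer network."""
        def __init__(self):
            self.online = {}                     # user id -> IP address

        def logon(self, user_id, ip, sample_size=5):
            # Hand the newcomer a reasonable fraction of current peers...
            peers = random.sample(list(self.online.values()),
                                  min(sample_size, len(self.online)))
            self.online[user_id] = ip            # ...and register him
            return peers

        def logoff(self, user_id):
            self.online.pop(user_id, None)       # mark his status as offline

    site = LogonSite()
    site.logon("john", "10.0.0.1")
    peers = site.logon("jane", "10.0.0.2")       # jane now connects directly
    print(peers)                                 # -> ['10.0.0.1']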
Most peer-to-peer networks base their communication on flooding, which means that user requests are transmitted over the entire network: Babylonian conditions! This is overcome, by protocols modelled after Gnutella v0.4 [46], by adding a time-to-live value to each query, which is decreased on each hop, but 1. the time-to-live value must be sufficiently high to ensure that you find your mp3 file in the network, thus peer-to-peer network sizes are restricted to less than 100,000 [106]; and 2. some networks need a maximum of confirmations from users (such as Bitcoin). In an optimal case, where no query is posted to a node who has already received that query (the attentive reader immediately recognized the network topology: a tree!), there are 2(n - 1) messages, namely the edges of the tree counted twice, passed back and forth between n nodes. This poses serious limitations to the Bitcoin project's ambitions, as I will perhaps explain later. Some networks designate super nodes at logon time, elected for their superior resources, which are utilized as miniature servers for their immediate surroundings. Thereby, a secondary hierarchy is added (to the otherwise flat network) and scalability, which here refers to enlarging the network, is increased. You get an even finer hierarchy if you link the number of connections to, say, bandwidth or CPU speed of each peer. But nevertheless, a hierarchically organized network is usually dominated by a few highly connected members, who are vulnerable to malicious attacks, requiring sophisticated distributed recovery methods still under research [67]. There is interesting research [83] comparing peer-to-peer networks with complex adaptive systems (CAS), which are used in biology and the like to model the behavior of collections of simple autonomous agents interacting in simple manners (like ants), yet showing complex and often unpredictable behavior as a whole.


When this CAS model is utilized to construct peer-to-peer frameworks, developers might be able to embed desirable global behavior such as adaptation, self-organization and resilience without actually coding those properties; keyword: swarm intelligence. The alternative is uncomfortably complex routing protocols [63].
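To make the flooding arithmetic of the aside above concrete, here is a toy simulation, not a network protocol; the topology is an adjacency dict and everything else is my own simplification:

    def flood(topology, start, ttl):
        """Count messages sent when `start` floods a query with a time-to-live."""
        seen = {start}
        frontier = [start]
        messages = 0
        while ttl > 0 and frontier:
            next_frontier = []
            for node in frontier:
                for neighbor in topology[node]:
                    messages += 1                # every forwarded copy costs one
                    if neighbor not in seen:     # duplicates are dropped on arrival
                        seen.add(neighbor)
                        next_frontier.append(neighbor)
            frontier = next_frontier
            ttl -= 1                             # decrease time-to-live per hop
        return messages

    # A path of n = 5 nodes (a tree): every edge carries the query once in
    # each direction, giving the 2(n - 1) messages claimed above.
    path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(flood(path, 0, ttl=10))                # -> 8 == 2 * (5 - 1)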

How do you know when to create a peer-to-peer network? There are three simple conditions to check [11] (sorry for calling the participants nodes): (1) Is there a number of nodes with some pair-wise dependencies? (2) Does each node depend on many, or better most, nodes? (3) Can those dependencies be reduced to at most a few give-and-take modes? If you answered all questions with yup, you are ripe to set up a peer-to-peer network. In April 2001, an open source project called JXTA [60], promoted by Sun Microsystems, was launched, which provides open peer-to-peer protocols. If you are more of an experimenter, you could also try out Anthill [6], a CAS-based peer-to-peer framework. 2.5. Peer-to-Peer as a Tool of Revolution. Yet the most interesting perspectives open up when you regard peer-to-peer as a direct, and threateningly direct, counterdesign to cloud computing. The latter aims at further centralization of (computing) power, hence political power, while the former empowers the people. And believe me: Peer-to-peer networks will not remain restricted to filesharing and gaming. As fast as you can spell Bitcoin, the message went around the world that computing power implies political power. Financial systems, political systems, knowledge systems, legal systems have always been an illusory fiction, or semi-conscious agreements kept alive by the confidence of a majority, as we are occasionally reminded by minor breakdowns. But since these systems went digital, there is no longer a reason not to present better systems and issue them for competition; we have the means right before our eyes: It is peer-to-peer technology.
Nevertheless, mere computing power is not enough; the will to achieve a political goal is the pivot, witness for instance Bitcoin's desire to replace all conventional currencies.

3. An Unscrupulously Quick Introduction to Cryptography As soon as the data in peer-to-peer systems is a little more sensitive than prosaic highscores and the like, cryptography becomes a major issue, meaning all the techniques entailed by the need to communicate in the presence of some real bad guys, who are, for phonetic reasons, I presume, in cryptographic literature generally named Eve, having a special taste for eavesdropping, network weaknesses and immense computing power5. Communication then always happens between two persons called Alice (A) and
5 You could employ Google AppEngine to crack Google's mainframe access, couldn't you? Remark: Footnote humor is underestimated.


Bob (B), also a convention since times immemorial. My short account is mainly inspired by Ron Rivest's (to whom we owe the R in RSA) accessible lecture notes [101] and Goldwasser's more recent book-sized lecture notes [48] on cryptography, both from MIT. 3.1. Cryptographic Concepts. Strictly speaking, cryptography is a part of discrete mathematics, yet I am most interested in its application to the digital environment. It begins with fear. And, as everyone informed about successful attacks will agree, you cannot be too paranoid when it comes to computer security. Hence, you verbalize your fears, resulting in some security policy, which is then realized by existing or newly developed cryptographic concepts, formally known as security mechanisms. These should be implemented so as to guarantee a sufficient degree of confidentiality, integrity and availability of the system, as desired. I may inform you that security mechanisms do not only involve number theory and fancy algorithms, but also social aspects (tell everyone how to pick a safe password and when to change it) and physical aspects (tape your backup USB stick underneath your desktop). As no security system is eternally and unalterably failsafe, it is most important, though at times underestimated, at times shamefully neglected, to integrate some auditing or logging tool. When Eve was in there, we want to know precisely how she managed to do that and precisely what she did. If some dimwitted script kids crashed our server with a variant of a distributed denial of service (DDoS) attack (imagine two hundred children evoking the breakdown of an ice cream vendor by all wanting to be served first), we want to retrace times and places of the originators etc. [101, lecture 1]. Unlike Shannon [109] and other early, mostly mathematically inclined, cryptographers, who presumed Eve to have unlimited computational power, which is just hilarious, modern cryptography merely expects Eve to operate at the upper region of the state of the art and a little above, to compensate for future hardware development.
This compensation has generally been underestimated, although Moore's law and similar statements indicated (temporary on a finite planet) exponential growth of computation speed. For that reason, once widely accepted algorithms have become obsolete, including MD2, MD4, MD5, Panama Hash, DES, ARC4, SEAL 3.0, WAKE, WAKE-OFB, DESX (DES-XEX3), RC2, SAFER, 3-WAY, GOST, SHARK, CAST-128, Square.

Thus, an encryption system is deemed safe if it is computationally infeasible (meaning annoyingly slow) to break it. At the same time, nothing is gained unless encryption is fast and efficient. This is the conflict that has to be endured. The process of encryption itself is intuitively clear: Some useful message m is, by way of an encryption function f, turned into a cyphertext c, the latter being absolutely useless to Eve. More formally [48]: (1) It must be hard for Eve to reconstruct m from the cyphertext. (2) It must even be hard for Eve to reconstruct mere parts of m from the cyphertext.


(3) It must even more so be hard for poor Eve to detect or extract simple yet interesting facts about message traffic. (4) All of the above must hold with high probability. 3.2. Cryptographic Hash Functions. Hash functions are the solution to a whole family of apparently unrelated questions: (1) How do I find a long text in a database when I don't want to perform a tedious full text search? (indexing) (2) How do I save a login password on the local hard disk without revealing it to Eve, who might have installed a trojan? (password hashing) (3) How do I know that Alice's message to Bob hasn't been changed on the way to Bob? (message authentication) And many more. Hash functions, of which the cryptographic ones are a famous subset, achieve all that with a surprisingly simple feature: A hash function h takes a string of almost any length (restricted by concrete implementations) and maps it to a fixed-length bitstring.

h : {0,1}^(<2^64) -> {0,1}^256

This h, it could be SHA256, the hash function used by Bitcoin, maps all binary strings of fewer than 2^64 digits to a 256-bit string. Thus, to answer question (1): if you save a text to the database, also save its hash value in a dedicated column. When you search a text, simply search for its hash value. This is also known as the dictionary problem, primordial ooze of hashing. Question (2) is equally quickly solved: When the user enters his password, it is hashed and compared to the value in the password file, where the password has been saved in its hashed (hence compressed) version. Question (3) is no more complicated: Alice just appends a hashed version of the message to the message itself. Bob then merely hashes the message and compares with the hash value provided by Alice (which is known as a message authentication code, or MAC)6. Hash functions should distribute every possible message evenly over the output space, and little changes in the message m should result in massive changes of the hash h(m). There are two important formal concepts related to these desired features: Weak collision resistance. For a given hash value h(m1), it should be hard to find another message m2 such that h(m1) = h(m2) (if this is not the case, Eve may find an access-granting password without
6 Disclaimer: This simplified method relies upon Eve not having knowledge about the hash function. If she had, she might just catch the message, produce her own, append the hash of the new message and send that to Bob. A workaround is to set MAC = h(m||k), that is, to append a shared private key to the message before hashing.
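The three questions above, answered in a few lines of Python with the standard library's SHA-256; be warned that real systems would use a salted, deliberately slow password hash and HMAC rather than my naive h(m||k):

    import hashlib

    def h(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # (1) Indexing: store h(text) in a dedicated column, then search by hash.
    row_key = h(b"some very long text ...")

    # (2) Password hashing: the disk sees only the hash, never the password.
    stored = h(b"correct horse battery staple")
    assert h(b"correct horse battery staple") == stored

    # (3) Message authentication, in the spirit of footnote 6: MAC = h(m||k).
    def mac(message: bytes, shared_key: bytes) -> str:
        return h(message + shared_key)

    print(mac(b"pay Bob 5", b"our secret"))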


ever knowing yours...7). Hard, in this case, I might again redundantly add, is usually defined in terms of feasibility. The fastest method to find m2 should be key space search, otherwise known as brute force attack, where Eve successively tries out all possible input messages. Strong collision resistance: For a given hash function h, it should be hard to find m1 and m2 such that h(m1) = h(m2). Practically the entirety of all known hash functions are derived by the so-called Merkle-Damgård construction [81]. This means we start off with a compression function g, which takes two blocks of, say, 256 bit and muddles them into one block of 256 bit, such that each bit is mixed with every other (the infamous avalanche effect).

g : {0,1}^256 x {0,1}^256 -> {0,1}^256

Then, the hash function is just an iteration of these: split the message m into blocks of 256 bit, do some padding if the last block is too short, and execute g in an iterated fashion;

f(m) = g(m_k, g(m_(k-1), g(m_(k-2), ..., g(m_3, g(m_2, m_1)))))

(assuming k blocks). This way of quick hashing is, sadly, not nearly as secure as was believed [29]. Yet, all the scientists could come up with was hotfixing Merkle-Damgård to suit their (updated, I suppose) definition of security. There is not yet a good monolithic hash function around; maybe you or I should propose one.
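The iteration, spelled out in Python; the compression function g here is a deliberately silly stand-in of my own, only the surrounding construction is the real Merkle-Damgård:

    BLOCK = 32   # bytes, i.e. 256 bit

    def g(left: bytes, right: bytes) -> bytes:
        # Toy compression: XOR then rotate by one byte. A real g (as inside
        # SHA-256) mixes far more thoroughly to get the avalanche effect.
        mixed = bytes(a ^ b for a, b in zip(left, right))
        return mixed[1:] + mixed[:1]

    def md_hash(message: bytes) -> bytes:
        # Zero-pad to whole blocks (real padding also encodes the length).
        if len(message) % BLOCK or not message:
            message += b"\0" * (BLOCK - len(message) % BLOCK)
        blocks = [message[i:i + BLOCK] for i in range(0, len(message), BLOCK)]
        state = blocks[0]
        for block in blocks[1:]:   # f(m) = g(m_k, g(m_(k-1), ... g(m_2, m_1)))
            state = g(block, state)
        return state

    print(md_hash(b"everything is connected").hex())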
3.3. Cryptographic Challenges. You can use cryptographic hash functions to construct so-called cryptographic challenges, that is, you present an answer (the hash) to the user and let her compute the question (a message that is hashed to the desired hash, namely a collision). Since the fastest way to do that is (allegedly) key space search, you can give a pretty exact guess how many operations are needed on average, can't you?

7 Adobe strangely constructed its PDF docs' encryption such that not the password, but the hash of the password is used for encryption. Thereby, I was able to crack several 64-bit-encrypted PDFs by key space search. Nowadays, longer keys are (generally) used.

Suppose a 10-bit hash, for didactic purposes, hence any input is mapped to a 10-bit string, of which there are 2^10 = 1024. Now, if we assume, as is generally done to make matters handy, the range of the hash function to be uniformly distributed with respect to its domain (meaning: a set of uniformly distributed strings of the key space is hashed to a uniformly distributed set in {0,1}^10), then you can conclude that one out of 1024 strings of the key space gives the desired hash (p = 1/1024). Define the (binomially distributed) random variable X as the number of collisions in n trials. Thus

P(X > 0) = 1 - P(X = 0) = 1 - (1 - p)^n

If you want at least one collision with a probability greater than 50%, you need to compute

P(X > 0) > 0.5, precisely if n > log 0.5 / log(1 - p), which is about 710

Thus, you need to evoke the hash function at least 710 times until the probability of a collision exceeds 50%. Note that we have said nothing explicitly about the key space! Our tacit premise, however, was an infinite key space. The number of hash function calls thus depends not on the key space size, but entirely on the hash size. The greater the hash size, the smaller p and the greater again n becomes. And generalized: To find a k-bit collision in one trial, we expect a probability of p = 1/2^k. The expected value of X is np, and if we want this expected value to be greater than 1, we need n > 2^k trials. Thence, the average effort is exponential in the size of the required collision.
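A quick numerical check of the 710, and of the exponential blow-up (my computation, under the same uniformity assumption as above):

    from math import log

    p = 1 / 1024                          # 10-bit hash, uniformity assumed
    print(log(0.5) / log(1 - p))          # -> 709.7...: at least 710 evaluations
    k = 20                                # a 20-bit partial collision instead
    print(log(0.5) / log(1 - 2**-k))      # -> ~726817, roughly 0.69 * 2**k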

Modern hash functions give strings of 256 bit or more, which makes it infeasible to generate a collision. Thus, people are generally content with a partial collision, meaning: Give me a string whose hash corresponds to the first, say, 64 bit of this hash here; for instance, I want a hash with 64 leading zeros. This technique was proposed by Back [7] as a proof-of-work to fight e-mail spam: If every sender has to perform such a proof-of-work as a kind of payment to the mailserver, no script can send thousands of mails just like that. Sadly, it has been proven that proof-of-work proves not to work [71].
This is mostly due to economic factors: Spammers buy robot time and calculate their expected gain with respect to the number of sent e-mails and average hits (profitable recipients, that is). If you want to block those activities, proof-of-work has to be more costly than the expected gain, requiring enormous barriers for all the average e-mail users [ibid.]. The second reason is that computing power varies greatly, from second-hand cell phones to game developer workstations (and your private cluster, you may add).
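For flavor, a hashcash-style challenge in Python: find a nonce whose hash starts with k zero bits; this is, modulo many details, also what Bitcoin miners do all day. The function is my own sketch, with k restricted to multiples of 4 to keep the hex comparison simple:

    import hashlib

    def proof_of_work(message: bytes, k: int) -> int:
        """Find a nonce so SHA-256(message + nonce) has k leading zero bits."""
        target = "0" * (k // 4)            # k zero bits = k/4 leading hex zeros
        nonce = 0
        while True:
            digest = hashlib.sha256(message + str(nonce).encode()).hexdigest()
            if digest.startswith(target):
                return nonce               # the "question" to the given answer
            nonce += 1

    # Expected effort doubles with every added bit: about 2**16 tries here.
    print(proof_of_work(b"no spam today", 16))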

3.4. Symmetric Encryption (Shared Private Key Systems). Symmetric encryption means just that your encryption function f, the shared private key, is its own inverse: f = f^-1. Therefore, if Alice sends a cyphertext c = f(m) to Bob, he can simply decrypt by computing f(f(m)) = f^-1(f(m)) = m, and the other way round. A presumably didactically useful, but nevertheless impractical example is the one-time pad, where Alice and Bob agreed beforehand on a list of shared private keys, which are used one by one such that none is used twice. This is one of the few constructions allowing complete security in theory. In practice, you wouldn't want to apply one-time pads, for at least three reasons. (1) How do you share the pad? Encrypt with another one-time pad? Ha, ha. The only way for Alice to communicate something privately to Bob is to whisper it in his ear.


(2) If more than two fellows share a pad, the entire system of secret communication is temporarily disclosed if one of the pads is stolen by Eve before anyone notices it. This happened several times in WWII, when allied forces managed to capture German submarines, including pads. (3) How do you ensure that Eve didn't corrupt the encrypted message? Maybe you just didn't use the correct code page in your pad? Despite all these disadvantages, which hold for any shared private key system, there is one strong advantage: Symmetric encryption is supremely helpful if no communication is involved. If you want to conceal some files from the eyes of your boss, who has admin access to your system, or you want to encrypt your digital wallet file on your USB stick (which I strongly suggest), you can simply do that with one of many strong symmetric encryption algorithms8. They are pretty secure, fast and easy to use, and you effortlessly find open source libraries such as Crypto++ [31], pre-implemented functions in .NET and ready-to-use encryption applications.

8 AES (Rijndael), RC6, MARS, Twofish, Serpent, CAST-256, IDEA, Triple-DES (DES-EDE2 and DES-EDE3), Camellia, SEED, RC5, Blowfish, TEA, XTEA, Skipjack, SHACAL-2
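Back to the one-time pad for a moment: it fits in a few lines, because XOR is its own inverse, so the same function encrypts and decrypts (a textbook sketch, with the pad exchange conveniently ignored):

    import os

    def otp(message: bytes, pad: bytes) -> bytes:
        assert len(pad) >= len(message)     # and never, ever reuse a pad
        return bytes(m ^ p for m, p in zip(message, pad))

    pad = os.urandom(32)                    # agreed upon beforehand, somehow
    c = otp(b"attack at dawn", pad)         # f(m): useless to Eve
    print(otp(c, pad))                      # f(f(m)) = m: b'attack at dawn'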
3.5. Asymmetric Encryption (Public Key Systems). The plain idea here is that the encryption function f is unequal to its inverse9: f ≠ f^-1, and both functions f and f^-1 should be easy to construct together but hard to invert.

9 Hence asymmetric.

The term public key systems comes from the following nifty construction: Both Alice and Bob construct a pair of keys for themselves; for instance, Bob designs a public key f_B, which he publishes, and a private key f_B^-1, which he keeps secret (maybe even symmetrically encrypted). Alice does the same. Now, if Alice wants to send a message to Bob, she just encrypts it with Bob's public key f_B, which is publicly available: f_B(m) is sent to Bob. Observe the elegant fact that only Bob has the means, namely f_B^-1, to decrypt messages encrypted with his public key:

f_B^-1(f_B(m)) = m

But public key encryption contains yet another nice feature: Authentication, meaning basically a digital signature. Maybe encryption is not an issue, but ownership of the message (which might be a work of art published online). Then Alice could just encode her message, read carefully, with her own private key f_A^-1. Now, everyone is able to decrypt that message with the use of Alice's public key:


f_A(f_A^-1(m)) = m

But when you have decrypted Alice's message thus encrypted, you know for sure that it had been initially encrypted by Alice and by Alice alone10. The best of all is: You can combine both ways to encrypt and sign your message, in either order. Imagine Alice sends
f_A^-1(f_B(m))

to Bob. Anyeve can get as far as f_B(m), since f_A is public, but not further: f_B^-1 is Bob's private property, well protected on a hardware-encrypted smartcard. Thus, only Bob can get to m:
f_B^-1(f_A(f_A^-1(f_B(m)))) = f_B^-1(f_B(m)) = m

Some suggest to first encrypt and then sign (to prevent unnecessary computation when the signature is invalid); others like to sign first and then encrypt, so as to make the signature invisible; it depends on the context. There are a number of famous public key schemes, for example RSA, DSA, ElGamal, Nyberg-Rueppel (NR), Rabin, Rabin-Williams (RW), LUC, LUCELG, DLIES (variants of DHAES), ESIGN, yet RSA is probably the most famous one.
RSA relies on the fact that it is way easier to multiply two large prime numbers than to factor their product when the primes are unknown. If you devise a (probably probabilistic) algorithm achieving this factoring in polynomial time, well, a tremendous amount of software will have to be reimplemented: the entire Internet, it appears, is built upon public keys tossed this way and that.
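To see the whole f/f^-1 machinery at work, here is textbook RSA with laughably small primes; for arithmetic intuition only, since real RSA needs roughly 2048-bit moduli, random padding and constant-time code (and the modular inverse via pow needs Python 3.8+):

    p, q = 61, 53                  # two "large" primes, known only to Bob
    n = p * q                      # 3233: the public modulus
    phi = (p - 1) * (q - 1)        # 3120
    e = 17                         # public exponent, coprime to phi
    d = pow(e, -1, phi)            # private exponent: e * d = 1 (mod phi)

    m = 65                         # the message, encoded as a number < n
    c = pow(m, e, n)               # f_B(m): anyone can encrypt for Bob
    assert pow(c, d, n) == m       # f_B^-1(f_B(m)) = m: only Bob decrypts

    sig = pow(m, d, n)             # signing is applying the private key...
    assert pow(sig, e, n) == m     # ...and anyone verifies with the public one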

Yeah, and that's about everything there is to say about cryptography. On less than five pages. If you bravely digested all of this section, or if you didn't have to read it because you knew all that already, but then you would probably not be reading this, well, then you are perfectly equipped for the mind-boggling challenges that you will stumble upon in the next section, which finally introduces some of those dangerously enabling applications of peer-to-peer technology. 4. Distributed Wealth in a Distributed World 4.1. Open Source Development. As with many things under the skies, we have some vague feeling of knowing what exactly open source software is, but as soon as we are compelled to clarify our view, we have to admit: We do not know that much. So, under that premise, what, actually, is open source software? Who is using it, who is making it, what are the obstacles and opportunities? A good starting point is a recent metastudy (study of
10 Warning: Eve could still hack the public key directory, put her own public key there, named Alice, and publish under Alice's name henceforth. Nothing's ever that simple.


studies) by Crowston et al. [30], from which I drew most of the resources cited in this section. First of all, as the name implies, it is software that comes with its own source code, such that anyone may examine, steal, modify or redistribute the hard work of others to various degrees, depending on the license at hand (and there is a veritable zoo of licenses, from the laissez-faire BSD license to self-propagating copyleft licenses such as the GPL, which intend to perpetuate the open source ideology par force), which appears, possibly, strange to someone educated in free market ideology and survival of the richest. Open source software is often free of charge. Supposedly 87% of US businesses use software of that peculiar kind [120], and most of you equally, for instance:

Linux
Apache Web Server
Mozilla Firefox
OpenOffice
The Gimp
Compilers/Interpreters such as for Perl, Python, C (gcc)
sendmail, bind
Eclipse
Blender
eGroupware, openCRX
Google's Android

And many more. An incredible amount of open source software is to be found out there. You cannot even subtract it from the current electromagnetic civilization: it came to my ears that some space satellites run a Linux of sorts11. The advantages are obvious: Cheap or free software, often no company involvement (although that particular movement is rapidly gaining momentum again, for obvious reasons; keywords: crowdsourcing, cleanwashing), open discussions on security and usability, the comfortable feeling of belonging to the good ones. Due to the inbuilt slowness of research, caused by the urge for utmost precision, combined with the speed and flexibility of open source development (reportedly exponential growth of the number of projects [47]), not much is scientifically known about the very people behind those numerous projects [105]. There are, as of 2007, more than 800,000 programmers involved [117], mostly in the United States and, with a much higher percentage of national populations, in the UK, France, Germany and, not to forget, Finland, homeland of Linus Torvalds. Note: Open source contributors come from rich countries. Programming is either their hobby (which is clearly an understatement, for open source programmers regularly achieve world-wide impact, not a common property of a hobby) or their employer is resourceful enough to pay them for doing open source development
11 http://www.linuxjournal.com/article/7767


you understand that the employer must be way above the ordinary fight for economic survival. In my personal opinion, the open source world came into existence as a countermovement against overpriced (and occasionally poor) software; that is, the first popular open source projects were clones of already existing commercial software. Reason: You need only a few computers, operated by guys with guts and brains, to rebuild just about any software, while physical products require exceedingly complex fabrication and international collaboration and unimaginable amounts of energy and money: ever tried to rebuild a Xeon processor or a BMW? As is laid out in detail in [30], most developers are first drawn to open source by a personal need for some software, be it for financial reasons (there would be far fewer 3D modellers if we didn't have Blender and everyone interested had to buy or download (a ripped, buggy version of) 3D Studio Max or Cinema 4D) or functional reasons. Most contributors stay only a short while, but others make a kind of career, including a certain sequence of steps: (1) Bug reports and feature requests. These first make the newbie known to the core development team (CDT); besides, anyone can do that. (2) Bug fixes. If the contributor is better known and involved for a time, he may successfully propose bug fixes. (3) Feature contribution. When you have successfully fixed some bugs, you may well propose new features or functionality, which is implemented after careful examination by CDT members. (4) In the long run, you may become part of the CDT, responsible for releases and coordination of contributing members. However, this last step into the inner circle hinges completely on personal appointment by the CDT12. How are open source projects organized? I mean, internationally dispersed nerds have to be held together somehow. Most projects use sites like SourceForge13. Large-scale projects such as Firefox and Linux mostly have their own infrastructure, some relying on Linus Torvalds' git, a fast version control tool (things get highly non-trivial when several developers work on the same file in different locations). But all the complexities can be boiled down to just three core functionalities an open source hub must possess:

12 It appears to be way easier to start your own open source project than to enter the CDT of an existing one. Something is not quite right here.
13 SourceForge at sourceforge.net is part of Geeknet, which is listed on the stock exchange, and reportedly hosts around 300,000 projects, such as the audio editor Audacity and the online game engine Arianne RPG.


(1) A reliable, secure and version-safe code repository (hosted on a central server such as github14 or stored in a distributed peer-to-peer network; see the section on distributed storage below). (2) Some tools for communication and coordination (you can get away with mailing lists and wikis). (3) A bug tracking database, according to the principle find bugs once [58]. Despite all the obviously successful projects, doubts might come to your mind, questions whether quality can be guaranteed under such, say, unsafe circumstances. Well, if you have not read the introduction, you should do that. Otherwise, I merely remind you that humans are possibly much more clever and fair than their reputation would tell. Big companies are infamous for their long-term planning of software projects (which turn out having to be refactored multiple times anyhow), but software is not an industrial product you can specify beforehand and then produce with an army of specialists according to those mind-killing specifications. There is a recent German book, entitled Software entwickeln mit Verstand (software development with brains) [35], where it is neatly explained that, as software itself is a problem-solving tool, software development is an ever new problem-solving process; thus developers have to create preliminary representations, play around with them, show them to the users, and then, now better understanding the problem, improve their representation, play around with that... Software development is a cyclic process which can never, I repeat: never, be planned or specified in advance, since the result is basically unknown. It is not a production process. It is a quest. And now, I presume, you will have a beginning of an understanding of the beauty of the open source concept, as it is built around the humble search for an unknown solution. More popular projects have vast numbers of contributors and testers, where testing essentially means usage of a prerelease (Raymond put it right: Release early, release often. [99]) followed by bug reports and feature requests. A small CDT, often fewer than a handful of developers, keeps it all together. And maintenance, in contrast to huge software companies, is not merely eternal bugfixing and releasing of dubious service packs, but reinvention, a continual re-thinking and re-adaptation of where the project is moving. There is empirical evidence that open source developers are much more actively inclined to reuse code, from single lines to entire classes and DLLs, than commercially employed developers [52]. A Microsoft-sponsored report found distributed coding contributing no more failures to the final project than localized development [12]. Other researchers [111] define a mean

14 github.org ostensibly hosts more than 2M repositories. That enterprise gives you free repositories for open source and charges proprietary projects. Rightly so!


developer engagement (MDE) as an indicator of agility (a development paradigm forcing ever-increasing development speed) and found this MDE to stabilize at high values for years for most projects. So, research is slowly beginning to acknowledge what everyone who switched from Internet Explorer to Firefox has experienced: Most open source software projects are more stable, faster and provide a better look-and-feel than their centralistically conceived commercial counterparts. Besides, many open software projects close a gap that has been left by the industry because there was no profit in it. One more thing: The open source movement anticipates a major shift in the perception of innovation, which was largely producer-driven throughout the industrial era (consisting mainly of a linear process: research, development, production, diffusion), but in the digital world, other rules govern. I spoke of cyclic processes. If we manage to subdivide a project into sufficiently many orthogonal [58] modules, just about any digital project can be realized [54]; not only software projects, but also digital works of art, grassroots think tanks and entire revolutions. Yep.
Talking of revolutions, it comes to my mind that open source projects are organized comparably to oligarchies or even dictatorships, where one or a few have absolute power. If these are brilliant, fine, but otherwise the founding fathers may themselves hinder the project's lift-off. Why not develop a generic open source management system, such that version control, bug tracking and communication are integrated with democratic features? I mean, electing the core development team (designing mandates) and voting on central changes (direct democracy). If all that is implemented in a peer-to-peer fashion, the technological and sociopolitical effects of such a system should not be underestimated...

4.2. Online Voting. Most researchers, at least to my limited knowledge, study online voting as a part of e-governance, denoting the implementation of democratic intent [98] into some government homepage, hehe. While a couple of countries have installed highly secured voting computers at the polling stations, only a single country, Estonia, has adopted a system that allows voters to vote from their PC at home [23], authenticated by an ID card and a suitable card reader. While some argue it unfair (those poor ones without Internet access!) and insecure, others call it a hallmark of democratic participation, depending on their career interests, I guess. Some even dare to call online voting a stillborn voting technology [112], or, a bit more intelligently, pajama voting [38]. Nevertheless, I am not so much interested in governments and their doings. I mean, you can vote for a lot of things, can't you? You can elect a core developer team of an open source project, you can vote for candidate changes in an (does it exist yet?) open arts project, you can vote for (or against) a protocol change of the Bitcoin project, etc. While in the open source discussion the technical aspect is over-emphasized, the discourse on online voting sometimes lacks such foundation.


Protocol changes of the Bitcoin project are, as far as I know, not subject to democratic elections, but a matter of the small Round Table of approved contributors. Change that!

I give a condensed version of Rivest's desiderata for an online voting protocol [101]:
(1) One (authorized) person, one vote.
(2) Votes are anonymous.
(3) Verification of the count (this is not possible for ordinary elections, remember Gore vs. Bush).
(4) No voting receipts (prevents selling of votes).
(5) There is a deadline.
(6) Everyone can be a candidate.
(7) The voting system authenticates itself to the voter.
(8) The system is efficient, scalable, and robust.
Try to disconnect your mind from the imagination of polling stations and secure servers in some CIA basement. Imagine instead how a voting system might look on the heterarchy of a peer-to-peer network. More questions arise: Who determines the timing of elections? The king? The system? Where, precisely, is the voting system located? Of course, these questions have to be pondered anew for any distinct voting system. For a (democratic) open source management system, you could define election intervals on project setup. When these settings have to be changed later, a majority decision may be in place. The voting system is of course located everywhere, that is, each vote is counted on every participant's computer. Thus, the uniqueness of the votes can easily be checked.
I may remark that the problem of unique votes is similar, very similar, to the problem of double spending in digital cash systems (you know how to copy a text file?). As I will explain later, peer-to-peer networks are especially well suited for fraud detection of that kind.

Voting systems normally demand that the voters are known by their real identity (real in the sense of the specific environment: who are you really, then?), but the votes themselves remain secret. Your identity could be the login name of the open source management system. Weighing your vote by the number (and maybe rating) of your approved contributions would serve as an incentive not to start multiple accounts. Multiple identities are really a problem for voting systems, but real contributions are a human proof-of-work that is hard to fake if the user data is saved and verified distributedly. A (first) peer-to-peer voting procedure (a toy implementation sketch follows below):
(1) The system generates a voting key pair K_0, K_0^{-1} and announces the beginning of elections.
(2) Each user with identity id_n gives a vote v_n, which could contain the numbers of the candidates for the core development team, and


appends a fixed-length random number r_n (the so-called salt15) to his vote: v_n||r_n.
(3) The local client of user id_n automatically encrypts the vote with K_0, then signs it with K_n^{-1}.
(4) The encrypted and signed vote with identity attached:
K_n^{-1}(K_0(v_n||r_n)) || id_n

is spread across the network.
(5) After some time, most users have a (hopefully complete) list of encrypted and signed votes on their computer.
(6) When the deadline is due, all the local client programs strip the votes of their identity and signature16 (by applying the corresponding public keys and then deleting the identities) and randomize the new list: K_0(v_1||r_1), K_0(v_2||r_2), ..., which is anonymous.
(7) Now, again on each participating computer, the votes are decrypted via K_0^{-1}, the private key of the system, and the random numbers are removed. The votes are binned, counted and compared.
When a vast majority agrees upon the results, new authorization schemes are automatically spread across the network. The observant reader has, of course, noted the unclarified point: Where do K_0 and K_0^{-1} come from, and where is K_0^{-1} saved? These questions are not easily solved. One possibility is to advise (who?) several clients to generate parts of the key pair, those being copied for redundancy (this could be achieved with MDS codes, which I introduce in the next section). The public key is assembled immediately, but creation of K_0^{-1} must somehow be coupled to a majority key request. You see, there is yet work to do! Advanced multi-party protocols are introduced, for instance, in [48].
15 If two votes are the same, they are encrypted to the same ciphertext, since K_0 is deterministic. We don't want to reveal information about the votes before counting, thus the salt. A similar technique is used to make password hashing more secure.
16 During this process, unauthorized and doubly voting users are busted; naturally, a majority is needed to verify this and impose sanctions. At the same time, everyone gets feedback that his vote was counted.


4.3. Distributed Storage. There are two trivial kinds of distributing data within a (peer-to-peer) network, both flawed:
(1) Everyone locally saves what he desires. This is typical for filesharing systems. Works well for mainstream content, but marginal and rarely requested items are possibly undiscoverable (or transferable only at darn slow rates).
(2) Everyone locally saves everything. Excellent for highly critical content in small networks to ensure resilience and robustness, but you wouldn't want to do that within huge networks or with masses of data. The strength of distributed storage is precisely the relief of the single users.
These examples shall highlight the major conflict of distributed storage: We want both availability (as in 2.) and convenience (as in 1.). The common approach is to introduce generative communication, where processes cooperate and compete for the use of shared resources [20].
As you might or might not know, the Bitcoin network makes extensive use of shared resources, since, first, the entire transaction history of all bitcoins is stored on the network, and, second, every available node is (automatically) coerced to participate in the collective verification of ongoing transactions17. Third, new bitcoins are created by cryptographic proof-of-work [86], which persuades people with appropriate means to invest in fast hardware, sometimes resulting in entire mining pools. These pools are supposed to become the super nodes of the future, guaranteeing fast verification of transactions (for which they are paid). Thinking about scalability, however, produces a strangely piercing headache. More about that later.

17 You actually have to kill the Bitcoin client if you want your CPU cycles for your own purposes; just exiting is not enough.

Italian researchers have pondered how to extend existing distributed protocols such as Linda (central repositories accessed by various clients) and Lime (for mobile clients) towards peer-to-peer contexts [20]. This is brand new matter. Major keywords are:
Context transparency: Servents do not know where precisely some datum is located; it's just there.
Replicable data: The network replicates data to improve availability and resilience (resistance against failures and attacks). Anyhow, this kind of auto-replication must be restricted to prevent extreme data spread as in 2. above.
A way to do that are the famous maximum distance separable (MDS) codes, where (given integers k < n) the data block to be stored is split into k parts. Then, via MDS magic (for example, ancient Reed-Solomon codes [100]), these k parts are incorporated into n coded packages, distributed on n nodes, such that, here comes the trick, any k out of these n packages are sufficient to reconstruct the k parts of the actual data. Then, the system is resilient against up to n - k nodes breaking down. Naturally, single repairs are cheaper than complete recovery [34]. For instance, if you use a (128,8) code, the data block D is split into D_1, D_2, ..., D_8 and coded into C_1, C_2, ..., C_128. Now, only the C_i are saved on the local hard disks of 128 users. If any user wants D, he has to contact just eight users of the 128 mentioned ones (each of whom only transfers 1/8 of D). Even in the spectacular case of 120 random nodes going offline at the same time, the remaining eight, whoever they are, can together rebuild D (if desired) or, as should immediately be done, rebuild D_1, ..., D_8 and compute a new MDS code to be handed around. In this fashion, quick disaster recovery is possible.
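To make the MDS magic tangible, here is a self-contained Python sketch of a Reed-Solomon style code over the prime field GF(257): the k chunks of the data block fix a polynomial of degree below k, the n packages are its values at n further points, and any k packages pin the polynomial down again. The names encode and decode are mine; a real system would use a tuned library rather than per-byte Lagrange interpolation.

```python
import random

P = 257  # prime just above the byte range, so every byte is a field element

def lagrange_eval(points, x):
    # value at x of the unique polynomial of degree < len(points)
    # passing through `points`, computed over GF(P)
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for xj, _ in (p for j, p in enumerate(points) if j != i):
            num = num * (x - xj) % P
            den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data: bytes, k: int, n: int):
    data += bytes(-len(data) % k)                # pad so the chunks are equal
    chunks = [data[i::k] for i in range(k)]      # chunk i = values at x = i+1
    # n packages at x = k+1 .. k+n; values can be 256, hence lists of ints
    return [(k + 1 + m,
             [lagrange_eval([(i + 1, chunks[i][t]) for i in range(k)], k + 1 + m)
              for t in range(len(chunks[0]))])
            for m in range(n)]

def decode(packages, k: int):
    pts = packages[:k]                           # any k packages suffice
    length = len(pts[0][1])
    cols = [[lagrange_eval([(x, vals[t]) for x, vals in pts], i + 1)
             for t in range(length)] for i in range(k)]
    return bytes(cols[j % k][j // k] for j in range(k * length))

packages = encode(b"distributed storage demo", k=8, n=128)
random.shuffle(packages)                         # lose 120 of 128 nodes...
print(decode(packages[:8], k=8))                 # ...and still rebuild D
```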

Scope restriction: To alleviate the load on each servent, he should be able to limit his scope (in different ways for different actions), meaning the depth of connections he is concerned with.
I talked of the time-to-live (TTL) concept before. I think, it is sensible to give each servent the sovereignty to dene TTLvalues for each incoming and outgoing activity. For instance, if your TTL equals 3 for a specic incoming activity (say, transcation verication in the Bitcoin network), all requests that passed more than three peers before reaching you are immediately discarded. As of now (June 2011), Bitcoin servents are implicitly forced to serve and ood the entire dataspace, which is annoying even in this early stage of the experiment.
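A per-activity TTL policy of this kind would be a few lines in a servent's message handler. A sketch, with invented activity names and message fields:

```python
# Hypothetical per-activity TTL policy for a servent. The activity names
# and the message format are invented for illustration.
TTL_POLICY = {"tx_verification": 3, "block_relay": 5, "storage_query": 2}

def accept(message: dict) -> bool:
    # `hops` counts the peers the request passed through before reaching us
    return message["hops"] <= TTL_POLICY.get(message["activity"], 0)

print(accept({"activity": "tx_verification", "hops": 2}))  # True: handled
print(accept({"activity": "tx_verification", "hops": 4}))  # False: discarded
```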

4.4. A Short Story of Money. There we are, finally, at the heart of all concerns: money! No, I'm kidding. Money concerns me less than, for instance, the fulfillment of my destiny. But this is a different topic. Nevertheless, there is a reason why money is attracting so much attention. Disclaimer: I do not know much about economy, but who does, anyway? Printing money to pay my own debts is something even I could have come up with. If they had asked me, however, I think I would have proposed qualitative easing, that is, coupling the dollar to primary energy availability and the amount of newly printed (or destroyed) money to the development of that energy availability, since, in my humble opinion, money is, instead of being just a conveniently accepted convention, stored energy, or rather: Money is the promise of stored energy. Think about that. Whenever you buy something, you buy either energy directly (gas, firewood, electricity, human and animal labor, food), or you buy things that incorporate energy in their making, their transport, the construction of the according factories (chip factories nowadays cost several billion dollars; translate that to energy!) and the, sometimes, tremendous education of the researchers and constructors behind those things. And the more advanced a thing you want to possess, the more energy is incorporated therein [89]. Think of hexa-core processors. OK. Now, central banks are infamous for their urge to issue new money every year. They say it's important for economic growth. What do they mean by that? Actually, as many of you may be aware, they don't just give the money away, ah, but they give credits, at first to ordinary banks, who, on the next step, give credits to ordinary enterprises and people. For a majority of debtors to be able to pay off their debt, which amounts, thanks to interest, to a much higher sum than they were initially supplied with, there must be a considerable increase of wealth on average, which is of course due to an increase in labor and production, since wealth is not equal to the amount of dollars in circulation, but to the amount of goods and services available.


Next step. Goods and services must increase steadily. How do they do that? You can increase efficiency, which is profitable when you are still pretty inefficient, but this gets fairly costly later on. No, most importantly, energy flows have to increase! Thus, very much simplified (but true nevertheless), the rate of inflation (newly printed money, essentially) mirrors the hope, I say: hope, of future development of the energy flows (primary energy, that is mainly oil, coal, natural gas and nuclear power, in this order). Don't put too much confidence in renewables, for they cannot sustain a consumer society [115]. I know I'm moving on thin ice now, but a verifiable truth is: Oil production, amounting to 60% of the world's energy supply and 98% of the world's fuels used in transportation (gas, diesel, kerosene...), is basically flat since 2005, bumping a bit and never again reaching the maximum of June 2008, when, what a coincidence, the (maybe last) financial crisis hit18. Some say, to prevent a threat of hyperinflation, currencies should again be coupled to gold reserves. Gold reserves, my! What is the value of gold, really? You cannot eat it, you cannot drive your car with it, you cannot connect it to the Internet... Its only value, in my opinion, is its indestructibility and its beauty19. What happens if your money is a piece of paper corresponding to a fraction of the gold reserves? In theory, you have almost no inflation, since printing new money demands acquisition of new gold, which is a slow business, one hears. The core problem is: If energy supply increases, since, for instance, a new kind of energy source is brought into business, then goods and services grow and grow, but the gold reserves won't grow that fast, money gets scarce, and you have a dreadful deflation. On the other hand, if no one comes up with cold fusion in time and oil production, as many geologists propose20, goes down, economic performance decreases (less production, less labor, etc.) and we experience a dreadful inflation without increased amounts of money! But all this is so obvious that something must be utterly wrong, or why did nobody implement real energy flows into money printing policies?
18 If you are interested in a scientifically sound introduction, check out richardheinberg.com and theoildrum.com. If you are more keen on intellectual sarcasm concerning our blindness in these matters, follow kunstler.com.
19 OK, some report to have healed bone cancer and suicidal depression with (literally) homeopathic doses of gold, and I clearly perceive the mythic and archetypical halo that makes gold so precious and desirable. But, fellow alchemists, I have to remind you: Aurum nostrum non est aurum vulgi. Gold is a symbol of something else, something probably unknowable. As long as we are unaware of that, we run after gold like maniacs.
20 Visit the Association for the Study of Peak Oil at http://www.peakoil.net.

4.5. Digital Cash. If you are new to these matters, you may think to yourself: Alright, we have had digital cash for years now: online banking, credit card transactions, PayPal...


Then I would have to tell you: well, although all of these systems allow money transactions in some digital and remote fashion, they are far away from the properties of cash. Online banking and the like are based upon the assumption that you have a regular bank account; if you haven't, you are pretty much stigmatized and unable to book a flight online or to use PayPal. Furthermore, ordinary online money transactions lack a distinguished feature of regular cash: Anonymity. If you want to sell a stolen car or buy drugs, you naturally expect used and unsorted dollar bills as a means of payment, for credit card transactions are directly linked to your bank account, which again is directly connected to your identity as a citizen of a particular state. Hence, no anonymity. The first, to my knowledge, who fiercely pursued the idea of anonymous digital cash was David Chaum [27] in 1985. Yet, he still couldn't disconnect his mind from the central role of banks. Nowadays, there are at least five different kinds of digital cash systems and four kinds of digital cash itself, provided by a surprising number of enterprises, led probably by Russian WebMoney (an answer to the Russian banking collapse in 1998), which provides 200.000 cash-in terminals in and around Moscow (yes, you don't need a bank account!) and reports more than 12M accounts, agents and customers in 8000+ cities and 70 countries and 59.000 places where you can fund a z-purse, WebMoney's digital wallet. Constance Wells [123] gives a neat summary of economic and legal issues thereof. Somehow, I don't like the idea of WebMoney, which is replicating the old banks in a one-to-one fashion, hence I will, for the rest of this essay, restrict my scope to Bitcoin, the novel fast-growing peer-to-peer digital cash project already mentioned, first described in 2008 by Nakamoto [86]. To begin with, I merely assemble desirable properties of digital cash, gained as a union set of [48, 79, 101], and elaborate their implementation in the Bitcoin project.

Token-based (not account-based).
This is clearly the case with Bitcoin. You don't have, in contrast to WebMoney, for instance, a Bitcoin account, but you have a digital wallet, literally a file named wallet.dat. Though being extremely convenient, it also bears the risk of wallet theft. Thus, have your wallet encrypted, at least, and stored on several devices.

Anonymous.
Bitcoin is as anonymous as digital cash can be. Payer and payee know only each other's public keys and IP addresses, and no one else knows more, since no central authorities (banks, governments, enterprises) are involved.

Easy to use.
Actually, you just enter the amount and the payee's Bitcoin address (a public key of sorts) into the Bitcoin client, press pay, and off you go.

Transferable between users.


Transfer and regular payment are not distinguished: welcome to the peer-to-peer world!


Portable.
Yes, you can copy or move your wallet.dat to your iPhone or USB stick or what-you-like.

Infinite duration (until destroyed).


If you have a decent backup policy, your wallet might last as long as the digital age. And you have, if I got that right, the possibility to destroy your own coins. But why would anyone do that?

Divisible (transaction of fractions of bitcoins).


Bitcoin has that, too. There is a fine inbuilt divisibility (up to 8 decimal places), and floating point precision allows for many more, although there is still a lively debate about that in the forum [15].

Non-repudiation (you cannot withdraw your money once you paid).


The coins are physically removed from your wallet and added to the payee's wallet. If you now replace your wallet with an older version and try to spend those coins again, this is known as double spending.

No double spending.
A community effort guarantees this. A set of new transactions is flooded through the network, and each node that received such a block tries to find a proof-of-work: a nonce (number used once) that, appended to the transaction block and the hash of the previous block, is hashed to a string with a certain number of leading zeros. One or a few nodes will find this nonce first and broadcast it to all nodes, who accept the block if no transaction in that block has been doubly spent. Then, the hash of the accepted block is used as input for the next proof-of-work. Reportedly, the accumulated CPU power of all the honest participants makes it close to impossible for some bad guys to redo all those proofs-of-work, the longer the system works and the more powerful the honest part of the network is. But this redoing would be necessary to achieve double spending! The difficulty of that proof-of-work (the number of leading zeros required) is automatically adapted according to the speed of block generation, naturally, to compensate for hardware development (which can be quite surprising) and fluctuating (cumulative) CPU power of the network [86].
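A minimal sketch of the nonce search and the chaining, under two simplifications: a single SHA-256 where Bitcoin actually hashes twice, and a leading-zero-hex-digits test where the real client compares against a numeric target.

```python
import hashlib

def proof_of_work(prev_hash: bytes, block: bytes, difficulty: int) -> int:
    # search for a nonce so that H(prev_hash || block || nonce)
    # starts with `difficulty` zero hex digits
    nonce = 0
    while True:
        digest = hashlib.sha256(prev_hash + block + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

prev = hashlib.sha256(b"genesis").digest()
block = b"alice->bob:5;bob->carol:2"         # a stand-in transaction block
nonce = proof_of_work(prev, block, difficulty=4)
print(nonce)  # redoing this for every later block is the attacker's burden
```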

Secure transfer.
Public key encryption. If a coin did not reach its designated receiver, it counts as not spent. To ensure secure payment, the transaction can be secured with an escrow mechanism: The money is transferred, but locked, and as soon as the ordered drugs or books reach the payer, he sends a key to the payee to unlock the money, so to speak.
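One conceivable shape of such a lock, as a Python toy; the Escrow class and its hash-lock are my invention, not Bitcoin's actual mechanism, which would be built from transaction scripts or a trusted arbiter.

```python
import os
import hashlib

# Hypothetical escrow: funds lock under the hash of a secret the payer
# holds; revealing the secret releases them.
class Escrow:
    def __init__(self, amount: int, secret: bytes):
        self.amount = amount
        self.lock = hashlib.sha256(secret).digest()
        self.released = False

    def release(self, key: bytes) -> bool:
        if hashlib.sha256(key).digest() == self.lock:
            self.released = True
        return self.released

secret = os.urandom(16)            # payer keeps this until the goods arrive
deal = Escrow(100, secret)
print(deal.release(b"wrong key"))  # False: money stays locked
print(deal.release(secret))        # True: payee can collect
```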

(Un-)traceability.
Some want payments to be traceable, Rivest for instance [101], to capture drug retailers and terrorists and wallet thieves; others argue that, while people do plot crimes behind closed doors, nobody would argue to force the populace to keep their doors open. A sovereign citizenship has its price [123]. Bitcoin transactions are practically untraceable. In principle, IP addresses can be retrieved from the logs, which again lead to the culprit with considerable effort of cyberforensics. But try going to the police, telling them your bitcoins were stolen! The focus of peer-to-peer


philosophy rests upon individual responsibility and the honesty of a vast majority. If you don't share that view at least a bit, you are definitely in the wrong place.

Cannot be forged.
Yes, good question: how do those bitcoins come into existence, anyhow? They are user-generated. If you do a certain proof-of-work, this is itself a new coin. The difficulty is high, and you must precisely calculate the speed (measured in hash function invocations per second) per energy consumption ratio of your computing device. It turned out that certain OpenCL-enabled nVidia graphics cards are excellent in this respect. Amazingly, the price of these graphics cards went up sharply in the last months [cite]. This notion is brilliantly conceived, I believe. Discouraging forgery by allowing minting! Wow.

Control of cash amount.


It will be hard to believe for the newcomer, but the total number of bitcoins is a priori fixed at 21M. I imagine a line like #define MAX_COINS 21000000 somewhere in the source code, but I couldn't find it yet. You see, we (the bitcoiners) have a few problems here. First, as I explained in connection with gold, deflation is imminent, disabling bitcoins as a general means of payment and inviting speculators, who, together with geeks and downright exploiters, comprise the set of early adopters of the Bitcoin project. If bitcoins gain value just like that, people think, we must buy now and we must buy much. Then, as the value increases, a few big boys sell large amounts of bitcoins, get rich, cause the value to fall, and many ordinary people also sell, with losses, their bitcoins. Now, the speculators come back into business, buy bitcoins, the price rises... The second problem, in my opinion, is that this maximum number of coins is pre-specified and can thus spontaneously be changed (increased!) by the core development team. Who are they? Are they to be trusted?
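Incidentally, the 21M figure is not one constant but the sum of the reward schedule: 50 new bitcoins per block, halved every 210.000 blocks. A few lines verify the arithmetic (my sketch, not the client's source):

```python
# Back-of-the-envelope check of the 21M cap: 50 BTC per block,
# halved every 210,000 blocks (ignoring the satoshi rounding the
# real client performs, so the figure is marginally high).
subsidy, total = 50.0, 0.0
while subsidy >= 1e-8:            # one satoshi is the smallest unit
    total += 210_000 * subsidy
    subsidy /= 2
print(total)                      # ~20,999,999.998
```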

Small transaction fees.


Great hopes are put into digital cash as to make micro-payments feasible, both in terms of transaction speed and transaction fees. In general, bitcoin transactions are free of charge, but rather slow, and you can buy quick processing of your transactions from those mining pools mentioned elsewhere. Thus, speed and the desire for small transaction fees contradict each other.

Universal.
Bitcoins are universal in the sense that you can always redeem your coins and get real dollars or euros for a small conversion fee. This redemption (what a word!) is at the moment processed by just one Japanese company, which determines the exchange rate in an authoritarian style. This is not good. I strongly recommend many independent companies to participate in the conversion business, for two reasons: 1. A single marketplace for bitcoins is an easy target for government restrictions. 2. A single marketplace is an easy target for hackers. Bitcoins are not universal in the sense that they are (accidentally?) linked to the major ordinary (flawed) currencies, which will go on to cause so much pain. And bitcoins flip my notion of an energy-linked means


of payment: They are symbolic of wasted energy (and not of available energy).

Fast and scalable21.


Speed matters. Visa processes several thousand transactions per second, and as far as I can see, Bitcoin aspires to move into that realm. But if a transaction is broadcast (I prefer the term flooded) to the entirety of the nodeship, we get two problems. Imagine a certain second in which 1000 transactions are attempted in a network of 100.000 nodes22 (I am simplifying matters vastly). Then, first, each node is entrusted with 1000 proofs-of-work (which sucks if you use your computer for more than mere websurfing: you a gamer? Multimedia artist? Programmer? Webserver?). Second, 1000 blocks are transmitted to and from 100.000 nodes; then we have a lower bound of 200.000.000 transmissions (probably much more due to packet loss and complicated network topology), which amounts to, assuming an average delay of 10 ms, a waiting time of almost 24 days! And I didn't even take bandwidths and processing times into consideration. This is a major drawback of the current Bitcoin verification procedure (which is otherwise incredibly clever). It could, maybe, hopefully, be overcome by introducing scope restriction, which I explained in the section on distributed storage. Then, each node could define its horizon and thus limit the maximum of traffic and computation it is involved in (though precautions must be taken to prevent network disintegration: we don't want the Chinese government to run a disconnected mining pool). And, luckily, there is a number of monster computers out there running only Firefox. Maybe we can learn from researchers who deal with peer-to-peer support for massively multiplayer games [68]; they face scalability issues as well.
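Spelling the estimate out, with the deliberately naive assumption that the transmissions happen strictly one after another:

$$10^{3}\ \text{blocks} \times 10^{5}\ \text{nodes} \times 2 = 2 \times 10^{8}\ \text{transmissions}, \qquad 2 \times 10^{8} \times 10\,\text{ms} = 2 \times 10^{6}\,\text{s} \approx 23.1\ \text{days}.$$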

No adverse social effects (Rivest).


Bitcoin allows generation of wealth, if you will, and a few advantages can make you part of a novel elite: (1) You are clever and educated enough to understand all of the above, which alone makes you part of a small elite of, say, a few million worldwide. (2) You can invest in fancy new hardware. (3) You have access to a reliable and cheap (or even free) supply of electricity. As you can see, Bitcoin magnifies the already immense inequality between the rich and educated, living within a highly integrated digital infrastructure, on the one hand, and all the rest on the other hand. Within these elites, however, a power shift is possible, from the ordinary power circles towards individuals with sharp minds. The second problem is the total exclusion, contrasting WebMoney, of all those who do not possess a bank account. If Bitcoin or one of its


descendants is to compete with physical cash, then a way of directly converting money to bitcoins must be found, such as those cash-in terminals in Moscow.

21 A true global digital cash must be usable by hundreds of millions to be universal.
22 At the end of June 2011, there were ca. 9.000 transactions per 24 hours (one transaction every 10 seconds).

Unit-of-value freedom (Matonis).


The economist and libertarian Jon W. Matonis wrote an interesting short essay on digital cash and monetary freedom in 1995 [79]. To the list of desired properties digital cash should have, he adds unit-of-value freedom. What the hell is that? OK, we all know that you can buy and sell digital cash for dollars, and giving you fewer dollars for a bitcoin than you can buy one for is an incentive to stick with bitcoins. But how, exactly, is the price of a bitcoin determined? They say, by demand and supply, but that doesn't tell me much. I read it thus: Bitcoins are (still) illusionary tokens traded in all major currencies, a toy of speculators who failed in real life, but not a currency in its own right. A currency of the future must be backed (not by debt, though). Matonis's suggestions are "...equity mutual funds, commodity funds, precious metals, real estate, universal merchandise and/or services, and even other units of digital cash. Anything and everything can be monetized" [ibd., emph. mine]. You already know my favourite backing: Energy supply. It works like this: The amount of cash is coupled to another item (energy, WebMoney, gold...), preferably one with economic importance, and the price is determined by the market. Don't get me wrong: I do have a sense for the beauty of the number 21.000.000.

Competition.
There are a few digital cash systems around; see [123] for a summary. The trick, then, is to start hundreds of digital cash systems and let them compete. This is an incentive for enterprises as well as grassroots movements like Bitcoin to think hard about fast, reliable, secure, convenient and economically sound digital cash systems, and we have already acquired a feeling that these properties are partly contradictory. Bitcoin is a beginning, but not the end. Imagine quantum money [84]!

Confidence.
I shall point out that confidence in a means of payment is not a result of a rational analysis of the specific security and economic aspects. Confidence, rather, is an effect of habit and a long history of mostly good experiences. You do put your credit card into just about any slit with a Visa sticker nearby, don't you? The adoption of digital cash is, for now at least, not built upon confidence, but upon a painfully perceived void in the digital economy. Bitcoin, WebMoney et al. fill a gap. This gap is now closing, favoring online poker rooms, commercial animal porn sites and small-scale dealers of weapons. This, together with the (not quite unexpected) volatility of its value, makes Bitcoin a touchy subject for ordinary folks. But this could change! If the dreadful scalability issue is solved and a sensible monetary backing is found, Bitcoin v.2 promises to be a revolutionary tool not only for e-commerce, but also for ordinary money transactions, such as paying your rent, buying a car or a newspaper. It is apt to outperform Western Union in its ability for free worldwide money transfers.


5. Accountability vs. Privacy and Speed vs. Reliability
After all that, you have deserved some idle chat. I have written at some (compressed) length about four major applications of peer-to-peer technology: Open source systems, online voting, distributed storage (and computing) and digital cash, taking a close look at the Bitcoin project. The amount of desired privacy varies in these systems and, generally spoken, has to be balanced with the amount of necessary accountability. More precisely expressed, privacy and accountability are the extreme ends of the same axis [19]. The same is true for speed and reliability. We have the mathematical and computational means to make a network arbitrarily reliable, but at a high price: a hefty slowdown (try not to think of hardware costs); compare Bitcoin's scalability drawbacks. I tried to organize the big four in a coordinate system:
[Figure: the big four in a coordinate system, privacy vs. accountability on the horizontal axis and speed vs. reliability on the vertical axis. Distributed storage and open source management sit toward the speed end, digital cash and e-voting toward the reliability end; distributed storage and digital cash lean to the privacy side, open source management and e-voting to the accountability side.]
The big four all have the potential to influence one another: Open source projects can realize digital cash systems, distributed storage networks and online voting. Digital cash can be used to conveniently pay for distributed storage, for remote open source collaborators of rare talent, and to bribe online voters in realtime. Online voting is a suitable way to elect core development teams of open source projects, to alter monetary backing or transaction protocols of a grassroots digital cash environment, and to elect ministers


in distributed storage networks, who could play a special role in the administration of those networks. Distributed storage, lastly, serves as a failsafe repository for open source projects, digital cash transaction histories and online voting logs. Distributed computation can verify file integrity, code versioning and Bitcoin transactions, but don't exaggerate! As you may have recognized, there are a lot of open questions. In terms of security and network protocols, the topics of this essay repeatedly touched the fringe of the scientifically known world. Moreover, to be able to use these concepts in their full might, you should delve deeply into politics and economy. At the peak of specialization, generalists are needed again to integrate those innumerable tiny little splinters of knowledge out there. Thanks for reading, and keep it on!

6. Acknowledgements
My acknowledgements go to Oliver Kommnick, who sparked my interest in this fascinating intersection of society and technology; to Satoshi Nakamoto, inventor of Bitcoin, whose paper [86] is a wearisome read; to the vibrant Bitcoin community [15]; and to all those scientists, programmers and experimenters out there, trying to make this world a place full of funny little machines.

7. P.S.
For the law enforcement and malicious hacker dudes, here is how to shut down Bitcoin; any would do:
(1) Shut down all logon servers (there are not many).
(2) Shut down all exchange servers (there is just one as of June 2011).
(3) Shut down the Internet.
(4) Shut down electricity.
To circumvent (1), users would have to manage IP addresses manually, sending e-mails like "Hey Doug, what's your IP today?", which is just a dreadful imagination. I know of no protocol coping with dynamically changing IP addresses without a logon server. (2) is the Achilles heel. No digital cash is of any use if it can neither be bought nor redeemed. The End.


References
[1] Most papers are freely available via scholar.google.com.
[2] Mark S. Ackerman: Privacy in Pervasive Environments: Next Generation Labeling Protocols, 2004.
[3] Alessandro Acquisti et al.: Countermeasures Against Government-Scale Monetary Forgery, 2009.
[4] Giovanni Aloisio et al.: Grid Computing on the Web Using the Globus Toolkit, 2010.
[5] Michael Armbrust et al.: Above the Clouds: A Berkeley View of Cloud Computing, 2009.
[6] Ozalp Babaoglu et al.: Anthill: A Framework for the Development of Agent-Based Peer-to-Peer Systems, 2001.
[7] Adam Back: Hashcash: A Denial of Service Counter-Measure, 2002.
[8] Rajesh Balan et al.: mFerio: The Design and Evaluation of a Peer-to-Peer Mobile Payment System, 2009.
[9] Endre Bangerter et al.: A Cryptographic Framework for the Controlled Release of Certified Data, 2010.
[10] Salman A. Baset et al.: An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol, 2006.
[11] D. Bertolini et al.: Designing Peer-to-Peer Applications: An Agent-Oriented Approach, 2010.
[12] Christian Bird et al.: Does Distributed Development Affect Software Quality? An Empirical Case Study of Windows Vista, 2009.
[13] Peter Bisson et al.: The Global Grid, McKinsey Quarterly, 2010.
[14] http://bitcoin.org/
[15] http://forum.bitcoin.org/
[16] http://www.bitcoinmoney.com/
[17] http://bitcoinwatch.com/
[18] http://bitcoinweekly.com/
[19] Mike Burmester et al.: Accountable Privacy, 2003.
[20] Nadia Busi et al.: Towards a Data-Driven Coordination Infrastructure for Peer-to-Peer Systems, 2010.
[21] Rajkumar Buyya et al.: Cloud Computing and Emerging IT Platforms: Vision, Hype and Reality for Delivering Computing as the 5th Utility, 2009.
[22] Bogdan Carbunar et al.: Conditional E-Payments with Transferability, 2011.
[23] Alec Charles: The Electronic State: Estonia's New Media Revolution, 2009.
[24] Aw Yoke Cheng et al.: Risk Perception of the E-Payment Systems: A Young Adult Perspective, 2011.
[25] Xiangguo Cheng et al.: A New Approach to Group Signature Schemes, 2011.
[26] Brian Chess et al.: Software Security in Practice, 2011.
[27] David Chaum: Security Without Identification: Transaction Systems to Make Big Brother Obsolete, 1985.
[28] N.M. Mosharaf Kabir Chowdhury et al.: A Survey of Network Virtualization, 2008.
[29] Jean-Sebastien Coron et al.: Merkle-Damgård Revisited: How to Construct a Hash Function, 2007.
[30] Kevin Crowston et al.: Free/Libre Open Source Software Development: What We Know and What We Do Not Know, 2010.
[31] http://www.cryptopp.com/
[32] Kamalika Das et al.: A Local Asynchronous Distributed Privacy Preserving Feature Selection Algorithm for Large Peer-to-Peer Networks, 2010.


[33] Todd Davies et al.: Online Deliberation, 2009.
[34] A.G. Dimakis et al.: A Survey on Network Codes for Distributed Storage, 2011.
[35] Jörg Dirbach et al.: Software entwickeln mit Verstand, Book, 2011.
[36] Wenliang Du et al.: Uncheatable Grid Computing, 2010.
[37] Cynthia Dwork et al.: Pricing via Processing or Combatting Junk Mail, 1993.
[38] Jeremy Epstein: Internet Voting, Security and Privacy, 2011.
[39] Mandana J. Farsi: Digital Cash, Master's Thesis, 1997.
[40] Qinyuan Feng et al.: Voting Systems with Trust Mechanisms in Cyberspace: Vulnerabilities and Defenses, 2009.
[41] Ian Foster et al.: Cloud Computing and Grid Computing 360-Degree Compared, 2009.
[42] Armando Fox: Cloud Computing: What's in It for Me as a Scientist?, 2011.
[43] Leonidas Galanis et al.: Processing Queries in a Large Peer-to-Peer System, 2010.
[44] Alan Gelb et al.: Cash at Your Fingertips: Biometric Technology for Transfers in Resource-Rich Countries, 2011.
[45] The Great Internet Mersenne Prime Search, http://www.mersenne.org/prime.htm
[46] The Gnutella Protocol Specification, v0.4, http://dss.clip2.com/GnutellaProtocol04.pdf, 2000.
[47] R.A. Ghosh: Economic Impact of Open Source Software on Innovation and the Competitiveness of the Information and Communication Technologies (ICT) Sector in the EU, 2006.
[48] Shafi Goldwasser et al.: Lecture Notes on Cryptography, 2008.
[49] Jim Gray: Distributed Computing Economics, 2008.
[50] http://www.gridforum.org/
[51] Christian Grothoff: An Excess-Based Economic Model for Resource Allocation in Peer-to-Peer Networks, 2003.
[52] Stefan Haefliger et al.: Code Reuse in Open Source Software, 2008.
[53] Jörg Helbach et al.: Code Voting with Linkable Group Signatures, 2008.
[54] Eric von Hippel et al.: Open, Distributed and User-Centered: Towards a Paradigm Shift in Innovation Policy, 2010.
[55] Christopher D. Hoffman: Encrypted Digital Cash Transfers: Why Money Laundering Controls May Fail Without Uniform Cryptography Regulations, 1997.
[56] Tad Hogg et al.: Multiple Relationship Types in Online Communities and Social Networks, 2008.
[57] Xiangpei Hu et al.: Are Mobile Payment and Banking the Killer Apps for Mobile Commerce?, 2008.
[58] Andrew Hunt et al.: The Pragmatic Programmer, Book, 2008.
[59] A.M. Anisul Huq: Can Incentives Overcome Malicious Behavior in Peer-to-Peer Networks?, 2009.
[60] http://www.jxta.org/ and http://spec.jxta.org/v1.0/docbook/JXTAProtocol.html
[61] Information Technology Laboratory: Secure Hash Standard, 2008.
[62] Suresh Jaganathan et al.: A Study of Protocols for Grid Computing Environment, 2011.
[63] A.D. Joseph et al.: Tapestry: An Infrastructure for Fault-Tolerant Wide-Area Location and Routing, 2001.
[64] Sam Joseph: NeuroGrid: Semantically Routing Queries in Peer-to-Peer Networks, 2010.
[65] Ari Juels et al.: Security of Blind Digital Signatures, 1997.
[66] V. Kalaichelvi et al.: Secured Single Transaction E-Voting Protocol: Design and Implementation, 2011.
[67] Pedram Keyani et al.: Peer Pressure: Distributed Recovery from Attacks in Peer-to-Peer Systems, 2010.


[68] Björn Knutsson et al.: Peer-to-Peer Support for Massively Multiplayer Games, 2004.
[69] Maximilian Kögel: Towards Software Configuration Management for Unified Models, 2008.
[70] Kaoru Kurosawa et al.: Universally Composable Undeniable Signature, 2010.
[71] Ben Laurie et al.: Proof-of-Work Proves Not to Work, 2004.
[72] Chris Lesniewski-Laas et al.: Whanau: A Sybil-Proof Distributed Hash Table, 2010.
[73] Ralf Lindner et al.: Electronic Petitions and the Relationship between International Contexts, Technology and Political Participation, 2008.
[74] Zhangye Liu et al.: P2P Trading in Social Networks: The Value of Staying Connected, 2010.
[75] Stefan Lucks: Design Principles for Iterated Hash Functions, 2004.
[76] Anna Lysyanskaya et al.: Group Blind Digital Signatures: A Scalable Solution to Electronic Cash, 1998.
[77] Scott D. Mainwaring et al.: From Meiwaku to Tokushita! Lessons for Digital Money Design from Japan, 2008.
[78] Ronald J. Mann: Adopting, Using and Discarding Paper and Electronic Payment Instruments: Variations by Age and Race, 2011.
[79] Jon W. Matonis: Digital Cash and Monetary Freedom, 1995.
[80] Sarah Meiklejohn et al.: ZKPDL: A Language-Based System for Efficient Zero-Knowledge Proofs and Electronic Cash, 2010.
[81] Ralph C. Merkle: Method of Providing Digital Signature, 1979.
[82] Peter Bro Miltersen: Universal Hashing, Lecture Note, 1998.
[83] Alberto Montresor et al.: Towards Adaptive, Resilient and Self-Organizing Peer-to-Peer Systems, 2010.
[84] Michele Mosca et al.: Quantum Coins, 2009.
[85] Daniel A. Nagy: On Digital Cash-Like Payment Systems, 2007.
[86] Satoshi Nakamoto: Bitcoin: A Peer-to-Peer Electronic Cash System, 2008.
[87] V.D. Nandavadekar: D-Commerce: A Way for Business, 2010.
[88] Daniel Nurmi et al.: The Eucalyptus Open-Source Cloud-Computing System, 2009.
[89] Howard T. Odum: Environment, Power and Society for the 21st Century, Book, 2001.
[90] H. Oros et al.: A Secure and Efficient Offline Electronic Payment System for Wireless Networks, 2010.
[91] Saurabh Panjwani et al.: Usably Secure, Low-Cost Authentication for Mobile Banking, 2010.
[92] Abhishek Parakh et al.: Online Data Storage Using Implicit Security, 2009.
[93] Haejung Park: Various Aspects of Digital Cash, Master's Thesis, 2008.
[94] Chris Peikert et al.: Lower Bounds for Collusion-Secure Fingerprinting, 2003.
[95] Adrian Perrig et al.: SAM: A Flexible and Secure Auction Architecture Using Trusted Hardware, 2002.
[96] G.F. Pfister: In Search of Clusters, Book, 1998.
[97] UC Berkeley Reliable Adaptive Distributed Systems Laboratory, http://radlab.cs.berkeley.edu/
[98] Rathee et al.: E-Governance: Promises and Challenges, 2011.
[99] E.S. Raymond: The Cathedral and the Bazaar, 1998.
[100] I. Reed et al.: Polynomial Codes Over Certain Finite Fields, 1960.
[101] Ron Rivest: Lecture Notes on Cryptography, 1997.
[102] A.W. Roscoe et al.: Reverse Authentication in Financial Transactions, 2010.
[103] Timothy Roscoe et al.: Transaction-Based Charging in Mnemosyne: A Peer-to-Peer Steganographic Storage System, 2010.


[104] Vipin Saxena et al.: A Data Mining Technique for a Secure Electronic Payment Transaction, 2010.
[105] W. Scacchi: Understanding the Requirements for Developing Open Source Software Systems, 2002.
[106] Rüdiger Schollmeier et al.: Routing in Mobile Ad Hoc and Peer-to-Peer Networks: A Comparison, 2010.
[107] Jean-Marc Seigneur et al.: Trust Enhanced Ubiquitous Payment without Too Much Privacy Loss, 2004.
[108] SETI@home: The Search for Extraterrestrial Intelligence Project, http://setiathome.berkeley.edu/
[109] C.E. Shannon: A Mathematical Theory of Communication, 1948.
[110] The Smallpox Research Grid, http://www-3.ibm.com/solutions/lifesciences/research/smallpox
[111] Diomidis Spinellis et al.: Evaluating the Quality of Open-Source Software, 2008.
[112] Charles Stewart III: Voting Technologies, 2011.
[113] Marc Stiegler et al.: Introduction to Waterken Programming, 2010.
[114] Domenico Talia et al.: How Distributed Data Mining Tasks Can Thrive as Knowledge Services, 2010.
[115] Ted Trainer: Renewable Energy Cannot Sustain a Consumer Society, Book, 2007.
[116] Gregory D. Troxel et al.: Enabling Open-Source Cognitively-Controlled Collaboration Among Software-Defined Radio Nodes, 2008.
[117] B. Vass: Migrating to Open Source: Have No Fear, 2007.
[118] Girraj K. Verma: Probable Security Proof of a Blind Signature Scheme over Braid Groups, 2011.
[119] Matthew Wall et al.: Picking Your Party Online: An Investigation of Ireland's First Online Voting Advice Application, 2009.
[120] S. Walli et al.: The Growth of Open Source Software in Organizations: A Report, 2005.
[121] Janna-Lynn Weber et al.: Usability Study of the Open Audit Voting System Helios, 2009.
[122] Peng Weibing: Research on Money Laundering Crime under Electronic Payment Background, 2011.
[123] Constance J. Wells: Digital Currency Systems: Emerging B2B E-Commerce Alternative During Monetary Crisis in the United States, 2011.
[124] Dominic Widdows et al.: Semantic Vectors: A Scalable Open Source Package and Online Technology Management Application, 2008.
[125] Lizhen Yang et al.: Cryptanalysis of a Timestamp-Based Password Authentication Scheme, 2001.
[126] Lamia Youseff et al.: Toward a Unified Ontology of Cloud Computing, 2008.
[127] Yuliang Zheng: Digital Signcryption, 1997.
[128] Yingwu Zhu: Measurement and Analysis of an Online Content Voting Network: A Case Study of Digg, 2010.
