
Realising the Full Potential of the Web

Tim Berners-Lee, Director of the World Wide Web Consortium
Based on a talk presented at the W3C meeting, London, 1997/12/3

Abstract

The first phase of the Web is human communication through shared knowledge. The second side of the Web, yet to emerge, is that of machine-understandable information.

The original dream

The Web was designed to be a universal space of information. There are many sides to that universality. You should be able to make links to a hastily jotted crazy idea and to a beautifully produced work of art. You should be able to link to a very personal page and to something available to the whole planet.

The first goal was to work together better. While the use of the Web across all scales is essential to the concept, the original driving force was collaboration at home and at work. For me, the forerunner to the Web was a program called Enquire, which I made for my own purposes. I wrote it in 1980, when I was working at the European Particle Physics Laboratory (CERN), to keep track of the complex web of relationships between people, programs, machines and ideas. In 1989, when I proposed the Web, it was as an extension of that personal tool into a common information space.

Re-enter machines

The second part of the dream was that, if you can imagine a project (a company, whatever) which uses the Web in its work, then there will be a map, in cyberspace, of all the dependencies and relationships which define how the project is going. The computer re-enters the scene visibly as a software agent: doing anything it can to help us deal with the bulk of data, taking over the tedium of anything that can be reduced to a rational process, and managing the scale of our human systems.

Where are we now?

The Web you see as a glorified television channel today is just one part of the plan. It is not surprising that the most rapid growth was in public information. Now, under the name intranets, Web use is coming back into organizations. There is also a limit to what we can do by ourselves with information without the help of machines. Search engines flounder in the mass of undifferentiated documents, which range vastly in quality, timeliness and relevance. We need information about information, metadata, to help us organize it.

The World Wide Web Consortium - W3C

The Consortium exists as a place for those companies for whom the Web is essential to meet and agree on the common underpinnings that will allow everyone to go forward. Protocols are the rules that allow computers to talk together about a given topic. When the industry agrees on protocols, a new application can spread across the world, and new programs can all work together because they all speak the same language. This is key to the development of the Web.

Where is the Web Going Next?


Avoiding the World Wide Wait

One reason for the slow response you may get from a dial-up Internet account follows simply from the "all you can eat" pricing policy. But there are things we can do today to make better use of the bandwidth we have, such as using compression and enabling many overlapping asynchronous requests. There is also the ability to guess ahead and push out what a user may want next, so that the user does not have to request it and then wait. It would be better if the system, the collaborating servers and clients together, could adapt to differing demands, using pre-emptive or reactive retrieval as necessary.
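As an illustration, here is a minimal sketch in modern Python using the aiohttp library (both of which postdate this talk) of a client that issues many overlapping asynchronous requests and asks for compressed responses. The URLs are hypothetical placeholders.

# A sketch of "compression plus overlapping asynchronous requests":
# all fetches are in flight at once, so total latency approaches that
# of the slowest response rather than the sum of all of them.
# The URLs below are hypothetical placeholders.
import asyncio

import aiohttp


async def fetch(session: aiohttp.ClientSession, url: str) -> bytes:
    # Accept-Encoding invites the server to compress the response body.
    async with session.get(url, headers={"Accept-Encoding": "gzip"}) as resp:
        return await resp.read()


async def fetch_all(urls: list[str]) -> list[bytes]:
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, u) for u in urls))


if __name__ == "__main__":
    bodies = asyncio.run(fetch_all([
        "https://example.org/page",
        "https://example.org/style",  # a guess at what the user wants next
        "https://example.org/logo",
    ]))
    print([len(b) for b in bodies])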

Data about Data - Metadata

There should be a common format for expressing information about information (called metadata), for the dozen or so fields that need it, including privacy information, endorsement labels, library catalogues, and tools for structuring and organizing Web data. The Consortium's Resource Description Framework (RDF) is designed to allow data from all these fields to be written in the same form, and therefore carried together and mixed. A browser will be able to get an assurance, before imparting personal information in a Web form, on how that information will be used. Search engines will be able to take such endorsements into account and give results that are perceived to be of much higher quality.
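As a hedged sketch of the idea, using the modern rdflib Python library (which did not exist at the time of this talk): metadata from two different fields, a catalogue entry and a privacy statement, written in one common form and mixed in a single graph. The vocabulary URIs are invented for illustration, not standards.

# Metadata from different fields carried together in one RDF graph.
# The namespaces below are hypothetical, for illustration only.
from rdflib import Graph, Literal, Namespace, URIRef

CAT = Namespace("http://example.org/catalogue#")   # hypothetical vocabulary
PRIV = Namespace("http://example.org/privacy#")    # hypothetical vocabulary

g = Graph()
doc = URIRef("http://example.org/report")

# Catalogue-style metadata and privacy metadata live in the same graph,
# so one tool can carry and query both together.
g.add((doc, CAT.title, Literal("Annual Report")))
g.add((doc, CAT.author, Literal("A. N. Author")))
g.add((doc, PRIV.personalDataUse, Literal("aggregate statistics only")))

print(g.serialize(format="turtle"))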

The Web of trust

The Web of trust will be a set of documents on the Web that are digitally signed with certain keys and contain statements about those keys and about other documents. The Web of trust does not need to have a specific structure, like a tree or a matrix. Hypertext was suitable for a global information system because it has this same flexibility: the power to represent any structure of the real world or of an imagined one. The W3C's role in creating the Web of trust will be to help the community develop a common language for expressing trust. The Consortium will not seek a central or controlling role in the content of the Web.
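To make the flexibility point concrete, here is a toy Python sketch (my illustration, not a W3C design) in which keys endorse other keys in an arbitrary graph, and a document's signing key is believed if any chain of endorsements reaches it from a key we already trust. All key names are hypothetical, and real signature verification is elided.

# key -> keys it has (verifiably) endorsed; an arbitrary graph,
# not a tree or a matrix. All names are hypothetical.
endorses = {
    "my-key": {"librarian-key", "colleague-key"},
    "colleague-key": {"vendor-key"},
}

def trusted(signing_key: str, roots: set[str]) -> bool:
    """Search the endorsement graph for a chain from a trusted root."""
    seen, frontier = set(roots), list(roots)
    while frontier:
        key = frontier.pop()
        if key == signing_key:
            return True
        for endorsed in endorses.get(key, ()):
            if endorsed not in seen:
                seen.add(endorsed)
                frontier.append(endorsed)
    return False

print(trusted("vendor-key", roots={"my-key"}))    # True: via colleague-key
print(trusted("stranger-key", roots={"my-key"}))  # False: no chain of endorsements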

Oh, yeah?

So, signed metadata is the next step. When we have this, we will be able to ask the computer not just for information, but why we should believe it. Imagine an "Oh, yeah?" button on your browser. "Oh, yeah?", you think, and you press the button. You are asking your browser why you should believe what you are reading. The documents it checks will be signed.

Data about things

The most important aspect of such metadata is that it is machine-understandable data, and it may introduce a new phase of the Web in which much more data in general can be handled by computer programs in a meaningful way. The Enquire program assumed that every page was about something. When you created a new page, it made you say what sort of thing it was: a person, a piece of machinery, a group, a program, a concept, and so on. Not only that: when you created a link between two nodes, it would prompt you to fill in the relationship between the two things or people.
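A minimal sketch of that Enquire discipline in Python (the type and relationship names are my illustrations, not Enquire's actual vocabulary): every node declares what sort of thing it is, and every link must state a relationship.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                 # what sort of thing: person, program, concept...
    links: list["Link"] = field(default_factory=list)

@dataclass
class Link:
    relation: str             # the prompt Enquire forced you to answer
    target: Node

tim = Node("Tim", kind="person")
enquire = Node("Enquire", kind="program")
tim.links.append(Link(relation="wrote", target=enquire))

for link in tim.links:
    print(f"{tim.name} --{link.relation}--> {link.target.name} ({link.target.kind})")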

HTML is a language for communicating a document for human consumption. SGML (and now XML) gives structure, but not semantics.

A crying need for RDF

The increasing amount of information on the Web was an incentive for people to get browsers, and the increasing number of browsers created more incentive for people to put up more Web sites. These servers typically had access to large databases, such as phone books, library catalogues and existing document-management systems. They had simple programs which would generate Web pages on the fly, corresponding to various views and queries on the database. This has been a very powerful bootstrap, as there is now a healthy market for tools that allow one to map one's data from its existing database form onto the Web.

Now here is the curious thing. There is so much data available on Web pages that there is a market for tools that reverse-engineer that process: tools that read pages and, with a bit of human advice, recreate the database object. Even though it takes human effort to analyse the way different Web sites offer their data, it is worth it, because it is so powerful to have a common, well-defined interface to all the data so that you can program on top of it. The need for a well-defined interface to Web data in the short term is undeniable.

What we propose is that, when a program goes out to a server looking for data, say a database record, the same data should be available in RDF, in such a way that the rows and columns are all labelled in a well-defined way. Then it may be possible to look up the equivalence between field names at one Web site and at another, and so merge information intelligently from many sources. There is a clear need for such metadata, as is plain from looking at the trouble libraries have had with the number of very similar, but slightly different, ways of making up a catalogue card for a book.
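Here is a small hedged sketch in Python of that merging step: two sites expose the same kind of record under different field names, and a lookup table of equivalences (the kind of mapping RDF is meant to make machine-readable) lets a program relabel and merge them. All field names and values are invented.

# Records as two different sites might expose them (invented data):
site_a = {"tel": "555-0100", "name": "Acme Ltd"}
site_b = {"phone": "555-0100", "company": "Acme Ltd", "city": "London"}

# Each site's field names mapped onto one common vocabulary --
# the equivalence lookup described above (hypothetical labels):
to_common = {
    "site_a": {"tel": "telephone", "name": "organisation"},
    "site_b": {"phone": "telephone", "company": "organisation", "city": "city"},
}

def normalise(record: dict, source: str) -> dict:
    """Relabel a record's fields into the common, well-defined vocabulary."""
    return {to_common[source][field]: value for field, value in record.items()}

merged = {**normalise(site_a, "site_a"), **normalise(site_b, "site_b")}
print(merged)  # one record, with its fields labelled in a common way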

Interactive Creativity

I want the Web to be much more creative than it is at the moment. I have even had to coin a new word, intercreativity, which means building things together on the Web. I found that people thought the Web already was interactive, because you get to click with a mouse and fill in forms! I have mentioned that better intuitive interfaces will be needed, but I don't think they will be sufficient without better security.

It would be wrong to assume that digital signatures will be important mainly for electronic commerce, as if security mattered only where money is concerned. One of my key themes is the importance of the Web being used on all levels, from the personal, through groups of all sizes, to the global population. When you are working in a group, you do things you would not do outside the group: you share half-baked ideas, reveal sensitive information. You use a vernacular that will be understood; you can cut corners in language and formality. You do these things because you trust the people in the group, and trust that others won't suddenly have access. To date, it has been difficult to manage such groups on the Web, or to control access to information in an intuitive way.

Letting go

So, where will this get us? The Web fills with documents, each of which has pointers to help a computer understand it and relate it to terms it knows. Software agents acting on our behalf can reason about this data. They can ask for and validate proofs of the credibility of the data. They can negotiate who will have what access to what, and ensure that our personal wishes for privacy are met. The world is a world of human beings, as it was before, but the power of our actions is again increased. The Web already increases the power of our writings, making them accessible to huge numbers of people and allowing us to draw on any part of the global information base by a simple hypertext link. Now we imagine the world of people with active machines forming part of the infrastructure. We only have to express a request for bids, or make a bid, and machines will turn a small profit matching the two. Search engines, instead of just looking for pages containing interesting words, will start to build indexes of assertions that might be useful for answering questions or finding justifications.

I think this will take a long time. I say this deliberately, because in the past I have underestimated how long something would take to become available (e.g. good editors in 12 months). Now we will have to find how best to integrate our warm, fuzzy, right-brain selves into this clearly defined, left-brain world. It is easy to know whom we trust, but it might be difficult to explain that to a computer. After seeding the semantic Web with specific applications, we must be sure to generalise it cleanly, leaving it clean and simple so that the next generation can learn its logical concepts along with the alphabet.

If we can make something decentralised, out of control, and of great simplicity, we must be prepared to be astonished at whatever might grow out of that new medium.

It's up to us

One thing is certain. The Web will have a profound effect on the markets and the cultures around the world: intelligent agents will either stabilise or destabilise markets; the demise of distance will either homogenise or polarise cultures; the ability to access the Web will be either a great divider or a great equaliser; the path will either lead to jealousy and hatred or peace and understanding. The technology we are creating may influence some of these choices, but mostly it will leave them to us. It may expose the questions in a starker form than before and force us to state clearly where we stand. We are forming cells within a global brain and we are excited that we might start to think collectively. What becomes of us still hangs crucially on how we think individually.
