
Technologies for Personal and Peer-to-Peer (P2P) Knowledge Management*

Eric Tsui
Financial Services, Computer Sciences Corporation (CSC), North Sydney, Australia and School of Business Information Technology, RMIT University, Melbourne, Australia

The great majority of the Knowledge Management (KM) and search tools on the market are server-based enterprise systems. As such, they are often designed top-down, centralised, inflexible and slow to respond to change. There have been numerous articles published on the role of IT and KM systems in organisations, but there is a lack of research into KM tools for individuals and server-less KM tools/systems. By adopting a bottom-up approach, this research focusses on tools that assist the Individual Knowledge Worker (IKW) who, in today's competitive knowledge-based society, has a constant need to capture, categorise and locate/distribute knowledge on multiple devices and with multiple parties. Furthermore, knowledge sharing between IKWs often extends across organisational boundaries. As a result, personal KM tools have very different characteristics to the enterprise KM tools mentioned above. At the group level, the impact of Peer-to-Peer (P2P) computing on Knowledge Management has been specifically identified as file sharing, distributed content networks, collaboration, and search. Potential applications for P2PKM systems include, among others, E-Learning in higher and distance education, real-time collaboration and battle simulations in defence, collaborative product development, business process automation, and E-business payment systems. By including key findings from earlier work recently completed by the author and others on the landscape of enterprise KM systems, this paper presents a holistic view of the (commercial) KM technologies at three key levels of focus: individual, group and organisational. The paper concludes with critical issues and the impact of PKM and P2PKM technologies on enterprise computing.

Keywords: Knowledge Management Tools, Portals, Collaboration, E-Learning, Intelligent Agents, Search, Personal Knowledge Management, Peer-to-Peer, Organisational Memory, File Sharing

* This paper is a CSC Leading Edge Forum (LEF) Technology Grant report. A PDF version of this paper and the corresponding presentation can be downloaded from

The information, views and opinions expressed in this research paper constitute solely the author's views and opinions on the subject of technologies for Knowledge Management and Peer-to-Peer (P2P) Computing and do not represent in any way CSC's official corporate views and opinions. The author has made every attempt to ensure that the information contained in this research paper has been obtained from reliable sources. CSC is not responsible for any errors or omissions, or for the results obtained from the use of this information. All information in this research paper is provided "as is," with no guarantee by CSC of completeness, accuracy, timeliness or of the results obtained from the use of this information, and without warranty of any kind, express or implied, including, but not limited to warranties of performance, merchantability and fitness for a particular purpose. In no event will CSC, its related partnerships or corporations, or the partners, agents or employees thereof be liable to you or anyone else for any decision made or action taken in reliance on the information in this research paper or for any consequential, special or similar damages, even if advised of the possibility of such damages.

Technologies for Knowledge Management

From an organisational perspective, people, process and technology are commonly regarded as the three fundamental components underpinning the success of any Knowledge Management (KM) program. People and cultural issues, in particular, are seen as the two crucial factors in determining the adoption and sustainability of any enterprise-wide Knowledge Management System (KMS) (whether technical or not). Cultural issues may include, but are not limited to, the norms and values shared by individuals and groups, as well as trust between peers in an organisation. Up to now, technology has been generally perceived as an enabler in supporting the various KM processes, i.e. capturing, categorising, storing, searching, and distributing. The research described in this paper is not meant to re-align or further emphasise the role of technologies in KM, but it does shed light on some of the impacts of disruptive technologies1 on next-generation knowledge capturing and sharing among workers and organisations in the new economy. Several researchers have previously reported on the landscape of Enterprise KM (EKM) tools. For example, Tsui (2002a) provides a taxonomy of such tools by tracing the origin (e.g. artificial intelligence, information retrieval, search) of the tools, their problem solving capabilities, and their evolution in recent years. Strengths and weaknesses of these EKM tools are also identified (loc. cit.). Wenger (2001) outlines a comprehensive roadmap of the collaboration tools on the market and Binney (2001) defines a KM spectrum that serves to classify organisational information and knowledge systems into a continuum of functional categories.
In practice, it is common to find organisations making use of one or more of the following (technical) systems and concepts to support their KM efforts:

- Knowledge maps
- Taxonomies
- Enterprise search engines
- E-collaboration tools
- Information repositories
- Expert systems
- Data mining / knowledge discovery systems
- Case-based reasoning / question-answering tools (for helpdesks and/or contact centres)
- E-Learning and/or Learning Management Systems (LMS)
- Enterprise information portals
- Intellectual Capital (IC) measurement tools

Nearly all of the above categories of systems are designed for enterprise-wide deployment (or deployed to support one or more enterprise-wide systems) and, as such, they are often centralised (i.e. installed on one or more designated corporate servers and accessed via thin clients), adopt a top-down design, and take considerable effort to design, build/evaluate, deploy and maintain. These tools are designed for use in organisations that have a reasonably well established IT infrastructure (in terms of network connectivity, governance model and scalability). On many occasions, these tools are beyond the reach of Small to Medium size Enterprises (SMEs) due to their ad hoc/simplified IT infrastructure and/or a lack of funds for IT investment. At times, these tools are also seen to be inflexible in coping with external requirements and change. Most noticeable are the need to align these tools/systems to support the activities of IKWs in various business processes, the deployment of these tools/systems across organisational boundaries, and the speed of deployment.

1 According to James Skinner, principal computer scientist and LEF associate at CSC, "A digital disruption is a technology so innovative that it has the potential to completely change the way we do business." These "disruptive technologies" can put some industries out of the marketplace entirely, while simultaneously introducing new industries. Examples include the transistor and its impact on the vacuum tube, and CDs' impact on record albums.

In contrast with most of the KM research, which is targeted at the organisational level (e.g. strategy development, KM processes, technical infrastructure for KM systems, taxonomy building, content management, portals, measurement of intellectual capital), the author's work focusses on technologies that assist Individual Knowledge Workers (IKWs) to practice Personal Knowledge Management (PKM), as well as assessing the impact of Peer-to-Peer (P2P) computing on new forms of group-based collaboration and problem solving. The goals of the research are to redress the imbalance of current KM research and to identify the impact on KM of several emerging technologies. Furthermore, a bottom-up approach is also adopted to understand the role of IT, and to align its use, in supporting KM. (Although the focus of this paper is on technologies for KM, the importance of people (and cultural issues) and processes should never be underestimated. In fact, as outlined in a later section, success in practicing PKM requires an individual to make some habitual change(s) and to overcome certain internal barriers.) Personal KM is a very important but largely under-explored area. Anecdotal evidence indicates that the competency of IKWs' PKM skills has a direct effect on the content of the knowledge base in an enterprise Knowledge Management System (KMS). According to Barth (2001c), Knowledge Management cannot succeed unless every knowledge worker takes personal responsibility for what he or she knows and doesn't know. In addition, academics have long been advocating that, in order for organisations and individuals to remain competitive in the new economy (where creativity and innovation are two common terms in today's business world), the practice of PKM and the need for collaboration among knowledge workers are paramount. Alley (1998) has summarised this view as follows: On a more individual level, modern workplace culture has come to demand regular and widespread collaboration between co-workers.
By definition, knowledge workers are knowledgeable only to the extent that they have access to other knowledge workers. One feature of the knowledge workplace is that it demands ever more skills, knowledge, and methods than any one individual can carry alone. In order for a knowledge worker to succeed he must have access to peers, so that complementary skills and knowledge can be shared, and so that a richer spectrum of creativity can be mobilized for solving the complex problems at hand. Similarly, up to now, the topic of Peer-to-Peer Knowledge Management has been very much underexplored. Searches on the Google Internet search engine2 using the strings "Knowledge Management" and "Peer-to-Peer Computing" returned 692,000 and 10,600 hits respectively in April 2002. However, only 18 hits were found when "Peer-to-Peer Knowledge Management" was used as the search string. Although this research explores the impact of P2P computing on Knowledge Management, it does not cover the wider area of P2P computing (e.g. grid computing). In particular, instant messaging and the copyright issues associated with P2P file sharing are excluded from this study. Furthermore, Personal Information Management (PIM) tools3 (e.g. personal organisers, data banks, contact management systems etc.), with the exception of E-Mail management tools, are also excluded from this study.

KM strategies and processes

In order to better appreciate the concepts and technologies described later in this paper, it is important to summarise the two predominant, high-level approaches to designing and deploying EKM applications: the Codification and Personalisation approaches. The Codification (or Product-based) approach is about designing the process and the technology to help consolidate reusable assets into one or more designated repositories. Sophisticated search engines and/or taxonomy tools are then used to generate indices for the stored assets as well as to locate them based on, in most cases, a menu- and keyword-based input. (Increasingly, PUSH and PULL capabilities are also used to disseminate relevant information to interested parties in a more pro-active way than merely user-initiated searches.) This approach places a greater emphasis on the use of technology, especially search engines, and is suited to managing assets that have a high reuse rate. The Personalisation (or Process-based) approach, on the other hand,
2 URL=
3 On the topic of technologies for PIM tools including scanning, OCR tools, bookmark/URL address management (Richardson and Barry, 1999) and more, please refer to Smith (2001), Barth (2001f) and Jones and Thomas (1997).

focusses specifically on the people and cultural issues by fostering virtual groups or knowledge communities. Technologies are also used to support the sharing of (both tacit and explicit) knowledge in such groups and communities, but they are deemed to play a secondary role compared to locating and connecting people together for knowledge sharing, collective problem solving and decision making. In a very general sense, the codification approach is more suited to situations where work tasks are similar and existing assets can be adapted (within reasonable bounds) for reuse. In contrast, the personalisation approach is appropriate for situations where the bulk of the critical knowledge in an organisation is tacit, work tasks are reasonably unique, and it is not feasible to reuse knowledge assets from task to task without significant adaptations. The two approaches are complementary; it is not unusual for organisations to adopt a combination of the two approaches in deploying KM application(s) (Tiwana, 2000) (Tsui, 2002a) (Gunnarsson and Lindstrom, 1999) (Apostolou et al, 2000). One of the most controversial areas in Knowledge Management is the definition itself. Depending on the experience, background and seniority of a person and the context of the discussion, there can often be multiple (and sometimes conflicting) interpretations of KM. For example, to the CEO or managing director, KM is commonly interpreted as the measurement and tracking of intellectual capital in the firm. To a middle level manager, KM can mean the consolidation of best practices and/or the enhancement of customer services. At the operational level, better management of knowledge can lead to less down time, higher quality and productivity. Among the seminal work in the field, there are three common ways to define KM. The first approach is to split the term Knowledge Management into Knowledge and Management and then address them separately.
Under this approach, the relationship between knowledge and data, information and wisdom (the famous Information Pyramid) is explained, together with the word Management (which typically means Planning and Control). The second approach to defining KM is to focus on the measurement of Intellectual Capital and, more specifically, on ways to transform hidden/untapped potential in an organisation into tangible business benefits. The third approach, which is also the one adopted in this paper, is to outline the processes involved in managing knowledge. Such processes include, among others, the creation, classification, indexing, search, distribution and valuation of knowledge for reuse. Together, these KM processes represent a continuum of activities that operates on the knowledge dimension of daily work. Examples of the three types of KM definitions can be found in Tsui (2002b)4. Although the personalisation and codification approaches are enterprise-based, they are also, as illustrated in later sections of this paper, applicable to PKM and peer-based collaborative systems (where peers communicate with each other directly without relying on a corporate infrastructure). This is especially so when it comes to the search function and the collaborative aspects of online communities. Similarly, due to the strong alignment of the role of IT in KM, a process-based definition of KM is also very appropriate for understanding the scope and limitations of the various technologies in supporting PKM.

Personal Knowledge Management

In the last decade, at least two factors have hastened the need for knowledge workers to practice KM at the individual level. Firstly, the world's (rapid) transformation into the new (knowledge-based) economy has given birth to a new kind of worker5. Compared to their counterparts in the old economy, these workers are more likely to be self-employed (or contractors), their decisions are almost all knowledge-based, their
4 Among the various KM processes reported, the knowledge retirement (or archiving) process is the least studied. The ever-escalating Infoglut problem means that the knowledge bases of KMS will continue to grow at an alarming rate. Despite advances in search and categorisation tools, the (overall) accuracy of search results will continue to decline unless smart algorithms are developed to generalise, combine, re-organise and archive outdated (chunks of) knowledge.
5 As Knell (2000) stated, "Three transformations (the waves of white collar downsizing in the 1980s and 1990s, the rise in the proportion of value added by skilled workers and the explosion of IT) have combined to create a fertile breeding ground for a new kind of workers."

work tasks are far less structured (or routine), and they are less loyal to their employer(s) (Knell, 2000). Knell refers to this new breed of workers as "free workers". Increasingly, in the last 2-4 years, more and more workers are switching from job to contract or contract to contract rather than from job to job (loc. cit.). Secondly, for EKM initiatives to be successful, among other factors, it is important that Individual Knowledge Workers (IKWs) are competent at managing knowledge at the personal level. These factors are further elaborated below. Free workers in the new economy are undoubtedly a valuable asset to society, but organisations have so far failed to attract and retain such talent. According to Knell (2000), one technical company in California's Silicon Valley estimates it costs them an average of US$125,000 when an employee leaves. In a similar vein, Bill Gates has reflected that if twenty of Microsoft's key people were to leave, the company would risk bankruptcy. Knell's survey has revealed that only one in four executives strongly agree that their companies attract highly talented people, only 16% of executives believe their companies know who their high performers are, and just 10% of executives believe that they can retain nearly all their high performers. From these statistics, and the fact that today's workers are increasingly switching from job to contract or contract to contract, it appears that IKWs are dealing with the implications of the new economy more swiftly and skillfully than (most) organisations.6 In the new economy, work tasks have become far less structured and routine than before. Today's knowledge workers often need to sift through large repositories of information from a diversity of sources, hold discussions with other workers (e.g. colleagues, customers, partners, associates), and formulate, whether individually or collectively, knowledge-based decisions.
This is especially so for workers who belong to one or more of the knowledge-intensive industries (e.g. legal, consulting, engineering, marketing, information technology, financial services etc.). These requirements and more have given a whole new meaning to the everyday term "workplace". As Caldwell (2001) rightly pointed out, "The impact (of the new economy) is on the workplace (i.e., people and how they work). The focus is on knowledge as the primary source of capability and competitive advantage. Knowledge work is becoming the primary work style in enterprises, and knowledge workers are becoming the key element in the workforce. Fewer jobs are well-structured or well-defined. Instead, knowledge work is defined at the point of need by the issues, problems or opportunities that arise. The workplace must provide an environment in which the knowledge worker can most effectively perform his or her job: enterprise strategies, direction and goals; processes for analysis and decision making by individuals and teams; and a means to collaborate with the appropriate people to accomplish the task. The workplace is no longer just a physical location; it has become a blend of physical and virtual spaces in which work is undertaken." It is evident from the above discussion that, to better equip knowledge workers in the new economy, competent KM at the personal level, both in terms of skills and technological support, is of paramount importance. As Jon Sidoli of Knovis Communications stated in Barth (2000), "There is a personal paradigm shift that says I am more of a node on the network than anything else. Work is getting more fluid and serial. Whether or not he is an employee, the IKW needs to look at himself as an enterprise of one in a community of many. It is what you know, who you know and what they know."
Personal Knowledge Management (PKM) is a collection of processes that an individual needs to carry out in order to gather, classify, store, search and retrieve knowledge in his/her daily activities. Activities are not confined to business/work-related tasks but also include personal interests, hobbies, home, family and leisure activities. Other definitions also exist. For example, Frand and Hixon (1999) define PKM as a conceptual framework to organize and integrate information that we, as individuals, feel is important so
6 Other alarming statistics revealed in Knell's (2000) report include: only 28% of employees will remain with their current employer if they are offered higher pay elsewhere, 93% of managers have doubts about their employer's goals and interests, and nearly half of all self-employed people are managers or professionals. Furthermore, the average 32-year-old American has worked for nine different firms, 65% of US workers say employers are less loyal to them than five years ago, and 78% of US middle managers say employees are less loyal to them than five years ago.

that it becomes part of our personal knowledge base. It provides a strategy for transforming what might be random pieces of information into something that can be systematically applied and that expands our personal knowledge. Although the differentiation is not well defined (and probably never will be), current research into PKM can be grouped into two primary areas: one focus is on skills for PKM and the other is on technologies for PKM.

3.1 Personal Knowledge Management (PKM) skills

Up to now, research into PKM skills has focussed on the skill set and habits that IKWs need to develop (and refine) in order to yield quality and timely decision making and problem solving at work, study or leisure activities. Some of the notable work in this field is summarised below. Dorsey's (2000) opinion is that Personal Knowledge Management should be viewed as a set of problem-solving skills that have both a logical or conceptual as well as a physical or hands-on component. These are skills that will be required for successful knowledge work in the twenty-first century. Seven core PKM skills have been defined: retrieving, evaluating/assessing, organising, analysing, presenting, securing, and collaborating around information (loc. cit.). The last skill (i.e. collaborating around information) has a technological aspect to it, as it involves, among other things, the skills needed to diligently handle E-mail, video conferencing and other types of collaboration systems. Hyams (2000), on the other hand, has developed the following list of PKM skills by focussing on a broader range of factors (e.g. time management as well as the infrastructural and organisational aspects of work) than the (predominantly) information aspect that Dorsey (2000) has adopted (see above):

- Time control
- Workplace wellness
- Speedy reading, notation and research
- Document structuring
- Information design
- Target writing
- Processing infrastructure
- Filtering techniques

A third set of PKM strategies has been outlined by Skyrme (1999)7. They have been further adapted based on the author's experience and are listed below:

- Clarify one's information needs
- Develop a sourcing strategy
- Decide on preferences for information to be PUSHed and PULLed
- Decide on how and when to process information
- Set criteria for what needs to be filed/saved
- Create a personal filing system with a structure compatible with one's work activities and areas of knowledge
- Consider breaking up a piece of information and creating indices for multiple purposes
- Review the value and the index (or indices) of stored information on a regular basis
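The multi-purpose indexing strategy above can be sketched in a few lines of code. This is a hypothetical illustration only, not a tool described in this paper: a saved item is stored once but registered under several categories, so that it can later be located from any of them. All names (file_note, lookup, the note text) are invented for the example.

```python
# Sketch of multi-purpose personal indexing: store each note once,
# index it under every category it relates to.
notes = []    # the single store of saved items
indices = {}  # category -> list of positions in `notes`

def file_note(text, categories):
    """Save a note once and register it under every given category."""
    notes.append(text)
    pos = len(notes) - 1
    for cat in categories:
        indices.setdefault(cat, []).append(pos)

def lookup(category):
    """Retrieve all notes filed under a category."""
    return [notes[p] for p in indices.get(category, [])]

file_note("Skyrme (1999) on personal information strategies",
          ["PKM", "reading-list", "strategy"])

# The same single note is now reachable via any of its three indices.
print(lookup("reading-list"))
```

The point of the sketch is that reviewing or re-organising the indices (the last strategy in the list) only touches the lightweight `indices` mapping, never the stored information itself.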

Frand and Hixon's (1999) three-tier approach to PKM covers both the skills and technological aspects. The basis of the approach is that, firstly, one has to develop a mental map to depict one's working knowledge.

7 Skyrme (1999) actually refers to these strategies as Personal Information Management (PIM) strategies.

(Some researchers refer to this as a knowledge map.) Secondly, an organisational structure needs to be created to facilitate the location of both personal and professional information. Thirdly, appropriate technologies are needed as organic/enabling tools to organise and extend the personal memory, as well as to synergise and process ideas for effective problem solving and decision making. In terms of filing information into a computer system8, three fundamental approaches (chronological, functional and role-based) have been recommended and the pros and cons of these approaches have also been critically assessed (loc. cit.). From the above discussions, it is clear that PKM skills involve more than just an IKW's technological competencies. The practice of PKM skills is considered to be more of a personal habit or preference than any kind of required activity. Compared with EKM efforts, which have, among others, the "Knowledge is Power" mentality and cultural barriers as two of the major obstacles to success, the practice of PKM is not hindered by these two factors although, in the author's experience, it has its own set of Critical Success Factors. Some of these factors are overcoming internal resistance, building trust with peers (in order to receive relevant and trustworthy information in a timely manner), and being adventurous in trying out various technologies. As a topic of growing interest and importance to knowledge workers, several universities, including Millikin University9, the University of Pretoria10 and the University of California at Los Angeles (UCLA)11, have integrated the teaching and practice of PKM skills into their course curricula and conduct research into the skills and systems for PKM (Pienaar, 1990). Training vendor The Virtual Wizard also offers a distance learning course on PKM skills for coaches, entrepreneurs and virtual assistants12. In the author's opinion, this is particularly encouraging.

3.2 Challenges in practicing PKM

Even with the PKM skills tabulated above (which assist in the formulation of sourcing strategies, research skills, time management, filing and presentation techniques), IKWs are still faced with enormous pressure to practice PKM. The reason is that, in the new economy, knowledge workers constantly need to:

1. Locate the right information quickly. The strategic window that allows a person (and the organisation too, for that matter) to act on a piece of information or respond to an opportunity is rapidly shrinking.
2. Stay abreast of business and technology trends. This means that an IKW wants to receive only relevant information (and its context) in a timely manner.
3. Constantly switch between learning and practicing, as knowledge work in the new economy is becoming more and more unstructured and unpredictable.
4. Create new knowledge and be innovative. Merely reusing an existing solution is not a viable business strategy nowadays. Increased competition and advances in technology have highlighted the need for knowledge workers to be creative/innovative, i.e. combining existing knowledge to generate new business opportunities or a proven solution to one or more existing problems.
5. Maintain communications and build trust among peers. Knowledge workers need to understand the competencies, interests and needs of their peers. Knowledge workers should be both collectors and sharers of knowledge.


8 On the topics of record management and file systems, Wilson (2001) outlines a personal electronic filing system and explores the feasibility and challenges of the system to support PKM on a long-term basis.
9 URL=
10 URL=
11 URL=
12 URL=

However, in practice, from the informational and technological perspectives, IKWs are faced with the following challenges in managing everyday knowledge:

- Data, information and knowledge may come in structured (e.g. database fields), semi-structured (e.g. texts and diagrams in an E-Mail message), or unstructured (e.g. handwritten note) form, and in various formats (e.g. hard copy, video, picture, text, voice message etc.).
- As today's knowledge workers often use multiple electronic devices for communications, planning and recording purposes, captured information may be stored, or even duplicated, in multiple locations and on multiple devices (e.g. laptop, desktop, home PC, PDA, voice recorder). Needless to say, this phenomenon leads to additional problems in keeping track of the latest/master version and in re-composing the original information.
- It is extremely difficult, if not impossible, to predict the arrival of a piece of information. It may arrive at any time (e.g. at work, at leisure, during travel, at home, at study).
- The sheer volume of information that an IKW receives every day: the Infoglut problem continues to get worse!
- Despite knowing that the needed information may be stored on one or more of the electronic devices, it is often difficult to retrieve it. There are two reasons for this. Firstly, nearly all search and indexing tools are text-based and, as such, an exact (or near-exact) match of the search term(s) is required in order to locate a piece of information. Contextual and background knowledge that accompany the original information are often not captured. Secondly, indices may not be truly representative (or predictive), as they are generated based on how the information was originally presented and not how it will be retrieved/re-applied (Tsui, 2000a) (Kaplan, 2000b).
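The exact-match limitation in the last point can be made concrete with a minimal sketch of a text-based inverted index, the structure underlying most of the search tools discussed here. This is an illustration of the general technique, not any specific product; the note texts and names are invented. A query term that does not literally appear in a stored item cannot retrieve it, even when the meaning matches, because no contextual knowledge is indexed.

```python
# Minimal inverted index: term -> set of note ids containing that term.
from collections import defaultdict

notes = {
    1: "meeting with the KM portal vendor about taxonomy licensing",
    2: "ideas for organising my bookmarks and saved articles",
}

index = defaultdict(set)
for note_id, text in notes.items():
    for term in text.lower().split():
        index[term].add(note_id)

def search(query):
    """Return ids of notes containing every query term verbatim."""
    results = set(notes)
    for term in query.lower().split():
        results &= index.get(term, set())
    return sorted(results)

print(search("taxonomy"))        # [1]  - the literal term matches
print(search("classification"))  # []   - a synonym retrieves nothing
```

The second query fails not because the relevant note is missing, but because "classification" never appears verbatim in it, which is precisely the representativeness problem the text describes.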

All of the above are difficult challenges. Many of these issues are currently being studied by researchers in the information systems, information retrieval and knowledge management (Tsui, 2000b) areas. Although the technologies presented in later sections of this paper address some of the above problems (especially in the contexts of organising, synchronising, and searching of information) at the personal and group levels, they nevertheless still have significant limitations and fall well short of a universal solution to these problems.

3.3 Technologies for PKM

As discussed above, EKM systems are top-down systems and, as such, their design has to be, among other things, generic (to the needs of most organisations) and scalable. In many cases, these design criteria have made it difficult for EKM systems to support specific knowledge-intensive activities (e.g. the capturing, filtering and combination of knowledge from various sources, mind/concept mapping, classification and indexing of personal information) at the individual level. Among researchers and practitioners working on PKM, there is strong agreement that EKM will not succeed unless IKWs have the necessary skills and tools to practice KM at the personal level. This section discusses technologies for PKM, i.e. the various types of tools available for IKWs to practice KM at the personal level. PKM tools, by their very definition and nature, are dramatically different from EKM tools/systems. As bottom-up systems, PKM tools can be easily installed by the user on a laptop/desktop/handheld device running a standard operating system (e.g. Windows 95, 98, NT, 2000, XP). Furthermore, these tools can operate standalone and/or in conjunction with the Internet; they require no programming effort to configure nor a corporate networking infrastructure to operate, and should be very inexpensive (say less than US$250 per copy) or free. From an organisational perspective, the personal adoption rate of any EKM system may vary enormously. Despite the fact that workers are realising value in knowledge sharing among peers and the power of collaboration systems (typically enterprise portals, e-collaboration tools, workflow systems), much of the captured/harnessed knowledge is actually retained in personal systems (e.g. local drives and directories).

When complemented by PKM tools, and provided that there is knowledge exchange (see a later section) at the organisational, group and individual levels, an organisational memory13 (Morrison, 1997) (Van Heijst et al, 1996, 1997) can be extended to the knowledge worker's level, representing an enormous gain in any organisation's knowledge base. Besides, the adoption of PKM tools also assists the overall acceptance of enterprise KM programs. As Barth (2001c) pointed out, "Implemented by one knowledge worker at a time, bottom-up tools and techniques demonstrate immediate and explicit benefits in terms of increased productivity and improved morale and build momentum to overcome the technological and sociological barriers to top-down, enterprise-wide KM initiatives." On the technology side, despite an extensive search by the author during the course of this research, there appears to be a scarcity of surveys/research on the use of PIM and PKM tools by workers. Although this research focusses on PKM tools as opposed to PIM (Personal Information Management) tools, it is interesting to note that Jones and Thomas (1997) conducted a survey of PIM tools and identified the popularity of the following tools among office workers at that time:

Tool/Item (% of respondents): To do lists (60); Address books (45); Personal organizers (40); Pocket diaries (35); Appointment books (15); Personal Digital Assistants (PDAs) (10)

Surprisingly, the PDA is the only electronic device included in the above survey. Barth (2000) presented an excellent list of PKM tools available in 2000. Corporate mergers, the fall of the dotcoms, competition, and the current economic hardship have had a considerable impact on the original landscape of PKM tools reported by Barth (2000)14. As part of this research, the author has studied the original list and, together with other input (Tsui, 2000a), extended/adapted it to produce new categories of PKM tools.
Table 1 depicts an alignment of these categories with common knowledge processes:
Table 1. Alignment of PKM tool categories with Knowledge Processes

PKM tool categories (rows): Index/Search; Meta-Search; Associative Links; Information capturing and sharing; Concept/Mind mapping; E-Mail management, analysis and Unified Messaging; Voice recognition; Collaboration and synchronization; Learning.
Knowledge processes (columns): Creation; Codification / Representation; Classification / Indexing; Search & Filter; Share / Distribute.

13 In the Americas, this term is commonly referred to as Organizational Memory (OM) or Organizational Memory Information System (OMIS). In Europe and Australia, the equivalent term is Corporate Memory. An organisational memory represents the collective knowledge and wisdom of an enterprise. Typically, the key elements of an OM comprise a fluid/dynamic combination of the systems, processes, best practices, and all the know-how possessed by its employees.
14 A more updated list of PKM tools can be found at URL=

A description of each of the above categories of PKM tools is given below:

Index/Search - Similar to the indexing of information hosted on Web sites, these tools index local or networked drives. Various types of searching (e.g. keyword, full-text, natural language, Boolean search etc.) are supported. As PKM tools, these products often place a limit on the number of pages or documents (or amount of information in GB) that can be indexed under a free or personal license. EnFish is a vendor that offers products in this category (as well as products that are targeted at enterprises). EnFish Find, as the name suggests, finds and groups together relevant information, people's names, documents, E-Mail messages, and links to Web sites. (At the enterprise level, EnFish Enterprise supports an organization's collaborations with external parties (e.g. customers, vendors and associates) using online communities, discussion spaces with moderation (via workflow) and PUSH capabilities.)

Meta-search - It is well known that no single search engine is perfect, and there is still a large amount of information on the Web that is not indexed and hence cannot be located by many search engines. Brake (1997) points out that even when all the major Web search engines are combined, they still index less than half of all the pages on the Web. The reason is that spiders, the programs used by search engines to scan Web pages and their content, only crawl Web sites that are commonly accessed, pages that have link(s) from other sites, and pages that are specifically registered. In the last few years, the above situation has deteriorated further, with an even smaller proportion of Web pages being indexed by the search engines.
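The merging step at the heart of a meta-search tool - one query fanned out to several engines, the replies combined, de-duplicated and re-ranked - can be sketched in a few lines. The sketch below is illustrative only; the scoring rule and engine names are assumptions, not taken from any product mentioned in this paper:

```python
def merge_results(engine_results):
    """Combine ranked URL lists from several search engines into one list.

    engine_results maps an engine name to its ordered result URLs. A URL
    scores higher the more engines return it and the nearer the top it
    appears in each; duplicate URLs collapse into a single entry.
    """
    scores = {}
    for results in engine_results.values():
        for position, url in enumerate(results):
            scores[url] = scores.get(url, 0.0) + 1.0 / (position + 1)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)
```

For example, merging {"engineA": ["x", "y"], "engineB": ["y", "z"]} ranks "y" first because both engines returned it. Dead-link elimination, omitted here, would be an additional filtering pass over the merged list.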
In order to (partially) redress this problem and to eliminate the (duplicated) effort of re-entering the same search string into other search engines, this category of tools propagates a search request to several (popular) search engines, combines the results, eliminates the dead and duplicate links, ranks the resultant list (according to a set of pre-determined criteria) and presents the (combined) result to the user. Several meta-search engines are available; one of them also provides an engine for comparing prices and product information.

Associative links - Originating from the information/library services areas, and using pop-up menus or hypertext while a document is being edited, these tools act as online dictionaries, thesauri or hyperlinks to selected topics on the Web. Atomica, a firm that markets both enterprise and personal products, offers Atomica Personal (free to download) in this category. InMagic's DB/TextWorks is another product in this category and it also supports many of the features described in the next category of tools. However, it is unclear from the InMagic Web site whether the pricing of its products qualifies them to be PKM tools (i.e. less than US$250 per copy).

Information capturing and sharing - With the amount of information an IKW encounters, it is often necessary to copy and paste (or drag and drop) information from various Web pages or documents to form new documents. Information can be text, pictures, diagrams or links to Web sites, and can be stored in various formats. Tools in this category not only support the above manipulations but many are also capable of alerting the user when new information is added or updated at the original sources, as well as sharing the linked information or documents with other group members. Entopia is a PKM tool vendor that markets three products - Quantum Collect, Quantum Collaborate and Quantum Capitalize.
Quantum Collect allows a user to grab any text and graphics from any source and store them in a Quantum (Q file) with additional notes. It uses a neural-based semantic engine to summarise and classify the captured information. Quantum Collaborate allows a user to selectively designate information to be shared with others in a workgroup by choosing to store the information locally, on a server, in a public or private folder, and with specific access privileges. Quantum Capitalize tracks, by searching and linking together relevant documents and people, the Intellectual Capital (IC) generated by knowledge workers in their daily activities (Hane, 2001). The characteristics of Entopia's products, especially the capturing and subsequent classification of information, are typical of many bottom-up PKM tools. Nearly all of these tools enable the user to re-apply the captured/classified information in Word documents and in E-Mail messages, thereby directly enhancing the productivity of the knowledge worker. Some of these tools even extend access to knowledge to PDAs and other wireless devices (e.g. products from Peramon Technology). Barth (2001e) and Jacobs and Linden (2001) describe several other PKM tools including Web2one, Organizer, Scopeware, and Clickgarden. More advanced tools are also available. Intelligent Agents (IA), an area of Artificial Intelligence (AI), are increasingly being used to locate and retrieve desired information on the Web. These agents can tolerate a certain degree of inexactness in matching and can aggregate the information found before presenting the results. Agents can not only retrieve data and information from Web sites but are also capable of executing Distributed Data Mining - the discovery and clustering of user profiles, interests and activities. By doing so, like-minded people can be connected together. Such capabilities can greatly enhance the efficiency and accuracy of a distributed search in a Peer-to-Peer network (see a later section). One such tool is GOtrieve from BIAP Systems. With GOtrieve, users can create specific agents to locate and retrieve Web or network-based information from a variety of sources. Individual results generated by the agents are combined into a custom format and can be accessed by any Web-enabled device.

Concept/Mind mapping - These are visualization tools that support the capture, organization, and presentation of ideas (represented in the form of concepts and relations). Searching, zooming and an extensive list of navigational features are generally provided by these tools.
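The re-centering operation these mapping tools provide - select a concept and the rest of the map re-organises around it - amounts to re-rooting an undirected concept graph. A minimal sketch, with illustrative concept names; real products add layout, styling and navigation on top of this:

```python
from collections import deque

def reroot(links, head):
    """Rebuild a parent -> children layout for an undirected concept graph,
    taking `head` as the new centre of the map.

    `links` maps each concept to its neighbours; a breadth-first walk from
    `head` assigns every other concept a position relative to it.
    """
    tree, seen, queue = {}, {head}, deque([head])
    while queue:
        node = queue.popleft()
        children = [n for n in links.get(node, []) if n not in seen]
        tree[node] = children
        seen.update(children)
        queue.extend(children)
    return tree
```

Selecting a different concept simply means calling reroot again with a new head, mirroring the instant re-positioning behaviour of the tools just described.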
For example, any one of the concepts can be made the head (and/or centre) of the graph, and the other concepts and relations are instantly re-organised to depict their relationship(s) with the head of the graph. Two products, Mind Manager and The Brain, are exceptional and deserve to be mentioned. The Brain is a visualisation system that supports three-dimensional views of linked concepts. Concepts are generally, but not necessarily, linked by a subsumption (or part-of) relationship. Once a concept is selected, it becomes the centre or head of the tree and The Brain automatically repositions all its children with respect to the selected concept. The Brain can be Web-deployed, and several organisations have adopted this software as a front end to their product (e.g. Health Language15 incorporates The Brain into its medical ontologies product, and the site16 uses The Brain to assist the user to navigate its Web pages to the desired section) (Jacobs and Linden, 2001). Mind Manager is another product that is similar to The Brain for concept representation and idea capturing. Mind Manager produces Mind Map(s), a two-dimensional graphical display of linked concepts. Each concept/node can have notes, URL addresses, documents and pictures attached to it. The Business Edition of Mind Manager can even convert a Mind Map into Word documents, PowerPoint presentations and Web pages (in HTML format). One does not have to use Mind Maps as a standalone tool. For example, Mind Maps can complement text in E-Mail communications. Furthermore, Mind Maps can be superimposed: a set of "base" maps (e.g. representing core concepts which cannot be changed) can be pre-defined, and other maps extend the information represented in the base maps. For a detailed listing of idea capturing and conceptualisation tools, please refer to Kulikauskas (1999).

E-Mail management, analysis and Unified messaging - One of the most common tools used by knowledge workers to communicate with each other is the E-Mail system.
Although a very effective and popular communication tool, the number of E-Mail messages that a worker receives has been rising steadily. According to Kust (1999), in the US each worker receives an average of 190 messages a day. These messages include faxes, voice messages, E-Mail messages, and ordinary mail. In Great Britain, the average is 160 messages a day17. A survey conducted by Macquarie University on the use of E-Mail by 103 managers in Australia and Hong Kong indicates that on average managers

15 URL=
16 URL=
17 Statistics on E-Mail messages vary enormously depending on the source and the geographical region in which the survey is conducted. For example, according to Walker (2001), research group Jupiter Media Metrix expects the average E-Mail user's burden to rise from 96 messages per week in 2000 to 170 messages a week in 2006. Of those 170, 74 will be commercial messages and 28 will be unsolicited spam.


receive approximately 25 messages a day and send out about 13 messages on average (Burton and Nesbit, 2001). Furthermore, the survey reveals that managers spend more than an hour each day handling E-Mail messages, and that more than 20% of those messages are of little interest to the managers. Smart tools as well as good E-Mail etiquette are definitely needed in order to contain the proliferation of E-Mail messages. Gurteen (1995) explains how computer-based communications can affect trust between people and outlines steps and etiquette for establishing trust in daily E-Mail communications. On the technology side, tools now exist to filter, categorise and divert incoming E-Mail messages. Similar to the information capturing and sharing tools mentioned above and some of the Peer-to-Peer KM tools mentioned in a later section of this paper, intelligent agents are also increasingly being used in E-Mail systems. Typically, these agents travel to a user's desktop via an E-Mail message, having been pre-configured to collect and/or aggregate information from individual users before presenting the results (sometimes via a link) to the intended receiver (e.g. FireDrop's Zaplet). Some agents even, based on a user's response, perform an instant update of the data they carry and then present the customised information to the viewer (e.g. Gizmoz's Gizmo) (Robinson, 2001). The majority of E-Mail management tools tackle the E-Mail overload problem at the client (receiver's) end. In contrast, Returnpath offers a fairly unique solution to this problem by identifying potentially dead (or incorrect) E-Mail addresses18, updating new addresses and correcting syntactic and formatting mistakes in E-Mail addresses (Ayan, 2001). At the corporate level19, there exist E-Mail tools that tackle the problems of unstructured and scattered data in an organization by extending the indexing of documents/files to the workers' PCs and laptops.
For example, XDegrees' XMail directory server keeps track of all the shared files around the network. As a result, an enterprise search engine can locate all the shared files that a user is permitted to access. Another powerful feature of XMail is its ability to replace an E-Mail attachment with a link - which not only reduces download time but also provides a higher degree of security for the posted material. XMail users can also "subscribe" to specific documents and shared folders so that they are automatically notified if there is a change (i.e. modification or addition) in the content. Similarly, Abridge also has an application that keeps track of and classifies all E-Mail by content and topic area using natural language analysis (Compton, 2001). Unified Messaging (UM) systems are rapidly gaining popularity. These systems integrate E-Mail messages, SMS, faxes and voice mails into a centralised repository, and users can manipulate these messages (irrespective of their original format and source) using a single set of commands (Krane, 2001). At the enterprise level, Avaya and Captaris offer in-house packages for organisations to install UM systems. It is expected that UM systems will be extended to the personal level and that there will be widespread adoption in the next 3-5 years (loc. cit.).

Voice recognition - Unlike the rest of the PKM tools, which are nearly all text-based, these tools accept verbal commands as input. They assist the user to, for example, send instructions to the operating system, manipulate files in folders and directories, establish an Internet connection and fetch/read E-Mail, compose files/E-Mail messages and so on. In most cases, some initial training is required to calibrate the software to a particular user's voice patterns. The power of these tools lies in their ability to co-operate with the other PKM tools described in this section, thereby providing a much-needed voice-driven interface to a range of PKM tools.
Two products in this category are Dragon Speak and IBM Via Voice20. Although progress is still slow, several search engine vendors are extending KM systems with voice-driven interfaces (Barth, 1999).
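The client-side filtering and diversion of incoming messages mentioned under E-Mail management above can be sketched as an ordered rule table. The rules, folder names and message fields below are illustrative assumptions, not the behaviour of any product named in this paper:

```python
# Each rule pairs a predicate over a message with a destination folder;
# the first matching rule wins, otherwise the message stays in the inbox.
RULES = [
    (lambda msg: "unsubscribe" in msg["body"].lower(), "bulk"),
    (lambda msg: msg["sender"].endswith("@mycompany.example"), "internal"),
]

def divert(msg, rules=RULES):
    """Return the folder an incoming message should be filed under."""
    for matches, folder in rules:
        if matches(msg):
            return folder
    return "inbox"
```

More capable tools replace the hand-written predicates with learned classifiers or the natural language analysis mentioned above, but the divert-on-first-match shape stays the same.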

18 It has been reported that up to 25% of E-Mail addresses become inoperative each year (Ayan, 2001).
19 Obviously, being enterprise systems, these tools do not satisfy the original criteria to qualify as PKM tools. Examples of enterprise-wide E-Mail management systems are hardware and network performance monitoring tools (e.g. NetIQ's AppManager, Platinum Technology's ProVision), compliance management tools which produce reports for regulatory authorities (e.g. OTG's EmailXtender), and virus detection software (e.g. Industrial Economics' Guinevere) (Essex, 2001).
20 These tools are not to be confused with enterprise-based Natural Language continuous Voice Recognition (NLVR) software which, apart from carrying a considerable price tag, requires extensive customisation, integration and effort to deploy. In-depth analyses of the business domain, lexicon, common utterances, and ambiguity resolution are needed in order to develop these applications. Some notable applications of NLVR are insurance premium estimation, call routing at company switchboards, telephone betting, and the ordering of pizzas and taxis.


Collaboration and synchronisation - These tools support knowledge sharing (in terms of question answering, discussions and idea sharing) among groups of people with a common interest in a particular topic. Typically, these are public discussion forums, specific mailing lists and (electronic) bulletin boards hosted on public Web sites. A wide range of personalisations is available to each individual. For example, one can nominate the topic(s) (and sub-area(s)) of interest, subscribe/unsubscribe at any time, and elect to receive new postings instantly (via E-Mail), a summarised report of all the postings at regular intervals, or no alert message at all (i.e. leave all postings on the server). Among others, one of the most established public bulletin boards hosts thousands of active interest groups at any time. Synchronisation refers to the use of Peer-to-Peer technologies to coordinate/replicate the content of various work spaces across multiple devices for an IKW. Please refer to a subsequent section of this paper for details.

Learning - PKM systems also extend to the learning area. These tools enable knowledge workers to take control of their own learning process, assisting by gathering and planning the course modules, conducting the training and tracking the accumulation of competencies. In particular, BrainX's Digital Learning System (DLS) assists an IKW to capture any information (e.g. documents, text, pictures, drawings etc.) on the Web and convert this material into questions and answers (Q&As). These Q&As are valuable as they evolve from processes that are highly integrated with workers' daily activities. Such self-developed learning packs, especially the ones by subject matter experts, represent core learning and domain knowledge of an organization (or a network of peers). Among others, Motorola University and LySonix, a surgical device manufacturer, have adopted BrainX's product (Barth, 2001a)21.
In the light of recent advances in E-Learning and the findings of Knell's (2000) survey (i.e. there has been a sharp increase in the number of free workers in the new economy), more and more tools like DLS will become available on the market.
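The synchronisation capability mentioned above - replicating an IKW's work spaces across multiple devices - can be sketched as a last-writer-wins merge over per-device replicas. This is a deliberate simplification: real P2P synchronisation tools must also handle deletions, conflicts and partial connectivity. File names and timestamps are illustrative:

```python
def sync(replicas):
    """Merge per-device file replicas into the state all devices converge to.

    Each replica maps a file name to a (timestamp, content) pair; for each
    file, the copy with the newest timestamp wins.
    """
    merged = {}
    for replica in replicas:
        for name, (stamp, content) in replica.items():
            if name not in merged or stamp > merged[name][0]:
                merged[name] = (stamp, content)
    return merged
```

Running the merge on every device, against the replicas it can currently reach, is what lets the work spaces converge without any central server.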

Despite the availability of commercial products, as mentioned before, there appears to be inadequate academic research on the topic of technologies for PKM. Only a handful of research projects can be identified. Torrance (1995) discusses the use of personal taxonomies in a personal and group document sharing system that supports the referencing of research material on the Web. Bettoni et al (1998) designed KnowPort, a PKM tool that assists an IKW to track files, E-Mail conversations and completed activities, as well as relate them to the critical concepts being used. Kaplan (2001, 2000b) has provided a case study on PKM strategies and tools.

3.4 Trends in the future of PKM tools

This section outlines the author's observations on the future of PKM tools. Some of the above-mentioned technologies for PKM will, undoubtedly, be included as productivity or collaboration tools in next generation operating systems (both business and personal versions). For example, Microsoft's latest operating system, XP, has built-in support for enabling workflow, simple server-side collaborations, speech recognition and the management of scanned documents (Barth, 2001d). The increasing popularity of PKM tools will lead to changes in the landscape of (commercial) KM technologies. Enterprise KM vendors will extend their offerings to support various knowledge processes at the individual level. For example, Autonomy, an Enterprise KM tool vendor, has extended its products to provide indexing and profiling of data on IKWs' laptops/desktops. As a result, the "search space" of Autonomy's22 search engine has been expanded to cover personal knowledge bases (e.g. E-Mail systems, directories and folders) as well. Besides Autonomy, vendors like EnFish (originally a PKM tool vendor),
21 At the enterprise level, automatic question answering systems that aggregate content and route questions to known experts are also available. Organik, AskMe, Clerity, QUIQ and RightNow Technologies are vendors that provide such products.
22 Until recently, Autonomy also had a Personal KM system (called KENJI) that could be downloaded for free (Lambiase and Hayward, 2001) (Batchelder et al, 2001).


Atomica and The Brain are also offering interoperable products for organisations and individuals. EnFish, in particular, will deliver tools that enable IKWs to build Personal Portals (see a later section). BadBlue, a vendor that offers products to help transform a client workstation into a Web-based server, also offers a free personal edition of its enterprise solution. Concept and mind mapping tools will incorporate collaboration features to support online brainstorming among peer group(s). For example, a group of users can view and edit Mind map(s) online. Individuals can submit ideas/questions/answers to a collaborative space while remaining anonymous. Results from users can be summarised, ranked and presented to the entire group. One such tool is GroupSystems Online. Considering that a significant proportion of E-Mail messages is about checking people's availability and scheduling appointments, the next breed of E-Mail tools should exhibit some kind of intelligence (e.g. concept extraction by way of semantic and syntactic parsing) in attempting to identify the purpose of a message, and have stronger integration with information management tools (e.g. contact management, phone books, electronic diaries, calendars). Researchers are also exploring approaches to perform systematic categorisation of, and access to, corporate information repositories. There is evidence that many of the Artificial Intelligence (AI) and Information Retrieval (IR) techniques developed in the 80s and 90s are highly applicable to solving today's problems in information access. In particular, McCabe (2001) proposes a holistic approach to tackle the E-Mail Infoglut problem by using knowledge/topic maps, taxonomies and ontologies to categorise, extract and summarise corporate information for presentation. At the enterprise level, three vendors, Solutions-United, Temix and WhizBang Labs, are applying a combination of syntactic (e.g.
token parsing, lexical analysis) and semantic analyses (e.g. morphological analysis, disambiguation, word sense lexicons etc.), as well as user-defined rules, to perform interpretation and information extraction from E-Mail messages (Sullivan, 2001). While there is a strong requirement for PKM technologies to remain bottom-up, easy-to-install and powerful search, information extraction and categorization tools, it is equally important that these tools are aligned to support the common tasks performed by IKWs. As Jacobs and Linden (2001) have rightly foreshadowed, "Personal knowledge tools must simultaneously be flexible enough to support a wide variety of behavioral patterns and organizational needs while also being tuned to the particular needs of specific knowledge tasks. This suggests that personal knowledge tool designers will need rich models of user behavior within the workplace that include goals, tasks, projects, events, intents, actions and interactions." On the last point, there is evidence that the latest breed of PKM tools is increasingly aligned to support project work23. For example, Advanced Data Management released a product called the Knowledge Management Desktop (ADM) in February 2002. Through a single interface, ADM collects and categorises all project information, URLs and documents/files for an IKW24. ADM offers a combination of the PKM technologies described earlier in this paper, and users can download a free 30-day trial of the full product from its Web site. Over the years, the author has developed a set of Personal KM strategies; they are summarised in Appendix I. In the long term, the practice of PKM will lead to a gradual increase in Personal Intellectual Capital (PIC).
As Barth (2000) pointed out, "Personally accessible, immediately useful and relatively inexpensive PKM tools can empower knowledge workers to take ownership of their intellectual assets and offer an alternative approach for deploying KM within an organization."25 For details on how to conduct a Personal Knowledge Audit, please refer to Bailey and Clarke (2001), Barth (2000) and Cope (2000).
23 SixDegrees' product. URL=
24 Advanced Data Management Systems' Knowledge Management Desktop (ADM). URL=
25 The author's opinion is that PKM offers a complementary rather than an alternative approach to deploying KM in an organisation.


Up to now, this paper has been focussing on Personal KM and technologies for practicing KM at the personal level. The rest of the paper focusses on knowledge sharing between individuals (irrespective of whether these individuals work for the same organisation or not) using the highly user-centric Peer-to-Peer (P2P) computing paradigm.

Peer-to-Peer (P2P) Computing

One of the major drawbacks of client-server systems is their inability to capitalise on the information and resources available at the edge of a network. P2P computing exploits this weakness of traditional enterprise computing by offering an alternative computing paradigm that takes advantage of the collective resources (e.g. data, computational power, connectivity etc.) at the edge of a network. Generally speaking, an electronic device situated at the edge of a network is termed a peer device. IDC's definition of a peer is a very broad one: a peer can be a software application, a client, a server, a wireless, mobile, peripheral I/O device, or a subsystem (storage/server) (Dyer et al, 2001). P2P computing is not new and has been around for at least 10-15 years. There are four major reasons why P2P has become increasingly popular in the last 2-3 years26. Firstly, storage costs have continued to come down. Secondly, computing devices are becoming more and more powerful (in terms of their computational abilities). Thirdly, there is now higher connectivity among peer devices than ever before. Lastly, advances in broadband communications and virtual name spaces have enabled efficient and direct communications between peer devices in any network (Phifer, 2001). Shirky (2000) presents a very concise definition of P2P computing: "P2P is a class of applications that takes advantage of resources - storage, cycles, content, human presence - available at the edges of the Internet." Furthermore, "Because accessing these decentralized resources means operating in an environment of unstable connectivity and unpredictable IP addresses, P2P nodes must operate outside the DNS system and have significant or total autonomy from central servers." (loc. cit.) Other definitions also exist. For example, Ovum's definition of P2P is "Any application or process that uses a distributed architecture and allows direct bidirectional communication between resources."
(Macehiter and Woods, 2001) A litmus test for determining whether an application is a P2P one (Shirky, 2000) consists of the following two questions:
1. Does it treat variable connectivity and temporary network addresses as the norm? and
2. Does it give the nodes at the edges of the network significant autonomy?
An application is P2P if and only if the answers to BOTH of the above questions are YES. Being an emergent as well as a disruptive technology, many issues related to P2P computing are still being debated (by researchers and practitioners). Nevertheless, some of the commonly agreed advantages of P2P computing are that:

- P2P is less vulnerable to the failure of a central node (server).
- Storage is inexpensive.
- P2P is natural and personal to users. One of the greatest attractions of P2P computing lies in its ability to respond dynamically to the ways in which work group(s) are formed and the resources available in the work group(s).
- P2P enhances load balancing and offers alternative and improved search methods (see below).

InfoWorld specifically identified Peer-to-Peer Computing as one of 10 key emerging technologies in 2001 (Biggs, 2001).



Models for P2P computing

Generally speaking, there are two dominant models for P2P computing - Pure P2P and Hybrid P2P. Pure P2P, as the name suggests, is entirely server-less. Each peer has to maintain data and directory information, and knows how to connect to other peers in the network. For Hybrid (or brokered) P2P, one or more servers exist to maintain data, directory information and knowledge of peer devices. More refined classifications also exist. For example, by focussing on the fundamental components of computing, Sweeney et al (2001) have defined five P2P models:
1. Atomistic (i.e. point-to-point connection)
2. User-Centered (e.g. Instant Messaging)
3. Data-Centered (e.g. file sharing and Distributed Content Networks)
4. Compute-Centered (i.e. the collaborative computing concept whereby a computationally intensive task is split into multiple sub-tasks and a network of computers jointly accomplishes the task, e.g. the SETI@Home project)
5. Web Mk 2 (i.e. a convergence of the above four models which will lead to an abundance of Personal Portals on the Web, the so-called Pervasive Internet concept)
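The operational difference between the two dominant models can be sketched in a few lines: a pure P2P network locates a resource by flooding a query to neighbours under a time-to-live, while a hybrid network simply asks a broker's directory. The peer names, data layout and TTL below are illustrative assumptions:

```python
def flood_search(peers, start, key, ttl=3, seen=None):
    """Pure P2P: ask neighbours recursively until `key` is found or the
    time-to-live runs out; `seen` stops the query looping around cycles."""
    seen = set() if seen is None else seen
    seen.add(start)
    if key in peers[start]["data"]:
        return start
    if ttl == 0:
        return None
    for neighbour in peers[start]["links"]:
        if neighbour not in seen:
            hit = flood_search(peers, neighbour, key, ttl - 1, seen)
            if hit:
                return hit
    return None

def broker_search(directory, key):
    """Hybrid (brokered) P2P: a server's directory maps each key to the
    peer that holds it; the transfer itself still happens peer to peer."""
    return directory.get(key)
```

The trade-off mirrors the discussion above: flooding needs no server but its reach is bounded by the TTL, while the broker gives exact answers at the cost of a central point of failure.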

Miner (2001), on the other hand, develops a taxonomy to categorise P2P models based on the topological characteristics (e.g. ring, hierarchical, centralised, decentralised) of a peer network. Furthermore, Miner advocates that, in order to assess the strengths and weaknesses of a P2P model, one needs to focus on criteria like the manageability, information coherence, extensibility, search efficiency, fault tolerance, security and scalability of a P2P network. Intel Corporation provides yet another way to classify P2P networks. Based on network usage, Intel has defined five models for P2P computing - Universal File Sharing, Active Distributed Storage Sharing, Collaboration, Distributed Computing, and Intelligent Agents (Knighten, 2000). Among recent industry news on P2P computing, SUN Microsystems announced in February 2001 that it is developing JXTA, a set of open protocols that supports distributed P2P applications. In particular, JXTA will have the ability to connect peers to form and discover groups, as well as monitor and control their activities in a secure environment. For detailed discussions of the technologies of P2P computing and major P2P projects, please refer to Oram (2001a). This paper focusses specifically on the impact of Peer-to-Peer computing on Knowledge Management as opposed to general aspects of P2P computing. More specifically, the topics of distributed/grid computing27, copyright issues with P2P file sharing (e.g. the Napster debacle) and Instant Messaging (IM)28 are excluded from the scope of this research. A list of vendors of P2P technologies can be found in (Lambiase, 2001; Rein, 2001). Kwak and Fagin's (2001) report lists P2P vendors that provide Internet infrastructure and services.

P2P Computing and Knowledge Management

There are several reasons that prompted the author to focus on identifying the impact of P2P computing on KM. Firstly, as stated before, Knell (2000) has identified that in recent years more and more knowledge workers are switching from job to contract and from contract to contract, rather than from job to job. This means that
27 CSC-NASA is a participant of the Global Grid Forum (Global GF).
28 Instant Messaging (IM), nevertheless, is becoming more and more accepted in the business world. IDC expects the number of corporate users of Instant Messaging programs to increase from 5.5m in 2001 to more than 180m in 2004 (Spangler, 2001).


collaborations in the new economy are rapidly changing from intra-organisational to across organisational boundaries. Secondly, the vast majority of KM tools and systems, both commercial systems and research prototypes, are enterprise-based systems. As such, these systems/tools require a corporate infrastructure and/or a proprietary network to operate and are generally inflexible in supporting instant, ad hoc but intensive collaborations. Thirdly, up to now, there has only been a handful of researchers29 focussing specifically on the very topic of Peer-to-Peer Knowledge Management (P2PKM). P2PKM is an under-explored but important topic. It is important because, by applying the P2P computing paradigm to the whole area of Knowledge Management, collaborations/interactions between peers can be naturally extended across organisational boundaries without relying on any corporate infrastructure. Peer nodes are truly autonomous and no centralised index or repository for storing knowledge is needed to support key knowledge processes (e.g. search, codification, distribution). According to Chillingworth (2002), "The computing power of peer to peer, coupled with knowledge management and collaboration applications and content delivery tools make peer to peer a powerful tool of the future." Equally, Dyer (2001) is adamant about the interplay between P2P and KM, as "P2P emphasizes collaboration and content management, the two primary drivers of knowledge management. Because P2P is a computing model that links peers for the purpose of sharing and leveraging resources, it feeds into the knowledge management market." When the European KM Forum used scenario planning in 2001 as a technique to project the role of technologies in supporting future knowledge-intensive work in Europe, PKM and Peer-to-Peer technologies represented two of the base factors30 in the five projected scenarios (Simpson et al, 2001). This is particularly encouraging.
InfoWorld also rated Knowledge Management and Peer-to-Peer Networking as two of the top 10 technology trends in 2001 (Vizard, 2001). P2PKM can be seen as a natural extension of practicing KM from an individual level to sharing knowledge with a group of peers. Three critical factors still prevail. Firstly, like PKM, P2PKM is also highly user-centric. Secondly, peer group(s) can be dynamically formed and dissolved. Thirdly, collaborations in a P2PKM environment are not constrained by any organisational boundaries. Conversely, one can also perceive PKM as a special case of P2PKM where the number of peers equals one. Furthermore, P2P computing is not only applicable to two or more persons. As an example, an IKW may have multiple presences in a network and use a P2PKM application to synchronise/replicate all his/her (electronic) spaces, share files and conduct content-directed searches (see below) on multiple machines. In particular, the following three areas of P2PKM are examined in more detail: File Sharing, Collaborations and Search.

5.1 Peer-to-Peer (P2P) File Sharing

One of the most popular applications of P2P is sharing files among a group of peers in the network. Forrester Research predicts that "By 2004, 33% of the online population will use P2P services for storing and sharing personal data." (Kasrel, 2001) By using P2P technology, a file can be located anywhere in a P2P network and any peer on the network can search, access or fetch file(s) from any other node. HTTP and FTP protocols are commonly used for file transfers. P2P file sharing is seen as a logical alternative to (as well as overcoming some of the major problems with) traditional EDMS (Electronic Document Management


29 As mentioned in an earlier section, searches on the Google Internet search engine using the strings Knowledge Management and Peer-to-Peer Computing returned 692,000 and 10,600 hits respectively in April 2002. However, only 18 hits were found when Peer-to-Peer Knowledge Management was used as the search string. Existing work on P2P Computing and KM includes Axton et al (2002), Macehiter and Woods (2001) and Woods (2001).
30 The actual base factors used in the scenario planning process are "power to the people" and "peer-to-peer is king".


Systems) where, typically, files are stored in a centralised location. It is especially suited for situations where:
- Intensive collaboration is needed for a small group of people;
- Collaborations are predominantly explicit knowledge transfer, e.g. by way of documents;
- Members of the group belong to multiple organisations; and
- Rapid deployment is needed (as rolling out P2P file sharing applications can often bypass the constraints of IT policies, approval processes, allocation of IP addresses, DNS configurations etc.).

The two most commonly known P2P file sharing applications are the distribution of anti-virus software inside organisations and the (authorised) downloading of music files and electronic books. Texar Corporation is one vendor that offers secured file sharing on the Web using a P2P network. Roku offers a P2P file sharing application whereby users can drag documents into any local or shared directories. Peers from various workgroups can access these documents. Roku uses SSL (Secure Socket Layer) for all its connections, and information transmitted over any Roku connection is encrypted. WebV2 offers an application platform and network infrastructure to enable file sharing and searching, both within an organisation and between organisations, in an E-Business environment. On an absolutely mega scale, Microsoft's Farsite and the University of California at Berkeley's OceanStore projects are aiming to connect 100,000 individual computers and to support 1 billion devices, respectively. Apart from the gains in connectivity, file sharing and content delivery, there are additional benefits. Such a massively connected infrastructure that enables multiple devices to access the same piece of data from numerous locations also directly enhances the power (both in terms of accessibility and richness of the information) of PKM and e-collaboration tools (Rapport, 2001). However, there are also serious challenges. Although distributed search is powerful and communities (that together foster search spaces) are growing, significant research is still needed. As Phifer (2001) pointed out, "...projects such as Oceanstore that conceive of the Internet as one very large storage system, which can be searched and indexed. The challenge is to figure out how to build search engines and directory servers that are distributed over millions of computers." Furthermore, there are also challenges in deploying P2P file sharing applications.
Such issues often include bypassing corporate firewalls, VPNs and/or other kinds of security mechanisms embedded in organisational information systems. Some of the criteria for evaluating a truly sustainable P2P file sharing application are:
1. A powerful user interface that supports simple but effective searching of files (by name, file type, content etc.) and the ability to define the extent of file sharing on each of the client nodes
2. Good presence management, e.g. efficiency in locating other peers on the network
3. Intelligent network management, e.g. scalability, the ability to monitor/control incoming and outgoing traffic, as well as the ability to bypass blockades or congested nodes on the Internet (Strom, 2001)
4. Robustness for each node to dynamically switch between acting as a server and a client
5. The existence of a trusted community of users who share common goals and interests (Strom, 2001)
6. A balance of genuine sharers and contributors in the peer network

Aronoff (2001b) outlines some emerging principles for businesses to consider when adopting a P2P file sharing application. Among others, there is the need for good presence management tools as a way to establish a human context around the files being shared, "deep" and intelligent search capabilities that far exceed matching at the document title level, the ability to locate files of various formats, and automatic sensing of network load to dynamically manage the fetching of documents from peer node(s).


Beyond the sharing of files, P2P technology has also been applied to store (large amounts of) content in multiple/dispersed locations, the so-called Distributed Content Network (DCN) concept. A DCN is primarily achieved by taking advantage of the expected access patterns and excess storage capacities of various nodes (peers) in a network. The benefit to the end-user is that, rather than fetching a huge volume of data from one single source (which can be congested if it is a high-traffic node), it is more efficient and cost-effective to access the content from a combination of more accessible sources. The DCN technology also comes with re-defined business models. For example, Yaga has recently announced a user-pays model for delivering content. In the long term, Yaga's vision is to become a neutral knowledge marketplace offering a large amount of content material and downloading options (Mougayar, 2001). Other DCN vendors include Akamai, Digital Island and NextPage.

5.2 Peer-to-Peer (P2P) Collaborations
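The core DCN idea of assembling content from a combination of accessible sources can be sketched as follows. The data model is hypothetical and greatly simplified: each replica advertises its current load and the chunks it holds, and the client plans to fetch each chunk from the least-loaded holder instead of pulling everything from a single origin server.

```python
def plan_fetch(chunks, replicas):
    """For each chunk id, pick the replica holding it with the lowest load.

    replicas: {name: {"load": float, "chunks": set_of_chunk_ids}}
    Returns {chunk_id: replica_name}.
    """
    plan = {}
    for c in chunks:
        # All replicas that hold this chunk, keyed by their current load.
        holders = [(info["load"], name)
                   for name, info in replicas.items() if c in info["chunks"]]
        if not holders:
            raise LookupError(f"chunk {c} is unavailable on every replica")
        plan[c] = min(holders)[1]   # least-loaded holder wins
    return plan
```

A real DCN would of course refresh load figures continuously and fetch chunks in parallel; the sketch only shows why spreading requests across replicas relieves a congested high-traffic node.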

In the new (knowledge-based) economy, the complexity of tasks and the demand for creative work from workers are both very high. These two trends have shifted management's focus from the traditional process-based framework to a team-based model, and the style of work from, historically, one of coordination and cooperation to collaboration (Grantham, 2001). The advance of technology and the upsurge of free workers have also exacerbated the need for collaboration. As Knell (2000) pointed out, "New forms of information and communication technology (ICT), especially the Internet, are creating new ways of doing business that allow free workers to flex their new-found muscles. The wired, networked economy is the natural habitat of the free worker, and is opening up new markets for talent and new opportunities for networking outside company walls." Meanwhile, the proliferation of new business models on the Internet is also exerting pressure on organisational information systems to become more distributed, user-centric and interoperable. As Dignum (2000) pointed out, "Information systems to support modern business applications must be decentralized, autonomous and heterogeneous. A new generation of information systems is shifting towards the integrated support of structured and unstructured processes and information sources, formal and informal communication and different levels of activity coordination. Lately, the role of information systems has shifted from the support of one specific function and set of users, to that of supporting collaboration and business processes in a decentralized, distributed environment." Furthermore, in today's highly competitive business environment, it is increasingly difficult to expect any single organisation to possess all the necessary skills to support the pursuit of new business initiatives.
Such initiatives can be, for instance, the development of a new product, the identification of a new market segment and the alignment of processes and tools to serve the new segment. Consequently, a rising number of alliances and partnerships have been formed between organisations. It is not uncommon that, in these alliances and partnerships, project team members belong to multiple organisations, work in dispersed locations, and use productivity tools that may not be entirely interoperable. These teams are truly virtual project teams and they have a strong requirement for flexible and easy-to-deploy collaborative tools (CSC, 1999) (Katzy et al, 2000). Collaboration tools and groupware are a very useful and powerful technology in any organisation. When used effectively, these tools can support, among other things, remote presentations, idea sharing, application sharing, joint decision making (e.g. real time polling), and video conferencing. Applications of this technology range from in-house training, community building, virtual meetings, briefing sessions and product development collaborations to distance education. Traditional groupware is usually centralised and managed by IT departments. As such, these tools are inadequate for supporting fast-cycle collaborations (where group members may belong to multiple organisations and rapid deployment is needed). P2P computing offers a flexible, easy and ad hoc way to support many-to-many small group collaborations over the Internet (Aronoff, 2001a). Depending on the needs, these groups (and their associated knowledge structures) can be swiftly established (and dissolved). Once they become operational, the


knowledge structures so created and accumulated in ad hoc collaborative groups are valuable assets, and they may include shared documents, discussion forums, E-Mail and instant messages (if captured), saved searches, Web site bookmarks etc. P2P collaborations, due to their highly user-centric nature, also lend themselves to the formation of different types of communities. By doing so, collaborations in a P2P environment further reinforce the power of the Personalisation approach to KM. Parameswaran et al (2001) produce a classification of the various types of communities based on the quality of service and content in P2P collaborations. Generally speaking, one can classify these communities as Business Communities and Social Communities. In order to ensure that there are sustained collaborations in a community and that members are deriving value from their participation in the community, one needs to understand more about the nature of and reason for the existence of a community, and pay attention to the people and process issues. As Parameswaran et al (2001) pointed out, "The nature of the P2P community influences the economic transactions of the group, since group behavior depends not only on the behavior of other group members, but also on the context of interaction within the members of the group. For example, a social group displays more altruistic behavior and relies on social norms for enforcement whereas the regulation of a business community would be based on more rational rules governing transaction." Governance models, rewards and recognition schemes, and payment methods for P2P communities are further explored (loc. cit.). On the technology side, according to Axton et al (2002), "Knowledge management applications such as search, content management and collaboration have been identified by P2P start ups as the most attractive segment of the enterprise market."
In particular, Ikimbo's Omniprise is an enterprise-level groupware application that provides a platform for knowledge sharing and communications across organisational boundaries. The Omniprise application framework offers instant messaging, E-Mail, both synchronous and asynchronous communications, and file sharing among peers in the network (including wireless devices). Chamberlain Williams Tison & Associates is using Omniprise to share resumes and job descriptions in conjunction with instant messaging (IM), and Deloitte & Touche LLP is using Omniprise for document sharing between 300-400 users (Hall, 2001). Yenta is a P2P system that automatically discovers users' interests and creates discussion groups for these users. Using Yenta, a user can send an instant message to one or more users in a discussion group. Endeavors Technology targets device connectivity in a P2P infrastructure and aims to support all types of devices, from mainframes to Personal Digital Assistants (PDAs), in the near future31. Vendors in the KM and P2P arenas are also forming alliances to pursue the potential business opportunities. For example, 1stWorks provides an interactive peer-to-peer collaboration platform that supports drag-and-drop file sharing, instant messaging, voice connections and joint navigation of the Web among peers. Morningstar is a vendor that already has a dynamic peer connectivity technology. The alliance formed between 1stWorks and Morningstar, the first alliance between a P2P vendor and an e-collaboration tool vendor, will result in hotComm, 1stWorks' product, being integrated into Morningstar's Web-based solution. Clients of the two vendors stand to gain more powerful features from their future collaborative environment32. During the course of this research, the author has established a shared space to share documents and ideas with other interested parties (both inside and outside CSC) using a P2P collaboration tool called Groove.
For details of the Groove shared space and a summary of its capabilities, please refer to Appendix II. Moss and Franklin (2001) have compiled a report on the comparative evaluation33 of six E-collaboration tools including FlyPaper, Groove, eRoom, SiteScape, InfoWorkSpace and TeamWave.

31 For an in-depth discussion of Peer-to-Peer architecture and Endeavors' Magi infrastructure, see Bolcer et al (2000).
32 Source: Business Wire, 14th August, 2001; O'Reilly Network, 17th June, 2001; and KMWorld, 15th August, 2001.
33 Another comparison of Ikimbo, Flypaper and Groove appears at URL=,416,2706402,00.html


In the research area, Ko (2001) presents an architectural framework that supports a virtual whiteboard for visual collaboration by extending personal desktops with P2P Web Services technology. Three potential applications have also been identified: Geo-spatial mapping, Product Lifecycle Management, and an Enterprise Collaboration platform (loc. cit.). The following sections further elaborate on two other significant benefits of P2P collaborations: the facilitation of tacit knowledge transfer and the extension/preservation of the organisational memory.

5.2.1 P2P Computing and Tacit Knowledge Transfer

Another benefit in using a dynamic P2P collaboration tool is the capture and transfer of tacit knowledge. Tacit knowledge, as opposed to explicit knowledge (e.g. documents, business rules, video files etc.), is a lot more difficult to identify, capture and represent (Haldin-Herrgard, 2000) (Stenmark, 2001). Successful capture of tacit knowledge in an organisation is often seen as a critical success factor for achieving a sustained KM program: "It is the accumulated tacit knowledge within the employees and social structure of firms that hold the promise of a sustainable long-term knowledge." (Mascitelli, 1999) According to Dignum (2000), "Existing (collaboration) systems mainly concentrate on explicit knowledge, leaving tacit knowledge outside the system, so that integral experience based knowledge is not shared. Knowledge is often considered out of context, limiting its usefulness to people with background (implicit) knowledge. People need to have already contextual knowledge in order to use the information system effectively." Kidwell et al (2000) also pointed out that "in the most knowledge-intensive business - software development, say, or product design - the difference between a good performer and a bad performer is huge. And the difference that matters most lies in tacit knowledge: a deep understanding of how to act on knowledge effectively."
Due to the rich functionality of these tools and the ease with which they can be customised to suit each individual in a peer network, anecdotal evidence indicates that the combined value of P2P collaboration actually leads to more tacit knowledge flow than collaboration in a conventional client-server network. According to FAC (2001), this is certainly true in the area of services management: "At a higher level, we see increasing adoption of integrated services management (ISM) software made possible because more white collar workers can incorporate P2P applications in their routines to increase productivity, often by facilitating communications, increasing immediate access to needed information, or by eliminating administrative tasks. At another level, incorporating a threaded chat into a business application can make it possible to capture and exploit significant tacit knowledge."

5.2.2 P2P Computing and the Organisational Memory

A P2PKM system can contribute further to the collection and preservation of an organisational (corporate) memory34. There is no doubt that, from an organisational perspective, activities such as the departure of staff and the restructuring of business/operating units have, at least in the short term, a negative effect on the organisational memory. The depletion of knowledge as a result of these activities can take the form of leakage (e.g. where things are learnt but forgotten afterwards), loss (e.g. departure of a subject matter expert) or displacement (e.g. where old systems are de-commissioned, existing systems are not aligned with the needed processes, or previously client-facing staff are relocated to perform back office operations). Although most organisations realise and value the knowledge and the contributions of their employees, the fact is that very little has been done, especially in the technical sense, to actively incorporate such valuable knowledge into an ongoing organisational memory.
For example, when an employee leaves an organisation, the industry best practice is to conduct an exit interview and demand the return of all the company assets (e.g. laptop computer, mobile phone, PDA devices, security passes, physical documents etc.) (Harris et al, 2000) (Beard and Giacalone, 1997). In the case of the employee's
34 An organisational memory (OM) is an abstract term that describes the collective knowledge and wisdom possessed by an organisation (including its systems and people). Typically, an OM consists of physical and intellectual assets, processes, systems, and most importantly, knowledge in the heads of the employees. An OM can never be rigidly defined; rather it is a fluid combination of the above sources of knowledge.


laptop computer, it may well contain valuable documents or ideas that are not captured in any of the organisational information systems (e.g. E-Mail servers, information repositories, Intranet, portals). Unfortunately, in the author's experience, most organisations would merely recycle the machine (e.g. by reformatting the hard disk and/or re-installing/configuring software and then re-allocating the equipment to another staff member) rather than applying some kind of desktop data mining to harness/harvest any captured knowledge. Had a P2PKM system (e.g. Groove; see Appendix II of this paper) been used, such a system would, on an ongoing basis, enable part or all of the content (e.g. a shared workspace) of the laptop to be accessible/replicated by other colleagues/peers. In other words, the use of a P2PKM system actually cushions many of the shocks (created by corporate upsizing, downsizing, restructuring, mergers and acquisitions) on an organisational memory. Needless to say, the laptop computer would be an even richer repository of knowledge if its owner practices good PKM skills and/or uses PKM system(s). P2P collaboration systems are not without problems, however. Gingrande and Chester (1998) have outlined the following potential problems:
- Collaborations may result in redundant, lost or conflicting information
- Inability to leverage each participant's specific knowledge and skills in an optimal way
- Inefficient integration of information and work products from multiple sources and formats
- Concerns for scalability and access across increasingly distributed and decentralised environments
- Difficulties in managing copyright and digital rights of shared/hosted material

5.3 Peer-to-Peer (P2P) Search

Search is a core part of the Codification approach to KM. However, there are significant limitations with traditional Web and enterprise search engines (see the Technologies for PKM section earlier in this paper). Searching in a P2P environment overcomes some of these limitations, most noticeably in the way search indices are generated, stored and updated. One of the major differences between a conventional search and a P2P search is that, in a true P2P environment, information available at any user (peer) node is indexed, and only indexed, when the user is online. In other words, indexing information in a P2P network is always up to date. In contrast, with conventional search engines, the accuracy and content of information contained in links need to be verified/updated regularly (Parameswaran et al, 2001). Searching in a true P2P environment (where there is no centralised node or index) is in fact a collection of distributed but coordinated searches. Distributed search35 in a P2P network is, in the broader sense, a kind of search method termed collaborative filtering (Miller et al, 1997). Collaborative filtering (CF) basically means that the information requester relies on the judgement/decisions of some like-minded peers to help direct (and/or constrain) the search space so that the results are, as a collective judgement, relevant to the requester's needs. Peers in a network can share search patterns, results, rules and personal indices (Gartner, 2001). Lueg (1998) addresses the organisational and management issues, as well as the lessons learnt from the introduction of an interest-based collaborative filtering system in an organisation. Being a (user) interest-based (as opposed to a pre-defined workgroup) CF system, Lueg (1998) argues that the research prototype is highly appropriate for supporting direct communications between peers (who may belong to various business units) in the organisation.
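The collective-judgement idea behind collaborative filtering can be sketched in a few lines. The data model below is purely illustrative (peers' judgements reduced to item-rating dictionaries, and an assumed overlap-based similarity measure): a requester's results are directed by the ratings of those peers whose past judgements most resemble his/her own.

```python
def similarity(a, b):
    """Fraction of commonly-rated items on which two peers agree."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    agree = sum(1 for item in common if a[item] == b[item])
    return agree / len(common)


def recommend(requester, peers):
    """Rank items unseen by the requester, weighted by peer similarity.

    requester: {item: rating}; peers: list of {item: rating} dicts.
    Returns item names, best first.
    """
    scores = {}
    for peer_ratings in peers:
        w = similarity(requester, peer_ratings)   # how "like-minded" this peer is
        for item, rating in peer_ratings.items():
            if item not in requester:
                scores[item] = scores.get(item, 0.0) + w * rating
    return sorted(scores, key=scores.get, reverse=True)
```

Real CF systems use far more robust similarity measures, but the sketch captures the essential point: the search space is constrained by like-minded peers rather than by a global index.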
There are two fundamentally different approaches to conducting a distributed search in a P2P network: the "Network Topology-Based approach" and the "Content-Based approach" (Waterhouse, 2001). These approaches are further explained below. The Network Topology-Based approach relies on the organisation of the peer network to route a search request; the actual routing between peers does not rely on the content at all. Depending on the frequency of

35 The reader can try out a distributed search engine at URL: which combines collaborative filtering with the P2P computing paradigm.


search requests and the storage capacity of the nodes, two types of information are constantly being propagated in a distributed network: resource advertisements and queries. Depending on the architectural design of the network, three types of distributed searches can be further identified under this approach:
a) 'Central Server' - As the name suggests, the search is centralised or brokered (e.g. Napster).
b) 'Random Graph' - Under this method, each query has a scope (i.e. a time or cycle span). A history of messages is maintained at each peer and repeated messages are discarded.
c) 'Power Law Networks' - A kind of distributed search that takes advantage of the level of connectivity of the nodes. Typically, in such a peer network, there are fewer nodes with high connectivity than nodes with lower connectivity (Waterhouse, 2001). Adamic et al (2001) explore the use of Power Law Networks to tackle the scalability issues in conducting distributed searches in a P2P environment.
In contrast, under the Content-Based approach, content (or pointers to content) is stored in peer nodes. When a search request is placed, an indexing mechanism generates a hash value for that content and the search is then directed to one or more nodes that are supposed to cover the content represented by the hash value. The Content-Based approach to searching also requires peers to agree on a specific set of search request and advertisement templates. This approach is especially suited to content that can be described by a small set of related attributes, but not to content that is described by a large number of wide-ranging attributes, as it is highly unlikely, if not impossible, to confine such content to a particular peer (loc. cit.). Sadasiv (2001) proposes the use of XML meta-data to store indices and conduct searches.
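The 'Random Graph' method above can be sketched as follows. The peer model is hypothetical and greatly simplified: each query carries a scope (a time-to-live counter decremented at every hop), and each peer keeps a history of message ids so that repeated messages are discarded rather than re-flooded.

```python
class Peer:
    """A peer in a Random Graph-style flooding search network."""
    def __init__(self, name, files=()):
        self.name = name
        self.files = set(files)
        self.neighbours = []
        self.seen = set()        # history of message ids already handled

    def query(self, msg_id, filename, ttl):
        """Flood a query to neighbours until its scope (ttl) is exhausted."""
        if msg_id in self.seen or ttl < 0:
            return []            # repeated message, or scope exhausted
        self.seen.add(msg_id)
        hits = [self.name] if filename in self.files else []
        for n in self.neighbours:
            hits += n.query(msg_id, filename, ttl - 1)
        return hits
```

The `seen` set is what keeps a cyclic peer graph from flooding forever; the `ttl` scope is what keeps a query from swamping a large network, at the cost of possibly missing content held beyond the horizon.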
Basically, the search engine stores and indexes the XML meta-data uploaded by the content producer and, when an XML query is posed, the search engine matches the XML name-value pairs and returns the list of matched links. There has been mixed success with current technologies for searching P2P networks. With the Gnutella network (one of the most popular P2P file sharing and search programs), while it is easy to configure and to locate popular content (as such material is abundant when lots of users possess it), it is especially difficult to search for specific content, as one needs to know the file name, and the network can also be fragmented at times. Another problem with the Gnutella search is speed. The average Gnutella search query is 70 bytes long, and during a search process there can be as many as 10 per second transmitting from machine to machine, plus a constant flow of ping messages. These short but highly frequent messages quickly overpower a dial-up connection using a modem (Mann, 2001). However, there exists other P2P software that enhances the searching capabilities in a Gnutella network. BearShare resolves the traffic congestion problem in Gnutella by grouping users based on their ability to respond to queries, and by doing so introduces intelligent routing of network traffic to faster and more responsive machines. Another way of resolving Gnutella's problem is provided by LimeWire, which assists the user to connect to a variety of communities (depending on interests and geographical location) and to locate files of various formats. Unfortunately, LimeWire also has significant limitations, as it can only search by filename and may return dead links (Aronoff, 2001b). Some vendors have combined distributed search and artificial intelligence technologies to develop more advanced products. For example, OpenCola is a provider of distributed network services technologies that offers more than just a search facility. OpenCola has two products: Folders and Swarmcast.
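The XML meta-data matching described above might look roughly like the following sketch. The element names, the flat attribute schema and the exact-match semantics are all assumptions for illustration; they are not Sadasiv's actual design.

```python
import xml.etree.ElementTree as ET

INDEX = []   # list of (attributes_dict, link) built from uploaded meta-data


def ingest(xml_doc, link):
    """Store the name-value pairs of an uploaded XML meta-data record."""
    root = ET.fromstring(xml_doc)
    INDEX.append(({child.tag: child.text for child in root}, link))


def search(query):
    """query: dict of name-value pairs; every pair must match a record."""
    return [link for attrs, link in INDEX
            if all(attrs.get(k) == v for k, v in query.items())]
```

For example, after ingesting `<doc><topic>p2p</topic><year>2001</year></doc>` against a link, a query of `{"topic": "p2p"}` returns that link, while any non-matching pair excludes it.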
In a peer network, OpenCola Folders retrieves other similar documents by asking the user to place a set of seed documents and/or bookmarks into a folder. Machine learning and additional user feedback are used to discover, track and update a user's interests. Once located, relevant document(s) are sent to all interested users. OpenCola Swarmcast works by splitting the transmission of very large files on the Internet into small chunks and then re-assembling them to restore the original set at the receiver's computer (Kwak and Fagin, 2001). Two other examples are Aimster and Centrata; both support natural language search across millions of peer nodes. For an in-depth treatment of the P2P technology, its strengths, weaknesses, market opportunities and other perspectives on the interplay between P2P and KM, refer to Axton et al (2002).
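The split-and-reassemble idea behind Swarmcast (the function names below are illustrative, not OpenCola's API) reduces to tagging each chunk with a sequence number so that the receiver can restore the original file no matter in which order, or from which peers, the chunks arrive:

```python
def split(data, chunk_size):
    """Split bytes into a list of (sequence_number, chunk) pairs."""
    return [(i // chunk_size, data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]


def reassemble(chunks):
    """Restore the original bytes from chunks received in any order."""
    return b"".join(chunk for _, chunk in sorted(chunks))
```

Because chunks are independently addressable, different chunks of the same file can be fetched from different peers in parallel, which is precisely what makes the transmission of very large files tractable.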


In summary, the author's assessment of the current P2P (distributed) search technologies and trends for the next 2 years is:
- Search can be instantaneous (i.e. real time) as well as batch (i.e. a search can be sent to a peer node and, when indices are refreshed, matching is conducted and links/results are sent to the requester and other interested parties)
- Specialised searches. A search request can be directed to one or more peer nodes and, depending on the indices and content of each node, different results may be returned. Sun's JXTA search implements this strategy (Woods, 2001)
- Peer-assisted. A peer can refine/constrain the original search request with a view to generating more accurate hits and/or re-use a stored search pattern
- Advancements in the areas of Collaborative Filtering, Mobile Agents and the Semantic Web (Sadeh, 2001) (Berners-Lee et al, 2001) (Woods et al, 2001) will have a significant impact on the future development and power of distributed searches in P2P networks. The Semantic Web, in particular, serves to provide a framework for supporting common sense reasoning in a knowledge/information-rich Web-based environment.

At the time of compiling this paper, Resiliant offers a general-purpose P2PKM product called PeerShare. PeerShare36 supports synchronous and asynchronous communications in a project-based environment. Its features include, among others, file sharing, collaboration, instant messaging, joint Web browsing, automatic notification and distributed search in virtual workspaces that operate in P2P mode.

Peer-to-Peer Knowledge Management (P2PKM) applications

In view of the above-mentioned P2P technologies for supporting knowledge management, and taking into consideration the work activities of knowledge workers in the new economy, potential applications of P2PKM have been identified. These applications cover knowledge sharing/management at the organisational, group and individual levels, and they range from enterprise to personal/family and community applications. A summary of these applications, together with the associated technologies, is presented in Table 2:
Table 2. Alignment of P2PKM applications with technologies


[Table 2 could not be fully recovered from the source. Its columns cover four P2P technologies: File Sharing, Distributed Content Networks (DCN), Distributed Search, and Intelligent/Mobile Agents. Its rows align these technologies with the enterprise applications (Product Development & Project Management, E-Learning, Business Process Automation (BPA), E-Business, Mergers & Acquisitions (M&A), and Defence) and the personal/community applications (Mining of personal desktops; Personal Portals, Home Networks & Family KM) elaborated in the sections below.]

36 A Web demonstration of Resiliant's PeerShare can be found at URL=
The following sections further elaborate on each of the above potential applications.

6.1 Product Development and Project Management

The advance of e-collaboration tools has had a dramatic (and positive) impact on the product development process (Ayan, 2001). More and more often, product development teams are "virtual teams" (Katzy et al, 2000). This is primarily because, in today's highly competitive business environment, it is difficult for any one organisation, irrespective of its size, to possess all the knowledge needed to take a new product to market (within the given time and resource constraints). Alliances and partnerships are often formed to leverage each other's expertise and to explore new opportunities. Members of a "virtual team" often come from various organisations and work in dispersed locations. E-collaboration tools enable team members and other stakeholders to manage projects, share data (e.g. product design data, drawings, artifacts etc.), exchange ideas (e.g. discussion threads), conduct online training, and exercise decision making in a collective way (e.g. Groupware, Group Decision Making systems). Not only do E-collaboration systems capture the "whats" of a project but, by virtue of having discussion forums, instant messaging and other synchronous communications support, some of the "whys" and "hows" of the group decision making process are also unveiled and sometimes captured. Such knowledge is especially critical when lessons learnt and/or post-project reviews are being carried out (Ayan, 2001). However, the concept of a virtual team also creates new problems in communications. Among others, team members from different organisations may use different software applications, they may need to access and work on key documents frequently, and it is often difficult for the team to centralise all the project/product development data in one location. Most enterprise e-collaboration tools are server-based systems (with thin clients acting as the user interfaces).
Typically, these tools require all project data to be centrally stored, and members must apply stringent check-in and check-out procedures to enforce version control of hosted documents. P2P collaboration systems overcome precisely these weaknesses. The use of a P2P collaboration system can reduce data redundancy and increase operational efficiency for the entire project/product development team. By using a P2P collaboration system, any member of a product development team can, for example, access up-to-date product/project data, examine/modify the product design diagram(s), and notify (almost instantly) other members or stakeholders of the latest status of the team's progress. Oculus Technologies' CO product has been designed to enable collaborative product commerce (CPC), especially in the sharing of information and in enhancing the decision making process. One of the key challenges in CPC is the sharing of product information (as well as data and knowledge) among the various stakeholders, including the product development team, the marketing department, the supplier(s), and other business partners. According to Oculus Technologies, their technology is scalable, provides secure and discrete data sharing, and supports real time connectivity. In particular, CO's security protection mechanism is far more sophisticated (or refined) than the usual file protection mechanisms. Using CO, users can limit just a specific part of a document to be made available for sharing. Among others, Ford Motor Co. is using Oculus Technologies to improve its design processes. More specifically, Ford wants to investigate ways to improve the fuel efficiency of its vehicles; although Ford engineers and suppliers are dispersed across multiple locations, they can use CO to instantly analyse/appraise how the fuel efficiency of a vehicle is affected by a change in design.
It has been reported in Kwak and Fagin (2001) that, by using CO, analysis that might have taken Ford engineers three days in the past can now be completed in less than a minute, and Ford expects to improve the fuel efficiency of its sport-utility vehicles by 25% by 2005, with a projected saving of between US$5m and US$15m in vehicle development cost. In the future, it will be increasingly difficult to differentiate between producers and consumers of information in CPC as organisations' developmental and innovative activities happen at the periphery of an ever-expanding network. At the time of compiling this paper, Force12, a Cambridge (UK)-based company, is developing a P2P solution to enhance Professional Services Automation (PSA) for organisations by streamlining business processes to achieve more effective control of resource coordination and remote project management.

6.2 E-Learning

P2PKM will also have a significant impact on E-Learning. This is primarily attributable to three characteristics of the P2PKM technology: its user-centric nature, the reduction of storage costs, and its support for distributed content management. In an E-Learning environment, course designers, instructors and students need frequent interactions with each other, and all parties are eager to compile/publish as well as access content material. This is especially the case for distance/off-campus tertiary education, where students may have to work in groups but are located in dispersed (sometimes remote/rural) locations. Typically, a group of off-campus students working together on a group project needs to share discussions and files with each other. Individual or all members of the group may also need, at times, to interact with the instructor and obtain feedback. Furthermore, although the university's Web site stores all the course material, there is no centralised place for maintaining and publishing all the intermediate and final deliverables of student groups' project work. P2P file sharing and collaboration systems are well suited to support such a studying/learning environment. Based on the number of recent and forthcoming P2P (academic) conferences37, there are reasons to believe academic institutions are starting to appraise and pilot the use of P2P technologies in the higher education sector. As Yanosky and Bittinger (2001) rightly pointed out, This (P2P) evolution toward e-learning is the most significant of the many ways in which P2P will infiltrate campus IT in coming years. P2P has the potential to realize the elusive dream of modular curricula, and to stimulate a new era of collaboration, creativity and sharing of course materials on a larger scale than before.
By facilitating exchange within the disciplinary communities and teaching cohorts that most academic instructors look to for ideas and teaching strategies, P2P promises to give instructors a great sense of ownership over e-learning. This, in turn, can benefit institutions by creating a livelier online environment and driving down e-course development costs. Other benefits of applying P2PKM in Higher Education include reduced curriculum development time, enhanced decision making capabilities, coordination of research and proposal writing skills, improved administrative services (e.g. student record management and the course selection process), and reduced costs (Kidwell et al, 2000) (Oram, 2001b)38. In addition, P2P technology is seen as especially relevant to the discovery of researchers' profiles and interests (see below), and to support for university-industry
37 International Conference on P2P Computing, 27-29th August, 2001, Linkoping University, Sweden. Collaborative Computing in Higher Ed: P2P and Beyond, 30-31st January, 2002, Tempe, Arizona. URL= ResNet 2002, 28th June-2nd July, 2002, University of Buffalo, New York. URL= 38 For a detailed list of potential applications and benefits of applying KM (not necessarily P2P technology) in the research process, curriculum development and student and alumni services, please refer to Kidwell et al (2000). For a comprehensive discussion of the potential applications (e.g. student record administration, course design and delivery, library information management, research skills management etc.) in higher education, please refer to Rowley (2000).


collaborative projects (because, in such collaborations, there is often no centralised repository for knowledge sharing and the parties may not use compatible systems), as well as in the facilitation of review and approval processes for, say, proposal evaluation and student applications. Obviously, the impact of P2PKM on E-Learning is not restricted to academia. Many commercial organisations are also exploring the concept of a Global Campus or a Corporate University for providing consolidated worldwide training programs. One of the key issues involved is the flexible and dynamic integration of accredited courses39 (offered by academic institutions or by certified suppliers) into an organisational Learning Management System (LMS). Intel Corporation, for example, has established a worldwide P2P distributed computing network to enable the sharing of internal content-rich training materials. Another example is Oracle University, which has incorporated an extensive amount of E-Learning material into its staff development curriculum. The market for E-Learning software is potentially enormous. In 1999, in excess of 70 million people worldwide received some form of Web-based education. In 2000, the US corporate training market was worth US$66b; 75% of the spending was on IT skills and the rest on business skills. By 2003, the Web-based training market will reach US$11.6b, when 60% of US corporations are expected to have deployed a Learning Management System (LMS) (Abram, 2000). IDC also predicts Web-based E-Learning systems will represent 63% of all corporate learning systems by 2004 (up from 38% in 1999). In Australia and New Zealand, IDC predicts the E-Learning market will be worth US$72m (A$132.4m) in a few years' time and will grow at a rate of 22% thereafter (Lawnham, 2002). Foster and Falkowski (2000) observe that there is an increasing synergy between Knowledge Management Systems (KMS) and E-Learning systems.
For true learning (not training) to occur in organisations, LMS and learning objects (see footnote) will, in the near future, be part of an enterprise KMS supporting access to, and formatting of, data from a diversity of sources, with extensive personalisation to reflect individual needs and preferences. Apart from the usual characteristics of virtual collaboration systems (i.e. synchronous and asynchronous communications, capturing of tacit and explicit knowledge), these systems also need to be able to switch dynamically between learning and performing to support flexible interactions between the instructor and the learner, as well as provide mechanism(s) for evaluating and measuring learning outcomes on an ongoing basis. Tsui (2002c) further elaborates on the intrinsic relationships between Knowledge Management and E-Learning systems.

6.3 Business Process Automation (BPA)

P2P technology offers a distributed solution that supports information gathering and operations at the edge of the network. This is radically different from the traditional technical infrastructures for E-Commerce/E-Business, where requests and responses are finite and largely pre-configured. P2P technology can play a major role in the automation of business processes in an E-Business environment by linking together (without the need to integrate) discrete applications. This capability is especially important for business processes that involve input from multiple organisations, e.g. a supply chain process. In particular, mobile agents, a kind of intelligent agent that travels from machine to machine with its own storage area and executable code, can be configured to operate in a P2P fashion. When applied in a BPA context, such agents can, for example, perform specific local personalisations for a user, record all activities (including user
Slade and Bokma (2001) describe an ontology-based approach to sharing research knowledge in a university environment. 39 Importation of learning material is often done via learning objects, a fundamental item in a Learning Management System (LMS). A Learning Object is an entity, digital or non-digital, which can be re-used or referenced during technology-supported learning (Foster and Falkowski, 2000). With today's technologies, learning objects created by course designers are Web-based and highly interoperable. Among others, Exam Solutions is one firm with a product (named SkillDog) that supports the exchange of learning objects among organisations.


input) and aggregate content throughout the course of their execution in a virtual process. As these agents travel between people and systems/databases, they are especially suited to discovering information (including people's interest and expertise profiles). Applying P2P technologies to BPA also adds to the simplicity and efficiency of systems. Parameswaran et al (2001) point out that Because P2P queries can be more lightweight than distributed database queries, bypassing interoperability and legacy system issues, the mobile code could pose a minimal computational burden and impose minimal network overload. Such a combination could lead to highly efficient self-sustaining and self-maintaining P2P networks within organizations. More specifically, potential applications of P2P technologies in BPA include, but are not limited to:

1. Budgeting approval, where comments, reviews and authorisations from various parties are often required before a budget can be finalised

2. Proposal preparations and reviews, where information and input from a diversity of sources may be needed, especially if alliances/partnerships are involved

3. Collection of group feedback. For example, P2P mobile agents can be sent to all the participants after a training session is conducted, collect feedback from each of the participants, and collate the group response for the requester
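The third scenario above can be sketched as follows. This is a toy model only: a real mobile agent would migrate its code and state between hosts, whereas here each "peer" is simply a callable, and the question text and the 1-5 rating scale are assumed for illustration.

```python
# Minimal sketch of a feedback-collecting mobile agent: the agent
# carries its own state (question and responses) from peer to peer,
# gathers a rating at each stop, and collates the group response
# for the original requester.
class FeedbackAgent:
    def __init__(self, question):
        self.question = question
        self.responses = {}

    def visit(self, peer_name, peer_inbox):
        # In a real system the agent's code and state would travel to
        # the peer; here the "peer" is a callable returning a rating.
        self.responses[peer_name] = peer_inbox(self.question)

    def collate(self):
        ratings = list(self.responses.values())
        return {"responses": len(ratings),
                "average": sum(ratings) / len(ratings)}


# Invented participants, each returning a rating on a 1-5 scale.
peers = {"alice": lambda q: 4, "bob": lambda q: 5, "carol": lambda q: 3}
agent = FeedbackAgent("How useful was the training session?")
for name, inbox in peers.items():
    agent.visit(name, inbox)
print(agent.collate())   # {'responses': 3, 'average': 4.0}
```

The point of the pattern is that the aggregation logic lives in the agent rather than on a central server, so the requester receives one collated result instead of coordinating every participant directly.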

Intelligent Agents will continue to play a major role in enabling business processes and workflow across peer nodes in a network. Dignum (2000) describes the use of a peer-based Intelligent Agent framework to analyse customer behaviour and interaction, and to support cross-selling in the customer care operation of a subsidiary of Achmea, a financial services firm in the Netherlands. Other examples are Applied MetaComputing's turnkey system called Legion, and WebPager's P2P application that enables a peer to act as a 'host' and take control of another peer's (i.e. a client's) Web browser.

6.4 E-Business

E-Business is another area where there are many potential and reported applications of P2P technology to enable knowledge sharing among various stakeholders. One obvious application is the capture, aggregation and sharing of information to support decision making in the investment banking area. WorldStreet provides an Internet infrastructure and supporting services (presented as a collaborative peer-to-peer platform) that can be integrated into an organisation's existing applications, workflow and business processes for sharing knowledge and for conducting E-commerce transactions. Using WorldStreet's technology, customers of investment banks (i.e. buyers of information) can pre-specify their information sources (e.g. a particular company and analyst) and the kind of information they want to receive from various publishing/research sources (i.e. sellers of information) (Spangler, 2001). The transmitted information is more than just raw data; for example, comments and insights from research analysts can also be included. As the first peer networking solution dedicated to the financial services industry, WorldStreet's solution is highly sought after by investment banks, stock traders and financial planners. WorldStreet's customers include, among others, JP Morgan Investment Management, Deutsche Bank, ING Barings, UBS Warburg, and Boston Partners (Kwak and Fagin, 2001). P2P file sharing and, on a much bigger scale, Distributed Content Networks (DCNs) have also been adopted by E-Business and portal sites. For example, NextPage, a P2P DCN vendor, offers technology that enables the sharing of information among multiple parties in a business exchange. Users in the exchange can manage, access and exchange content from a diversified range of sources in real time. There is also support for search, portal and content management capabilities. In a way, NextPage's technology functions almost like a portal for P2P content management.
Like WorldStreet, NextPage is also targeting the financial services industry, a highly knowledge-intensive one, to deploy its application(s). Deloitte and Touche makes use of NextPage's P2P technology to enable its auditors to fetch the content of ABG, a


publisher of accounting and auditing standards, directly from the publisher's Web site. Furthermore, the P2P application also reformats the content of ABG's material into a proprietary Deloitte and Touche template for its auditors to use (McCue, 2001). In the area of E-Procurement, the ability to match product descriptions and compare prices among e-catalogs is crucial to the success of any system. A P2P architecture would allow data/information from the sellers to remain at its original/vendor's location rather than being centralised elsewhere. Intelligent agents can then be sent to these locations to perform the necessary matching and discovery processes. Interestingly, Bond (2001) proposes a two-step approach for matching products in an E-procurement process. The first step is to direct the request to a global index (whether centralised or not), where general matching is applied to obtain a list of secondary sites that provide the products/services being sought. The second step is to use P2P technology to search those secondary sites with full product specifications. For example, ExactOne offers a hosted service that enables distributed data to be accessed in real time. This data integration technology from ExactOne enables buyers to find and compare products from multiple suppliers instantly. FreeTradeZone (by PartMiner) is a NetMarket based on ExactOne's technology linking together 50 vendors (Kwak and Fagin, 2001). Another example of applying P2P to E-Procurement is Veriscape's IntelleCat, a knowledge-based catalog management system that assists procurement officers to locate, match (using an adaptive matching engine) and save search queries in a hierarchical structure, thereby representing, in a semantic framework, the best practice adopted by the individual.
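Bond's two-step matching approach might be sketched as follows. The global index, the site catalogues and the field names are all invented for illustration; in a deployed system, step 2 would be a P2P query sent to each secondary site rather than a local dictionary lookup.

```python
# Sketch of two-step E-procurement matching: step 1 asks a global
# index for the secondary sites stocking a product category; step 2
# queries only those sites for items matching the full specification,
# then ranks the offers by price.
GLOBAL_INDEX = {"capacitor": ["siteA", "siteC"], "resistor": ["siteB"]}

SITE_CATALOGUES = {
    "siteA": [{"part": "capacitor", "farads": 1e-6, "price": 0.12}],
    "siteB": [{"part": "resistor", "ohms": 100, "price": 0.05}],
    "siteC": [{"part": "capacitor", "farads": 1e-6, "price": 0.09}],
}


def procure(category, **spec):
    sites = GLOBAL_INDEX.get(category, [])           # step 1: global index
    offers = [item for s in sites                    # step 2: ask each site
              for item in SITE_CATALOGUES[s]
              if all(item.get(k) == v for k, v in spec.items())]
    return sorted(offers, key=lambda o: o["price"])  # cheapest first


best = procure("capacitor", farads=1e-6)[0]
print(best["price"])   # 0.09, from siteC
```

Keeping the full catalogues at the vendors' sites and consulting the global index only for routing is what lets the seller data stay at its original location, as described above.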
Several vendors are also currently developing P2P collaboration applications to facilitate payment and the sharing of knowledge among the various stakeholders (which may include buyers, sellers, alliance partners, and regulators) in a financial services business vortal/exchange (i.e. a portal for a particular industry). For example, Dynamic Transactions has a proof-of-concept P2P payment system called PayPlace40 (Damore and Savage, 2000). Gutberlet (2000) discusses the use of Intelligent Agents to establish a payment system for file sharing in a P2P environment and believes that widespread adoption of P2P systems is highly dependent on an effective payment collection method (i.e. the business model). P2P is also finding its way into the E-Commerce world. For example, Toadnode and SmartPeer are two firms that produce software enabling consumer and enterprise data sharing among participating computers/nodes. Each computer/node relays data to and from other computers. Their software also assists users of wireless devices to locate and purchase items at the most convenient location for the user at the time of transaction. By doing so, there is no need for any centralised servers (Marmor, 2000). While the majority of P2P applications are in the financial services industry, applications in other industries also exist. For example, Viant (2001) discusses potential applications of P2P in the healthcare industry, where typically many different stakeholders (e.g. patient, pharmaceutical company, physician, insurance company, employer etc.) are involved in common business processes. Common considerations that shape business models in healthcare are, among others, cost constraints, compliance issues, patient privacy and entitlement to decision making, and quality of service. IAS is Intel's Internet Authentication Services, which provides authentication for all kinds of healthcare-related transactions on the Web.
IAS transmits messages on the Web via P2P as well as conventional networks. The success of P2P technology will eventually lead to a greater and faster adoption of Collaborative Commerce, a new form of E-Business using Web technologies.

6.5 Mergers and Acquisitions (M&A)

Another promising application of P2P collaboration and file sharing is to support teams working on mergers and acquisitions (M&A) of organisations. In most M&A situations, there are constant and



intensive communications and collaborations among the stakeholders, and many knowledge-intensive decisions are made in a relatively short or fixed time frame. Furthermore, any KM system supporting an M&A process needs to be deployed swiftly. As multiple organisations and sometimes third-party consultants are involved, there is usually no centralised location to store all the documents, nor is there a common technical infrastructure that can support all the above participants (CSC Index, 1997). Among other case studies, law firm Baker and McKenzie, with 3,000 attorneys in 62 offices across 35 countries, is using NextPage's P2P technology to facilitate document sharing, contract negotiations, and discussions between its own attorneys and clients in M&A processes (Mosquera, 2001). Compared to other applications, a P2P system that supports an M&A process is closer to a real time KM system. There is also evidence from recent research to suggest that, in the information services area, a real time KM system can actually facilitate knowledge creation as opposed to merely managing knowledge (Ghilardi, 1997).

6.6 Defence

Two primary applications of P2P technology in the defence area have been identified: real time collaborations and battle simulations. Real time collaborative technology has been well explored at the US Department of Defense (DoD). As McConville (2002) pointed out, DoD's use of this technology is to "establish, access, and then sustain a distributed, non-contiguous operation without relying on fixed bases adjacent to the objective area." Another goal of DoD in deploying P2P technology is to ensure that knowledge sharing can be done at any time and from any device. Reasons for adopting real time collaborative technology in defence include increases in deployment capabilities, mission effectiveness and process efficiency, reductions in decision making time, and the ability to integrate a wider set of (US) national capabilities than those in the military alone (loc. cit.). Among the case studies reported by McConville (2002), DoD has established various virtual team rooms for real time collaborations. The content and technology employed in these virtual collaborative spaces include secure project repositories, shared production tools, and presentations on project guidance, training, decision briefings and progress reviews. For details of DoD's lessons learnt on the use of P2P collaborative technology and cultural issues, please refer to McConville (2002). Battle simulation is also a very popular concept in the defence industry. Simulations can not only significantly reduce the costs and risks involved in staging and modelling real scenarios but often also serve as a very effective training tool for defence personnel. Applying P2P computing to battle simulations is especially attractive; as Koman (2001b) pointed out, from a military context, having a centralised server is a point of failure, a critical failure node.
You don't want to pull all your data on one server because once you take that server out, you've got a lot of blind people with a lot of useless electronics. Furthermore, P2P computing can support quick and unstructured simulations on the go, the so-called Simulation on demand. As such, a battle simulation system is transformed into a mission planning system instantaneously. For example, before a squad attacks a hill, the commander can order a simulation of a planned attack for, say, the next 10 minutes. The entire squad can observe and learn from the simulation result while waiting for their orders (Koman, 2001b). Another Simulation on demand project in defence is the use of P2P to enable embedded training for dismounted soldiers. According to Ferguson (2001), the objective of the project is to develop and demonstrate revolutionary Embedded Training (ET) capabilities for dismounted soldiers, and to empower the dismounted soldier and his unit with individual and collective training on demand, anywhere and anytime.

6.7 Mining of personal desktops

All the above applications are enterprise-based. P2PKM can also be applied at the individual/personal level. These applications can be viewed as standalone applications, or as part of, or an extension to, an enterprise KM application. This and the next section focus on these applications.


As discussed in earlier sections, current KM systems are strong in connecting people to information but fall short of connecting people to people. People finder systems (or expertise discovery systems), for example, are seriously lacking and remain very much under-explored. Furthermore, an IKW's desktop/laptop may well contain valuable knowledge that is not captured by an enterprise KMS. A handful of researchers have tackled the difficult problem of automatic expertise discovery within an organisation or across organisations, with limited success (Soltysiak and Crabtree, 1998) (Maybury et al, 2000) (Becerra-Fernandez, 2000a & 2000b) (Mattox et al, 1999). In particular, Sihn and Heeren (2001) describe a methodology for locating experts (using individuals' thematic fields of interest) among cooperating companies. Common approaches to expertise discovery are server-based lexical analysis of E-Mail messages, intelligent agents that generate clusters of user profiles, and link analysis of, say, publication records and citation frequencies. Automatic discovery of a user's interests and expertise is a difficult problem for two reasons. Firstly, the evolution of information systems in organisations over the past two decades has resulted in much of the needed data (e.g. personal details, project repositories, training records, competency profiles, community memberships, publication lists etc.) being scattered across multiple databases and systems. Secondly, up to now there has been no unified way to access and process (in the sense of applying reasoning to) all this information, although advances in Web Services and the Semantic Web may improve the accessibility/interoperability of these applications and provide the necessary reasoning capabilities in the next 2-3 years. The end result is that while there are many KM (e.g. search, classification, collaboration) systems available on the market, very few vendors41 provide true expertise discovery systems.
P2P computing offers an alternative approach to expertise discovery in organisations. It is particularly attractive because:

- It is a decentralised approach, so users have greater control over what can be discovered on their workstations and among their daily activities42;
- Reconciliation and merging of user profiles are simpler compared with top-down server-based discovery mechanisms43; and
- Regular mining of personal desktops, assuming that privacy rights are observed, serves to extend the organisational memory (OM) and cushions some of the shocks to an OM created by corporate mergers and restructuring.
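The profile-merging point above can be illustrated with a minimal sketch in which each peer mines its local documents into a term-frequency profile, and profiles are merged simply by summing counts. The mining heuristic (keeping words longer than three characters) and the sample texts are invented for the example; the systems cited in this section use far richer lexical and link analysis.

```python
# Bottom-up expertise discovery sketch: per-peer term-frequency
# profiles are built locally and merged by summing counts, which is
# simpler than adapting a centrally imposed generic profile.
from collections import Counter


def mine_desktop(documents):
    """Build a crude interest profile from local document text."""
    profile = Counter()
    for text in documents:
        profile.update(w.lower() for w in text.split() if len(w) > 3)
    return profile


def merge(*profiles):
    """Combine per-peer profiles by summing term counts."""
    merged = Counter()
    for p in profiles:
        merged += p
    return merged


alice = mine_desktop(["knowledge management portals", "P2P search"])
bob = mine_desktop(["distributed search agents", "mobile agents"])
top_terms = merge(alice, bob).most_common(2)
```

Because each profile is just a bag of counted terms, merging is associative and order-independent, which is one reason combining individual profiles is easier than reconciling centrally generated ones.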

Yet another way of applying P2PKM at the personal level is in the synchronisation/replication of multiple workspaces that belong to the same individual. One should also note that, as well as being a tool for Peer-to-Peer collaboration, a knowledge worker can also use, say, a Groove shared space to

41 Lotus' K-Station and the Knowledge Discovery Server integrate document and personnel knowledge management by mining user interests, activities and memberships of groups inside a corporation. Variants of expertise discovery systems also exist. For example, P2PQ is a freely available product that functions as an integrated Question and Answering (Q&A) system. P2PQ filters and directs a question to known experts, and the answer to the question is based on the aggregated responses from the experts' ratings. Similarly, StarBase has a P2P system that is specifically designed to assist help desk operators. Using StarBase, an operator can broadcast a problem to other peers. When a solution is available, it is sent to the original operator and the problem resolution log (i.e. the case base in Case-Based Reasoning) is updated. 42 Angus and Boyd (2001) are confident that P2P profiling systems will become very popular: A secondary beneficiary will be the private citizen, whose personal life is a treasure trove of information begging to be exploited. The defensive qualities of P2P will spark the adoption of P2P profiling systems. Profiling is going to be under individual control, not the control of our businesses' management, not marketing groups, and not national governments. 43 Anecdotal evidence suggests that it is generally easier to combine individual user profiles than to adapt generic profiles to accommodate individual variations. Additional research in this area is needed, however.



synchronise/replicate information across several of his/her machines. In fact, the author uses a shared space to replicate specific information between his home and university workstations.

6.8 Personal Portals, Home Networks and Family KM

Ultimately, advances in P2P computing will push content, processing and control to the edge of the network. Individual users, surrounded by their own electronic workspaces, have absolute flexibility and power to configure the content and control access by other peers. This phenomenon leads to the emergence of Personal Portals. According to Phifer (2001), "We're talking about a new collaborative platform, not a series of loosely coupled applications under one common Web interface. We're talking about a personal portal that users can customize to the variety of tasks they perform and the communities with whom they interact." Such portals allow individuals to select and configure tools that are most suited to their own needs (whether business or leisure). Through these portals, a user establishes his/her own personal portal(s) on one or more local servers and then opens up access to friends and other family members (see below on Family KM). These personal portals can be installed on fixed or mobile devices (e.g. PDAs) and can interact with other "intelligent devices". This is the so-called Personal Networks concept (also see below). Furthermore, increased adoption of distributed searches will hasten the establishment of personal portals as certain nodes frequently aggregate content and process search requests from peer nodes. Personal Portals will become the ultimate Desktop Environments (Batchelder, 2001). Tools that support synchronous (e.g. Instant Messaging, video conferencing, Net Meeting etc.) and asynchronous (e.g. E-mail messages, broadcasts) communications, file sharing, as well as direct communications with various linked devices, will be available from inside these portals. As Batchelder et al (2001) pointed out, Ultimately, the ability to perform these tasks (i.e. spontaneously communicate, exchange data and cooperatively run application programs) will be a function of the platform that runs on the client, rather than on the server.
Increasingly, that client will act like a personal portal that users can employ to build task-specific collaborative work environments. Enterprise portals, on the other hand, will become less significant (as information aggregation and dissemination gateways) and will possibly become XML object dispensers (extracting data and rules) supporting other portals and users44. P2P is also highly applicable to the home environment. The cost of high-speed communications (e.g. broadband, DSL, fibre optics) will continue to drop, making these technologies more and more affordable to the average household. Furthermore, with the advances in wireless communications and automated E-processes, in the next decade intelligent Personal Networks45 will be operating in the home environment (Graham and Hederen, 2001). These networks will connect multiple Web appliances (e.g. Internet-enabled fridges, telephones, washing machines, microwave ovens, terminals etc.) at home, a key concept of ubiquitous computing where every computing or electronic device is connected via the Internet or other kinds of networks. (But not too soon: due to the current global economic downturn, the drive towards ubiquitous computing has had a few setbacks46.) These networks control, monitor and synchronise the operation of home electronic devices and electric appliances in accordance with individual preferences47.
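The distributed-search behaviour described above, where a personal-portal node answers queries from its own content and forwards them to its peers, can be sketched as a minimal, illustrative flooding search. The `Peer` class, its data and the TTL value are invented for this sketch and are not taken from any product discussed here:

```python
# Hypothetical sketch of a portal node answering and forwarding peer searches.

class Peer:
    def __init__(self, name, documents):
        self.name = name
        self.documents = documents      # local content this peer shares
        self.neighbours = []            # directly connected peers

    def search(self, keyword, ttl=2, seen=None):
        """Answer a query locally, then forward it to neighbours
        (Gnutella-style flooding, with a time-to-live to bound load)."""
        if seen is None:
            seen = set()
        if self.name in seen:
            return []
        seen.add(self.name)
        hits = [(self.name, d) for d in self.documents if keyword in d]
        if ttl > 0:
            for peer in self.neighbours:
                hits.extend(peer.search(keyword, ttl - 1, seen))
        return hits

# Three-node network: A - B - C
a = Peer("A", ["p2p overview.doc"])
b = Peer("B", ["family photos", "p2p notes.txt"])
c = Peer("C", ["holiday p2p plans"])
a.neighbours = [b]
b.neighbours = [a, c]

results = a.search("p2p")
# Hits are aggregated from every reachable node, not from a central server.
```

The `seen` set and the TTL are what keep such a decentralised search from overwhelming the network, a point that recurs in the Critical Issues section below.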

44 Research services firm Gartner Group's position is that "As personal portals evolve, they will interact with enterprise portals to enable their next generation of collaborative P2P work environments." (Batchelder et al, 2001)
45 Some authors also refer to Personal Networks as Home Networks, Intelligent Homes and Intelligent Networks.
46 Acer Computer Australia has recently cancelled the development of its I-Phone product, and 3Com cancelled the Audrey product (a Web terminal). Korean electronics firm LG plans to trial its Internet-enabled fridge in France and Spain in 2002, with a view to offering it in Australia in November 2002. However, the estimated price of the LG Internet-enabled fridge is A$16,000 (as opposed to A$900-A$1,200 for a conventional fridge), which is way out of reach of the average Australian family (Braue, 2001) (Hayes, 2002).
47 Though not necessarily using P2P technology, firms that offer products for smart homes include GE Smart Homes (URL=), Icebox FlipScreen (URL=) and Oster In2itive Blender (URL=). In addition, Accenture Labs' Silent Commerce (URL=) and



As Endeavors (2000) has outlined, "Homes will contain an increasing number of intelligent, Internet-capable devices, such as digital video recorders, burglar alarms, heating systems and even refrigerators! If you view the house as a workspace, you can easily imagine how, using a WAP phone, the home owner can receive alerts from the burglar alarm or home PC system, as well as dialling in to adjust the video scheduling, set the temperature, or determine what kind of shopping is required." Another application of P2P in the home environment is the sharing of personal and family data, possibly in multimedia form (a precursor to Family KM). According to Kasrel (2001), "By 2002, 3 million households will use P2P to manage and share personal information; 17 million households will use devices like 3 megapixel digital cameras to capture personal memories. By the end of 2002, 18% of the 17 million rich media households will use a P2P sharing service." With this growth in the volume of data, the conventional method of consumers uploading images or other multimedia files onto centralised server(s) will cease to be the norm (due to time and cost factors). Instead, using P2P applications, users can open up part or all of their hard drives for authorised peers to access the necessary data/information. More specifically, in the area of family KM48, there can be multiple types of peer groups. For example, parents can form peer groups to discuss issues related to the education of their children; a family would, naturally, form a group and can use P2P collaboration and file sharing systems to record their key events, collect and share their treasured photos and documents, and pass valuable experience from generation to generation. On the other hand, teenagers and youngsters would naturally form peer groups to play games and discuss their toys and their habits49.
Home-helpers may form groups to share their know-how and experience in performing work in various families (loc. cit.). Kasrel (2001) forecasts that P2P will eventually transition into a component of the Pervasive Internet where:

- Clients will become far more standardised and interoperable
- Data throughput will be 5-10Mbps downstream and 500Kbps-10Mbps upstream
- Applications will have strong emphases on group collaboration and personal information/knowledge management
- The Home PC will become just one of many devices in a home/personal network

The P2PKM Effect on Enterprise Computing

In the next few years, the synergy of people, process and technology, which together form the central plank of a sustainable KM environment, will become more and more evident at the individual and group levels. Personal computing devices will have added computational power (Yung, 2002), enhanced intelligence (which enables them to be more adaptive and friendlier), connectivity, and support for various business processes. Abram (2000) classifies this evolution as the Fourth-Generation Convergence. In retrospect, the First Generation is when the boundaries between physical devices begin to disappear and hybrid devices are formed; for example, phones, faxes and E-mail tools are provided on one piece of mobile equipment. The Second Generation is when the hybrid devices operate in a digital environment and the PC becomes the dominant workplace ecology.

OnStar at Home (URL=) are currently testing their soon-to-be-released products (McManus, 2002).
48 Incidentally, Groove Networks is marketing a Family Toolkit for its shared spaces.
49 Cybiko is a low-end P2P communications gadget that supports point-to-point communications between Cybiko devices. Each unit also works as a repeater and re-transmits the received signal to other units within an indoor range of 200ft.


The Third Generation is when the hybrid devices offer workflow capability and extensive personalisation.

In the Fourth Generation, the hybrid devices become more and more personal. They may be voice-driven and can even model and learn from human behaviour. P2P technology and P2PKM, among others, are seen as major drivers behind these convergences. Having discussed P2PKM technology and applications, it is appropriate to draw on all this material and identify the impact of P2PKM on enterprise computing in the next few years. (Once again, the reader is reminded that this research focusses on P2P Knowledge Management as opposed to P2P computing in general.) The impact has been categorised into the following areas:

1. Responsibility for managing content. As P2PKM is predominantly a decentralised approach, IKWs using a P2PKM system have a strong sense of ownership of the information and control of the tools and environment. The management of organisational assets (e.g. data, people, systems, projects, finances) will, naturally, become less compartmentalised. The responsibility and accountability of the IKW will become greater, more dispersed and, no doubt, more difficult to measure and control. There is no doubt that P2PKM will distort the governance model and content management processes of enterprise-wide KM and content management (CM) systems. If not managed properly, or if a unified standard fails to emerge, it will be increasingly difficult to interface and transfer knowledge between personal productivity tools and enterprise systems in the future.

2. PKM and P2PKM support in future operating systems. There will be increasing support for P2P computing and personal productivity tools in future operating systems. For example, Microsoft's latest operating system, XP, has built-in support for enabling workflow, simple server-side collaborations, speech recognition and management of scanned documents (Barth, 2001d).

3. The number of Personal Portals will increase.
As mentioned in the previous section, with the rise of Personal Portals, the importance of Enterprise Portals will gradually decrease. However, there is still a significant amount of work to be done on standards and integration (see below), so it will be several years before personal portals proliferate. As Jacobs and Linden (2001) have pointed out, "Personal portal technology is likely poised for more widespread adoption but will require another two to three years to reach performance and design points that allow this promise to be realized."

4. P2P search in enterprise systems. Search engines will gradually be integrated into project management packages and content management tools (see below). Both distributed (including automatic and human-assisted) and centralised searches will be supported, and results from various sources will be seamlessly integrated for presentation to the requester (Woods et al, 2001) (Kasrel, 2001)50.

5. Virtual Application Service Providers (ASPs). As P2P technology enables resources and data to be pooled and shared across P2P and conventional networks, future ASPs need not remain at a fixed location nor be restricted to offering a static list of services. Instead, today's ASPs will be transformed by P2P technology to become virtual (or dynamic) ASPs. Using P2P, the needed resources (e.g. computing power, storage capacity, bandwidth, content) can be pooled from a wide diversity of sources, in real time, from any ASP (hence the concept of a Virtual ASP). Intelligent routing of messages among network nodes will become the norm. Ultimately, virtual ASPs will become more and more abundant (Sadeh, 2001) and the number of ordinary (or static) ASPs will decrease. Depending on the tasks that need to be executed, these virtual ASPs can be dynamically formed and disbanded/collapsed (Geneer, 2001).
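The virtual-ASP idea, assembling a service on demand from whichever peers currently have spare capacity, can be sketched in a few lines. The function name, units and greedy selection rule below are invented purely for illustration:

```python
# Illustrative sketch: forming a "virtual ASP" by pooling spare peer capacity.

def pool_resources(peers, cpu_needed, storage_needed):
    """Greedily assemble a dynamic pool of peers whose combined spare
    CPU (units) and storage (MB) cover the requested task."""
    pool, cpu, storage = [], 0, 0
    # Prefer peers offering the most spare CPU first.
    for peer in sorted(peers, key=lambda p: p["cpu"], reverse=True):
        if cpu >= cpu_needed and storage >= storage_needed:
            break
        pool.append(peer["name"])
        cpu += peer["cpu"]
        storage += peer["storage"]
    if cpu < cpu_needed or storage < storage_needed:
        return None  # the virtual ASP cannot be formed right now
    return pool

peers = [
    {"name": "home-pc",   "cpu": 2, "storage": 500},
    {"name": "office-pc", "cpu": 8, "storage": 200},
    {"name": "laptop",    "cpu": 4, "storage": 300},
]
print(pool_resources(peers, cpu_needed=10, storage_needed=400))
```

Because the pool is recomputed for each task, it can be "formed and disbanded" dynamically in exactly the sense described above.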
This evolution is also closely related to an emerging concept in the IT industry called utility computing, where the capacity and type of services can be dynamically adjusted (by IT service providers) on an on-demand basis.

6. New business models for Internet Service Providers (ISPs)51. Using P2P, ISPs may no longer need to centralise all their data but instead may rely on the extra capacity in a peer network to compute and
50 For a detailed discussion of next-generation Web search techniques, please refer to Woods et al (2001).
51 Scott (2001) provides the following guidelines for pricing/charging common types of P2P applications:










hold/transmit data, thereby capitalising on the power of distributed computing and distributed content networks (DCNs) respectively52 53. If the adoption of P2P applications continues to increase in the next few years, there will no doubt be a shake-up in ISPs' pricing models (and incentive schemes for users who tender their resources) for Internet services, due to the increase in bandwidth and the availability of collaborative processing power in a network. As Girard (2001) pointed out, "Internet access providers are not priced or provisioned with P2P in mind. P2P users should expect lower service levels and potentially higher prices for Internet access."

7. P2PKM for Small to Medium size Enterprises (SMEs). SMEs will find the atomistic P2P model particularly attractive because, firstly, they may have only limited requirements for P2P applications and, secondly, the infrastructure for enabling atomistic P2P is relatively easy to establish (Sweeney et al, 2001) (Bond, 2001). One of the key differences between knowledge sharing in large organisations and in SMEs is that knowledge sharing for the latter often extends across multiple organisations (Sparrow, 2001). A P2P architecture can support ad hoc collaboration, information sharing and workflow that encompass multiple organisations, such as the business model for medium-size enterprises defined by Rehfeldt and Turowski (2000). Their model, which incorporates the Internet, Intranet and Extranet together with the use of intelligent agents and fuzzy rules, enables the automation of the procurement process for medium-size manufacturing companies. Furthermore, P2P tools like Groove deliver the functionality right into the hands of the end-users. As small enterprises often do not have an IS department, the cost of using (which includes creating and maintaining) a shared space in Groove can simply be apportioned based on usage by the end-users.
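The usage-based apportionment just mentioned amounts to a pro-rata split of the shared space's cost. The function name and figures below are invented for illustration:

```python
# Minimal sketch: splitting a shared space's running cost by recorded usage.

def apportion_cost(total_cost, usage_by_user):
    """Split total_cost pro rata by each user's share of total usage (hours)."""
    total_usage = sum(usage_by_user.values())
    return {user: round(total_cost * hours / total_usage, 2)
            for user, hours in usage_by_user.items()}

monthly = apportion_cost(120.0, {"alice": 30, "bob": 10, "carol": 20})
# alice carries half the cost because she accounts for half the usage
```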
As Drakos (2001a) pointed out, "P2P is a natural architectural model for cooperative business models where the user pays in kind with computing resources for admission into the shared service."

8. Changes in the landscape of commercial KM tools. The rising popularity of PKM and P2PKM tools will introduce some significant changes to the landscape of commercial KM tools. Firstly, traditional enterprise KM vendors will incorporate P2P technology and develop new product offering(s)54. Secondly, P2P and KM vendors will continue to form alliances with an aim to further leverage the combined power of their products and technologies55. Thirdly, Personal KM tool vendors will also

Application | Charging model
Data/File Sharing | Savings generated by reduced bandwidth at (original) server(s); Flat fee (for downloading a P2P FS tool); A fee for each view/play (or deductions to pre-paid credits)
Intelligent Agents | Add-on charge $X for Y number of PCs
Distributed Content Storage and Search | Add-on charge $X for Y number of PCs, or Z MB of content
Collaborations | Set price per PC or user provided with managed services; Variable price based on the tools/utilities provided in a shared space

52 One important application is P2P Multicasting, a streaming technology that relies on a network of connected PCs to further transmit the data to its destination. There are two significant implications of this technology. Firstly, content will become more readily available on the Web and over time this will probably affect consumer behaviour. Secondly, multicasting enables any PC to broadcast its content (most probably audio and video signals) to other PCs, a key component of a Personal Portal. Technologists advocate that multicasting will ultimately provide highly efficient, low-cost and personalised content to the end user (Parameswaran et al, 2001). An example of a P2P multicasting service company is Allcast.
53 A case in point is that Sharman Networks has announced that in June 2002 the company will activate the Altnet P2P file sharing and distributed computing software embedded inside its product KaZaA Media Desktop (which already has 20 million users). Altnet is being positioned as an alternative network to the Internet, functioning as a giant virtual supercomputer (Cochrane, 2002).
54 Autonomy's latest product enhancement proved that a Peer-to-Peer architecture can co-exist with a traditional enterprise architecture. More KM vendors are expected to add P2P functionality to their product range (Axton et al, 2002).
55 P2P vendor Inktomi has formed alliances with content management tool vendors Vignette and Interwoven and KM vendor Stratify.
Vignette also has an alliance with SUN and Akamai. Other alliances include 1stWorks with MorningStar and OpenText, and Intraspect and Hummingbird with other P2P vendors.




exploit new business opportunities by offering an enterprise version of their product(s)56. As a result, more and more KM tool vendors now offer both personal and enterprise versions of their product(s)57. This trend will also impact other application software. Project Management packages, for instance, will increasingly be Web-based, incorporating features for virtual collaboration, product design support and data management, and workflow capabilities58. Traditional enterprise collaboration tool vendors are responding to the increased competition by supporting more flexible collaborations and by hosting collaborative spaces59. Furthermore, pressure is mounting on content management tool vendors to support distributed content management.

9. P2P-enabling an existing application. Not all Peer-to-Peer applications need to be designed and developed from scratch (e.g. starting from the infrastructure and using one or more of the P2P products to conduct search, collaboration, file sharing etc.). Existing applications can also be transformed into P2P ones. Advanced Reality's product Presence-AR offers a Peer-to-Peer collaborative platform that embeds secure, real-time collaborative capabilities into any existing application without the need to modify or rewrite the source code (KMWorld, 2001). Presence-AR enables sharing and synchronisation at the data level (as opposed to the application level) and, as such, users can collaborate on the same data using different applications on any access device.
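Synchronisation "at the data level" can be illustrated with a toy last-writer-wins merge of two replicas keyed by record id; whichever application edited a record, only the (value, timestamp) pairs are reconciled. This is purely an illustration of the general idea, not Presence-AR's actual algorithm:

```python
# Toy data-level synchronisation: merge two replicas, newest timestamp wins.

def merge_replicas(a, b):
    """Merge two replicas: for each record id keep the (value, timestamp)
    pair with the newer timestamp."""
    merged = dict(a)
    for key, (value, ts) in b.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (value, ts)
    return merged

left  = {"item1": ("draft v1", 10), "item2": ("notes", 12)}
right = {"item1": ("draft v2", 15), "item3": ("photo", 11)}
print(merge_replicas(left, right))
```

Because the merge operates on plain data rather than application state, peers running different applications can still converge on the same records.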

Critical Issues

This section discusses the critical issues that underpin the adoption of P2P computing and the success of PKM and P2PKM systems. As an emerging technology, the critical issues that very much determine the acceptability of P2P computing in the business world are performance, security, standards, change and control, and copyright60. Nearly all of these issues arise from the clashes between the P2P and the conventional (and highly popular) client-server computing models. These issues and more are common topics in many P2P articles on the Internet. With file sharing, collaboration and search being conducted directly, repeatedly and frequently between nodes in a peer network, there is no doubt that P2P computing poses bandwidth and performance problems for conventional computing networks. Statistics collected by educational institutions indicate that students' use of P2P file sharing software (e.g. Napster, Gnutella) significantly degrades the performance of university computing networks. As P2P applications can be downloaded readily from many sites, users also share part of the blame (for the misuse of the technology). Many users adopt P2P applications without a full understanding of their potential impact on the enterprise (on security, bandwidth, copyright, system load, and variation from the Standard Operating Environment) (Margevicius, 2001). In the corporate world, in order to prevent unexpected demand for network capacity or potential copyright violations, corporate policies are commonly in place to deter the installation of any kind of P2P application. P2P computing also clashes with the conventional client-server model on the topic of 'control'. P2P is a technology that helps to increase the agility of an organisation to respond to change and to take on new opportunities. This is achieved by "empowering" the individual knowledge worker (IKW) with a set of flexible and appropriate tools to perform their tasks.
By doing so, control is being decentralised and the responsibility (to exercise control) is vested with individuals (who are situated at the edge of a network).

56 PKM tool vendor EnFish has merged with KnowledgeTrack and is offering an enterprise version of its product. There is also a mind mapping tool in Groove (URL=).
57 Some of these vendors are Atomica, BadBlue, Knowledge Management Software (KMS), Entopia, TheBrain, and EnFish.
58 SixDegrees offers a product that links E-Mail messages with people and documents. URL=
59 Lotus, Intraspect and OpenText are offering more flexible collaboration tools, and eRoom and WebEx have offerings to host collaborative spaces.
60 The copyright issues associated with the use of P2P technology (file sharing in particular) are outside the scope of this paper.


However, this is contrary to the way most, if not all, Information Systems (IS)/Information Technology (IT) departments operate as, traditionally, these departments set standards (e.g. a standard operating environment (SOE)) and controls (e.g. procedures, processes) that users have to comply with. In such environments, there is a strong tendency for tools and services to be server-based, and direct peer-to-peer interactions are rare. MIS managers or directors of IT are generally not receptive, at least initially, to the introduction of any kind of P2P system into their organisation, as P2P almost invariably means shifting control from the centre to the edge of a network. Another criticism of P2P computing concerns the proprietary platforms on the market that operate as servers in a peer network. Many people argue that these platforms do not provide the needed level of security in the exchange of data, nor do they provide a guaranteed quality of service (in terms of speed, file type and download size) for the transmission of large multimedia files. Distributed search also poses a significant concern for peers whose desktop/laptop data are being checked and fetched by some (possibly) unknown third parties. In the corporate world, security, scalability and performance of peer-to-peer networks are crucial issues for consideration, and often a satisfactory solution cannot be found without additional investment in the infrastructure. There are currently two opposing schools of thought about the likely adoption of P2P collaboration and content management. One view is that, as P2P empowers users to manage their own data/content and gives them ultimate control of their environment, such a decentralised approach to collaboration will lead to a higher degree of security and privacy. Ultimately, adoption of P2P collaboration systems will be high.
On the other hand, as there is a lack of standards (see below) for connecting and integrating P2P applications with other enterprise systems, concerns about security and interoperability rank high on the list of considerations, especially when one needs to decide whether to introduce P2P collaboration systems into an organisation61. To achieve successful deployment and adoption of P2P collaborative systems, organisational change is also needed. This is most evident in the design and management of the organisational structure. Most organisations, still based on the traditional management style, have a hierarchical structure (for communications, decision making and lines of authority). Collaborative systems, on the other hand, are more suited to a flat (or at least flatter) organisational structure. Standards are also lacking in P2P computing: there is not yet a comprehensive software infrastructure that, for example, synchronises file sharing (and distributed CPU-cycle pooling applications) (Yates, 2000). On the issue of security in a P2P network, Pescatore (2001) has defined the following five key issues:

- Authentication, i.e. who the user really is
- Availability, i.e. can the user perform what he/she needs to do when he/she wants to do it?
- Authorisation, i.e. what are all the functions that the user is permitted to do?
- Nonrepudiation, i.e. can the system prove that the user has carried out a particular operation?
- Privacy, i.e. can the system protect the user's data?
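Three of these checks can be sketched as a toy gatekeeper for a peer node: authenticate the caller, authorise the operation, and keep an audit log that supports nonrepudiation. The `PeerGatekeeper` class, its rules and users are invented for illustration and are not from Pescatore (2001):

```python
# Toy sketch of per-request security checks at a peer node.
import hashlib
import time

class PeerGatekeeper:
    def __init__(self, credentials, permissions):
        self.credentials = credentials    # user -> sha256 hex digest of password
        self.permissions = permissions    # user -> set of allowed operations
        self.audit_log = []               # supports nonrepudiation

    def request(self, user, password, operation):
        # Authentication: who the user really is
        digest = hashlib.sha256(password.encode()).hexdigest()
        if self.credentials.get(user) != digest:
            return False
        # Authorisation: what the user is permitted to do
        allowed = operation in self.permissions.get(user, set())
        # Nonrepudiation: record that this user attempted this operation
        self.audit_log.append((user, operation, allowed, time.time()))
        return allowed

gate = PeerGatekeeper(
    {"ann": hashlib.sha256(b"secret").hexdigest()},
    {"ann": {"read"}},
)
gate.request("ann", "secret", "read")    # True
gate.request("ann", "secret", "write")   # False, but still logged
```

Availability and privacy sit outside a sketch like this: they depend on the wider network and on encrypting the data itself.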

Research services firms Gartner Group and IDC are not entirely optimistic about the spread of P2P technology. Gartner Group's recommendation on enterprises' adoption of P2P applications is that "Most enterprises, however, will be best-served waiting for P2P technology to mature before adopting and deploying it on a wide scale. Until security, standards and bandwidth issues are resolved, Gartner recommends that IS organizations establish written policies to dictate acceptable uses of P2P." (Margevicius, 2001) Furthermore, Gartner is not optimistic about the design and programming complexities involved in designing P2P applications: "Gartner expects the architects of P2P applications to get things wrong more often than not. In the same way that network computing freed up the client but pushed a lot more complexity back into the server environment, P2P offers freedom in the server environment while pushing complexity further back into architectural design." (Drakos, 2001b) On the other hand, Forrester Group and Ovum Research disseminate a more balanced opinion and predict that there will be a gradual adoption of P2P technology by the corporate world in the next 3-4 years, with Instant Messaging (IM) as the first widespread application among corporate users.


On the topic of PKM, as reiterated before, technology is merely an enabler and an IKW really needs to learn and practise good PKM skills. PKM does NOT mean a radical departure from P2PKM and Enterprise KM. In fact, quite the contrary: practising KM at the individual, group and enterprise levels is complementary. In order to properly harness the knowledge created at each of the above three levels, some kind of knowledge interchange formats and fusion algorithms need to be developed. These formats and algorithms would be applicable to representing and processing the knowledge structures (e.g. knowledge bases, shared spaces) created by PKM, P2PKM and enterprise KM systems. To date, this area is very much under-explored and, in the author's opinion, clearly represents an exciting topic for technology-based KM research.
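As a thought experiment, such a knowledge interchange format might look like the sketch below: items from a personal, peer or enterprise KM system serialised into one common XML envelope so they can be exchanged and fused across levels. The element names and the `to_interchange_xml` helper are invented here, not an existing standard:

```python
# Hypothetical sketch of a common interchange envelope for knowledge items.
import xml.etree.ElementTree as ET

def to_interchange_xml(item):
    """Serialise one knowledge item (a dict) into a common XML envelope."""
    # 'level' records which KM level produced the item:
    # individual (PKM), group (P2PKM) or enterprise.
    root = ET.Element("knowledgeItem", {"level": item["level"]})
    ET.SubElement(root, "title").text = item["title"]
    ET.SubElement(root, "body").text = item["body"]
    for tag in item["tags"]:
        ET.SubElement(root, "tag").text = tag
    return ET.tostring(root, encoding="unicode")

xml = to_interchange_xml({
    "level": "individual",
    "title": "P2P reading notes",
    "body": "Flooding search does not scale without a TTL.",
    "tags": ["p2p", "search"],
})
```

A fusion algorithm could then operate over envelopes of this shape regardless of which system produced them, which is precisely the interoperability gap the paragraph above identifies.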


Conclusion

Observations on the strong bias of KM technologies towards the enterprise level, changes to the workplace, and the knowledge requirements of workers in the new economy have provided much of the motivation for the author to carry out this research. This paper has provided a unique and in-depth coverage of a bottom-up approach to understanding technologies that support knowledge sharing at the individual and group levels. In particular, Peer-to-Peer computing is perceived to have a significant impact on the codification and personalisation approaches to KM and will revolutionise the technologies for KM in several respects, including file sharing, distributed content networks, collaboration and distributed search. Potential and existing applications of P2PKM have been identified, as well as the impact of PKM and P2PKM on enterprise computing. The paper has concluded with a list of the critical issues that underpin the adoption and success of PKM and P2PKM systems.

Acknowledgements

The author would like to thank CSC's Leading Edge Forum (LEF) and two business units in CSC Australia for supporting this research. Special thanks also go to Paul Gustafson, Director of LEF, for sharing his thoughts on Peer-to-Peer computing and for confining this research to a manageable scope that could be accomplished within the grant period.

References

Abram, S., "Collaborations & KM," KM World, 13th September, 2000, URL=
Adamic, L.A., R.M. Lukose, A.R. Puniyani and B. Huberman, "P2P Search that Scales," The O'Reilly Peer-to-Peer and Web Services Conference, 5-8th November, 2001.
Allen, L.R., "Diverting a Crisis in Global Human and Economic Development: A New Transnational Model for Lifelong Continuous Learning and Personal Knowledge-Management," GATE 1998, 1st November, 1998, URL=
Angus, J. and S. Boyd, "KM Revolution from an unlikely direction," 2001, URL=
Apostolou, D., G. Mentzas, R. Young and A. Abecker, "Consolidating the Product Versus Process Approaches in Knowledge Management: The Know-Net Approach," in Proceedings of the 3rd International Conference on the Practical Applications of Knowledge Management, 2000, 149-168.
Aronoff, J., "Collaborative networking: the social dimension of Peer-to-Peer," CSC Foundation Research Journal, November, 2001a, 61-67.
Aronoff, J., "Thoughts on Peer-to-Peer (P2P) File Sharing," CSC Research Services, 2001b, URL=


Axton, C., R. Gear, N. Macehiter and E. Woods, Peer-to-Peer Computing: Applications and Infrastructure, Ovum Report, February, 2002.
Ayan, J., "Web-based Collaboration Tools, ReturnPath, Marketleap Visibility Index," Executive Technology Briefing, February, 2001, URL=
Bailey, C. and M. Clarke, "Managing Knowledge for Personal and Organisational benefit," Journal of Knowledge Management, 5, 1, November, 2001, 58-67.
Barth, S., "KM and Voice Recognition: Tell It to the Machine," Knowledge Management Magazine, 1999, URL=
Barth, S., "The Power of One," destinationCRM, 2000, URL=
Barth, S., "Workers Teach Thyself: A Personal Learning manager puts workers in charge of their education," destinationCRM, August, 2001a, URL=
Barth, S., "The Promise of Peerless Platforms," destinationCRM, March, 2001b, URL=
Barth, S., "Self-Organization," 2001c, URL=
Barth, S., "KM's Suite Spot," destinationCRM Knowledge Management, October, 2001d, URL=
Barth, S., "Knowledge in Real Time," destinationCRM Knowledge Management, May, 2001e, URL=
Barth, S., "Pick a card, any card," Knowledge Management Magazine, 2001f, URL=
Braue, D., "Web wash-out," The Bulletin (Australia), 4th December, 2001, 72-73.
Batchelder, R., Portals, XML and UDDI Enable a New Computing Paradigm, Gartner Research Note Strategic Planning, SPA-12-7391, 5th April, 2001.
Batchelder, R., S. Hayward and A. Roussel, Seeking New Investment Opportunities: The Next Paradigm, Gartner Research Note Technology, T-13-4395, 1st August, 2001.
Beard, J.W. and R.A. Giacalone, "Exit Interviews of technical personnel: missed opportunities, lost knowledge," in Proceedings of the Portland International Conference on Management and Technology (PICMET'97), 27-31st July, 1997, 316.
Becerra-Fernandez, I., "Facilitating the Online Search of Experts at NASA using Expert Seeker PeopleFinder," in Proceedings of the Third International Conference on Practical Applications of Knowledge Management (PAKM2000), Reimer, U. (ed.), 30-31st October, 2000a, Basel, Switzerland.
Becerra-Fernandez, I., "The Role of Artificial Intelligence Technologies in the Implementation of PeopleFinder Knowledge Management Systems," Knowledge-Based Systems, 13, 5, October, 2000b, 315-320.
Berners-Lee, T., J. Hendler and O. Lassila, "The Semantic Web," Scientific American, May, 2001, URL=
Bettoni, M.C., R. Ottiger, R. Toldesco and K. Zwimpfer, "KnowPort: A Personal Knowledge Portfolio Tool," in Proceedings of the 2nd International Conference on Practical Aspects of Knowledge Management (PAKM98), Basel, Switzerland, 29-30th October, 1998.
Biggs, M., "Technologies to watch in 2001," InfoWorld, 23, 5, 29th January, 2001, 74.
Binney, D., "The knowledge management spectrum: understanding the KM landscape," Journal of Knowledge Management, 5, 1, 2001, 33-42.
Bolcer, G.A., M. Gorlick, A.S. Hitomi, P. Kammer, B. Morrow, P. Oreizy and R.N. Taylor, Peer-to-Peer Architecture and the Magi Open-Source Infrastructure, Endeavors Technology Inc., 6th December, 2000, URL=
Bond, J., Business uses of Peer to Peer (P2P) technologies, Netmarkets Europe White Paper, January, 2001,


URL=
Brake, D., "Lost in Cyberspace," New Scientist, 28th June, 1997.
Burton, S. and P. Nesbit, "E-Mail overload," MIS Australia, November, 2001, 62-63.
Caldwell, F., Knowledge Management Scenario, Session 25a, Gartner Australian Symposium/ITExpo, 30th October to 2nd November, 2001, Brisbane, Australia.
Chillingworth, M., "Will Knowledge Management go to the Peer?" 5th February, 2002, URL=
Cochrane, N., "KaZaA users brace for hijack," The Sydney Morning Herald, Tuesday, 30th April, 2002, Next 3.
Compton, J., "Groupware Grows Up," ZDNet, 29th March, 2001.
Cope, M., Know your value? Value what you know. London: Financial Times Prentice Hall, 2000.
CSC, Making use of Collaborative Technologies, Foundation Operational Excellence Report, 1999.
CSC Index, Managing IT through Mergers and Acquisitions, CSC Index Research and Advisory Services, Foundation Report 114, 1997.
Damore, K. and M. Savage, "Peer-to-Peer Pressure: As the case builds for corporate adoption of P2P technology, Intel pushes for standards," CRN, 909, 28th August, 2000, 14.
Dignum, V., "Towards a People-Oriented Knowledge Management Environment," in Proceedings of the 11th International Workshop on Database and Expert Systems Applications (DEXA'00), 2000, 1134-1140.
Doom, C., "Get Smart: How Intelligent Technology can enhance our world," CSC Leading Edge Forum White Paper, 2000, URL=
Dorsey, P.A., "What is PKM?" Seminar, Millikin University, 2000, URL=
Drakos, N., Peer-to-Peer Economics, Gartner Research Note Commentary, COM-12-9460, 23rd February, 2001a.
Drakos, N., P2P Networks: One Step Forward, Two Steps Back, Gartner Research Note Technology, T-13-1453, 10th April, 2001b.
Dyer, G., M. Levitt, V. Turner, R. Villars, M. Maclachlan, J. Gantz, R. Mahowald, A. Gillen, D. Kusnetzky, C. Anderson, B. Bingham, A. Mizoras, R. Glaz, D. Goldfarb and J. Goepfert, An IDC View of Peer-to-Peer Computing, IDC Document #24496, April, 2001.
Endeavors, Introducing Peer-to-Peer, Endeavors Technology, 2000, URL= Essex, D., Managing e-mail for maximum uptime,ComputerWorld, 26th March, 2001, URL=,10801,58930,00.html FAC, P2P-More than an architecture, eEnterprise Software Research Monthly, FAC/Equities, January, 2001, 3-7. Ferguson, J., P2P Customers, Panel discussion at The OReilly Peer-to-Peer and Web Services Conference, 5-8th November, 2001, URL= Foster, D. and T. Falkowski, The Convergence of KM and e-Learning, KM World Conference, 2000, Frand, J.L. and C. Hixon, Personal Knowledge Management: Who, What, Why, When, Where, How, December, 1999, URL= Gartner, The Emergence of Distributed Content Management and Peer-to-Peer Content Networks, Gartner Engagement #010022501, January 2001. Geneer, Geneer Business Report, Issue #3, Geneer Corporation, 2001, URL= Ghilardi, F.J.M., Getting to real time knowledge management: From knowledge management to knowledge generation, Online, 21, 5, September/October, 1997, 99-102. Gingrande, A. and B. Chester, Meshing the gears of business collaboration, KM World, 7, 1, 1st January, 1998,


URL= =1035&Publication_ID=51 Girard, J., P2P Applications: New Internet Bandwidth Monsters, Gartner Research Note Technical Guidelines, TG-12-6293, 8th December, 2000. Graham, R.L. and J. Hederen, Peering the Smart Home, in Proceedings of the International Conference on Peer-to-Peer Computing (P2P2001), 27-29th August, 2001, Sweden, 103-104. Grantham, C.E., The Future of Work. CommerceNet Press, McGraw-Hill, 2001. Grimes, B., Enterprise Technology: Peer-to-Peer Gets Down to Business, PC World, 19, i5, May, 2001, 150, URL=,aid,44862,00.asp Gunnarsson, M. and M. Lindstrom, Nobody is as smart as everybody What is your strategy for managing knowledge? Masters Thesis 20p, Course: IA7400, Department of Informatics, Goteborg University, 1999. Gurteen, D., An Etiquette for Computer-Based Communication, 1995, URL= C/ Gutberlet, L., Peer-to-Peer Computing A Technology Fad or Fact? Information Systems Management Seminar, WS2000 Term Paper, European Business School, 10th October, 2000, URL= Haldin-Herrgard, T., Difficulties in the diffusion of tacit knowledge in organizations, Journal of Intellectual Capital, 1, 4, 2000, 357-365. Hall, M., "Start-Up Pushes Instant Collaboration," ComputerWorld, 29th, October, 2001, URL=,1199,NAV47-71-365-383_STO65060,00.html Hane, P.J., Entopia launches company, KM World News, 7th November, 2001, URL= Harris, K., F. Caldwell and J. Lehman, Capturing knowledge from retiring employees, Gartner Research Note Select A&A, QA-10-7668, 4th April, 2000. Hayes, S., LG opens door to the internet fridge, The Australian, Tuesday, 21st May, 2002, 36. Hyams, R., 10 skills of Personal Knowledge Management, 2000, URL= wledge/personal.html Jacobs, J. and A. Linden, Personal Knowledge Organizers: For Most, Not Yet, Gartner Research Note Technology, T-14-2701, 23rd August, 2001. Jones, S. and P. Thomas, Empirical assessment of individuals Personal Information Management Systems, Behaviour and Information Technology, 16, 3, 1997, 158-160. 
Kaplan, R., Tools for Personal Knowledge Effectiveness Part I, knowldgWORKS News, Number 28, 25th March, 2000a, URL= Kaplan, R., Tools for Personal Knowledge Effectiveness Part II, knowldgWORKS News, Number 29, 28th April, 2000b, URL= Kasrel, B., P2Ps Pervasive Future, Forrester Report, January, 2001. Katzy, B., R. Evaristo and I. Zigurs, "Knowledge Management in virtual projects: A research agenda," in Proceedings of the 33rd Annual Hawaii International Conference on System Sciences, 2000, 10. Kidwell, J.L., K.M. Vander Linde and S.L. Johnson, Applying Corporate Knowledge Management Practices in Higher Education, Educause Quarterly, 4, 2000, 28-33. KMWorld, P2P, naturally, seamlessly, KM World News, 2001, URL= pdf


Knell, J., Most Wanted: The Quiet Birth of the Free Worker. A Futures Report, The Industrial Society, 2000. Knighten, B., Peer-to-Peer Computing, Intel Developer Forum, 2000, URL= RIAL Ko, J.D., Visual Collaboration With Hybrid P2P Virtual Whiteboard, in The OReilly Peer-to-Peer and Web Services Conference, 5-8th November, 2001, Koman, R., "The Great Rewiring," The O'Reilly Network, 2001, URL= Koman, R., P2P Goes to War, The OReilly Network, 28th August, 2001b, URL= Krane, J., Technology Merges Phone, Fax, E-Mail, Associated Press, Monday, 5th November, 2001, URL= Kulikauskas, A., Tools for organizing thoughts, 1999, URL= Kust, P.N., Peer to Peer, InfoWorld, 21, i6, 8th February, 1999, 62, URL= Kwak, C. and R. Fagin, Internet Infrastructure & Services, Equity Research Technology, BearStearns, 2001, URL= Lambiase, S., Peer to Peer Technologies: An Introduction, Gartner Technology Overview, DPRO-97205, 5th April, 2001. Lambiase, S. and S. Hayward, "Catalysts in the Expanding Peer-to-Peer Space," Gartner Research Note Technology T-13-1405, 10th April, 2001. Lawnham, P., Corporate push for soft skills, The Australian, Higher Education Supplement, 1st May, 2002, URL=http://www.theaustralian/highered Lueg, C., Considering Collaborative Filtering as Groupware: Experiences and Lessons Learnt, in Proceedings of the 2nd International Conference on Practical Aspects of Knowledge Management (PAKM98), Basel, Switzerland, 29-30 October, 1998, 16-1 to 16-6. Macehiter, N. and E. Woods, Peer-to-Peer and Knowledge Management: Making connections, Ovum Research, 2001. Mann, C.C., Taming the Web, MITs Technology Review, 104, 7, 1st September, 2001, 44, URL= Margevicius, M., Reining in Peer-to-Peer on User Desktops, Gartner Research Note Tactical Guidelines, TG-13-7631, 26th June, 2001. Marmor, M.S., "Making the P2P leap with Toadnode, Web Techniques, 5, 12, December, 2000, 44-49. 
Mascitelli, R., A framework for sustainable advantage in global high-tech markets, International Journal of Technology Management, 17, 3, 1999, 240-258. Mattox, D., M. Maybury, D. Morey, Enterprise Expert and Knowledge Discovery, in Proceedings of the International Conference on Human Computer Interface (HCI 99), 23-27th August, 1999, Munich, Germany. Maybury, M., R. DAmore and D. House, Automating the Finding of Experts, Research Technology Management, November-December, 2000, 12-15. McCabe, B., Information Overload and the E-Mail Monster, Session 41c, Gartner Australian Symposium/ITExpo, 30th October to 2nd November, 2001, Brisbane, Australia. McConville, J., Reaching Back! DoD goes Real-Time, FOSE, 2002, URL= l+%22DoD+goes+Real-Time%22+mcconville&hl=en


McCue, A., Deloitte & Touche Takes On P2P Technology, ITtoolbox Knowledge Management, 2nd November, 2001, URL= McManus, M.R., Smart Living 2002 Main Street, Ziff Davis Smart Business, May, 2002, 78-80, URL= Miller, B.N., J.T. Reidl and J.A. Konstan, GroupLens for Usenet: Experiences in Applying Collaborative Filtering to a Social Information System, Communications of the ACM, March, 1997, 77-87. Minar, N., Peer-to-Peer is Not Always Decentralized when Centralization is Good, in The OReilly Peer-to-Peer and Web Services Conference, 5-8th November, 2001, URL= Morrison, J., Organizational memory information systems: Characteristics and development strategies, in Proceedings of the 30th Hawaii International Conference on System Sciences, 2, 7-10th January, 1997, 300-309. Mosquera, M., Law firms collaborate globally, InternetWeek, 22nd October, 2001, URL= Moss, G. and D. Franklin, "Assessment of Collaboration Tools, Version 1.0, CSC Internal Report, September, 2001. Mougayar, W., Yaga Changes Content Distribution Battlefield, PeerIntelligence-News Newsletter, 10th September, 2001, URL= Oram, A., Peer-to-Peer, Harness the Benefits of a Disruptive Technology. Cambridge: OReilly, 2001a. Oram, A., Peer-to-Peer for Academia, OReilly Network, 29th October, 2001b, URL= Parameswaran, M., A. Susarla and A.B. Whinston, P2P Networking: An Information-Sharing Alternative, IEEE Computer, July, 2001, 31-38, URL= Pescatore, J., Trusted Identities Are the Key to P2P Security, Gartner Research Note Technology, T-132073, 9th April, 2001. Phifer, G., "Enterprise Applications: Upfront and Personal," Australian Symposium/ITxpo, 30 October-2 November, 2001. Pienaar, H., Personal Knowledge Management, 1990, URL= Rapport, M., Think Tanks See Global P2P, destinationCRM, March, 2001, URL= Rehfeldt, M. and K. 
Turowski, Business Models for coordinating the next generation enterprises, in Proceedings of the Academia/Industry Working Conference on Research Challenges (AIWORC 2000), URL= Rein, L., OReilly P2P Directory, 2001, URL= Richardson, J. and A. Barry, Personal Tools to tame the Web, AusWeb, 1999, URL= Robinson, T., E-Mails expanded power, Internetweek, 852, Manhasset, 12th March, 2001, 53. Rowley, J., Is higher education ready for knowledge management? The International Journal of Education Management, 14, 7, 2000, 325-333. Sadasiv, R., "Next Generation Content Networks - Syndicating the Dark Matter," O'Reilly Peer to Peer and Web Services Conference, 5-8th November, 2001, URL= Sadeh, N.M., The Semantic Web: Challenges, Opportunities and Challenges, Talk given at the OntoWeb Kickoff, Crete, June, 2001, URL=


Saunders, V.M., Collaborative Enterprise Environments Enterprise-Wide Decision Support & Knowledge Management, in Proceedings of the 12th Annual Software Technology Conference, 30th April to 5th May, 2000, Salt Lake City, Utah, 321-330. Scott, J., An overview of Collaborative Computing, O'Reilly Peer to Peer and Web Services Conference, 5-8th November, 2001, URL= Shirky, C., What is P2P And What Isnt, The OReilly Network, 24th November, 2000, URL= Sihn, W. and F. Herren, Xpertfinder Increased Knowledge Exchange Through Expert Search in Cooperating Companies, in Proceedings of the 7th International Conference on Concurrent Enterprising, 27-29th June, 2001, Bremen. Simpson, J., M. Auckland, J. Kemp, M. Padlatz, S. Jenzowsky and B. Bredehorst, Scenarios for Future Work in the Knowledge Economy Extract from Deliverables 1.2: Trends and Visions for KM, European KM Forum IST Project No 2000-26393, 2001, URL= Skyrme, D.J., The Knowledge Networkers Toolkit. Butterworth-Heimann, 1999. Slade, A.J. and A.F. Bokma, Conceptual Approaches for Personal and Corporate Information and Knowledge Management, in Proceedings of the 34th Hawaii International Conference on System Sciences, January, 2001. Soltysiak, S.J. and I.B. Crabtree, Automatic learning of user profiles towards the personalisation of agent services, BT Technology Journal, 16, 3, July, 1998, 110-117. Spangler, T., From Revolution to Evolution, Interactive Week, 8, 23, 11th June, 2001, 23, URL=,2000024993,20230443,00.htm Sparrow, J., Knowledge Management in Small Firms, Knowledge and Process Management, 8, 1, 2001, 3-16. Stenmark, D., Leveraging Tacit Organizational Knowledge, Journal of Management Information Systems, 17, 3, Winter, 2000-2001, 9-24. Strom, D., Comparing Peer-to-Peer file sharing technologies, in The OReilly Peer-to-Peer and Web Services Conference, 5-8th November, 2001, URL= Sullivan, D., A Closer Look, Intelligent Enterprise, 5th December, 2001, URL= Sweeney, J., S. Hayward, N. Drakos and R. 
Batchelder, The Five Peer-to-Peer Models: Towards the New Web, Gartner Research Note, COM-12-4447, 5th February, 2001. Tiwana, A., The Knowledge Management Toolkit: Practical Techniques for Building a Knowledge Management System. Upper Saddle River: Prentice Hall, 2000. Torrance, M.C., Active Notebook: A Personal and Group Productivity Tool for Managing Information, in AAAI Fall Symposium on AI Applications in Knowledge Navigation and Retrieval, Technical Report FS-95-03, AAAI Press, 1995. Tsui, E., Exploring the KM Toolbox, Knowledge Management, 4, 2, October, 2000a, 11-14, URL= Tsui, E., The role of Artificial Intelligence in Knowledge Management, Knowledge-Based Systems, 13, 5, October, 2000b, 235-239, URL= Tsui, E., Tracking the Role and Evolution of Commercial Knowledge Management Software, in Holsapple, C. (ed.), Handbook on Knowledge Management. Berlin/Heidelberg: Springer-Verlag, 2002a. Tsui, E., Knowledge Management KM32534 Course Book, Faculty of Information Technology, University of Technology, Sydney, 2002b. Tsui, E., Knowledge Management and E-Learning, in preparation, 2000c. Van Heijst, G., R. Van der Spek and E. Kruizinga, Organizing Corporate Memories, Kenniscentrum CIBIT internal report, 1996.


Van Heijst, G., R. Van der Spek and E. Kruizinga, Corporate memories as a tool for knowledge management, Expert Systems with Applications, 13, 1, September, 1997, 41-54. Viant, The Human Side of Peer to Peer, Viant Innovation Center Project, 2001, URL= Vizard, M., Top 10 technology trends in 2001 all ask one thing: Are you experienced? InfoWorld, 23, 1, 8th January, 2001, 59, URL= Walker, D., Indigestible spam, Sydney Morning Herald, IT Pages, 20th November, 2001. Warerhouse, S., "JXTA Search and other Distributed Search Techniques," in The OReilly Peer-to-Peer and Web Services Conference, 5-8th November, 2001, URL= Wenger, E., Supporting communities of practice: a survey of community-oriented technologies, 2001, URL= Wilson, P., 20 years in the life of a Long Term Empirical Personal Electronic Filing Study, Behaviour and Information Technology, 20, 5, September, 2001, 395-409, URL= Withers, S. and D. Gardiner, Peer, there and everywhere, APC, April, 2001, 90, URL= Woods, E., Knowledge management and peer-to-peer computing: making connections, KM World, 10, 9, October, 2001, URL= rticle_ID=1104&Publication_ID=56 Woods, E., A. Ashenden and M. Budd, Next-generation Search: Building the Smart Portal, Ovum Research, 2001. Yanosky, R. and S. Bittinger, P2P: Opening the Door to Academic E-Learning, Gartner Research Note Technology T-13-3673, 12th April, 2001. Yates, S., P2P: Pushing Computing Power to the Edge, The Forrester Brief, 26th October, 2000. Yung, R., The Evolution of the Corporate Portal, Keynote presentation, Corporate Portals Asia 2002, Singapore, 10-11th January, 2002, URL=


Appendix I My PKM strategies

The objectives of the author's approach to personal knowledge management have always been to:
- Avoid overloading the routinely used e-mail address(es) with messages
- Incorporate a PULL capability for various topics of interest and from various (valued and trusted) sources
- Enable automatic classification of all incoming information
- Make use of freely available tools to improve the indexing and categorisation of stored information
- Try to keep the information held at the organisational, group and personal levels in synchronisation
- Never ignore the people issues
- Build trust among colleagues, clients and friends

Over the years, the author has gradually experimented with and refined the following set of strategies for PKM:

My PKM Strategies

1. Establish an alternative E-Mail address in addition to the usual one(s). There are two reasons for doing this. Firstly, automatic broadcasts, auto-alerts and promotional information can be directed to this address rather than to the routinely used one(s). Secondly, there are occasions when one needs to send a message to an un-trusted audience (e.g. the distribution list of a newsletter, or an anonymous E-Mail address) and, based on past experience, it is better to use a second E-Mail address to do so. This way, the sender's ordinary E-Mail address and affiliations are not disclosed.
2. For Web sites/newsletters/research services/library databases that one values, configure, if possible, an "Auto Alert" agent/service that regularly scans and extracts relevant/new information from those sites. Most information services' Web sites provide such a feature; if not, some of the Personal KM tools mentioned in this paper also provide this capability.
3. Configure "automatic filtering and classification agents" in the receiving E-Mail handling program to categorise incoming messages into their respective folders. In the Lotus Notes environment, for example, this can be accomplished with Agents; in MS Outlook Express, rules can be set up to perform this task. If there are multiple alternative E-Mail addresses, consider forwarding all incoming messages into one E-Mail address.
4. Whenever possible, develop and maintain a consistent structure for classifying files on the various computing devices used (e.g. laptop, office desktop(s), home PC etc.) so that it is easier to locate information.
5. Consider using a Peer-to-Peer tool (e.g. Groove) to share/replicate electronic work spaces across multiple computers. For example, the author uses his company laptop and desktop to store work-related information, a university workstation for student details, assignments, lecture notes and research material, and a home PC for personal and family information. PKM tools are used to index the material on individual machines, and Groove shared spaces are created to replicate specific information across these machines. The CSC Portal is used to collaborate with CSC colleagues and clients, at work and at home. For backup, portability and ease of sharing, "Quick Response Kits" (in the form of small CDs, each holding up to 160MB of material) are also routinely created. A "Quick Response Kit" (QRK) contains key CSC information, worldwide best practices, cornerstone articles and reports, and key presentations recently compiled by the author. Currently, there are QRKs on KM, E-Learning and Content Management.
6. Time permitting, the author always tries to assist others. Furthermore, other people's interests are remembered and relevant material is routinely sent to them.
7. Build up trust with others by sending them only un-biased, non-marketing, relevant and quality material.
8. Pursue continuous learning, which is very important for every knowledge worker. The author maintains a very extensive network of peers (e.g. graduate students, university researchers, librarians, industry analysts etc.) at the local and international levels.
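The "automatic filtering and classification agents" strategy can be sketched independently of any particular mail client. The sketch below is a minimal, generic rule engine of the kind one would configure in Lotus Notes or Outlook Express; the rule patterns and folder names are hypothetical examples, not a prescribed setup.

```python
import email
from email.message import Message

# (sender-substring, subject-keyword, target-folder) rules.
# These patterns and folder names are illustrative assumptions only.
RULES = [
    ("newsletter@", None, "Newsletters"),
    (None, "KM", "Knowledge Management"),
    (None, "P2P", "Peer-to-Peer"),
]

def classify(msg: Message, default: str = "Inbox") -> str:
    """Return the folder an incoming message should be filed into."""
    sender = (msg.get("From") or "").lower()
    subject = (msg.get("Subject") or "").lower()
    for sender_pat, subject_pat, folder in RULES:
        if sender_pat and sender_pat in sender:
            return folder
        if subject_pat and subject_pat.lower() in subject:
            return folder
    return default

raw = "From: km-newsletter@example.org\nSubject: Weekly digest\n\nBody"
print(classify(email.message_from_string(raw)))  # Newsletters
```

A real mail client applies the same idea server- or client-side as each message arrives; the point of the sketch is that a handful of declarative rules can keep the routinely used inbox uncluttered.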


Appendix II Groove shared space for research collaborations

While nearly all server-based Knowledge Management Systems (KMS) (e.g. e-collaboration systems, portals etc.) support personalisation (of the user interface and content) for the end-user, the extent of this personalisation is still fairly limited because such systems are designed top-down. As such, many of the tools and functions provided to the end-user by server-based KMS are very difficult, if not impossible, to align with the work tasks of the end-user (an IKW). As Barth (2001b) quoted Frederic Boulanger of Macadamian Technologies, "Server-side collaborative solutions are good at automating what's happening between each step but not really suited for work required at each one of the steps."

Groove is one product that offers a solution to the above problem. In a nutshell, Groove is a client-based personal portal. Using Groove, an IKW can customise (by pick and mix) a set of tools that best supports his/her tasks. For example, among other capabilities, a Groove user can send instant and E-Mail messages, share files, share discussions, and navigate the Web simultaneously with other peers. Most importantly, with Groove, all these activities and more are performed at the edge of the network. The power of Groove lies in its ability to:
- Support synchronous and asynchronous communications
- Detect the online presence of other peers
- Connect people to information as well as to people
- Foster collaboration across organisational boundaries
- Provide an environment for the user to work offline
- Allow the user to incorporate/add any material into a shared space at any time
- Provide a secure and efficient mechanism for the storage and transmission of all data
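The offline-then-replicate behaviour that makes such edge-based tools attractive can be illustrated with a small sketch. This is not Groove's actual (proprietary) implementation: it assumes each peer keeps a local replica of the shared space plus a queue of pending operations, applies changes locally at once, and exchanges queued operations whenever peers next connect, with a simple last-writer-wins conflict rule.

```python
import time
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Op:
    """One change to the shared space (e.g. adding a file or a posting)."""
    peer: str
    key: str
    value: str
    stamp: float

@dataclass
class PeerSpace:
    """Illustrative local replica of a shared space; not Groove's real design."""
    name: str
    state: dict = field(default_factory=dict)
    outbox: list = field(default_factory=list)  # ops not yet sent to peers

    def edit(self, key: str, value: str) -> None:
        """Apply a change locally (works offline) and queue it for peers."""
        op = Op(self.name, key, value, time.time())
        self._apply(op)
        self.outbox.append(op)

    def _apply(self, op: Op) -> None:
        # Last-writer-wins by timestamp: a crude but common conflict rule.
        current = self.state.get(op.key)
        if current is None or op.stamp >= current.stamp:
            self.state[op.key] = op

    def sync_with(self, other: "PeerSpace") -> None:
        """When both peers are online, exchange queued operations."""
        for op in self.outbox:
            other._apply(op)
        for op in other.outbox:
            self._apply(op)
        self.outbox.clear()
        other.outbox.clear()

alice, bob = PeerSpace("alice"), PeerSpace("bob")
alice.edit("notes.txt", "P2P draft v1")    # made while Bob is offline
bob.edit("agenda", "review Groove limits")
alice.sync_with(bob)                       # both replicas now converge
```

The same pattern underlies most "work offline, replicate later" collaboration tools: the shared space is really N replicas plus a reconciliation rule.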

Groove offers a preview edition (free to download) and a premium version (with extra services and features). For example, Groove's Enterprise Network Services provide an outsourcing model for component management and security services. The Groove Development Kit (GDK) provides templates and resources for building custom applications. There is also a proprietary presence and relay service (which, for example, journalises all the exchanges of messages and interactions between community members) offered by Groove. Groove's corporate clients include, among others, Alliance Consulting, Agora Professional Services Ltd. (UK) and BAe Systems. These firms all find Groove a powerful tool because, firstly, it requires minimal corporate infrastructure to operate and, secondly, it fosters human interaction (rather than just content and information aggregation) (Barth, 2001b). The pharmaceutical firm GlaxoSmithKline uses its 10,000 licenses of Groove to foster collaboration (e.g. sharing sensitive material and monitoring project progress) between its scientists and researchers in biotechnology companies and university research laboratories. During the course of this research, the author established a Groove shared space and invited peers to join. These peers include CSC colleagues, academics and personal friends who share the same interest in P2P technology and/or KM. Most of the features available in the preview edition were tried out during the months of research and collaboration. Below are several snapshots of the shared space:

Note: When transmitting data, Groove makes use of an algorithm that takes into account the size of the data and the number of peers to which the data must be sent. If Groove detects a possible bottleneck in the transmission, it automatically sends the data to a relay server, which then forwards the data to the destinations (Grimes, 2001).
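The direct-versus-relay decision just described can be sketched as a simple cost rule. The formula and threshold below are illustrative assumptions, not Groove's actual algorithm: sending directly costs payload size times the number of peers on the sender's uplink, whereas handing the payload to a relay costs a single upload before the relay fans it out.

```python
def choose_route(payload_bytes: int, n_peers: int,
                 uplink_budget_bytes: int = 5_000_000) -> str:
    """Pick direct fan-out or a relay server for a P2P transmission.

    The uplink budget is a hypothetical threshold chosen for
    illustration; it is not a real Groove parameter.
    """
    direct_cost = payload_bytes * n_peers
    if n_peers <= 1 or direct_cost <= uplink_budget_bytes:
        return "direct"
    return "relay"

# A small file to three peers is cheap enough to send directly.
print(choose_route(100_000, 3))      # direct
# A 4MB file to ten peers would cost 40MB of uplink, so use the relay.
print(choose_route(4_000_000, 10))   # relay
```

The design choice this captures is that "pure" peer-to-peer transmission scales badly on asymmetric consumer links, so hybrid systems fall back to a server exactly when fan-out cost exceeds what the edge node can bear.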



The above screen is the entry point (or Home Page) of the author's Groove interface. The lower box on the right-hand side lists all the shared spaces to which the author belongs. Among others, the space titled "Personal and Peer to Peer KM" is the one established for collaboration with peers on this research project. The next screen shows the File Sharing aspect of a Groove shared space.


The above screen shows the File Area in the shared space. The author has placed hundreds of files (totalling nearly 10MB) relating to this research into the space to share with peers. Inside a Groove space, all the data are encrypted. The left-hand window also shows the presence status (i.e. Online or Not Online) of the other peers who have access to the current shared space.


The above screen shows the General Discussion area of the shared space. As in other e-collaboration tools, any member of the shared space can create a new entry with a topic name as well as file a reply to any of the existing entries. All members of the shared space can view all the postings once their copy of the space has been replicated.

The next screen shows a powerful feature of Groove: collaborative browsing of the Internet. A user can browse the Internet inside a Groove shared space and, more importantly, can assume control of the browsing and ask other peers to participate in the navigation simultaneously. Groove can also support voice communications via the interface (called the "skin"). The screen below shows the home page of the author's project in the technology grant section of the CSC Leading Edge Forum (LEF) Web site. Bookmarks of favourite Web sites can also be added to the Groove Web browser.

Although Groove is a powerful tool (or even a "Killer Application", as some would call it) for collaboration, it is not without problems. Based on months of collaboration using the Groove shared space, in the author's opinion the limitations of Groove are that:
- It requires a fat client (the preview edition is about 10MB in size) to operate.
- A shared space can be slow to replicate when relying on a modem connection to an ISP.
- There is no search capability provided by Groove and, because everything inside a Groove shared space is encrypted, enterprise and desktop search engines also cannot locate material inside a shared space.
- There is no mechanism to transition/convert the information (e.g. documents, discussion threads, bookmarks etc.) captured inside a shared space into an ordinary file system. This is a serious limitation, especially when one needs to collapse (i.e. archive the material in) a shared space whose aims have been fulfilled. One can, obviously, save each and every knowledge item into an ordinary file system, but this amounts to a very tedious task.
- The preview edition does not provide an uninstaller for the software.
- Groove is not yet integrated with other desktop and enterprise systems; at present, it functions as a separate application on the desktop.
- From outside a shared space, one cannot create a URL to point to a specific piece of information/location inside the shared space.
- Groove is only available for Windows platforms.
- There are still concerns about the reliability and stability of the preview edition. Nearly a quarter of the author's peers using the shared space have expressed difficulties in installing, operating and replicating the shared space (across multiple machines).

One should also note that, as well as being a tool for Peer-to-Peer collaboration, Groove can function as a PKM tool. A knowledge worker can use a Groove shared space to synchronise/replicate information across several of his/her machines. For example, the author uses a shared space to replicate specific information between his home and university workstations.
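For plain files, the replication described above can be approximated with a one-way "copy what is missing or newer" pass, run in each direction. The sketch below is a crude stand-in for shared-space replication, not how Groove works; the paths in the usage comment are hypothetical, and deletions and conflicts are deliberately ignored.

```python
import shutil
from pathlib import Path

def sync_newer(src: Path, dst: Path) -> list:
    """Copy files from src to dst when missing or newer (one direction).

    Run in both directions to approximate a two-way sync.  This sketch
    deliberately ignores deletions and concurrent edits.
    """
    copied = []
    dst.mkdir(parents=True, exist_ok=True)
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves the timestamp
            copied.append(str(f.relative_to(src)))
    return copied

# Hypothetical usage, e.g. between a home PC and a removable drive:
# sync_newer(Path.home() / "research", Path("/mnt/usb/research"))
```

Because `copy2` preserves modification times, a second pass over unchanged files copies nothing, which is the minimal idempotence a replication tool needs.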


Profile: Eric Tsui

Eric Tsui is the Chief Research Officer, Asia Pacific of Computer Sciences Corporation (CSC) and Innovation Manager of Australian Mutual Provident (AMP). In his current capacities, he is responsible for strategic research, knowledge brokering (between CSC and AMP), innovation management, and university-industry collaborations. Eric is a member of the advisory editorial board of the Knowledge-Based Systems journal and of the ACS National Committee for AI and Expert Systems, and holds adjunct positions at RMIT University, University of Kentucky, University of Sydney and University of Technology, Sydney. He has B.Sc.(Hons.), PhD and MBA qualifications.