
Knowledge Representation and Learning in Artificial General Intelligence Systems

Jekin Trivedi (jekintrivedi@gmail.com), Abey George Philip (abey.g.p@gmail.com), Bharat Mani (bharatmani.mumbai@gmail.com)
Vidyavardhini College Of Engineering and Technology, Vasai

Abstract

Knowledge representation and knowledge engineering are central to AGI systems. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. The first part of this paper covers knowledge representation, an area of artificial intelligence whose fundamental goal is to represent knowledge in a manner that facilitates inferencing (i.e., drawing conclusions) from it. The second part covers learning, which has been central to AI research from the beginning. A truly intelligent system must be able to learn from its environment; in the case of an expert system this is achieved by interaction with the user. Using classification we can determine which category something belongs to, after seeing a number of examples of things from several categories. Regression takes a set of numerical input/output examples and attempts to discover a continuous function that would generate the outputs from the inputs. In reinforcement learning the agent is rewarded for good responses and punished for bad ones; these can be analyzed in terms of decision theory, using concepts like utility. In this paper we describe the techniques used for representing knowledge and for learning in AGI systems, as well as the implementation done by our team in OpenCog.

Introduction

Artificial Intelligence is the science and engineering of making intelligent machines and systems. Nowadays AI is further categorized into Narrow AI and Artificial General Intelligence. Narrow AI refers to the creation of software programs carrying out highly specific functionalities that are typically considered intelligent when humans carry them out. Artificial General Intelligence (AGI) refers to the pursuit of software systems that display a wide variety of intelligent functionalities, including a reasonably deep understanding of themselves and others, the ability to learn how to solve problems in areas they've never encountered before, the ability to create new ideas in a variety of domains, and the ability to communicate richly in language.

Knowledge Representation

We use Self-Modifying, Evolving Probabilistic Hypergraphs (SMEPH) for representing knowledge in various artificial intelligence systems. A hypergraph is an abstract mathematical structure which consists of objects called Vertices and objects called Edges, which connect the Vertices. A hypergraph differs from a graph in that it can have Edges that connect more than two Vertices; and SMEPH's hypergraphs extend ordinary hypergraphs to contain additional features, such as Edges that point to Edges instead of Vertices, or Vertices that, when you zoom in on them, contain embedded hypergraphs. Properly, SMEPH's hypergraphs should always be referred to as generalized hypergraphs, but we will persist in calling them hypergraphs instead. In a SMEPH hypergraph, Edges and Vertices are not as distinct as they are within an ordinary mathematical graph, so it is useful to have a generic term encompassing both Edges and Vertices; for this purpose, in SMEPH and Novamente, we use the term Atom.

The SMEPH approach to intelligence is centered on a particular collection of Vertex and Edge types. The key Vertex types are ConceptVertex and SchemaVertex, the former representing an idea or a set of percepts, and the latter representing a procedure for doing something (perhaps something in the physical world, or perhaps an abstract mental action). The key Edge types are ExtensionalInheritanceEdge (ExtInhEdge for short: an edge which, linking one Vertex or Edge to another, indicates that the former is a special case of the latter), ExtensionalSimilarityEdge (ExtSim: which indicates that one Vertex or Edge is similar to another), and ExecutionEdge (a ternary edge, which joins {S, B, C} when S is a SchemaVertex and the result of applying S to B is C). So, in a SMEPH system, one is often looking at hypergraphs whose Vertices represent ideas or procedures, and whose Edges represent relationships of specialization, similarity or transformation among ideas and/or procedures.
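To make the Atom terminology concrete, the following minimal Python sketch models a SMEPH-style generalized hypergraph in which Edges may point to other Edges as well as to Vertices. The class names and example Atoms are our own illustrative choices; this is not the actual OpenCog or Novamente API.

```python
# Minimal sketch of a SMEPH-style generalized hypergraph (illustrative only;
# the type names are ours, not the OpenCog/Novamente implementation).

class Atom:
    """Generic term covering both Vertices and Edges."""
    def __init__(self, atom_type, name=None, outgoing=None):
        self.atom_type = atom_type      # e.g. "ConceptVertex", "ExtInhEdge"
        self.name = name                # optional label, mainly for Vertices
        self.outgoing = outgoing or []  # Atoms this Atom connects; an Edge may
                                        # point to other Edges, not just Vertices

class Hypergraph:
    def __init__(self):
        self.atoms = []

    def add(self, atom_type, name=None, outgoing=None):
        atom = Atom(atom_type, name, outgoing)
        self.atoms.append(atom)
        return atom

g = Hypergraph()
cat    = g.add("ConceptVertex", "cat")
animal = g.add("ConceptVertex", "animal")
dog    = g.add("ConceptVertex", "dog")

# ExtensionalInheritanceEdge: "cat" is a special case of "animal".
g.add("ExtInhEdge", outgoing=[cat, animal])

# ExtensionalSimilarityEdge: "cat" is similar to "dog".
g.add("ExtSimEdge", outgoing=[cat, dog])

# ExecutionEdge is ternary: schema S applied to B yields C.
classify = g.add("SchemaVertex", "classify_species")
felis    = g.add("ConceptVertex", "Felis catus")
g.add("ExecutionEdge", outgoing=[classify, cat, felis])
```

In the real systems each Atom additionally carries a probabilistic truth value (and an attention value); those are omitted here for brevity.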

Learning

Learning is carried out using various methods such as PLN, MOSES (Meta-Optimizing Semantic Evolutionary Search), neural networks, etc.; we, however, use PLN and MOSES. Probabilistic Logic Networks (PLN) is a probabilistic reasoning engine which exists specifically to carry out reasoning on various relationships. MOSES (Meta-Optimizing Semantic Evolutionary Search) is a modification of an evolutionary learning mechanism, the Bayesian Optimization Algorithm Programming (BOAP) algorithm. MOSES is an algorithm for learning PredicateNodes or SchemaNodes satisfying specified criteria. MOSES complements PLN: whereas PLN's job is to extrapolate existing knowledge and build new Nodes and Links that directly follow from old ones in an incremental way, MOSES's job is to create complex combinations of Nodes and Links out of the blue, via heuristic, evolutionary/probabilistic exploration of the large space of possibilities.
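To illustrate the kind of relationship-level inference PLN carries out, the sketch below implements one of its simplest rules: deduction over inheritance strengths under an independence assumption. This is a simplification (full PLN truth values also carry confidence, and the rule shown is only one of many inference rules), and the function name is our own.

```python
# Simplified PLN-style deduction on relationship strengths (illustration only).
# Given s_AB = P(B|A) and s_BC = P(C|B), plus the term probabilities s_B and
# s_C, estimate s_AC = P(C|A) under an independence assumption:
#   s_AC = s_AB * s_BC + (1 - s_AB) * (s_C - s_B * s_BC) / (1 - s_B)

def pln_deduction(s_ab, s_bc, s_b, s_c):
    if s_b >= 1.0:          # degenerate case: B covers everything
        return s_c
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

# ExtInh(cat, mammal) = 0.9, ExtInh(mammal, animal) = 0.95,
# P(mammal) = 0.1, P(animal) = 0.2  =>  estimated ExtInh(cat, animal)
print(pln_deduction(0.9, 0.95, 0.1, 0.2))   # ~0.87
```

MOSES, by contrast, searches the space of candidate PredicateNodes or SchemaNodes by evolving and probabilistically modeling populations of programs, and so does not reduce to a single closed-form rule like the one above.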

Implementation in various systems

Knowledge representation and learning of this kind are applied in systems such as OpenCog and Novamente.

In OpenCog

In OpenCog the system describes each and every attribute of an object and its environment, and this knowledge is stored in the form of a SMEPH. In such a system one could create a population of knowledge-sharing agents that operate as a sort of "borg mind," with intelligence beyond what any of the agents could achieve with its own resources, with each giving contextually appropriate expression to the same knowledge and intelligence. However, in some cases preserving a significant degree of separation between the KBs of different agents may actually be optimal in terms of advancing the total intelligence of the population; this is similar to the reasons why, in evolutionary learning, one sometimes uses an "islands" model consisting of a set of separately evolving subpopulations with limited interaction, rather than one big population. In many practical applications, what the end users of virtual agents want is also an agent with its own strengths and weaknesses, and its own learning process that the agent's owner gets to participate in.

CYC

An application of the Cyc system is described in which the system contributes to the software engineering effort involved in its own construction, using its Semantic Knowledge Source Integration (SKSI) facility. Cyc's ability to reason is provided by an inference engine that employs hundreds of pattern-specific heuristic modules, as well as general, resolution-based theorem proving, to derive new conclusions (deduction) or introduce new hypotheses (abduction) from the assertions in the KB. Most of the assertions in the KB are intended to capture commonsense knowledge pertaining to the objects and events of everyday human life, such as buying and selling, kinship relations, household appliances, eating, office buildings, vehicles, time, and space. The KB also contains highly specialized, expert knowledge in domains such as chemistry, biology, military organizations, diseases, and weapon systems, as well as the grammatical and lexical knowledge that enables Cyc's extensive natural language processing (parsing and generation) capabilities. Cyc also includes specialized CycL vocabulary, inference modules and supporting connection management code (proxy server, database drivers) that together constitute the Semantic Knowledge Source Integration (SKSI) facility. SKSI's CycL vocabulary supports detailed semantic descriptions of external information sources, such as databases and web sites. These semantic descriptions render explicit the entities, concepts, and relations that often are only implicit in a source's implementation data model.

Implementation done by our team

We have helped in the development of the Web Module of the OpenCog system. The work done was the creation of an adapter between the OpenCog server and the web module. This makes it easy to view and create the AtomSpace, using HTML5 and jQuery to make the interaction visually appealing and interactive.
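As a rough sketch of what such an adapter looks like (hypothetical code: the endpoint name, data layout, and use of Python's standard http.server are our own assumptions, not the actual Web Module), the idea is a thin HTTP layer that serializes Atoms to JSON so an HTML5/jQuery front end can fetch and render the AtomSpace:

```python
# Hypothetical sketch of an OpenCog-server-to-web adapter: expose AtomSpace
# contents as JSON over HTTP so an HTML5/jQuery page can render them.
# (Illustrative only; names and endpoints are ours, not the real module's.)
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the AtomSpace: a list of atom descriptions.
ATOMSPACE = [
    {"type": "ConceptVertex", "name": "cat"},
    {"type": "ConceptVertex", "name": "animal"},
    {"type": "ExtInhEdge", "outgoing": ["cat", "animal"], "strength": 0.9},
]

class AtomSpaceAdapter(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the atom list at /atoms; anything else is a 404.
        if self.path == "/atoms":
            body = json.dumps(ATOMSPACE).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), AtomSpaceAdapter).serve_forever()
```

The jQuery side would then request /atoms via AJAX and draw the returned Atoms as an interactive hypergraph view.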

Conclusion

Our ultimate objective is to make programs that learn from their experience as effectively as humans do. We shall say that a program has common sense if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and of what it already knows. Through such endeavors we try to come closer to understanding the most mysterious and wonderful machine: the human mind.


