
UNIT-I

SHORT ANSWERS
1) What are the applications of AI?

ANS :

AI has been dominant in various fields such as −

Gaming − AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc., where the machine can think of a large number of possible positions based on heuristic knowledge.

Natural Language Processing − It is possible to interact with a computer that understands natural language spoken by humans.

Expert Systems − There are some applications which integrate machine, software, and special
information to impart reasoning and advising. They provide explanation and advice to the users.

Vision Systems − These systems understand, interpret, and comprehend visual input on the computer. For example,

A spying aeroplane takes photographs, which are used to figure out spatial information or a map of the area.

Doctors use clinical expert systems to diagnose patients.

Police use computer software that can recognize the face of a criminal using the stored portrait made by a forensic artist.
2. Write about the tic-tac-toe diagram?

ANS:
3.Define intelligent systems?

ANS:

 The ability of a system to calculate, reason, perceive relationships and analogies, learn
from experience, store and retrieve information from memory, solve problems,
comprehend complex ideas, use natural language fluently, classify, generalize, and adapt
to new situations.

Types of Intelligence:

 As described by Howard Gardner, an American developmental psychologist, intelligence comes in multiple forms:

Linguistic intelligence − The ability to speak, recognize, and use mechanisms of phonology (speech sounds), syntax (grammar), and semantics (meaning). Examples: Narrators, Orators.

Musical intelligence − The ability to create, communicate with, and understand meanings made of sound; understanding of pitch and rhythm. Examples: Musicians, Singers, Composers.
4. Write about development of AI languages?

ANS:

 Nowadays, one of the most promising topics is AI, or Artificial Intelligence.


 AI is the simulation of human intelligence processes by machines, especially
computer systems.

Below are the top five programming languages in the field of Artificial Intelligence:

1. PYTHON

Python is focused on DRY (don't repeat yourself) and RAD (rapid application
development). It is one of the most widely used programming languages in the
field of Artificial Intelligence, thanks to its simplicity.

2. LISP

Lisp is one of the oldest (developed in 1958) and most prominent AI languages. It was created by
Dr. John McCarthy, who coined the term 'Artificial Intelligence'.

3. JAVA

Java is also a great choice. It is an object-oriented programming language that focuses on
providing all the high-level features needed to work on AI projects; it is portable, and it offers
inbuilt garbage collection.
4. PROLOG

This language stands alongside Lisp when we talk about development in the AI field. The
features it provides include efficient pattern matching, tree-based data structuring, and
automatic backtracking.

5. C++

C++ is among the fastest programming languages. Its ability to talk at the hardware level
enables developers to improve their program execution time. C++ is extremely useful for AI
projects that are time-sensitive; search engines, for example, use C++ extensively.

--------------------------------------------------------------------------------------------------

1st unit long answers


ARTIFICIAL INTELLIGENCE

UNIT-I

LONG ANSWERS

1. Define Artificial Intelligence. Explain the techniques of A.I. Also describe the
characteristics of Artificial Intelligence?

ANS:

 A branch of computer science dealing with the simulation of intelligent behaviour in computers.

 The capability of a machine to imitate intelligent human behavior.

 Artificial Intelligence (AI) is a branch of science which deals with helping machines
find solutions to complex problems in a more human-like fashion.
 This generally involves borrowing characteristics from human intelligence and
applying them as algorithms in a computer-friendly way.
 A more or less flexible or efficient approach can be taken depending on the
requirements established, which influences how artificial the intelligent behaviour appears.

The Techniques of A.I:


In the real world, knowledge has some unwelcome properties −
 Its volume is huge, next to unimaginable.

 It is not well-organized or well-formatted.

 It keeps changing constantly.


An AI technique is a way to organize and use knowledge efficiently in such a way that −

 It should be perceivable by the people who provide it.

 It should be easily modifiable to correct errors.

 It should be useful in many situations even though it may be incomplete or inaccurate.


AI techniques elevate the speed of execution of the complex programs they are applied to.

Characteristics of Artificial Intelligence:


There are four main characteristics:
1. Adaptability
2. Learning
3. Predictive modeling
4. Problem identification

1. Adaptability:
By far the most commonly expressed attribute of intelligence is adaptability, which for us means
the speed and scope of adaptation to unforeseen situations, including recognition (of the
unforeseen situation), assessment, proposals (for reacting to it), selection (of an activity), and
execution.
Accurate prediction of effects is even better (and more successful), but we save that one for a
later section.
2. Learning:
Another common attribute of intelligence is learning, which for us is the rate of effective
learning of observations, behavior patterns, facts, tools, methods, etc.
There is an enormous literature on learning in humans and animals, but our interest here is
mainly on the measurements for computing systems that can learn.
3. Predictive modeling:
An important way to be less surprised at environmental phenomena is predictive
modeling, which for us means accurate modeling and prediction of the relevant external
environment.
This kind of modeling includes the ability to make more effective abstractions (which are treated
below in a later section).
4. Problem Identification:
The best way to respond to problems quickly is to identify them quickly, which requires
speed and clarity of problem identification and formulation.
---------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------
2.Discuss the areas of application of Artificial Intelligence?

Applications of AI :
AI has been dominant in various fields such as −

 Gaming − AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc.,
where the machine can think of a large number of possible positions based on heuristic
knowledge.

 Natural Language Processing − It is possible to interact with the computer that


understands natural language spoken by humans.

 Expert Systems − There are some applications which integrate machine, software, and
special information to impart reasoning and advising.

 They provide explanation and advice to the users.

 Vision Systems − These systems understand, interpret, and comprehend visual input on
the computer. For example,

o A spying aeroplane takes photographs, which are used to figure out spatial
information or a map of the area.

o Doctors use clinical expert systems to diagnose patients.

o Police use computer software that can recognize the face of a criminal using the
stored portrait made by a forensic artist.

Speech Recognition − Some intelligent systems are capable of hearing and
comprehending the language in terms of sentences and their meanings while a human
talks to them.

They can handle different accents, slang words, noise in the background, changes in a
human's voice due to cold, etc.

Handwriting Recognition –
1. The handwriting recognition software reads the text written on paper by a pen or on screen
by a stylus.

2. It can recognize the shapes of the letters and convert them into editable text.

Intelligent Robots − Robots are able to perform the tasks given by a human.

They have sensors to detect physical data from the real world such as light, heat,
temperature, movement, sound, bump, and pressure.

They have efficient processors, multiple sensors and huge memory, to exhibit
intelligence.

In addition, they are capable of learning from their mistakes and they can adapt to the
new environment.

3. Define intelligent system and explain the types?

ANS:

The ability of a system to calculate, reason, perceive relationships and analogies, learn from
experience, store and retrieve information from memory, solve problems, comprehend complex
ideas, use natural language fluently, classify, generalize, and adapt to new situations.

Types of Intelligence:
As described by Howard Gardner, an American developmental psychologist, intelligence
comes in multiple forms −

Linguistic intelligence − The ability to speak, recognize, and use mechanisms of phonology (speech sounds), syntax (grammar), and semantics (meaning). Examples: Narrators, Orators.

Musical intelligence − The ability to create, communicate with, and understand meanings made of sound; understanding of pitch and rhythm. Examples: Musicians, Singers, Composers.

Logical-mathematical intelligence − The ability to use and understand relationships in the absence of action or objects; understanding complex and abstract ideas. Examples: Mathematicians, Scientists.

Spatial intelligence − The ability to perceive visual or spatial information, change it, and re-create visual images without reference to the objects, construct 3D images, and move and rotate them. Examples: Map readers, Astronauts, Physicists.

Bodily-Kinesthetic intelligence − The ability to use the complete or part of the body to solve problems or fashion products, control over fine and coarse motor skills, and manipulate objects. Examples: Players, Dancers.

Intra-personal intelligence − The ability to distinguish among one's own feelings, intentions, and motivations. Example: Gautam Buddha.

Interpersonal intelligence − The ability to recognize and make distinctions among other people's feelings, beliefs, and intentions. Examples: Mass Communicators, Interviewers.

You can say a machine or a system is artificially intelligent when it is equipped with at least
one, and at most all, of these intelligences.
Intelligence Composed of:
Intelligence is intangible. It is composed of −

 Reasoning

 Learning

 Problem Solving

 Perception

 Linguistic Intelligence

Let us go through all the components briefly −

 Reasoning − It is the set of processes that enables us to provide basis for judgment,
making decisions, and prediction. There are broadly two types −

Inductive Reasoning:
o It conducts specific observations to make broad general statements.
o Even if all of the premises are true in a statement, inductive reasoning allows the conclusion to be false.
o Example − "Nita is a teacher. Nita is studious. Therefore, all teachers are studious."

Deductive Reasoning:
o It starts with a general statement and examines the possibilities to reach a specific, logical conclusion.
o If something is true of a class of things in general, it is also true for all members of that class.
o Example − "All women of age above 60 years are grandmothers. Stalin is 65 years old. Therefore, Stalin is a grandmother."

 Learning − It is the activity of gaining knowledge or skill by studying, practising, being


taught, or experiencing something. Learning enhances the awareness of the subjects of
the study.

The ability of learning is possessed by humans, some animals, and AI-enabled


systems. Learning is categorized as −

o Auditory Learning − It is learning by listening and hearing. For example,


students listening to recorded audio lectures.

o Episodic Learning − To learn by remembering sequences of events that one has


witnessed or experienced. This is linear and orderly.

o Motor Learning − It is learning by precise movement of muscles. For example,


picking objects, Writing, etc.

o Observational Learning − To learn by watching and imitating others. For


example, child tries to learn by mimicking her parent.

o Perceptual Learning − It is learning to recognize stimuli that one has seen


before. For example, identifying and classifying objects and situations.

o Relational Learning − It involves learning to differentiate among various
stimuli on the basis of relational properties rather than absolute properties. For
example, adding a little less salt when cooking potatoes that came up salty last
time, when they were cooked by adding, say, a tablespoon of salt.

o Spatial Learning − It is learning through visual stimuli such as images, colors,
maps, etc. For example, a person can create a roadmap in mind before actually
following the road.
o Stimulus-Response Learning − It is learning to perform a particular behavior
when a certain stimulus is present. For example, a dog raises its ears on hearing
the doorbell.

 Problem Solving − It is the process in which one perceives and tries to arrive at a
desired solution from a present situation by taking some path, which is blocked by
known or unknown hurdles.

Problem solving also includes decision making, which is the process of selecting the
best suitable alternative out of the multiple alternatives available to reach the desired
goal.

 Perception − It is the process of acquiring, interpreting, selecting, and organizing


sensory information.

Perception presumes sensing. In humans, perception is aided by sensory organs. In the


domain of AI, perception mechanism puts the data acquired by the sensors together in a
meaningful manner.

 Linguistic Intelligence − It is one’s ability to use, comprehend, speak, and write the
verbal and written language. It is important in interpersonal communication.

Difference between Human and Machine Intelligence:


 Humans perceive by patterns, whereas machines perceive by a set of rules and data.

 Humans store and recall information by patterns, machines do it by searching algorithms.


For example, the number 40404040 is easy to remember, store, and recall as its pattern
is simple.

 Humans can figure out the complete object even if some part of it is missing or distorted;
whereas the machines cannot do it correctly.

4.Discuss the methods of AI techniques?

ANS:
In the real world, knowledge has some unwelcome properties −

 Its volume is huge, next to unimaginable.

 It is not well-organized or well-formatted.

 It keeps changing constantly.

An AI technique is a way to organize and use knowledge efficiently in such a way that

 It should be perceivable by the people who provide it.

 It should be easily modifiable to correct errors.

 It should be useful in many situations even though it may be incomplete or inaccurate.

AI techniques elevate the speed of execution of the complex programs they are applied to.
• AI Techniques
 Even apparently radically different AI systems (such as rule-based expert systems
and neural networks) have many common techniques.
 Four important ones are:
1. Knowledge Representation: Knowledge needs to be represented somehow − perhaps as
a series of if-then rules, as a frame-based system, as a semantic network, or in the
connection weights of an artificial neural network.
2. Learning: Automatically building up knowledge from the environment − such as
acquiring the rules for a rule-based expert system, or determining the appropriate
connection weights in an artificial neural network.
3. Rule Systems: These could be explicitly built into an expert system by a knowledge engineer,
or implicit in the connection weights learnt by a neural network.
4. Search: This can take many forms − perhaps searching for a sequence of states that
leads quickly to a problem solution, or searching for a good set of connection weights
for a neural network by minimizing a fitness function.
UNIT-II

1.Define State Space?

 State space search is a process used in the field of computer science, including artificial
intelligence (AI), in which successive configurations or states of an instance are
considered, with the intention of finding a goal state with a desired property.
 Problems are often modelled as a state space, a set of states that a problem can be in.
 The set of states forms a graph where two states are connected if there is an operation
that can be performed to transform the first state into the second.
 State space search often differs from traditional computer science search methods
because the state space is implicit: the typical state space graph is much too large to
generate and store in memory.
 Instead, nodes are generated as they are explored, and typically discarded thereafter.
 A solution to a combinatorial search instance may consist of the goal state itself, or of a
path from some initial state to the goal state.

2. What is heuristic search technique with example?

 Heuristic search is an AI search technique that employs heuristic for its moves.
 Heuristic is a rule of thumb that probably leads to a solution. Heuristics play a major role
in search strategies because of the exponential nature of most problems.
 Heuristics help to reduce the number of alternatives from an exponential number to a
polynomial number.
 In Artificial Intelligence , heuristic search has a general meaning, and a more
specialized technical meaning.
 In a general sense, the term heuristic is used for any advice that is often effective, but is
not guaranteed to work in every case.
 Within the heuristic search architecture, however, the term heuristic usually refers to the
special case of a heuristic evaluation function .

Example: 8-puzzle

1 2 3
8   4
7 6 5

h = 3
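A minimal sketch of one common 8-puzzle heuristic, the number of misplaced tiles, is given below. The goal layout and the sample state are assumptions made for illustration; they are not meant to reproduce the exact board or the h = 3 value shown above.

# Minimal sketch: "misplaced tiles" heuristic for the 8-puzzle (0 is the blank).
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def h_misplaced(state):
    """Count tiles (excluding the blank) that are not in their goal position."""
    return sum(1 for i in range(3) for j in range(3)
               if state[i][j] != 0 and state[i][j] != GOAL[i][j])

state = ((2, 8, 3),
         (1, 6, 4),
         (7, 0, 5))
print(h_misplaced(state))   # 4 tiles are out of place in this sample state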

3. Define Iterative Deepening A* with example?

 Just as iterative deepening solved the space problem of breadth-first search , iterative
deepening A* (IDA*) eliminates the memory constraints of A* search algorithm
without sacrificing solution optimality.
 Each iteration of the algorithm is a depth-first search that keeps track of the cost, f(n)
= g(n) + h(n), of each node generated.
 As soon as a node is generated whose cost exceeds a threshold for that iteration, its path
is cut off, and the search backtracks before continuing.
 The cost threshold is initialized to the heuristic estimate of the initial state, and in each
successive iteration is increased to the total cost of the lowest-cost node that was pruned
during the previous iteration.
 The algorithm terminates when a goal state is reached whose total cost does not exceed
the current threshold.

Example (from an A* illustration): Towers of Hanoi, with a little disk and a big disk on three pegs.
- Goal: move both disks onto Peg 3.
- Rule: never put the big disk on top of the little disk.
4. Distinguish between problem reduction and game playing?

 When a problem can be divided into a set of sub-problems, where each sub-problem can
be solved separately and a combination of these will be a solution, AND-OR graphs or
AND-OR trees are used for representing the solution.
 The decomposition of the problem, or problem reduction, generates AND arcs.
 One AND arc may point to any number of successor nodes, all of which must be solved
for the arc to point to a solution.
 Several arcs may emerge from a single node, indicating several possible ways of solving
the problem.
 Hence the graph is known as an AND-OR graph rather than an AND graph. The figure
shows an AND-OR graph.
 An algorithm to find a solution in an AND-OR graph must handle AND arcs
appropriately. The A* algorithm cannot search AND-OR graphs efficiently.
 This can be understood from the given figure.
2ND UNIT LONG ANSWERS

UNIT-II

LONG ANSWERS

1. Define Heuristic search? What are the advantages of Heuristic search?

ANS:

Heuristics:

o A heuristic is a way of trying to discover something, or an idea embedded in a
program. The term is used variously in AI.
o Heuristic functions are used in some approaches to search to measure how far a
node in a search tree seems to be from a goal.
o Heuristic predicates that compare two nodes in a search tree to see if one
is better than the other, i.e. constitutes an advance toward the goal, may
be more useful.

HEURISTIC INFORMATION
 In order to solve larger problems, domain-specific knowledge must be added to
improve search efficiency.
 Information about the problem includes the nature of states, the cost of transforming
from one state to another, and the characteristics of the goals.
 This information can often be expressed in the form of heuristic evaluation
function, say f(n,g), a function of the nodes n and/or the goals g.
Following is a list of heuristic search techniques:
1. Pure Heuristic Search
2. A* algorithm
3. Iterative-Deepening A*
4. Depth-First Branch-And-Bound
5. Heuristic Path Algorithm
6. Recursive Best-First Search

1. Pure Heuristic Search:

 The simplest of heuristic search algorithms, pure heuristic search, expands
nodes in order of their heuristic values h(n).
 It maintains a closed list of those nodes that have already been expanded, and an
open list of those nodes that have been generated but not yet expanded. The
algorithm begins with just the initial state on the open list.
 At each cycle, a node on the open list with the minimum h(n) value is expanded,
generating all of its children, and is placed on the closed list.
2. A* algorithm:
 Suppose that, for each node n in a search tree, an evaluation function f(n) is defined as
the sum of the cost g(n) to reach that node from the start state, plus an estimated cost h(n)
to get from that state to the goal state. That f(n) is then the estimated cost of the cheapest
solution through n.
• A* search, which is the most popular form of best-first search, repeatedly picks
the node with the lowest f(n) to expand next. It turns out that if the heuristic
function h(n) satisfies certain conditions, then this strategy is both complete and
optimal.
• In particular, if h(n) is an admissible heuristic, i.e. is always optimistic and never
overestimates the cost to reach the goal, then A* is optimal.

• The classic example is finding the route by road between two cities given the
straight line distances from each road intersection to the goal city. In this case, the
nodes are the intersections, and we can simply use the straight line distances as
h(n).
3. Iterative-Deepening A*:
 Just as iterative deepening solved the space problem of breadth-first search,
iterative deepening A* (IDA*) eliminates the memory constraints of the A*
search algorithm without sacrificing solution optimality.
 Each iteration of the algorithm is a depth-first search that keeps track
of the cost, f(n) = g(n) + h(n), of each node generated.
 As soon as a node is generated whose cost exceeds a threshold for
that iteration, its path is cut off, and the search backtracks before
continuing.
 The cost threshold is initialized to the heuristic estimate of the initial
state, and in each successive iteration is increased to the total cost of
the lowest-cost node that was pruned during the previous iteration.
 The algorithm terminates when a goal state is reached whose total
cost does not exceed the current threshold.

4. Depth-First Branch-And-Bound:
 For many problems, the maximum search depth is known in advance or the search
is finite.
 For example, consider the traveling salesman problem (TSP) of visiting each of
the given set of cities and returning to the starting city in a tour of shortest total
distance.
 The most natural problem space for this problem consists of a tree where the root
node represents the starting city, the nodes at level one represent all the cities that
could be visited first, the nodes at level two represent all the cities that could be
visited second, etc.
 In this tree, the maximum depth is the number of cities, and all candidate solutions
occur at this depth. In such a space, a simple depth-first search guarantees finding
an optimal solution using space that is only linear with respect to the number of
cities.

5. Heuristic Path Algorithm:

 Since the complexity of finding optimal solutions to these problems is generally
exponential in practice, in order to solve significantly larger problems the optimality
requirement must be relaxed.

 An early approach to this problem was the heuristic path algorithm (HPA).

 Heuristic path algorithm is a best-first search algorithm, where the figure of merit of node
n is f(n) = (1-w)*g(n)+w*h(n).
 Varying w produces a range of algorithms from uniform-cost search (w=0) through A*
(w=1/2) to pure heuristic search (w=1).

 Increasing w beyond ½ generally decreases the amount of computation while increasing


the cost of the solution generated.

 The trade off is often quite favorable, with small increases in solution cost yielding huge
savings in computation.

 Moreover, it shows that the solutions found by this algorithm are guaranteed to be no
more than a factor of w/(1-w) greater than optimal, but often are significantly better.
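The weighting scheme itself is easy to see in code. The snippet below just evaluates f(n) = (1 − w)·g(n) + w·h(n) for a few values of w, using placeholder g and h values, to show how w = 0, 1/2, and 1 correspond to uniform-cost search, A*-style ordering, and pure heuristic search.

# Minimal sketch: the weighted evaluation function of the heuristic path algorithm.
def hpa_f(g, h, w):
    """f(n) = (1 - w) * g(n) + w * h(n)."""
    return (1 - w) * g + w * h

g, h = 6.0, 4.0                  # placeholder cost-so-far and heuristic estimate
print(hpa_f(g, h, 0.0))          # 6.0 -> uniform-cost search (h ignored)
print(hpa_f(g, h, 0.5))          # 5.0 -> same ordering as A* (f scaled by 1/2)
print(hpa_f(g, h, 1.0))          # 4.0 -> pure heuristic search (g ignored)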

6. Recursive Best-First Search:

 The memory limitation of the heuristic path algorithm can be overcome simply by
replacing the best-first search with IDA* search using the same weighted evaluation
function, with w >= 1/2.

 IDA* search is no longer a best-first search, since the total cost of a child can be less
than that of its parent, and thus nodes are not necessarily expanded in best-first order.
Recursive Best-First Search (RBFS) is an alternative algorithm. Recursive best-first
search is a best-first search that runs in space that is linear with respect to the maximum
search depth, regardless of the cost function used. Even with an admissible cost function,
Recursive Best-First Search generates fewer nodes than IDA*, and is generally superior
to IDA*, except for a small increase in the cost per node generation.

 It works by maintaining on the recursion stack the complete path to the current node
being expanded as well as all immediate siblings of nodes on that path, along with the
cost of the best node in the sub-tree explored below each sibling. Whenever the cost of
the current node exceeds that of some other node in the previously expanded portion of
the tree, the algorithm backs up to their deepest common ancestor, and continues the
search down the new path. In effect, the algorithm maintains a separate threshold for each
sub-tree diverging from the current search path.

ADVANTAGES AND DISADVANTAGES OF HEURISTICS

 A heuristic evaluation should not replace usability testing. Although the heuristics relate
to criteria that affect your site's usability, the issues identified in a heuristic evaluation are
different than those found in a usability test.

Advantages:
 It can provide some quick and relatively inexpensive feedback to designers.
 You can obtain feedback early in the design process.
 Assigning the correct heuristic can help suggest the best corrective measures to designers.
 You can use it together with other usability testing methodologies.
 You can conduct usability testing to further examine potential issues.

Disadvantages:
 It requires knowledge and experience to apply the heuristics effectively.
 Trained usability experts are sometimes hard to find and can be expensive.
 You should use multiple experts and aggregate their results.
 The evaluation may identify more minor issues and fewer major issues.

2.Explain about alpha-beta pruning with examples?

ANS:

Alpha-Beta Search: A Brief Walkthrough:

Here we are going to go through a generalized run-through of the algorithm. It may help your
understanding to use a diagram for reference while reading.

For each node visited we will assign storage for the path taken to backtrack to that node, a value
called Alpha and a value called Beta, as well as the current score for that node.

Set the value of Alpha at the initial node to -Limit and Beta to +Limit. Because initially these
are the max values that Alpha or Beta could possibly obtain.
o Search down the tree to the given depth.

o Once reaching the bottom, calculate the evaluation for this node.(i.e. it's
utility)

o Backtrack, propagating values and paths according to the following:

 If the move being backtracked would be made by the opponent:

o If the current score is less than the score stored at the parent, replace the score at
the parent with this and store the path from the bottom and the value of Beta in
the parent.

o If the score at the parent is now less than Alpha stored at that parent, ignore any
further children of this parent and backtrack the parent's value of Alpha and Beta
up the tree.

 If the score at the parent is greater than Alpha, set the Alpha value of the parent to this
score and proceed with the next child, sending Alpha and Beta down. If no children
exist, propagate Alpha and Beta up the tree and propagate the value of Alpha up as the
min score.

 If the move being backtracked would be made by the computer:

o If the current score is more than the score stored at the parent, replace the score at
the parent with this and store the path from the bottom and the value of Alpha in
the parent.

o If the score at the parent is now more than Beta stored at that parent, ignore any
further children of this parent and backtrack the parent's value
of Alpha and Beta up the tree.

o If the score at the parent is less than Beta, set the Beta value of the parent to this
score and proceed with the next child, sending Alpha and Beta down. If no
children exist, propagate Alpha and Beta up the tree and propagate the value
of Beta up as the max score.

When the search is complete, the Alpha value at the top node gives the minimum score
that the player is guaranteed to attain by using the path stored at the top node.
Let’s make above algorithm clear with an example.

 The initial call starts from A. The value of alpha here is -INFINITY and the value of beta
is +INFINITY. These values are passed down to subsequent nodes in the tree. At A the
maximizer must choose max of B and C, so A calls B first
 At B, the minimizer must choose the min of D and E, and hence B calls D first.
 At D, it looks at its left child which is a leaf node. This node returns a value of 3. Now
the value of alpha at D is max( -INF, 3) which is 3.
 To decide whether it's worth looking at its right node or not, it checks the condition
beta <= alpha. This is false since beta = +INF and alpha = 3. So it continues the search.
 D now looks at its right child, which returns a value of 5. At D, alpha = max(3, 5), which is
5. Now the value of node D is 5.
 D returns a value of 5 to B. At B, beta = min( +INF, 5) which is 5. The minimizer is now
guaranteed a value of 5 or lesser. B now calls E to see if he can get a lower value than 5.
 At E the values of alpha and beta is not -INF and +INF but instead -INF and 5
respectively, because the value of beta was changed at B and that is what B passed down
to E
 Now E looks at its left child which is 6. At E, alpha = max(-INF, 6) which is 6. Here the
condition becomes true. beta is 5 and alpha is 6. So beta<=alpha is true. Hence it breaks
and E returns 6 to B
 Note how it did not matter what the value of E's right child is. It could have been +INF
or -INF; it still wouldn't matter. We never even had to look at it, because the minimizer was
guaranteed a value of 5 or lesser. So as soon as the maximizer saw the 6, it knew the
minimizer would never come this way, because it can get a 5 on the left side of B. This
way we didn't have to look at that 9, and hence saved computation time.
 E returns a value of 6 to B. At B, beta = min(5, 6), which is 5. The value of node B is also
5.
So far this is how our game tree looks. The 9 is crossed out because it was never computed.

 B returns 5 to A. At A, alpha = max( -INF, 5) which is 5. Now the maximizer is


guaranteed a value of 5 or greater. A now calls C to see if it can get a higher value than 5.
 At C, alpha = 5 and beta = +INF. C calls F
 At F, alpha = 5 and beta = +INF. F looks at its left child which is a 1. alpha = max( 5, 1)
which is still 5.
 F looks at its right child which is a 2. Hence the best value of this node is 2. Alpha still
remains 5
 F returns a value of 2 to C. At C, beta = min( +INF, 2). The condition beta <= alpha
becomes true as beta = 2 and alpha = 5. So it breaks and it does not even have to compute
the entire sub-tree of G.
 The intuition behind this break-off is that, at C, the minimizer was guaranteed a value of 2
or lesser. But the maximizer was already guaranteed a value of 5 if he chose B. So why
would the maximizer ever choose C and get a value of 2 or less? Again you can see that it
did not matter what those last two values were. We also saved a lot of computation by
skipping a whole sub-tree.
 C now returns a value of 2 to A. Therefore the best value at A is max( 5, 2) which is a 5.
 Hence the optimal value that the maximizer can get is 5
This is how our final game tree looks like. As you can see G has been crossed out as it was never
computed.
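A compact sketch of minimax with alpha-beta pruning is given below. The nested-list game tree mirrors the walkthrough above (D = [3, 5], E = [6, 9], F = [1, 2]); the values for G's children are assumptions, since the walkthrough never needed to examine them.

# Minimal sketch of minimax with alpha-beta pruning on a two-ply game tree.
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):            # leaf: return its utility
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if beta <= alpha:                     # cut-off: opponent will avoid this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:                     # cut-off
                break
        return value

# A (max) -> B, C (min) -> D, E, F, G (max) -> leaf utilities
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]      # G's leaves (0, -1) are assumed
print(alphabeta(tree, True))                      # 5, matching the walkthrough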

3. Explain about BFS and DFS algorithms?

ANS:

• Breadth First Search (BFS):

BFS expands the leaf node with the lowest path cost so far, and keeps going until a goal node
is generated. If the path cost simply equals the number of links, we can implement this as a
simple queue (“first in, first out”).

• This is guaranteed to find an optimal path to a goal state. It is memory intensive if the state
space is large. If the typical branching factor is b, and the depth of the shallowest goal state is
d, the space complexity is O(b^d), and the time complexity is O(b^d).
• BFS is an easy search technique to understand. The algorithm is presented below.
breadth_first_search ()
{
    store initial state in queue Q ;
    set the state at the front of Q as the current state ;
    while (goal state is not reached AND Q is not empty)
    {
        apply a rule to generate a new state from the current state ;
        if (new state is goal state) quit ;
        else add the new state to the end of Q ;
        if (all states that can be generated from the current state are exhausted)
        {
            delete the current state from Q ;
            set the front element of Q as the current state ;
        }
        else continue ;
    }
}

• The algorithm is illustrated using the bridge components configuration problem. The initial
state is PDFG, which is not a goal state; hence set it as the current state. Generate another
state DPFG (by swapping the 1st and 2nd position values) and add it to the list. That is not a goal
state; hence, generate the next successor state, which is FDPG (by swapping the 1st and 3rd position
values). This is also not a goal state; hence add it to the list and generate the next successor
state GDFP.
• Only three states can be generated from the initial state. Now the queue Q will have three
elements in it, viz., DPFG, FDPG and GDFP. Now take DPFG (the first state in the list) as the
current state and continue the process, until all the states generated from it are evaluated.
Continue this process until the goal state DGPF is reached.
• The 14th evaluation gives the goal state. It may be noted that all the states at one level in
the tree are evaluated before the states in the next level are taken up; i.e., the evaluations are
carried out breadth-wise. Hence, the search strategy is called breadth-first search.
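A runnable sketch of breadth-first search on a small explicit graph is shown below; the graph, start node, and goal are illustrative assumptions rather than the bridge-components problem above.

# Minimal sketch of BFS: a FIFO queue of paths, expanded level by level.
from collections import deque

def bfs(graph, start, goal):
    queue = deque([[start]])            # queue of paths ("first in, first out")
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path                 # the shallowest goal is found first
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                queue.append(path + [nbr])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
print(bfs(graph, "A", "F"))             # ['A', 'B', 'D', 'F']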
• Depth First Search (DFS):
DFS expands the leaf node with the highest path cost so far, and keeps going until a goal
node is generated. If the path cost simply equals the number of links, we can implement this
as a simple stack (“last in, first out”).
• This is not guaranteed to find any path to a goal state. It is memory efficient
even if the state space is large. If the typical branching factor is b, and the
maximum depth of the tree is m, the space complexity is O(bm), and the time
complexity is O(b^m).
• In DFS, instead of generating all the states below the current level, only the
first state below the current level is generated and evaluated recursively. The
search continues till a further successor cannot be generated.
• Then it goes back to the parent and explores the next successor. The algorithm is given
below.
depth_first_search ()
{
    set initial state to current state ;
    if (current state is goal state) quit ;
    else
    {
        if (a successor for the current state exists)
        {
            generate a successor of the current state and set it as the current state ;
        }
        else return ;
        depth_first_search (current_state) ;
        if (goal state is achieved) return ;
        else continue ;
    }
}
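For comparison with the BFS sketch earlier, the version below simply swaps the FIFO queue for a LIFO stack; the example graph is the same illustrative one.

# Minimal sketch of DFS: a stack of paths ("last in, first out").
def dfs(graph, start, goal):
    stack = [[start]]                   # stack of paths
    visited = {start}
    while stack:
        path = stack.pop()              # most recently generated path first
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                stack.append(path + [nbr])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
print(dfs(graph, "A", "F"))             # one valid path, e.g. ['A', 'C', 'E', 'F']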

• Since DFS stores only the states in the current path, it uses much less memory during the
search compared to BFS.
• The probability of arriving at a goal state with fewer evaluations is higher
with DFS than with BFS. This is because, in BFS, all the states in a level have to be
evaluated before states in the lower level are considered. DFS is very efficient when many
acceptable solutions exist, so that the search can be terminated once the first acceptable
solution is obtained.
• BFS is advantageous in cases where the tree is very deep.
• An ideal search mechanism is to combine the advantages of BFS and DFS.

4. Discuss the Iterative Deepening A* algorithm?


ANS:
• A* Search: Suppose that, for each node n in a search tree, an evaluation function f(n) is
defined as the sum of the cost g(n) to reach that node from the start state, plus an estimated cost
h(n) to get from that state to the goal state. That f(n) is then the estimated cost of the cheapest
solution through n.
• A* search, which is the most popular form of best-first search, repeatedly picks the node
with the lowest f(n) to expand next. It turns out that if the heuristic function h(n) satisfies
certain conditions, then this strategy is both complete and optimal.
• In particular, if h(n) is an admissible heuristic, i.e. is always optimistic and
never overestimates the cost to reach the goal, then A* is optimal.
• The classic example is finding the route by road between two cities given the straight line
distances from each road intersection to the goal city. In this case, the nodes are the
intersections, and we can simply use the straight line distances as h(n).
ITERATIVE DEEPENING A* (IDA*) SEARCH:
 Just as iterative deepening solved the space problem of breadth-first
search, iterative deepening A* (IDA*) eliminates the memory
constraints of the A* search algorithm without sacrificing solution
optimality.
 Each iteration of the algorithm is a depth-first search that keeps track
of the cost, f(n) = g(n) + h(n), of each node generated. As soon as a
node is generated whose cost exceeds a threshold for that iteration, its
path is cut off, and the search backtracks before continuing.
 The cost threshold is initialized to the heuristic estimate of the initial
state, and in each successive iteration is increased to the total cost of
the lowest-cost node that was pruned during the previous iteration.

 The algorithm terminates when a goal state is reached whose


total cost does not exceed the current threshold.
 Since Iterative Deepening A* performs a series of depth-first
searches, its memory requirement is linear with respect to the
maximum search depth. In addition, if the heuristic function is
admissible, IDA* finds an optimal solution.
 Finally, by an argument similar to that presented for DFID,
IDA* expands the same number of nodes, asymptotically, as
A* on a tree, provided that the number of nodes grows
exponentially with solution cost.
 These costs, together with the optimality of A*, imply that
IDA* is asymptotically optimal in time and space over all
heuristic search algorithms that find optimal solutions on a
tree.
 Additional benefits of IDA* are that it is much easier to
implement, and often runs faster than A*, since it does not
incur the overhead of managing the open and closed lists.
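A minimal IDA* sketch is given below, reusing the same illustrative graph and heuristic as the earlier A* sketch. Each outer iteration is a cost-bounded depth-first search, and the threshold is raised to the smallest f-value that was pruned in the previous iteration.

# Minimal sketch of IDA*: repeated cost-bounded depth-first searches.
import math

def ida_star(graph, h, start, goal):
    def bounded_dfs(node, g, threshold, path):
        f = g + h[node]
        if f > threshold:
            return f, None                     # prune; report f for the next threshold
        if node == goal:
            return f, path
        minimum = math.inf
        for nbr, cost in graph.get(node, []):
            if nbr not in path:                # avoid cycles along the current path
                t, found = bounded_dfs(nbr, g + cost, threshold, path + [nbr])
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
        return minimum, None

    threshold = h[start]                       # initial threshold = heuristic of the start
    while True:
        t, found = bounded_dfs(start, 0, threshold, [start])
        if found is not None:
            return found
        if t == math.inf:
            return None                        # no solution exists
        threshold = t                          # lowest cost that exceeded the old bound

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 3)]}
h = {"S": 4, "A": 3, "B": 2, "G": 0}
print(ida_star(graph, h, "S", "G"))            # ['S', 'A', 'B', 'G']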
UNIT-III

1.Write about Propositional Logic?

ANS:

 One of the points of logic is that you can reason about statements even when you don't
know what those statements mean.

 We can replace statements, or "propositions," with variable names.

 So, for example, you can say "It's raining and I'm wet," which is a representation as
characters describing an utterance in natural language.
 In logic, we might represent this like so:
(a ∧ b)

 In this case the proposition "It's raining" is represented by "a" and "I'm wet" is
represented by b.
 The ∧ symbol stands for "and."
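A tiny sketch of this idea in code: an interpretation assigns truth values to the propositional variables a and b, and the compound (a ∧ b) is evaluated from them. The variable names follow the example above; everything else is illustrative.

# Minimal sketch: evaluating (a AND b) under an interpretation, plus its truth table.
from itertools import product

def evaluate_and(interpretation):
    """Truth value of (a ∧ b) given e.g. {'a': True, 'b': False}."""
    return interpretation["a"] and interpretation["b"]

# a = "It's raining", b = "I'm wet"
print(evaluate_and({"a": True, "b": True}))     # True
print(evaluate_and({"a": True, "b": False}))    # False

for a, b in product([True, False], repeat=2):   # full truth table for a ∧ b
    print(a, b, a and b)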

2. Define semantic nets with examples?

ANS:

 A semantic net (or semantic network) is a knowledge representation


technique used for propositional information.
 So it is also called a propositional net.
 Semantic nets convey meaning.
 They are two-dimensional representations of knowledge. Mathematically, a
semantic net can be defined as a labelled directed graph.

Figure: A Semantic Network


3.Write about Predicate logic?

ANS:

 Predicate Logic deals with predicates, which are propositions containing variables.

 Predicate Logic – Definition

 A predicate is an expression of one or more variables defined on some specific domain.

 A predicate with variables can be made a proposition by either assigning a value to the
variable or by quantifying the variable.

 The following are some examples of predicates −

 Let E(x, y) denote "x = y"

 Let X(a, b, c) denote "a + b + c = 0"

 Let M(x, y) denote "x is married to y"


Well Formed Formula

 Well Formed Formula (wff) is a predicate holding any of the following −

 All propositional constants and propositional variables are wffs

 If x is a variable and Y is a wff, ∀xY and ∃xY are also wffs


 Truth value and false values are wffs

 Each atomic formula is a wff

 All connectives connecting wffs are wffs

Quantifiers

 The variable of predicates is quantified by quantifiers.

 There are two types of quantifier in predicate logic − Universal Quantifier and
Existential Quantifier.

 Universal Quantifier

 Universal quantifier states that the statements within its scope are true for every value of
the specific variable.
 It is denoted by the symbol ∀.
 ∀x P(x) is read as: for every value of x, P(x) is true.
 Example − "Man is mortal" can be transformed into the propositional form
∀x P(x), where P(x) is the predicate which denotes that x is mortal and the universe of
discourse is all men.
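Over a finite domain, the two quantifiers can be illustrated directly with Python's all() and any(); the domain and predicates below are illustrative assumptions.

# Minimal sketch: ∀ and ∃ over a small finite domain.
domain = [-2, -1, 0, 1, 2]

def P(x):
    return x * x >= 0               # "x squared is non-negative"

def Q(x):
    return x > 0                    # "x is positive"

print(all(P(x) for x in domain))    # True:  ∀x P(x) holds on this domain
print(all(Q(x) for x in domain))    # False: ∀x Q(x) fails (e.g. x = 0)
print(any(Q(x) for x in domain))    # True:  ∃x Q(x) holds (e.g. x = 1)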

4.Define Axiomatic systems?

ANS:

 In mathematics, an axiomatic system is any set of axioms from which some or all
axioms can be used in conjunction to logically derive theorems.
 A theory consists of an axiomatic system and all its derived theorems. An axiomatic
system that is completely described is a special kind of formal system.
 A formal theory typically means an axiomatic system, for example formulated within
model theory.
 A formal proof is a complete rendition of a mathematical proof within a formal system.

 Stating definitions and propositions in a way such that each new term can be formally
eliminated by the previously introduced terms requires primitive notions (axioms) to avoid
infinite regress.
 This way of doing mathematics is called the axiomatic method.[1]
 A common attitude towards the axiomatic method is logicism.
 In their book Principia Mathematica, Alfred North Whitehead and Bertrand Russell
attempted to show that all mathematical theory could be reduced to some collection of
axioms.
 More generally, the reduction of a body of propositions to a particular collection of
axioms underlies the mathematician's research program.
 This was very prominent in the mathematics of the twentieth century, in particular in
subjects based around homological algebra.

AI LONG ANSWERS
UNIT-III
LONG ANSWERS

1. What is predicate logic? Explain the predicate logic representation with reference to
suitable example?
ANS:
Predicate logic:
 The world is described in terms of elementary propositions and their logical
combinations.
 In predicate logic, the formalism of propositional logic is extended and made
more fine-grained.
 Thus it is possible to represent more complicated expressions of natural language
and use them in formal inference.

ALGORITHM: RESOLUTION IN PREDICATE LOGIC:


 1. Convert all the statements of F to clause form.
 2. Negate P and convert the result to clause form. Add it to the set of clauses
obtained in step 1.
 3. Repeat until either a contradiction is found, no progress can be made, or a
predetermined amount of effort has been expended:
o a) Select two clauses. Call these the parent clauses.
o b) Resolve them together. The resulting clause, called the resolvent, will be
the disjunction of all of the literals of both the parent clauses, with
appropriate substitutions performed and with the following exception: if
there is a pair of literals T1 and T2 such that one of the parent clauses
contains T1 and the other contains T2, and if T1 and T2 are unifiable, then
neither T1 nor T2 should appear in the resolvent. We call T1 and T2
complementary literals. Use the substitution produced by the unification to
create the resolvent. If there is more than one pair of complementary
literals, only one such pair should be omitted from the resolvent.
o c) If the resolvent is the empty clause, then a contradiction has been found. If
it is not, then add it to the set of clauses available to the procedure.
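The procedure above is stated for predicate logic and relies on unification. As a simplified illustration, the sketch below implements resolution refutation for propositional clauses only, where complementary literals are just P and ~P and no substitution is needed; the clause encoding and the sample knowledge base are assumptions made for the example.

# Minimal sketch: propositional resolution refutation (no unification).
# A clause is a frozenset of literal strings; "~" marks negation.
def resolve(c1, c2):
    """Return all resolvents of two clauses (each omits one complementary pair)."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

def refute(clauses):
    """True if the clause set is unsatisfiable, i.e. the empty clause is derivable."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:                  # empty clause: contradiction found
                        return True
                    new.add(frozenset(r))
        if new <= clauses:                     # no progress can be made
            return False
        clauses |= new

# Prove R from {P, P -> Q, Q -> R} by refuting the set extended with ~R.
kb = [frozenset({"P"}), frozenset({"~P", "Q"}), frozenset({"~Q", "R"}), frozenset({"~R"})]
print(refute(kb))                              # True: the empty clause is derived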

Predicate Logic – Definition


 A predicate is an expression of one or more variables defined on some specific domain.

 A predicate with variables can be made a proposition by either assigning a value to the
variable or by quantifying the variable.

 The following are some examples of predicates −

 Let E(x, y) denote "x = y"


 Let X(a, b, c) denote "a + b + c = 0"

 Let M(x, y) denote "x is married to y"


Well Formed Formula
 Well Formed Formula (wff) is a predicate holding any of the following −

 All propositional constants and propositional variables are wffs

 If x is a variable and Y is a wff, ∀xY and ∃xY are also wffs


 Truth value and false values are wffs

 Each atomic formula is a wff

 All connectives connecting wffs are wffs

Quantifiers
 The variable of predicates is quantified by quantifiers. There are two types of quantifier
in predicate logic − Universal Quantifier and Existential Quantifier.

Universal Quantifier
 Universal quantifier states that the statements within its scope are true for every value of
the specific variable.
 It is denoted by the symbol ∀.
 ∀x P(x) is read as: for every value of x, P(x) is true.
Example –
 "Man is mortal" can be transformed into the propositional form ∀x P(x), where
P(x) is the predicate which denotes that x is mortal and the universe of discourse is all men.
 Existential Quantifier
 Existential quantifier states that the statements within its scope are true for some values
of the specific variable. It is denoted by the symbol ∃.
 ∃x P(x) is read as: for some values of x, P(x) is true.
Example –
 "Some people are dishonest" can be transformed into the propositional
form ∃x P(x), where P(x) is the predicate which denotes that x is dishonest and the
universe of discourse is some people.
 Nested Quantifiers
 If we use a quantifier that appears within the scope of another quantifier, it is called
nested quantifier.

Example

 ∀a ∃b P(a, b), where P(a, b) denotes a + b = 0

 ∀a ∀b ∀c P(a, b, c), where P(a, b, c) denotes a + (b + c) = (a + b) + c

Note − ∀a ∃b P(a, b) ≠ ∃a ∀b P(a, b).
--------------------------------------------------------------------------------------

2. Discuss resolution refutation in propositional logic?

ANS:

Propositional Logic :

• There are a number of possible systems of logic.

• The system we have been examining so far is called propositional logic.

• The language that is used to express propositional logic is called the propositional
calculus.

• A logical system can be defined in terms of its syntax (the alphabet of symbols
and how they can be combined), its semantics (what the symbols mean), and a set of rules of
deduction that enable us to derive one expression from a set of other expressions and
thus make arguments and proofs.

• Syntax

 We have already examined the syntax of propositional calculus. The alphabet of
symbols, ∑, is defined as follows:

∑ = {true, false, ¬, →, (, ), ∧, ∨, ↔, p1, p2, p3, . . . , pn, . . . }


 Here we have used set notation to define the possible values that are contained within the
alphabet ∑.

 Note that we allow an infinite number of proposition letters, or propositional


symbols, p1, p2, p3, . . . , and so on.

 More usually, we will represent these by capital letters P, Q, R, and so on.

 If we need to represent a very large number of them, we will use the subscript notation
(e.g., p1).

 An expression is referred to as a well-formed formula (often abbreviated as wff) or


a sentence if it is constructed correctly, according to the rules of the syntax of
propositional calculus, which are defined as follows.

 In these rules, we use A, B, C to represent sentences. In other words, we
define a sentence recursively, in terms of other sentences.

 The following are well-formed sentences:

P, Q, R, . . .

true, false

(A)

¬A

 Semantics

 The semantics of the operators of propositional calculus can be defined in terms of truth
tables.

 The meaning of P ∧ Q is defined as "true when P is true and Q is also true."

 The meaning of symbols such as P and Q is arbitrary and could be ignored altogether if we
were reasoning about pure logic.

 In other words, reasoning about sentences such as P ∨ Q ∧ ¬R is possible without
considering what P, Q, and R mean.
 Because we are using logic as a representational method for artificial intelligence,
however, it is often the case that when using propositional logic, the meanings of these
symbols are very important.

 The beauty of this representation is that it is possible for a computer to reason about them
in a very general way, without needing to know much about the real world.

 In other words, if we tell a computer, "I like ice cream, and I like chocolate," it might
represent this statement as A ∧ B, which it could then use to reason with, and, as we will see,
it can use this to make deductions.

----------------------------------------------------------------------------------

3. Explain the semantic tableau system in propositional logic?

ANS:

 I'm taking a course in Mathematical Logic right now and we have to use semantic tableau
to find out if a formula is satisfiable (some interpretations give a value of T).
 Given these examples for logical formulas A and B (Ben-Ari, Mathematical Logic for
Computer Science, Fig. 2.7):
 How do I determine how to build the tree? I know that the first time you decompose the
formula you remove the conjunction, like this:

p ∧ (¬q ∨ ¬p)
      ↓
p, ¬q ∨ ¬p

But I'm not sure why this

   p, ¬q ∨ ¬p
    /       \
p, ¬q      p, ¬p

 happens.
 Why did this decomposition happen?
 Can someone explain to me how the tree came to be, step by step?
 I read the textbook (Ben-Ari Mathematical Logic for Computer Science) and I'm still
confused at how to build the tree.

 The method of semantic tableaux is an efficient decision procedure for satisfiability (and
by duality validity) in propositional logic.
 The principle behind semantic tableaux is very simple: search for a model (satisfying
interpretation) by decomposing the formula into sets of atoms [e.g. propositional letters :
p,q,…] and negations of atoms.
 It is easy to check if there is an interpretation for each set: a set of atoms and negations
of atoms is satisfiable iff the set does not contain an atom p and its negation ¬p.
 The formula is satisfiable iff one of these sets is satisfiable.
 For each formula, every step is uniquely defined, because you have to decompose the
formula according to the principal connective.
 A literal is an atom or the negation of an atom.
 An atom is a positive literal and the negation of an atom is a negative literal.
 For any atom p,{p,¬p} is a complementary pair of literals.
 Let :

A=p∧(¬q∨¬p).

 The principal operator of A is conjunction, so [by truth-tables for connectives; see page
16] vI(A)=T if and only if both vI(p)=T and vI(¬q∨¬p)=T.
 The principal operator of ¬q∨¬p is disjunction, so vI(¬q∨¬p)=T if and only if either
vI(¬q)=T or vI(¬p)=T.
 Thus we have to apply the procedure to A with the goal of verifying whether the formula A is
satisfiable or not.
 The procedure will always end, because a formula is a finite string of symbols, with only
a finite number of occurrences of connectives.
 At every step we decompose the last formula of a branch according to its principal
connective, applying the above rules.
 If the formula B has as principal connective the conjunction, it is B1∧B2.
 Thus, according to the rules for evaluating the connectives, in order to satisfy B we have
that both B1 and B2 must be true.
 Thus, we add a new node to the branch with both B1 and B2. If the formula B has as
principal connective the disjunction, it is B1 ∨ B2.
 Thus, according to the rules for evaluating the connectives, in order to satisfy B we have
that at least one of B1 and B2 must be true. Thus, we branch, one for each possibility.
 Thus, for :

p∧(¬q∨¬p)

 you can only apply the rule for ∧, because it is the principal connective.
 In the second step :
p,¬q∨¬p

 p is atomic, i.e. indecomposable. Thus you can only decompose ¬q∨¬p, applying the rule
for ∨.
 You have two formulae; thus you can choose how to start.
 First decompose ¬p∧¬q and then p∨q, with the branching.
 Thus, the "strategy" is: if you can (as in B), use the "branching" rules at the end, in order
to have more "compact" trees.
 I checked the book, and I think the construction of the semantic tableaux is quite well
explained, but I will try to give you some hints about the procedure.

 The objective is find a model, an interpretation (a truth assignment to the atoms p,q,...)
that satisfies the formula. You place the formula at the root (top node) of the tree, and you
decompose it, step by step starting from the main operator (connective). At each step
(node), depending on the form of the formula that you are decomposing, the tree splits
(the node has two child nodes) or not (see Table 2.8 of the book). You follow the
procedure until you only have atoms (p,q,...) or negations of atoms (¬p,¬q,...) at the
bottom (leaves) of the tree. Then you check all the leaves of the tree: if any of them does
not contain an atom and its negation, the set of atoms of that leaf and all subformulas in
the path to the top node are satisfiable.
 Let's take example A.
 The main operator is ∧. For the formula to be T, both subformulas p and ¬q∨¬p must be
true, just one possibility, so that the tree does not split and you write both subformulas
separated by a comma in the next node.
 In this node, the first subformula is already an atom (p), so there is nothing to do with it; it will
pass unchanged to the next node(s). The other, ¬q ∨ ¬p, is a disjunction (main operator ∨),
so for it to be T, either ¬q is T or ¬p is T. This means that there are two
possibilities to satisfy the formula, so the tree must split: in one node we write ¬q and
in the other ¬p (in both cases with the atom p, which, as I said, passes unchanged).
 Now we have reached the leaves, since we only have atoms or negations of atoms. We
have two sets, one for each leaf: {p, ¬q} and {p, ¬p}.
 If any of them is satisfiable, the original formula (and all subformulas in the path to the
top along that branch of the tree) are also satisfiable.
 Obviously, {p,¬p} is not satisfiable, since both p and its negation cannot be T
simultaneously, so this path (branch) closes, and we mark it with an X at the bottom.
 But {p,¬q} is satisfiable in the interpretation that assigns T to p and F to q, so we have
found an interpretation that satisfies the formula.
 I hope this helps. I am sure that you will be able to do example B.
 I just want to add something. I have been talking about truth values, meaning of the
logical connectives,... that is about the semantics of the propositional logic, in order to
guide you in the process.
 But the rules for construction of the semantic tableaux can be given (and that is their
main objective) as an algorithm in which the rule to apply at each node of the tree only
depends on the form (syntax) of the formula there; see section 2.6.2 and algorithm 2.64
in the book.
 In other words, it is a proof system which you (or a computer) can apply to find out if a
formula is satisfiable (or valid, or an inference valid, ... all semantic notions) based only
on the form (how it is made up of symbols, a syntactic notion) of the formula.
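A toy version of this decomposition can be written in a few lines. The sketch below handles only conjunction, disjunction, and negated atoms, represents formulas as nested tuples, and is an illustrative simplification of the tableau method discussed above, not Ben-Ari's full algorithm.

# Minimal sketch of a propositional tableau: "&" = and, "|" = or, "~p" = negated atom.
def satisfiable(formulas):
    """True iff every formula in the list can be made true on some branch."""
    for i, f in enumerate(formulas):
        if isinstance(f, tuple):
            rest = formulas[:i] + formulas[i + 1:]
            if f[0] == "&":                        # alpha rule: both conjuncts, same branch
                return satisfiable(rest + [f[1], f[2]])
            if f[0] == "|":                        # beta rule: split into two branches
                return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
    # Leaf: only literals remain; the branch is open iff it has no pair p, ~p.
    atoms = {f for f in formulas if not f.startswith("~")}
    negated = {f[1:] for f in formulas if f.startswith("~")}
    return not (atoms & negated)

A = ("&", "p", ("|", "~q", "~p"))                  # A = p ∧ (¬q ∨ ¬p)
print(satisfiable([A]))                            # True: p true, q false keeps a branch open
print(satisfiable([("&", "p", "~p")]))             # False: both branches close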

4. Explain various axiomatic systems, with examples?


Ans:
 In mathematics, an axiomatic system is any set of axioms from which some or all
axioms can be used in conjunction to logically derive theorems. A theory consists
of an axiomatic system and all its derived theorems.
 An axiomatic system that is completely described is a special kind of formal
system. A formal theory typically means an axiomatic system, for example
formulated within model theory. A formal proof is a complete rendition of
a mathematical proof within a formal system.

 The following exercises are written to further develop an understanding of the terms and
concepts described in section 1.1.1 Introduction to Axiomatic Systems.

 The theorems may not be numbered in the order you need to prove them, but make sure
you do not use circular reasoning.

 Solutions for selected problems are available in the solutions section of the Chapter One
table of contents.

Exercise 1.1.

Consider the following axiom set.

Postulate 1. There are at least two buildings on campus.


Postulate 2. There is exactly one sidewalk between any two buildings.
Postulate 3. Not all the buildings have the same sidewalk between them.

a. What are the primitive terms in this axiom set?

b. Deduce the following theorems:


Theorem 1. There are at least three buildings on campus.
Theorem 2. There are at least two sidewalks on campus.

c. Show by the use of models that it is possible to have


exactly two sidewalks and three buildings;
at least two sidewalks and four buildings; and,
exactly three sidewalks and three buildings.

d. Is the system complete? Explain.

e. Find two isomorphic models.

f. Demonstrate the independence of the axioms.
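One way to approach part (c) concretely is to encode a candidate model and check it against the three postulates. A minimal Python sketch, with building and sidewalk names invented purely for illustration (here the case of exactly three sidewalks and three buildings):

from itertools import combinations

# Hypothetical model: three buildings, three sidewalks (third case of part c).
buildings = {"Library", "Gym", "Lab"}
# the sidewalk assigned to each (unordered) pair of buildings
sidewalks = {frozenset({"Library", "Gym"}): "S1",
             frozenset({"Library", "Lab"}): "S2",
             frozenset({"Gym", "Lab"}): "S3"}

p1 = len(buildings) >= 2                                  # Postulate 1
p2 = all(frozenset(pair) in sidewalks                     # Postulate 2
         for pair in combinations(buildings, 2))
p3 = len(set(sidewalks.values())) > 1                     # Postulate 3
print(p1 and p2 and p3)   # True: this model satisfies all three postulates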

Exercise 1.2.

Consider the following axiom set.

A1. Every hive is a collection of bees.


A2. Any two distinct hives have one and only one bee in common.
A3. Every bee belongs to two and only two hives.
A4. There are exactly four hives.

a. What are the undefined terms in this axiom set?

b. Deduce the following theorems:


T1. There are exactly six bees.
T2. There are exactly three bees in each hive.
T3. For each bee there is exactly one other bee not in the same hive with it.

c. Find two isomorphic models.

d. Demonstrate the independence of the axioms.

Exercise 1.3.

Consider the following axiom set.

P1. Every herd is a collection of cows.


P2. There exist at least two cows.
P3. For any two cows, there exists one and only one herd containing both cows.
P4. For any herd, there exists a cow not in the herd.
P5. For any herd and any cow not in the herd, there exists one and only one other herd containing
the cow and not containing any cow that is in the given herd.

a. What are the primitive terms in this axiom set?

b. Deduce the following theorems:


T1. Every cow is contained in at least two herds.
T2. There exist at least four distinct cows.
T3. There exist at least six distinct herds.
c. Find two isomorphic models.

d. Demonstrate the independence of the axioms.

UNIT-IV

1. Discuss about knowledge based systems?

ANS:

 Knowledge Base: It contains domain-specific and high-quality knowledge.


 Knowledge is required to exhibit intelligence. The success of any ES majorly
depends upon the collection of highly accurate and precise knowledge.
 What is Knowledge?
 Data is a collection of facts. Information is the organization of data and facts
about the task domain.
 Data, information, and past experience combined together are termed
knowledge.
 Components of Knowledge Base: The knowledge base of an ES is a store of both
factual and heuristic knowledge.
 Factual Knowledge − It is the information widely accepted by the Knowledge
Engineers and scholars in the task domain.
 Heuristic Knowledge − It is about practice, accurate judgement, one’s ability of
evaluation, and guessing.

2.What is a semantic network?

ANS:

 A semantic network or net is a graphic notation for representing knowledge in


patterns of interconnected nodes and arcs.
 Computer implementations of semantic networks were first developed for
artificial intelligence and machine translation, but earlier versions have long been
used in philosophy, psychology, and linguistics.
 What is common to all semantic networks is a declarative graphic representation
that can be used either to represent knowledge or to support automated systems
for reasoning about knowledge.
 Some versions are highly informal, but other versions are formally defined
systems of logic.

3. Write about conceptual dependency theory?

ANS:

 Conceptual dependency theory is a model of natural language understanding used


in artificial intelligence systems.
 Roger Schank at Stanford University introduced the model in 1969, in the early days
of artificial intelligence.[1]

 This model was extensively used by Schank's students at Yale University such as
Robert Wilensky, Wendy Lehnert, and Janet Kolodner.

 Schank developed the model to represent knowledge for natural language input into
computers.

 Partly influenced by the work of Sydney Lamb, his goal was to make the meaning
independent of the words used in the input, i.e. two sentences identical in meaning,
would have a single representation.

 The system was also intended to draw logical inferences.[2]

 The model uses the following basic representational tokens:[3]

 real world objects, each with some attributes.

 real world actions, each with attributes

 times

 locations
 A set of conceptual transitions then act on this representation, e.g. an ATRANS is
used to represent a transfer such as "give" or "take" while a PTRANS is used to act on
locations such as "move" or "go".
 An MTRANS represents mental acts such as "tell", etc.

 A sentence such as "John gave a book to Mary" is then represented as the action of an
ATRANS on two real world objects John and Mary.

4. Write a short note on the semantic web?

ANS:
 Artificial Intelligence (AI) is once again attracting everyone’s interest.

 This time around, it’s both connected and disconnected from fundamental ideas behind

the seminal “Semantic Web” meme — unleashed in a Scientific American article.

 Here’s a brief definition of some frequently used terms that provide context for this post

about reconnecting the notion of a Semantic Web and AI:

 Artificial — not human

 Intelligence — an ability to apply reasoning and inference to information (that is, data in

some context)

 Language — systematic use of signs, syntax, and semantics for encoding and decoding

information via sentences

 Signs — entity identification (Denotation and Connotation Duality)

 Syntax — rules for arranging signs to construct a sentence (i.e., grammar)

 Semantics — meaning associated with each slot occupied by a sign in a sentence

 Documents — where sentences are inscribed and persisted


 RDF (Resource Description Framework) — language (or framework) for constructing

digital sentences that are comprehensible to both humans and machines (courtesy of logic

as the system’s conceptual schema)

 Logic — formal expression of the fact that everything is related to something else, in a

variety of ways; i.e., observation (or data) is a collection of entity relationships categorized

(or classified) by relationship type.
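To make the RDF point above concrete, here is a minimal sketch of one digital "sentence" (a triple) using the rdflib Python library (assuming it is installed); the example.org names are placeholders invented for illustration:

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")   # placeholder vocabulary
g = Graph()
# Subject - predicate - object: one machine-readable sentence.
g.add((EX.John, EX.gave, EX.Book1))
g.add((EX.Book1, EX.recipient, EX.Mary))

for subj, pred, obj in g:
    print(subj, pred, obj)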

AI LONG ANSWERS

UNIT-IV

LONG ANSWERS

1. Explain in brief about the issues in representation of knowledge?

ANS:

Representation of knowledge

 The object of a knowledge representation is to express knowledge in a computer-tractable
form, so that it can be used to enable our AI agents to perform well.
 A knowledge representation language is defined by two aspects:
  Syntax: The syntax of a language defines which configurations of the
components of the language constitute valid sentences.
  Semantics: The semantics defines which facts in the world the sentences refer to, and
hence the statement about the world that each sentence makes.
 Suppose the language is arithmetic; then 'x', '=' and 'y' are components (or symbols or
words) of the language. The syntax says that 'x = y' is a valid sentence in the language, but
'= = x y' is not. The semantics say that 'x = y' is false if y is bigger than x, and true
otherwise (a small sketch of this example appears at the end of this answer).
 The requirements of a knowledge representation are:
  Representational Adequacy – the ability to represent all the different kinds of
knowledge that might be needed in that domain.
  Inferential Adequacy – the ability to manipulate the representational structures to
derive new structures (corresponding to new knowledge) from existing structures.
  Inferential Efficiency – the ability to incorporate additional information into the
knowledge structure, which can be used to focus the attention of the inference mechanisms
in the most promising directions.
  Acquisitional Efficiency – the ability to acquire new information easily.
 Ideally the agent should be able to control its own knowledge acquisition, but direct
insertion of information by a 'knowledge engineer' would be acceptable.
 Finding a system that optimizes these for all possible domains is not going to be feasible.
 In practice, the theoretical requirements for good knowledge representations can usually
be achieved by dealing appropriately with a number of practical requirements:
  The representations need to be complete – so that everything that could possibly need
to be represented can easily be represented.
  They must be computable – implementable with standard computing procedures.
  They should make the important objects and relations explicit and accessible –
so that it is easy to see what is going on, and how the various components interact.
  They should suppress irrelevant detail – so that rarely used details don't introduce
unnecessary complications, but are still available when needed.
  They should expose any natural constraints – so that it is easy to express how
one object or relation influences another.
  They should be transparent – so you can easily understand what is being said.
  The implementation needs to be concise and fast – so that information can be
stored, retrieved and manipulated rapidly.
 The four fundamental components of a good representation
  The lexical part – that determines which symbols or words are used in the
representation's vocabulary.
  The structural or syntactic part – that describes the constraints on how the symbols can
be arranged, i.e. a grammar.
  The semantic part – that establishes a way of associating real world meanings with the
representations.
  The procedural part – that specifies the access procedures that enable ways of creating
and modifying representations and answering questions using them, i.e. how we generate
and compute things with the representation.
 Knowledge Representation in Natural Language
  Advantages of natural language: It is extremely expressive – we can express virtually
everything in natural language (real world situations, pictures, symbols, ideas, emotions,
reasoning). Most humans use it most of the time as their knowledge representation of
choice.
  Disadvantages of natural language: Both the syntax and semantics are very complex
and not fully understood. There is little uniformity in the structure of sentences.
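A minimal sketch of the 'x = y' arithmetic example above, separating the syntactic check (is the string a well-formed sentence?) from the semantic one (what does it say about the world?); the interpretation chosen is invented for illustration, and the truth condition follows the statement above:

def valid_sentence(tokens):
    # Syntax: exactly three tokens of the form  <variable> = <variable>
    return (len(tokens) == 3 and tokens[1] == "=" and
            tokens[0] != "=" and tokens[2] != "=")

def meaning(tokens, world):
    # Semantics as stated above: "x = y" is false if y is bigger than x, true otherwise.
    left, _, right = tokens
    return not (world[right] > world[left])

world = {"x": 3, "y": 5}                      # an interpretation, assumed for illustration
print(valid_sentence(["x", "=", "y"]))        # True: a well-formed sentence
print(valid_sentence(["=", "=", "x", "y"]))   # False: not a valid sentence
print(meaning(["x", "=", "y"], world))        # False, since y (5) is bigger than x (3)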

2.What is a semantic network?

ANS:

 A semantic network or net is a graphic notation for representing knowledge in


patterns of interconnected nodes and arcs.
 Computer implementations of semantic networks were first developed for artificial
intelligence and machine translation, but earlier versions have long been used in
philosophy, psychology, and linguistics.

 What is common to all semantic networks is a declarative graphic representation that


can be used either to represent knowledge or to support automated systems for
reasoning about knowledge.
 Some versions are highly informal, but other versions are formally defined systems
of logic.
 Following are six of the most common kinds of semantic networks, each of which is
discussed in detail in one section of this article.

 A definitional network emphasizes the subtype or is-a relation between a concept


type and a newly defined subtype.
 The resulting network, also called a generalization or subsumption hierarchy,
supports the rule of inheritance for copying properties defined for a supertype to all
of its subtypes (a small sketch of this rule appears at the end of this answer).
 Since definitions are true by definition, the information in these networks is often
assumed necessarily true.
 Assertional networks are designed to assert propositions.
 Unlike definitional networks, the information in an assertional network is assumed
contingently true, unless it is explicitly marked with a modal operator.
 Some assertional networks have been proposed as models of the conceptual
structures underlying natural language semantics.
 Implicational networks use implication as the primary relation for connecting nodes.
 They may be used to represent patterns of beliefs, causality, or inferences.
 Executable networks include some mechanism, such as marker passing or attached
procedures, which can perform inferences, pass messages, or search for patterns and
associations.
 Learning networks build or extend their representations by acquiring knowledge
from examples.
 The new knowledge may change the old network by adding and deleting nodes and
arcs or by modifying numerical values, called weights, associated with the nodes and
arcs.
 Hybrid networks combine two or more of the previous techniques, either in a single
network or in separate, but closely interacting networks.
 Some of the networks have been explicitly designed to implement hypotheses about
human cognitive mechanisms, while others have been designed primarily for
computer efficiency.
 Sometimes, computational reasons may lead to the same conclusions as
psychological evidence.
 The distinction between definitional and assertional networks, for example, has a
close parallel to Tulving’s (1972) distinction between semantic memory and episodic
memory.
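A minimal sketch of the inheritance rule described above for definitional networks; the node and property names are invented for illustration:

# Each node has a parent ("is-a" link) and locally defined properties.
isa = {"Canary": "Bird", "Bird": "Animal", "Animal": None}
properties = {"Animal": {"alive": True},
              "Bird":   {"can_fly": True},
              "Canary": {"colour": "yellow"}}

def lookup(node, prop):
    # Inheritance: walk up the is-a hierarchy until the property is found.
    while node is not None:
        if prop in properties.get(node, {}):
            return properties[node][prop]
        node = isa.get(node)
    return None

print(lookup("Canary", "can_fly"))   # True, inherited from Bird
print(lookup("Canary", "alive"))     # True, inherited from Animal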

3.Write about conceptual dependency theory?

ANS:

 This representation is used in natural language processing in order to represent the
meaning of sentences in such a way that inferences can be made from them.
 It is independent of the language in which the sentences were originally stated.
 CD representations of a sentence are built out of primitives, which are not words
belonging to the language but are conceptual; these primitives are combined to form
the meanings of the words.
 As an example, consider the event represented by a sentence such as "John gave the
book to Mary".
 In the CD representation of this sentence, the symbols have the following meaning:
 Arrows indicate the direction of dependency
 A double arrow indicates a two-way link between the actor and the action
 P indicates past tense
 ATRANS is one of the primitive acts used by the theory; it indicates transfer of
possession (a data-structure sketch of an ATRANS appears at the end of this answer)
 0 indicates the object case relation
 R indicates the recipient case relation
 Conceptual dependency provides a structure in which knowledge can be represented
and also a set of building blocks from which representations can be built.
 A typical set of primitive actions is:

 ATRANS - Transfer of an abstract relationship(Eg: give)


 PTRANS - Transfer of the physical location of an object(Eg: go)

 PROPEL - Application of physical force to an object (Eg: push)

 MOVE - Movement of a body part by its owner (eg : kick)

 GRASP - Grasping of an object by an actor(Eg: throw)

 INGEST - Ingesting of an object by an animal (Eg: eat)

 EXPEL - Expulsion of something from the body of an animal (cry)

 MTRANS - Transfer of mental information(Eg: tell)

 MBUILD - Building new information out of old(Eg: decide)

 SPEAK - Production of sounds(Eg: say)

 ATTEND - Focusing of sense organ toward a stimulus (Eg: listen)

 A second set of building blocks is the set of allowable dependencies among the
conceptualizations described in a sentence.
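A minimal sketch of how the ATRANS example above ("John gave the book to Mary") might be held as a data structure; the field names are chosen here for illustration and are not Schank's notation:

from dataclasses import dataclass

@dataclass
class Conceptualization:
    act: str        # primitive act, e.g. ATRANS, PTRANS, MTRANS
    actor: str
    obj: str        # the object case relation (the "0" role above)
    source: str
    recipient: str  # the recipient case relation (the "R" role above)
    tense: str      # "P" for past

# "John gave a book to Mary": a transfer of possession from John to Mary.
gave = Conceptualization(act="ATRANS", actor="John", obj="book",
                         source="John", recipient="Mary", tense="P")
print(gave)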
4.Explain about script structure?

ANS:

SCRIPT STRUCTURE:

 More than having a problem with a block of code, I have a question regarding code
structure and code layout.
 I see that in AI scripts, coroutines are the way to go.
 It isn't too hard to tell why, as my AI script began as "the big ugly mess".
 I had briefly read a few times about coroutines and figured the problems I would be
having with my script would likely be fixed by a distinct separation into
separate scripts or coroutines. Now that it is done and working well, I came to the
problem of "hey, that's my enemy buddy, let's crash into him and push him off
course", because that is delightfully amusing.
 I tried a couple of things to see if I could get relative data: a check against the
enemy tag, and if its index is not the index of this object, getting the other enemy's
data and testing how close that enemy is.
 This did work and I could stop my enemy, but it was very erratic,
so that is not going to be my answer.
 I was thinking of running a ray cast; then at least, if you are ray casting, you know
the enemy is in front of you.
 Upon reading about how others implemented this idea, it looks really ugly,
almost as if it is still not meant to be done that way and you are overstretching the
method's purpose (with ray casting into the angles).
 So now I am wondering what is going to be a good approach for this job.
 I know it should not only be able to test for the enemy but also for the player, and it
really only needs to be turned on when a condition somewhere else is met, so it is not
running all the time.
 I thought: why not use rectangle and triangle definitions in a class such as
"enemyFOV"? It is specific, and it is easily transferable to different enemies and
differently styled fields of view; make something to serve one purpose and
serve it well, solely.
 So what I am asking is: in your individual experience and opinion, what did you use
and why? What is better for performance as well?
 Obviously nothing is perfect, but I would like to see what others have to say.
 Please, I want to see the theory, not too much direct code; I will figure something out. :)
 PS: as I wrote this I came up with a neat idea about this (sorry if it is not legible):
the class contains a list of rect transforms, except the transforms are actually of a type
such as FOV-obj (I am not sure of a name), a new class inheriting from RectTransform
that contains a type of shape; and this is added into the Enemy class, which would be
defined in the enemy constructor, because each enemy would have their own individual
field of view.
 Roger Schank, Robert P. Abelson and their research group, extended Tomkins' scripts
and used them in early artificial intelligence work as a method of representing
procedural knowledge.
 [1] In their work, scripts are very much like frames, except the values that fill the
slots must be ordered.
 A script is a structured representation describing a stereotyped sequence of events in
a particular context.
 Scripts are used in natural language understanding systems to organize a knowledge
base in terms of the situations that the system should understand.

 The classic example of a script involves the typical sequence of events that occur
when a person drinks in a restaurant: finding a seat, reading the menu, ordering
drinks from the waitstaff, and so on (a small sketch of such a script appears at the end of
this answer).
 In the script form, these would be decomposed into conceptual transitions, such as
MTRANS and PTRANS, which refer to mental transitions [of information] and
physical transitions [of things].

 Schank, Abelson and their colleagues tackled some of the most difficult problems in
artificial intelligence (i.e., story understanding), but ultimately their line of work
ended without tangible success.
 This type of work received little attention after the 1980s, but it is very influential in
later knowledge representation techniques, such as case-based reasoning.

 Scripts can be inflexible. To deal with inflexibility, smaller modules called memory
organization packets (MOP) can be combined in a way that is appropriate for the
situation.
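A minimal sketch of the restaurant script described above as an ordered structure of scenes and conceptual transitions; the exact roles, props, and scenes are simplified and partly invented for illustration:

# A script is an ordered sequence of stereotyped events, each decomposed into
# primitive acts such as PTRANS (physical transfer) and MTRANS (transfer of
# information).
restaurant_script = {
    "roles":  ["customer", "waitstaff"],
    "props":  ["seat", "menu", "drink"],
    "scenes": [
        ("entering",  [("PTRANS", "customer moves to a seat")]),
        ("ordering",  [("MTRANS", "customer reads the menu"),
                       ("MTRANS", "customer tells the waitstaff the order")]),
        ("drinking",  [("INGEST", "customer drinks")]),
        ("leaving",   [("ATRANS", "customer transfers payment"),
                       ("PTRANS", "customer leaves")]),
    ],
}

for name, events in restaurant_script["scenes"]:
    print(name, "->", [act for act, _ in events])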

5.Discuss about dependency theory?

ANS:
 Conceptual dependency theory is a model of natural language understanding used in
artificial intelligence systems.

 Roger Schank at Stanford University introduced the model in 1969, in the early days of
artificial intelligence.
 [1] This model was extensively used by Schank's students at Yale University such as
Robert Wilensky, Wendy Lehnert, and Janet Kolodner.

 Schank developed the model to represent knowledge for natural language input into
computers.
 Partly influenced by the work of Sydney Lamb, his goal was to make the meaning
independent of the words used in the input, i.e. two sentences identical in meaning, would
have a single representation.
 The system was also intended to draw logical inferences.[2]

 The model uses the following basic representational tokens:[3]

 real world objects, each with some attributes.


 real world actions, each with attributes
 times
 locations
 A set of conceptual transitions then act on this representation, e.g. an ATRANS is used to
represent a transfer such as "give" or "take" while a PTRANS is used to act on locations
such as "move" or "go".
 An MTRANS represents mental acts such as "tell", etc.

 A sentence such as "John gave a book to Mary" is then represented as the action of an
ATRANS on two real world objects John and Mary.
UNIT-V

SHORT ANSWERS

1. Distinguish between expert and traditional system?

ANS:

Expert System:
 In artificial intelligence, an expert system is a computer system that emulates the
decision-making ability of a human expert.
 Expert systems are designed to solve complex problems by reasoning through
bodies of knowledge, represented mainly as if–then rules rather than through
conventional procedural code.
 The first expert systems were created in the 1970s and then proliferated in the
1980s.
 Expert systems were among the first truly successful forms of artificial
intelligence (AI) software.
 However, some experts point out that expert systems were not part of true
artificial intelligence since they lack the ability to learn autonomously from
external data.

Traditional system:
 Traditional engineering, also known as sequential engineering, is the
process of marketing, engineering design, manufacturing, testing and
production where each stage of the development process is carried out
separately, and the next stage cannot start until the previous stage is
finished.

 Therefore, the information flow is only in one direction, and it is not until the end of the
chain that errors, changes and corrections can be relayed to the start of the sequence,
causing estimated costs to be under predicted.

 This can cause many problems; such as time consumption due to many modifications
being made as each stage does not take into account the next.

 This method is hardly used today, as the concept of concurrent engineering is more
efficient.

 Traditional engineering is also known as over the wall engineering as each stage blindly
throws the development to the next stage over the wall.

2. Describe in brief about rule based expert system?

ANS:

 Rules are a popular paradigm for representing knowledge.

 A rule-based expert system is one whose knowledge base contains the domain
knowledge coded in the form of rules.
 A rule-based expert system consists of the following components: a knowledge base,
an inference engine, and a user interface.
 The user interface is a mechanism to support communication between the user and the
system.

3. Describe the blackboard systems?


ANS:

 A blackboard system is an artificial intelligence approach based on the


blackboard architectural model, where a common knowledge base, the
"blackboard", is iteratively updated by a diverse group of specialist knowledge
sources, starting with a problem specification and ending with a solution.
 Each knowledge source updates the blackboard with a partial solution when its
internal constraints match the blackboard state.
 In this way, the specialists work together to solve the problem. The blackboard
model was originally designed as a way to handle complex, ill-defined problems,
where the solution is the sum of its parts.

4. Discuss about applications of expert systems?

ANS:

Applications of Expert System:

The following table shows where ES can be applied:

Application              Description

Design Domain            Camera lens design, automobile design.

Medical Domain           Diagnosis systems to deduce the cause of disease from observed
                         data; conducting medical operations on humans.

Monitoring Systems       Comparing data continuously with an observed system or with
                         prescribed behavior, such as leakage monitoring in a long
                         petroleum pipeline.

Process Control Systems  Controlling a physical process based on monitoring.

AI LONG ANSWERS

UNIT-V

LONG ANSWERS

1. Explain about phases in building expert systems?

ANS:

 Expert systems (ES) are one of the prominent research domains of AI. They were
introduced by researchers at the Computer Science Department of Stanford University.

What are Expert Systems?


The expert systems are the computer applications developed to solve complex problems in a
particular domain, at the level of extra-ordinary human intelligence and expertise.
Characteristics of Expert Systems

 High performance

 Understandable

 Reliable

 Highly responsive
Capabilities of Expert Systems
The expert systems are capable of −

 Advising

 Instructing and assisting human in decision making

 Demonstrating

 Deriving a solution

 Diagnosing

 Explaining

 Interpreting input

 Predicting results

 Justifying the conclusion

 Suggesting alternative options to a problem


They are incapable of −

 Substituting human decision makers

 Possessing human capabilities

 Producing accurate output for inadequate knowledge base


 Refining their own knowledge
Components of Expert Systems
The components of ES include −

 Knowledge Base

 Inference Engine

 User Interface
Let us see them one by one briefly −

Knowledge Base
It contains domain-specific and high-quality knowledge.

Knowledge is required to exhibit intelligence. The success of any ES majorly depends upon the
collection of highly accurate and precise knowledge.

What is Knowledge?
Data is a collection of facts. Information is the organization of data and facts about the task
domain. Data, information, and past experience combined together are termed knowledge.

Components of Knowledge Base


The knowledge base of an ES is a store of both, factual and heuristic knowledge.

 Factual Knowledge − It is the information widely accepted by the Knowledge


Engineers and scholars in the task domain.

 Heuristic Knowledge − It is about practice, accurate judgement, one’s ability of


evaluation, and guessing.

Knowledge representation
It is the method used to organize and formalize the knowledge in the knowledge base. It is in
the form of IF-THEN-ELSE rules.

Knowledge Acquisition
The success of any expert system majorly depends on the quality, completeness, and accuracy
of the information stored in the knowledge base.

The knowledge base is formed by readings from various experts, scholars, and the Knowledge
Engineers. The knowledge engineer is a person with the qualities of empathy, quick learning,
and case analyzing skills.

He acquires information from the subject expert by recording, interviewing, and observing him
at work, etc. He then categorizes and organizes the information in a meaningful way, in the
form of IF-THEN-ELSE rules, to be used by the inference engine. The knowledge engineer also
monitors the development of the ES.

Inference Engine
Use of efficient procedures and rules by the Inference Engine is essential in deducing a correct,
flawless solution.

In case of knowledge-based ES, the Inference Engine acquires and manipulates the knowledge
from the knowledge base to arrive at a particular solution.

In case of rule based ES, it −

 Applies rules repeatedly to the facts, which are obtained from earlier rule application.

 Adds new knowledge into the knowledge base if required.

 Resolves rules conflict when multiple rules are applicable to a particular case.

To recommend a solution, the Inference Engine uses the following strategies −


 Forward Chaining

 Backward Chaining
Forward Chaining
It is a strategy of an expert system to answer the question, “What can happen next?”

Here, the Inference Engine follows the chain of conditions and derivations and finally deduces
the outcome. It considers all the facts and rules, and sorts them before concluding to a solution.

This strategy is followed for working on conclusion, result, or effect. For example, prediction of
share market status as an effect of changes in interest rates.
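A minimal forward-chaining sketch, assuming rules are (conditions, conclusion) pairs over simple string facts; the facts and rules below are invented for illustration, loosely following the interest-rate example above:

def forward_chain(facts, rules):
    # Repeatedly fire any rule whose conditions are all known facts, adding its
    # conclusion, until nothing new can be derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(["interest rates fall"], "borrowing is cheaper"),
         (["borrowing is cheaper"], "share prices rise")]
print(forward_chain(["interest rates fall"], rules))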

Backward Chaining
With this strategy, an expert system finds out the answer to the question, “Why this
happened?”

On the basis of what has already happened, the Inference Engine tries to find out which
conditions could have happened in the past for this result. This strategy is followed for finding
out cause or reason. For example, diagnosis of blood cancer in humans.
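A matching backward-chaining sketch, which starts from a goal and works back towards known facts; the diagnostic facts are invented for illustration, loosely following the example above:

def backward_chain(goal, facts, rules):
    # The goal holds if it is a known fact, or if some rule concludes it and all of
    # that rule's conditions can themselves be established (naive, assumes no cycles).
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(c, facts, rules) for c in conditions)
               for conditions, conclusion in rules)

rules = [(["abnormal blood count", "positive biopsy"], "blood cancer")]
print(backward_chain("blood cancer",
                     {"abnormal blood count", "positive biopsy"}, rules))   # True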

User Interface
User interface provides interaction between user of the ES and the ES itself. It is generally
Natural Language Processing so as to be used by the user who is well-versed in the task domain.
The user of the ES need not be necessarily an expert in Artificial Intelligence.

It explains how the ES has arrived at a particular recommendation. The explanation may appear
in the following forms −

 Natural language displayed on screen.

 Verbal narrations in natural language.

 Listing of rule numbers displayed on the screen.


The user interface makes it easy to trace the credibility of the deductions.

Requirements of Efficient ES User Interface


 It should help users to accomplish their goals in shortest possible way.

 It should be designed to work for user’s existing or desired work practices.

 Its technology should be adaptable to user’s requirements; not the other way round.

 It should make efficient use of user input.

Expert Systems Limitations


No technology can offer an easy and complete solution. Large systems are costly and require
significant development time and computer resources. ESs have limitations, which include −

 Limitations of the technology

 Difficult knowledge acquisition

 ES are difficult to maintain

 High development costs

2. Explain about the development of rule-based systems?

ANS:
 The simplest form of artificial intelligence which is generally used in industry is the rule-
based system, also known as the expert system.

 Before we discuss in detail what these are, let's take a step back and point out that there
are different opinions as to what really constitutes artificial intelligence.

 Some people, when they use the term AI, are referring to systems which have some
ability to learn.

 That is, the system will improve its performance over time as it gains experience in
solving problems, just as a human would.

 Other people, when they use the term AI, are referring just to systems which are capable
of exhibiting human-level performance in some very narrow area, but which are
incapable of learning or expanding their expertise.

 Different people are always going to disagree about what AI is, but it's this fairly simple
form of AI which we want to talk about right now.

 A rule-based system is a way of encoding a human expert's knowledge in a fairly narrow


area into an automated system.

 There are a couple of advantages to doing so.

 One is that the human expert's knowledge then becomes available to a very large range of
people.

 Another advantage is that if you can capture the expertise of an expert in a field, then any
knowledge which they might have is not lost when they retire or leave the firm.

 Rule-based systems differ from standard procedural or object-oriented programs in that


there is no clear order in which code executes.

 Instead, the knowledge of the expert is captured in a set of rules, each of which encodes
a small piece of the expert's knowledge.

 Each rule has a left hand side and a right hand side.

 The left hand side contains information about certain facts and objects which must be true
in order for the rule to potentially fire (that is, execute).

 Any rules whose left hand sides match in this manner at a given time are placed on
an agenda.

 One of the rules on the agenda is picked (there is no way of predicting which one), and its
right hand side is executed, and then it is removed from the agenda.
 The agenda is then updated (generally using a special algorithm called the Rete
algorithm), and a new rule is picked to execute.

 This continues until there are no more rules on the agenda.

 A typical rule for a mortgage application might look something like this:

 IF

 (number-of-30-day-delinquencies > 4)
AND (number-of-30-day-delinquencies < 8)
 THEN
 increase mortgage rate by 1%

 As you can see, a rule bears a close resemblance to an if-then-else statement, but unlike
an if-then-else statement, it stands alone and does not fire in any predetermined order
relative to other if-then-else statements.
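A minimal sketch of that mortgage rule, together with the agenda idea described above; the working-memory fields, their values, and the second rule are invented for illustration:

import random

# Working memory for one application (the field names and values are invented).
memory = {"number-of-30-day-delinquencies": 5, "mortgage-rate": 4.0}

rules = [
    # (left hand side, right hand side)
    (lambda m: 4 < m["number-of-30-day-delinquencies"] < 8,
     lambda m: m.update({"mortgage-rate": m["mortgage-rate"] + 1.0})),
    (lambda m: m["number-of-30-day-delinquencies"] == 0,
     lambda m: m.update({"mortgage-rate": m["mortgage-rate"] - 0.5})),
]

# Naive match-resolve-act cycle: every unfired rule whose left hand side matches is
# placed on the agenda; one is picked in no predetermined order and fired.
fired = set()
while True:
    agenda = [i for i, (lhs, _) in enumerate(rules)
              if i not in fired and lhs(memory)]
    if not agenda:
        break
    choice = random.choice(agenda)
    rules[choice][1](memory)      # execute the right hand side
    fired.add(choice)

print(memory["mortgage-rate"])    # 5.0 for this record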

 In a way, a rule-based system might almost be thought of as being similar to a multi-


threaded system in that just as one doesn't know which thread will execute next, one
doesn't know which rule will execute next.

 However, rule-based systems are generally implemented as single-thread programs.

 The advantage to this type of approach, as opposed to a procedural approach, is that if the
system is designed well, then the expert's knowledge can be maintained fairly easily, just
by altering whichever rules need to be altered.

 Indeed, many rule-based systems come along with a rules editor which allows for rules to
be easily maintained by non-technical people.

 Rules are generally implemented in something called a rules engine, which provides a
basic framework for writing rules and then for running them in the manner described
above. In the past, it used to be very difficult to actually work with a rules engine, since
they tended to be technologies unto themselves and very hard to interface with the rest of
the IT world.

 In the last couple of years, however, great strides have been made in making rules
engines much more easily compatible with other technologies.

 Some of the more well-known rules engines include Drools, CLIPS, and Jess (the Java
Expert System Shell mentioned under Expert System Technology later in this unit).

3. Discuss about the blackboard system?

ANS:

Blackboard System

 A blackboard system is an artificial intelligence approach based on the blackboard


architectural model, where a common knowledge base, the "blackboard", is iteratively
updated by a diverse group of specialist knowledge sources, starting with a problem
specification and ending with a solution.
 Each knowledge source updates the blackboard with a partial solution when its
internal constraints match the blackboard state.
 In this way, the specialists work together to solve the problem.
 The blackboard model was originally designed as a way to handle complex, ill-defined
problems, where the solution is the sum of its parts.

 Metaphor
 The following scenario provides a simple metaphor that gives some insight into how a
blackboard functions:

 A group of specialists are seated in a room with a large blackboard.


 They work as a team to brainstorm a solution to a problem, using the blackboard as the
workplace for cooperatively developing the solution.

 The session begins when the problem specifications are written onto the blackboard.
 The specialists all watch the blackboard, looking for an opportunity to apply their
expertise to the developing solution.
 When someone writes something on the blackboard that allows another specialist to
apply their expertise, the second specialist records their contribution on the
blackboard, hopefully enabling other specialists to then apply their expertise.
 This process of adding contributions to the blackboard continues until the problem
has been solved.
 Components
 A blackboard-system application consists of three major components.
 The software specialist modules, which are called knowledge sources (KSs). Like the
human experts at a blackboard, each knowledge source provides specific expertise
needed by the application.
 The blackboard, a shared repository of problems, partial solutions, suggestions, and
contributed information. The blackboard can be thought of as a dynamic "library" of
contributions to the current problem that have been recently "published" by other
knowledge sources.
 The control shell, which controls the flow of problem-solving activity in the system.
 Just as the eager human specialists need a moderator to prevent them from trampling
each other in a mad dash to grab the chalk, KSs need a mechanism to organize their use
in the most effective and coherent fashion. In a blackboard system, this is provided by
the control shell (a small sketch of these three components follows below).
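A minimal sketch of these three components, with toy knowledge sources invented for illustration (here the "problem" is simply turning a raw reading into a report):

# The blackboard: a shared, incrementally updated store of partial solutions.
blackboard = {"raw_reading": 41.7}

# Knowledge sources: each one has a condition on the blackboard state and an
# action that contributes a partial solution when that condition holds.
knowledge_sources = [
    ("rounder",  lambda b: "raw_reading" in b and "rounded" not in b,
                 lambda b: b.update({"rounded": round(b["raw_reading"])})),
    ("labeller", lambda b: "rounded" in b and "report" not in b,
                 lambda b: b.update({"report": f"value = {b['rounded']}"})),
]

# Control shell: keep offering the blackboard to the specialists until no
# knowledge source can contribute anything further.
progress = True
while progress:
    progress = False
    for name, condition, action in knowledge_sources:
        if condition(blackboard):
            action(blackboard)
            progress = True

print(blackboard["report"])   # "value = 42"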

4. Explain about applications of expert systems?

ANS:

Applications of Expert System


The following table shows where ES can be applied.

Application              Description

Design Domain            Camera lens design, automobile design.

Medical Domain           Diagnosis systems to deduce the cause of disease from observed
                         data; conducting medical operations on humans.

Monitoring Systems       Comparing data continuously with an observed system or with
                         prescribed behavior, such as leakage monitoring in a long
                         petroleum pipeline.

Process Control Systems  Controlling a physical process based on monitoring.

Knowledge Domain         Finding out faults in vehicles, computers.

Finance/Commerce         Detection of possible fraud and suspicious transactions, stock
                         market trading, airline scheduling, cargo scheduling.

Expert System Technology


There are several levels of ES technologies available. Expert systems technologies include −

 Expert System Development Environment − The ES development environment


includes hardware and tools. They are −

o Workstations, minicomputers, mainframes.

o High level Symbolic Programming Languages such as LISt Programming (LISP)


and PROgrammation en LOGique (PROLOG).

o Large databases.

 Tools − They reduce the effort and cost involved in developing an expert system to large
extent.

o Powerful editors and debugging tools with multi-windows.

o They provide rapid prototyping

o Have Inbuilt definitions of model, knowledge representation, and inference


design.

 Shells − A shell is nothing but an expert system without knowledge base. A shell
provides the developers with knowledge acquisition, inference engine, user interface,
and explanation facility. For example, few shells are given below −

o Java Expert System Shell (JESS) that provides fully developed Java API for
creating an expert system.
o Vidwan, a shell developed at the National Centre for Software Technology,
Mumbai in 1993. It enables knowledge encoding in the form of IF-THEN rules.

Development of Expert Systems: General Steps


The process of ES development is iterative. Steps in developing the ES include −

Identify Problem Domain

 The problem must be suitable for an expert system to solve it.

 Find the experts in task domain for the ES project.

 Establish cost-effectiveness of the system.


Design the System
 Identify the ES Technology

 Know and establish the degree of integration with the other systems and databases.

 Realize how the concepts can represent the domain knowledge best.

Develop the Prototype


Form the Knowledge Base: The knowledge engineer works to −

 Acquire domain knowledge from the expert.

 Represent it in the form of If-THEN-ELSE rules.


Test and Refine the Prototype
 The knowledge engineer uses sample cases to test the prototype for any deficiencies in
performance.

 End users test the prototypes of the ES.

Develop and Complete the ES


 Test and ensure the interaction of the ES with all elements of its environment, including
end users, databases, and other information systems.

 Document the ES project well.

 Train the user to use ES.


Maintain the System
 Keep the knowledge base up-to-date by regular review and update.

 Cater for new interfaces with other information systems, as those systems evolve.

Benefits of Expert Systems


 Availability − They are easily available due to mass production of software.

 Less Production Cost − Production cost is reasonable. This makes them affordable.

 Speed − They offer great speed. They reduce the amount of work an individual puts in.

 Less Error Rate − Error rate is low as compared to human errors.

 Reducing Risk − They can work in the environment dangerous to humans.

 Steady response − They work steadily without getting emotional, tense, or fatigued.

UNIT-VI

1. Write about Bayesian belief networks?


ANS:

Bayes’ theorem can be used to calculate the probability that a certain event will occur or
that a certain proposition is true.
The theorem is stated as follows:
 P(B|A) = P(A|B)P(B) / P(A)
 P(B) is called the prior probability of B. P(B|A), as well as being called
the conditional probability, is also known as the posterior probability of B.
 P(A ∧ B) = P(A|B)P(B)
 Note that due to the commutativity of ∧, we can also write P(A ∧ B) = P(B|A)P(A).
 Hence, we can deduce: P(B|A)P(A) = P(A|B)P(B).
 This can then be rearranged to give Bayes’ theorem as stated above.

2. What are the operations in fuzzy set?


ANS:
 Fuzzy Operations: Fuzzy set operations are operations on fuzzy sets; they are
generalizations of the corresponding crisp set operations. Zadeh [1965] formulated
fuzzy set theory in terms of the standard operations: Complement, Union, Intersection,
and Difference.
 In this section, the graphical interpretation of the following standard fuzzy set
terms and fuzzy logic operations is illustrated:
 Inclusion : FuzzyInclude [VERYSMALL, SMALL]
 Equality : FuzzyEQUALITY [SMALL, STILLSMALL]
 Complement : FuzzyNOTSMALL = FuzzyCompliment [Small]
 Union : FuzzyUNION = [SMALL ∪ MEDIUM]
 Intersection : FUZZYINTERSECTON = [SMALL ∩ MEDIUM]

3. Write a short note on Fuzzy systems?


ANS:
• Fuzzy Systems include Fuzzy Logic and Fuzzy Set Theory.
• Knowledge exists in two distinct forms : − the Objective knowledge that exists in
mathematical form is used in engineering problems; and − the Subjective knowledge that
exists in linguistic form, usually impossible to quantify. Fuzzy Logic can coordinate these
two forms of knowledge in a logical way.
• Fuzzy Systems can handle simultaneously the numerical data and linguistic knowledge.
• Fuzzy Systems provide opportunities for modeling of conditions which are inherently
imprecisely defined.
• Many real world problems have been modeled, simulated, and replicated with the help
of fuzzy systems.
• The applications of Fuzzy Systems are many like : Information retrieval systems,
Navigation system, and Robot vision.
• Expert Systems design have become easy because their domains are inherently fuzzy
and can now be handled better; examples : Decision-support systems, Financial planners,
Diagnostic system, and Meteorological system.

4. Define about linguistic variables and hedges?


ANS:
Linguistic variables:
 In artificial intelligence, operations research, and related fields, a Linguistic
value, (for some authors Linguistic variable) is a natural language term which is
derived using quantitative or qualitative reasoning such as with probability and
statistics or fuzzy sets and systems.

 Example of Linguistic Value:


 For example, if a shuttle heat shield is deemed of having a linguistic value of a
"very low" percentage of damage in re-entry, based upon knowledge from
experts in the field, that probability would be given a value of say, 5%. From
there on out, if it were to be used in an equation, the variable of percentage of
damage will be at 5% if it deemed very low percentage.

Hedges:

 In fuzzy logic, a hedge is a linguistic modifier such as "very", "somewhat", or "slightly"
that is applied to a linguistic value to alter its membership function (for example, the
membership grade of "very tall" is often taken as the square of the grade of "tall").

 In finance, hedge funds have been using computer algorithms to make trade decisions.

 However, those algorithms were driven by static models developed and managed by data
scientists and were not adept at dealing with the volatility of financial markets.

 Decisions made by these algorithms often yielded results inferior to those made by
human discretion.
 In recent years, with the emergence of machine learning and deep learning, these
branches of AI have caused a breakthrough in the creation of software and are driving new
innovations in computational trading.

 In contrast to traditional software, which relies on predefined rules given by programmers,
machine learning algorithms work by analyzing huge amounts of data and defining their
own rules based on the patterns and connections they find between different data points.

 Machine learning software autonomously updates itself as it ingests new data.

UNIT-VI

LONG ANSWERS

1. Explain the concept of fuzzy systems?

ANS:

 Fuzzy Systems include Fuzzy Logic and Fuzzy Set Theory.


 Knowledge exists in two distinct forms:

− the Objective knowledge that exists in mathematical form is used in engineering


problems; and

− the Subjective knowledge that exists in linguistic form, usually impossible to quantify.

 Fuzzy Logic can coordinate these two forms of knowledge in a logical way.
 Fuzzy Systems can handle simultaneously the numerical data and linguistic knowledge.
 Fuzzy Systems provide opportunities for modeling of conditions which are inherently
imprecisely defined.
 Many real world problems have been modeled, simulated, and replicated with the help of
fuzzy systems.
 The applications of Fuzzy Systems are many like : Information retrieval systems,
Navigation system, and Robot vision.

 Expert Systems design have become easy because their domains are inherently fuzzy and
can now be handled better;
 examples : Decision-support systems, Financial planners, Diagnostic system, and
Meteorological system.
 Introduction
 Any system that uses Fuzzy mathematics may be viewed as Fuzzy system.
 The Fuzzy Set Theory - membership function, operations, properties and the relations
have been described in previous lectures.
 These are the prerequisites for understanding Fuzzy Systems. The applications of Fuzzy
set theory is Fuzzy logic which is covered in this section.
 Here the emphasis is on the design of fuzzy system and fuzzy controller in a closed–loop.
The specific topics of interest are :

− Fuzzification of input information,

− Fuzzy Inferencing using Fuzzy sets ,

− De-Fuzzification of results from the Reasoning process,

− Fuzzy controller in a closed–loop.

 Fuzzy Inferencing, is the core constituent of a fuzzy system.


 A block schematic of Fuzzy System is shown in the next slide.
 Fuzzy Inferencing combines the facts obtained from the Fuzzification with the fuzzy rule
base and

Conducts the Fuzzy Reasoning Process:

• Fuzzy System

A block schematic of Fuzzy System is shown below.

[Fig. Elements of a Fuzzy System — block schematic: crisp input variables X1, X2, …, Xn
enter the Fuzzification block; Fuzzy Inferencing combines the fuzzified inputs with the
Fuzzy Rule Base, using the membership functions; Defuzzification then produces the crisp
output variables Y1, Y2, …, Ym.]

Fuzzy System elements

− Input Vector: X = [x1, x2, …, xn]T contains crisp values, which are transformed into
fuzzy sets in the fuzzification block.

− Output Vector: Y = [y1, y2, …, ym]T comes out of the defuzzification block, which
transforms an output fuzzy set back into crisp values.

− Fuzzification: a process of transforming crisp values into grades of membership for
linguistic terms ("far", "near", "small") of fuzzy sets.

− Fuzzy Rule Base: a collection of propositions containing linguistic variables. The rules
are expressed in the form:

If (x is A) AND (y is B) . . . THEN (z is C)

where x, y and z represent variables (e.g. distance, size) and A, B and C are linguistic
values (e.g. 'far', 'near', 'small').

− Membership function: provides a measure of the degree of similarity of elements in the
universe of discourse U to the fuzzy set.

− Fuzzy Inferencing: combines the facts obtained from the Fuzzification with the rule base
and conducts the fuzzy reasoning process.

− Defuzzification: translates the results back into real-world values.

(A small numerical sketch of these elements follows below.)
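A minimal numerical sketch, assuming triangular membership functions, one rule per linguistic value, and a weighted-average defuzzification; the fuzzy sets, rules, and numbers are invented for illustration:

def triangular(x, a, b, c):
    # Membership grade of x in a triangular fuzzy set rising from a, peaking at b, falling to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzification: crisp distance -> grades of membership in "near" and "far".
distance = 3.0
near = triangular(distance, -5, 0, 5)
far = triangular(distance, 2, 10, 18)

# Fuzzy rule base:  IF distance is near THEN speed is slow
#                   IF distance is far  THEN speed is fast
slow_strength, fast_strength = near, far

# Defuzzification: weighted average of representative speeds for "slow" (10) and "fast" (60).
speed = (slow_strength * 10 + fast_strength * 60) / (slow_strength + fast_strength)
print(round(speed, 1))   # a crisp output value between 10 and 60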

---------------------------------------------------------------------------------------------------------------------
---------------------

2. What are the operations in fuzzy set?

What is a Fuzzy Set?

ANS

• The word "fuzzy" means "vagueness". Fuzziness occurs when the Boundary of a piece of
information is not clear-cut.

• Fuzzy sets were introduced by Lotfi A. Zadeh (1965) as an extension of the classical
notion of a set.

• Classical set theory allows the membership of the elements in the set in binary terms, a
bivalent condition - an element either belongs or does not belong to the set.

Fuzzy set theory permits the gradual assessment of the membership of elements in a set,
described with the aid of a membership function valued in the real unit interval [0, 1].

• Example:

Words like young, tall, good, or high are fuzzy.

− There is no single quantitative value which defines the term young.

− For some people, age 25 is young, and for others, age 35 is young.

− The concept young has no clean boundary.

− Age 1 is definitely young and age 100 is definitely not young;

− Age 35 has some possibility of being young and usually depends on the context in which it
is being considered.

Introduction:

 In real world, there exists much fuzzy knowledge;


 Knowledge that is vague, imprecise, uncertain, ambiguous, inexact or probabilistic in
nature.
 Human thinking and reasoning frequently involve fuzzy information, originating from
inherently inexact human concepts.
 Humans can give satisfactory answers, which are probably true.
 However, our systems are unable to answer many questions.
 The reason is, most systems are designed based upon classical set theory and two-
valued logic which is unable to cope with unreliable and incomplete information and
give expert opinions.

Fuzzy Set:

 A Fuzzy Set is any set that allows its members to have different degree of membership,
called membership function, in the interval [0, 1].

• Definition of Fuzzy set:

 A fuzzy set A, defined in the universal space X, is a function defined in X which


assumes values in the range [0, 1].
 A fuzzy set A is written as a set of pairs {x, A(x)} as
 A = {{x , A(x)}} , x in the set X
 where x is an element of the universal space X, and
 A(x) is the value of the function A for this element.
 The value A(x) is the membership grade of the element x in a
 fuzzy set A.

Example : Set SMALL in set X consisting of natural numbers ≤ to 12.

Assume: SMALL(1) = 1, SMALL(2) = 1, SMALL(3) = 0.9, SMALL(4) = 0.6,

SMALL(5) = 0.4, SMALL(6) = 0.3, SMALL(7) = 0.2, SMALL(8) = 0.1,

SMALL(u) = 0 for u >= 9.

Then, following the notations described in the definition above :

Set SMALL = {{1, 1 }, {2, 1 }, {3, 0.9}, {4, 0.6}, {5, 0.4}, {6, 0.3}, {7, 0.2},

{8, 0.1}, {9, 0 }, {10, 0 }, {11, 0}, {12, 0}}

Note that a fuzzy set can be defined precisely by associating with each x , its grade of
membership in SMALL.
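Using the SMALL set above, the standard operations (complement = 1 − A, union = max, intersection = min) can be sketched as follows; the MEDIUM grades are invented for illustration:

X = range(1, 13)
SMALL = {1: 1, 2: 1, 3: 0.9, 4: 0.6, 5: 0.4, 6: 0.3, 7: 0.2, 8: 0.1,
         9: 0, 10: 0, 11: 0, 12: 0}
MEDIUM = {x: 0 for x in X}                    # grades invented for illustration
MEDIUM.update({4: 0.2, 5: 0.6, 6: 1, 7: 1, 8: 0.6, 9: 0.2})

NOT_SMALL = {x: 1 - SMALL[x] for x in X}                   # complement
SMALL_OR_MED = {x: max(SMALL[x], MEDIUM[x]) for x in X}    # union
SMALL_AND_MED = {x: min(SMALL[x], MEDIUM[x]) for x in X}   # intersection

print(round(NOT_SMALL[3], 2), SMALL_OR_MED[5], SMALL_AND_MED[5])   # 0.1 0.6 0.4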

------------------------------------------------------------------------------------

3. Write about Bayesian belief networks?


Bayesian belief networks:

 Bayes’ theorem can be used to calculate the probability that a certain event will occur
or that a certain proposition is true.
 The theorem is stated as follows:
 P(B|A) = P(A|B)P(B) / P(A)
 P(B) is called the prior probability of B. P(B|A), as well as being called the
conditional probability, is also known as the posterior probability of B.
 P(A ∧ B) = P(A|B)P(B)
 Note that due to the commutativity of ∧, we can also write
 P(A ∧ B) = P(B|A)P(A)
 Hence, we can deduce: P(B|A)P(A) = P(A|B)P(B)
 This can then be rearranged to give Bayes’ theorem as stated above.
 Applied to a set of hypotheses H1, …, Hn and evidence E, Bayes’ theorem states:
 P(Hi|E) = P(E|Hi)P(Hi) / Σk P(E|Hk)P(Hk)
 This reads: given some evidence E, the probability that hypothesis Hi is true is equal to
the probability that E will be observed given Hi, times the a priori probability of Hi,
divided by the sum, over the set of all hypotheses, of the probability of E given each
hypothesis times the prior probability of that hypothesis.
 The set of all hypotheses must be mutually exclusive and exhaustive.
 Thus, suppose we examine medical evidence to diagnose an illness.
 We must know all the prior probabilities of finding each symptom and also the
probability of having an illness based on certain symptoms being observed.
 Bayesian statistics lie at the heart of most statistical reasoning systems.
 How is Bayes’ theorem exploited?
 The key is to formulate problem correctly:
 P(A|B) states the probability of A given only B's evidence. If there is other
relevant evidence then it must also be considered.
 All events must be mutually exclusive. However in real world problems events
are not generally unrelated. For example in diagnosing measles, the symptoms
of spots and a fever are related. This means that computing the conditional
probabilities gets complex.
 In general, given prior evidence p and some new observation N, the computation
grows exponentially for large sets of p.
 All events must be exhaustive. This means that in order to compute all
probabilities the set of possible events must be closed.
 Thus if new information arises the set must be created afresh and all
probabilities recalculated.
 Thus Simple Bayes rule-based systems are not suitable for uncertain reasoning.
 Knowledge acquisition is very hard.
 Too many probabilities needed -- too large a storage space.
 Computation time is too large.
 Updating new information is difficult and time consuming.
 Exceptions like "none of the above" cannot be represented.
 Humans are not very good probability estimators.
 However, Bayesian statistics still provide the core of reasoning in many
uncertain reasoning systems, with suitable enhancements to overcome the above
problems.
 We will look at three broad categories:
 Certainty factors
 Dempster-Shafer models
 Bayesian networks.

Bayesian networks are also called Belief Networks or Probabilistic Inference Networks.
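A minimal numerical sketch of the rule P(Hi|E) = P(E|Hi)P(Hi) / Σk P(E|Hk)P(Hk) for a measles-style diagnosis as discussed above; all of the probabilities below are invented for illustration:

# Mutually exclusive, exhaustive hypotheses with prior probabilities, and the
# probability of observing the evidence (spots) under each one.
priors = {"measles": 0.01, "allergy": 0.05, "neither": 0.94}
likelihood = {"measles": 0.90, "allergy": 0.30, "neither": 0.001}   # P(spots | H)

evidence = sum(likelihood[h] * priors[h] for h in priors)           # P(spots)
posterior = {h: likelihood[h] * priors[h] / evidence for h in priors}

print({h: round(p, 3) for h, p in posterior.items()})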

---------------------------------------------------------------------------------

4. Define about linguistic variables and hedges?

ANS:

Overview of linguistics:

In dealing with natural language, a computer system needs to be able to process and
manipulate language at a number of levels.

 Phonology. This is needed only if the computer is required to understand spoken


language.
 Phonology is the study of the sounds that make up words and is used to identify
words from sounds.
 We will explore this in a little more detail later, when we look at the ways in which
computers can understand speech.
 Morphology. This is the first stage of analysis that is applied to words, once they have
been identified from speech, or input into the system.
 Morphology looks at the ways in which words break down into components and how
that affects their grammatical status.
 For example, the letter “s” on the end of a word can often either indicate that it is a
plural noun or a third-person present-tense verb.
 Syntax. This stage involves applying the rules of the grammar from the language
being used.
 Syntax determines the role of each word in a sentence and, thus, enables a computer
system to convert sentences into a structure that can be more easily manipulated.
 Semantics: This involves the examination of the meaning of words and sentences.
 As we will see, it is possible for a sentence to be syntactically correct but to be
semantically meaningless.
 Conversely, it is desirable that a computer system be able to understand sentences
with incorrect syntax but that still convey useful information semantically.
 Pragmatics: This is the application of human-like understanding to sentences and
discourse to determine meanings that are not immediately clear from the semantics.
 For example, if someone says, “Can you tell me the time?”, most people know
that “yes” is not a suitable answer.
 Pragmatics enables a computer system to give a sensible answer to questions
like this.
 In addition to these levels of analysis, natural language processing systems
must apply some kind of world knowledge. In most real-world systems, this
world knowledge is limited to a specific domain (e.g., a system might have
detailed knowledge about the Blocks World and be able to answer questions
about this world).
 The ultimate goal of natural language processing would be to have a system
with enough world knowledge to be able to engage a human in discussion on
any subject.
 This goal is still a long way off.
 Morphological Analysis :
 In studying the English language, morphology is relatively simple.
 We have endings such as -ing, -s, and -ed, which are applied to verbs; endings such
as -s and -es, which are applied to nouns; we also have the ending -ly, which usually
indicates that a word is an adverb.
 We also have prefixes such as anti-, non-, un-, and in-, which tend to indicate
negation, or opposition.
 We also have a number of other prefixes and suffixes that provide a variety of
semantic and syntactic information.
 In practice, however, morphologic analysis for the English language is not terribly
complex, particularly when compared with agglutinative languages

such as German, which tend to combine words together into single words to indicate
combinations of meaning.

 Morphologic analysis is mainly useful in natural language processing for identifying


parts of speech (nouns, verbs, etc.) and for identifying which words belong together.
 In English, word order tends to provide more of this information than morphology,
however.
 In languages such as Latin, word order was almost entirely superficial, and the
morphology was extremely important.
 Languages such as French, Italian, and Spanish lie somewhere between these two
extremes.
 As we will see in the following sections, being able to identify the part of speech for
each word is essential to understanding a sentence.
 This can partly be achieved by simply looking up each word in a dictionary, which
might contain for example the following entries:
 (swims, verb, present, singular, third person)
 (swimmer, noun, singular)
 (swim, verb, present, singular, first and second persons)
 (swim, verb, present plural, first, second, and third persons)
 (swimming, participle)
 (swimmingly, adverb)
 (swam, verb, past)
 Clearly, a complete dictionary of this kind would be unfeasibly large.
 A more practical approach is to include information about standard endings, such as the
following (a small sketch using these endings appears after the list):
 (-ly, adverb)
 (-ed, verb, past)
 (-s, noun, plural)
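A minimal sketch of suffix-based morphological analysis using just the standard endings listed above; it is deliberately crude and is easily fooled by irregular forms:

# Suffix table taken from the entries above: ending -> grammatical information.
SUFFIXES = [("ly", "adverb"),
            ("ed", "verb, past"),
            ("s", "plural noun or third-person present-tense verb")]

def analyse(word):
    for suffix, info in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix):
            return word[: -len(suffix)], info
    return word, "base form (no known suffix)"

for w in ["swimmingly", "walked", "swims", "swim"]:
    print(w, "->", analyse(w))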
