
UNIT I

INTRODUCTION TO AI AND PRODUCTION SYSTEMS

Introduction to AI-Problem formulation, Problem Definition -Production systems, Control


strategies, Search strategies. Problem characteristics, Production system characteristics
- Specialized production systems - Problem solving methods - Problem graphs, Matching,
Indexing and Heuristic functions - Hill Climbing - Depth first and Breadth first, Constraint
satisfaction - Related algorithms, Measure of performance and analysis of search
algorithms.

TWO MARKS:

State the advantages of Breadth First Search.[Nov/Dec2017]


ADVANTAGES OF BREADTH-FIRST SEARCH
 Breadth-first search will never get trapped exploring a useless path forever.
 If a solution exists, BFS is guaranteed to find it.
 If there is more than one solution, BFS can find the minimal one, i.e. the one that
requires the fewest steps.
What is Commutative production system?[Nov/Dec2017]
A commutative production system is a production system that is both monotonic and
partially commutative.
List down the characteristics of intelligent agent.[Apr/May2017]
An agent always requires a certain amount of intelligence to perform its tasks; an agent with
such intelligence is referred to as an intelligent agent.

1.Mention how the search strategies are evaluated.[Nov/Dec '15] (OR) Evaluate the
performance of a problem solving method based on the depth first search algorithm.[Dec '10]
Search strategies are evaluated along the following dimensions:
Completeness - Is the strategy guaranteed to find a solution when there is one? (DFS: no)
Time complexity - How long does it take to find a solution? (DFS: O(b^m))
Space complexity - How much memory is needed? (DFS: O(bm))
Optimality - Does the strategy find the optimal solution? (DFS: not optimal)
2.Define admissible and consistent heuristics.[Nov/Dec '15]
Consistent heuristics:

In path-finding problems in artificial intelligence, a heuristic function is said to be
consistent, or monotone, if its estimate is always less than or equal to the estimated distance
from any neighboring vertex to the goal, plus the step cost of reaching that neighbor.
Admissible heuristics:
An admissible heuristic never overestimates the cost of reaching
the goal. Every consistent heuristic is also admissible.

3.What is ridge?[May/June '16]


In hill climbing, a ridge is an area of the search space that is higher than the surrounding
areas but cannot be traversed by single moves, because no available operator moves along
the top of the ridge: each individual move from a point on the ridge leads downhill, even
though the ridge itself slopes upward toward a maximum. The search may oscillate from
side to side, making little progress.

4.How much knowledge would be required by a perfect program for the problem of
playing chess? Assume that unlimited computing power is available.[May/June '16]
The rules for determining legal moves, together with a simple control mechanism that
implements an appropriate search procedure, would suffice. Additional knowledge about such things
as good strategy and tactics could of course help considerably: it could constrain the search and
speed up the execution of the program.

5.How to improve the effectiveness of a search based problem solving technique?


[May/June '16]
Identify and define the root causes
Generate alternative solutions
Evaluate the alternatives
Agree on the best solution
Develop an action plan
Implement and evaluate the solution

6.Why problem formulation must follow goal formulation? [April/May ‘15]


Goal Formulation is based on the current situation and agent’s performance measure. It is
the first step in problem solving.
Problem formulation is the process of deciding what actions and states to consider
for a goal that has been developed in the first step of problem solving.

7.How will you measure problem solving performance? [Apr/May ‘10] (OR) List the
criteria to measure the performance of a search strategy. [May/June ‘14]
The algorithm’s performance can be measured in four ways :

 Completeness : Is the algorithm guaranteed to find a solution when there is one?


 Optimality : Does the strategy find the optimal solution?
 Time complexity : How long does it take to find a solution?
 Space complexity : How much memory is needed to perform the search?

8.Define an Ideal rational agent. [Apr/May ‘15] [May/June ‘09]


For each possible percept sequence, an ideal rational agent should do whatever action is
expected to maximize its performance measure on the basis of the evidence provided by the
percept sequence & whatever built-in knowledge that the agent has.

9.Define Constraint Satisfaction Problem (CSP). [Apr/May ‘10] [Nov/Dec ‘14]


A constraint satisfaction problem is a special kind of problem that satisfies some additional
structural properties beyond the basic requirements for problems in general. In a CSP, the states
are defined by the values of a set of variables and the goal test specifies a set of constraints that
the values must obey.

10.Give the structure of agent in an environment. [May/June ‘14]


An agent interacts with its environment through sensors and actuators.
An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon the environment through actuators.

11.What is a software agent? [April/May ‘13]
A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and sending
network packets.

12.What are the four components to define a problem? Define them. [May/June ‘13]
The four components of a problem are,
 An initial state – It is the state in which the agent starts.
 A description of possible actions – It is the description of possible actions which
are available to the agent.
 Goal test – It is the test that determines whether a given state is goal (final) state.
 Path cost – It is the function that assigns a numeric cost(value) to each path. The
problem-solving agent is expected to choose a cost-function that reflects its own
performance measure.

13.Define agent , agent function [May/June ‘13, '16]


An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon the environment through actuators.
Agent Function
1. The agent function is a mathematical function that maps a sequence of percepts into
an action.
2. The function is implemented as the agent program.
3. The part of the agent taking an action is called an actuator.
4. Environment --> sensors --> agent function --> actuators --> environment

14.What is the use of heuristic function? (or) [May/June ‘13]

What is the advantage of heuristic function? [Apr/May ‘08, Nov/Dec ‘09] (or)

State the significance of using heuristic functions? [Nov/Dec ‘11, Nov/Dec ‘12]

 A heuristic function, or simply a heuristic, is a function that ranks alternatives in various
search algorithms at each branching step, based on available information, in order to
decide which branch to follow during a search. A heuristic function is
denoted by h(n).

 Heuristics are used by informed search algorithms such as Greedy best-first search and
A* to choose the best node to explore.
 Heuristic functions reduce the search cost.
 An optimal solution for the problem can be derived.

15.Define basic agent program. [May/June ‘13]


The basic agent program is a concrete implementation of the agent function which runs
on the agent architecture. Agent program puts bound on the length of percept sequence and
considers only required percept sequences. Agent program implements the functions of percept
sequence and action which are external characteristics of the agent.
The agent program takes as input the current percept from the sensors and returns an action to
the effectors (actuators).

16.What are the functionalities of agent function? [Nov/Dec ‘12]


The agent function is a mathematical function that maps a sequence of percepts into an
action. The major functionality of the agent function is to generate the possible action for each
and every percept. It helps the agent to get the list of possible actions it can take. The agent
function can be represented in tabular form.

17.How can we avoid ridge and plateau in hill climbing? [Nov/Dec’12]


Ridge and plateau in hill climbing can be avoided using methods like backtracking and
making big jumps. Backtracking and making big jumps help to avoid a plateau, whereas the
application of multiple rules helps to avoid the problem of ridges.

18.What is artificial intelligence? [April/May ‘03] [April/May ‘04] [April/May ‘08] [Nov/Dec
‘08]
"The exciting new effort to make computers think - machines with minds, in the full and
literal sense." Artificial intelligence systematizes and automates intellectual tasks and is
therefore potentially relevant to any sphere of human intellectual activity.

19.List down the characteristics of intelligent agent. [April/May ‘11]


 Intelligent Agents are autonomous because they function without requiring that the
Console or Management Server be running.

 An Agent that services a database can run when the database is down, allowing the
Agent to start up or shut down the database.
 The Intelligent Agents can independently perform administrative job tasks at any time,
without active participation by the administrator.
 Agents can autonomously detect and react to events, allowing them to monitor the
system and execute a fixit job to correct problems without the intervention of the
administrator.

20.What do you mean by local maxima with respect to search technique? [April/May ‘11]
A local maximum is a peak that is higher than each of its neighbour states, but lower than
the global maximum; i.e., a local maximum is a small hill on the surface whose peak is not as high
as the main peak (which is the optimal solution). Hill climbing fails to find the optimum solution when it
encounters a local maximum: any small move from there makes things worse (temporarily), so
all further search effort is wasted. It is like a dead end.

21.State the reason when hill climbing often gets stuck. [Apr/May ‘10]
A local maximum is the state where the hill climbing algorithm is sure to get stuck. A local
maximum is a peak that is higher than each of its neighbour states, but lower than the global
maximum. At a local maximum all further search effort is wasted; it is like a dead
end.

22.Define rational agent. [Nov/Dec ’11 & ‘09] [April/May ‘10]


A rational agent is one that does the right thing. Here the right thing is the one that will cause
the agent to be more successful. That leaves us with the problem of deciding how and when to
evaluate the agent's success.

23.How does one characterize the quality of heuristic? (Or)Define branching factor with
example. [May/June ‘09]
One way to characterize the quality of a heuristic is the effective branching factor b*. A
well designed heuristic would have a value of b* close to 1. If the total number of nodes
generated by A* for a particular problem is N, and the solution depth is d, then b* is the
branching factor that a uniform tree of depth d would have to have in order to contain N+1
nodes. Thus,

N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d
For example, if depth d = 5 and N = 52, then b* = 1.92.
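For illustration (not part of the original notes), b* can be computed numerically; a minimal
Python sketch by bisection, assuming nothing beyond the formula above:

import math

def effective_branching_factor(N, d, tol=1e-6):
    """Solve N + 1 = 1 + b + b^2 + ... + b^d for b by bisection."""
    def total_nodes(b):
        # Nodes in a uniform tree of depth d with branching factor b.
        return sum(b ** i for i in range(d + 1))

    lo, hi = 1.0, float(N)          # b* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total_nodes(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(effective_branching_factor(52, 5), 2))  # ~1.92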

24.Formally define a game as a kind of search problems. [May/June ‘09]


Formal Definition of Game:
We will consider games with two players, whom we will call MAX and MIN. At the end of the
game, points are awarded to the winning player and penalties are given to the loser. A game can
be formally defined as a search problem with the following components:
 The initial state, which includes the board position and identifies the player to move.
 A successor function, which returns a list of (move, state) pairs, each indicating a legal
move and the resulting state.
 A terminal test, which determines the end of the game.
 A utility function (also called an objective function or payoff function), which gives a
numeric value for the terminal states.

25.What is meant by heuristic search techniques? [Nov/Dec ‘14]


A heuristic function, or simply a heuristic, is a function that ranks alternatives in various
search algorithms at each branching step, based on available information, in order to decide
which branch to follow during a search. A heuristic function is denoted by h(n).

26.List the capabilities that a computer should possess for conducting a Turing Test? OR
What are the requirements by a computer to pass a turing test? [April/May ‘03]
The capabilities that a computer should possess for conducting a Turing Test are,
 Natural Language Processing;
 Knowledge Representation;
 Automated Reasoning;
 Machine Learning.

27. List the measures to determine agent’s behavior. OR What is role of agent program?
[April/May ‘04]
The measures to determine agent‘s behavior are,
 Performance measure,
 Rationality,
 Omniscience,
 Learning and Autonomy.
28.What is the use of online search agents in unknown environments? [NOV/DEC 2007]
Online search agents suit well the following domains:
 Dynamic or semi-dynamic domains
 Stochastic domains
Online search is a necessary idea for an exploration problem, where the states and actions are
unknown to the agent.

29.Differentiate blind search& heuristic search. Or Define any two search strategies.
[April/May ‘03] What is the difference between informed and uninformed search?
[Nov/Dec ‘08]
Uninformed or Blind Search:
 No information about the number of steps (or) path cost from the current state to the
goal state.
 Less effective.
 Only the information given in the problem definition is used to solve the problem.
 e.g. (search strategies): Breadth-first search, Uniform-cost search, Depth-first search,
Depth-limited search, Iterative deepening search.
Informed or Heuristic Search:
 Uses problem-specific knowledge beyond the definition of the problem itself.
 More effective.
 Additional information can be added as an assumption to solve the problem.
 e.g. (search strategies): Best-first search, Greedy search, A* search.

30.List the components of a learning agent?


The components of a learning agent are,
 Learning element;
 Performance element;
 Critic;
 Problem generator.

31.What are the factors that a rational agent should depend on at any given time?
The factors that a rational agent should depend on at any given time are,
 The performance measure that defines criterion of success;
 Agent‘s prior knowledge of the environment;
 Action that the agent can perform;
 The agent‘s percept sequence to date.

32.List out some of the applications of Artificial Intelligence.


Some of the applications of Artificial Intelligence are,
 Autonomous planning and scheduling;
 Game playing;
 Autonomous control;
 Diagnosis;
 Logistics planning;
 Robotics.

33.List some drawbacks of hill climbing process.


 Local maxima: A local maximum, as opposed to a global maximum, is a peak that is lower
than the highest peak in the state space. Once a local maximum is reached the algorithm
will halt even though the solution may be far from satisfactory.
 Plateaux: A plateau is an area of the state space where the evaluation function is essentially
flat. The search will conduct a random walk.

16 MARKS:

1.Solve the given problem. Describe the operators involved in it.


Consider a Water Jug Problem : You are given two jugs, a 4-gallon one and a 3-gallon
one. Neither has any measuring markers on it. There is a pump that can be used to fill the
jugs with water. How can you get exactly 2 gallons of water into the 4 gallon jug? Explicit
Assumptions: A jug can be filled from the pump,water can be poured out of a jug onto the
ground,water can be poured from one jug to another and that there are no other
measuring devices available.[May/June '16]
Solution:-
The state space for this problem can be defined as

{ (i, j) | i = 0, 1, 2, 3, 4; j = 0, 1, 2, 3 }

‘i’ represents the number of gallons of water in the 4-gallon jug and ‘j’ represents the number of gallons
of water in the 3-gallon jug. The initial state is (0, 0), that is, no water in either jug. The goal state is
(2, n) for any value of ‘n’.

To solve this we have to make some assumptions not mentioned in the problem. They are

1. We can fill a jug from the pump.

2. we can pour water out of a jug to the ground.

3. We can pour water from one jug to another.

4. There is no measuring device available.

The various operators (production rules) that are available to solve this problem may be stated
as a set of rules; a sketch is given below.
2.What is an agent? Discuss different types of agent program. [Nov/Dec ‘14] (OR)
Explain the structure of a simple reflex agent and utility based agent with an example.
[Nov/Dec ‘11] (OR) Outline the components and functions of any one of the basic kinds of
agent programs. [Nov/Dec ‘08, May/June '16] (OR) Explain goal and model based reflex
agent with an example. [Apr/May ‘12] (OR) Define agents. Specify the PAGE (Percepts,
Actions, Goals, Environment) descriptions for intelligent agent design with examples and
explain the basic types of agents.[Nov/Dec '15]
Agent programs
The agent programs all have the same skeleton: they take the current percept as input from the
sensors and return an action to the actuators.
Notice the difference between the agent program, which takes the current percept as
input, and the agent function, which takes the entire percept history.
The agent program takes just the current percept as input because nothing more is
available from the environment; if the agent's actions depend on the entire percept sequence, the
agent will have to remember the percepts.

function TABLE-DRIVEN-AGENT(percept) returns an action

static: percepts, a sequence, initially empty
        table, a table of actions, indexed by percept sequences

append percept to the end of percepts
action <- LOOKUP(percepts, table)
return action
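For illustration only (not from the notes), the same skeleton in Python; the percept values and
table entries are hypothetical:

class TableDrivenAgent:
    """Keeps the full percept sequence and looks the action up in a table."""
    def __init__(self, table):
        self.percepts = []   # entire percept history, as the text requires
        self.table = table   # maps percept-sequence tuples to actions

    def __call__(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts))

# Hypothetical table for a two-step toy environment.
agent = TableDrivenAgent({(("A", "Dirty"),): "Suck",
                          (("A", "Dirty"), ("A", "Clean")): "Right"})
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right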

Types Of Agent:
• Table-driven agents use a percept sequence/action table in memory to find the next action.
They are implemented by a (large) lookup table.
• Simple reflex agents are based on condition-action rules, implemented with an appropriate
production system. They are stateless devices which do not have memory of past world states.
• Agents with memory have internal state, which is used to keep track of past states of the
world.
• Agents with goals are agents that, in addition to state information, have goal information that
describes desirable situations. Agents of this kind take future events into consideration.
• Utility-based agents base their decisions on classic axiomatic utility theory in order to act
rationally.

1) Simple Reflex Agent


The simplest kind of agent is the simple reflex agent. These agents select actions on the basis
of the current percept, ignoring the rest of the percept history. For example, the vacuum agent
is a simple reflex agent, because its decision is based only on the current
location and on whether that location contains dirt.
 Selects an action on the basis of only the current percept,
e.g. the vacuum-agent.
 Large reduction in possible percept/action situations.
 Implemented through condition-action rules, e.g. if dirty then suck.

function SIMPLE-REFLEX-AGENT(percept) returns an action

static: rules, a set of condition-action rules

state <- INTERPRET-INPUT(percept)
rule <- RULE-MATCH(state, rules)
action <- RULE-ACTION[rule]
return action
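A minimal Python sketch of this skeleton for the two-location vacuum world; the names are
illustrative, not the textbook's code:

def interpret_input(percept):
    """The state is just the (location, status) pair for this tiny world."""
    return percept

def simple_reflex_vacuum_agent(percept):
    location, status = interpret_input(percept)
    # Condition-action rules: the rule match is a chain of if-tests here.
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # -> Left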

Characteristics:
 Only works if the environment is fully observable.
 Lacking history, it can easily get stuck in infinite loops.
 One solution is to randomize actions.

2) Model-based reflex agents


The agent should maintain some sort of internal state that depends on the percept
history and thereby reflects at least some of the unobserved aspects of the current state.
Updating this internal state information as time goes by requires two kinds of knowledge
to be encoded in the agent program.

 First, we need some information about how the world evolves independently of the
agent.
 Second, we need some information about how the agent's own actions affect the
world.
This knowledge about "how the world works" - whether implemented in simple Boolean
circuits or in complete scientific theories - is called a model of the world. An agent that uses such
a model is called a model-based agent.

function REFLEX-AGENT-WITH-STATE(percept) returns an action

static: rules, a set of condition-action rules
        state, a description of the current world state
        action, the most recent action, initially none

state <- UPDATE-STATE(state, action, percept)
rule <- RULE-MATCH(state, rules)
action <- RULE-ACTION[rule]
return action

3) Goal-based agents
Knowing about the current state of the environment is not always enough to decide what
to do. In other words, as well as a current state description, the agent needs some sort of goal
information that describes situations that are desirable-for example, being at the passenger's
destination. The agent program can combine this with information about the results of possible
actions (the same information as was used to update internal state in the reflex agent) in order
to choose actions that achieve the goal.

4) Utility-based agents
Goals alone are not really enough to generate high-quality behavior in most
environments. Goals just provide a crude binary distinction between "happy" and "unhappy"
states, whereas a more general performance measure should allow a comparison of different
world states according to exactly how happy they would make the agent if they could be
achieved. Because "happy" does not sound very scientific, the customary terminology is to say
that if one world state is preferred to another, then it has higher utility for the agent.

A model-based, utility-based agent.
It uses a model of the world, along with a utility function that measures its preferences
among states of the world. Then it chooses the action that leads to the best expected utility,
where expected utility is computed by averaging over all possible outcome states, weighted by
the probability of the outcome.

Learning Agents
All agents can improve their performance through learning.
A learning agent can be divided into four conceptual components, i.e.
 Learning element
 Performance element
 Critic
 Problem generator
The most important distinction is between the learning element, which is responsible for
making improvements, and the performance element, which is responsible for selecting
external actions.
The performance element is what we have previously considered to be the entire agent: it
takes in percept and decides on actions.
The learning element uses feedback from the critic on how the agent is doing and
determines how the performance element should be modified to do better in the future.

The last component of the learning agent is the problem generator. It is responsible for
suggesting actions that will lead to new and informative experiences.
But if the agent is willing to explore a little, it might discover much better actions for the
long run.
The problem generator’s job is to suggest these exploratory actions. This is what
scientists do when they carry out experiments.

A general model of learning agents.


3.Explain the following uninformed search strategies. [May/June ’14, '16] [Apr/May ‘13]

(i)Breadth-First Search (or) Give an example of a problem for which BFS would
work better than DFS. [NOV/DEC 2008]

(ii) Uniform Cost Search


(iii)Depth-First search
(iv) Depth-Limited Search [Nov/Dec‘14]
(OR)
Explain the following uninformed search strategies. [Nov/Dec ‘07] [May/June ‘15]
[May/Jun ‘10]
(i)Depth-First search
(ii)Iterative deepening depth first search
(iii)Bidirectional search
(OR)
Explain Uninformed search strategies. [May/June ‘09] (OR) Discuss any two
uninformed search strategies. [Nov/Dec ‘09] (OR) What are five uninformed search
strategies? Explain any two in detail with example. [Nov/Dec ‘13] Explain the
following uninformed search strategies. [May/June ‘10]
(i)Iterative deepening depth-first search.[8 Marks]
(ii)Bidirectional search. [Marks 8]
(OR)
Analyze the uninformed search algorithms with respect to different criteria. Explain
heuristics for constraint satisfaction problems.[Nov/Dec '15]

There are six Uninformed Search Algorithms


 Breadth First Search
 Uniform-cost search
 Depth-first search
 Depth-limited search
 Iterative deepening depth-first search
 Bidirectional Search
Breadth-First Search :
Breadth-first search is a simple strategy in which the root node is expanded first, then all
successors of the root node are expanded next, then their successors and so on.
In general, all the nodes are expanded at a given depth in the search tree before any
nodes at the next level are expanded.
Breadth-first search is implemented by calling TREE-SEARCH with an empty fringe that is
a first-in-first-out (FIFO) queue, ensuring that the nodes that are visited first will be expanded
first. In other words, calling TREE-SEARCH(problem, FIFO-QUEUE()) results in breadth-first
search. The FIFO queue puts all newly generated successors at the end of the queue, which
means that shallow nodes are expanded before deeper nodes.
Algorithm :
function BREADTH-FIRST-SEARCH(problem) returns a solution or failure
return TREE-SEARCH(problem, FIFO-QUEUE())

Example: Route Finding problem

Task: Find a path from S to G using BFS.

Properties of BFS:
 Completeness : Yes
 Time : b + b^2 + b^3 + ... + b^d + (b^(d+1) - b) = O(b^(d+1))
 Space : O(b^(d+1)) (keeps every node in memory)
 Optimality : Yes, provided the path cost is a nondecreasing function of
the depth of the node
Advantages: Guaranteed to find the single solution at the shallowest depth level.
Disadvantages: Suitable only for small problem instances.
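For illustration (not part of the original notes), a compact Python sketch of BFS over an
explicit graph; the graph and node names are assumed:

from collections import deque

def breadth_first_search(graph, start, goal):
    """Expand shallowest nodes first using a FIFO queue; return a path."""
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()       # FIFO: oldest (shallowest) first
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in explored:
                explored.add(child)
                frontier.append(path + [child])
    return None  # failure

graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": ["G"], "D": ["G"]}
print(breadth_first_search(graph, "S", "G"))  # ['S', 'A', 'C', 'G']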
Uniform-Cost Search :
Breadth-first search is optimal when all step costs are equal, because it always expands
the shallowest unexpanded node. By a simple extension, we can find an algorithm that is

optimal with any step cost function. Uniform-cost search expands the node n with the lowest
path cost.
If all step costs are equal, this is identical to breadth-first search.
Uniform-cost search does not care about the number of steps a path has, but only about
their total cost.
Example: Route Finding problem

Task: Find a path from S to G using uniform-cost search.

Since the path cost of A is lower, it is expanded first, but it is not optimal.


B to be expanded next

Properties of Uniform-cost Search:

Completeness : Yes
Time complexity : O(b^(1+⌊C*/ε⌋)), where C* is the cost of the optimal solution
and ε is the minimum step cost
Space complexity : O(b^(1+⌊C*/ε⌋))
Optimality : Yes
Advantages: Guaranteed to find the single solution at minimum path cost.
Disadvantages: Suitable only for small problem instances.
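A minimal Python sketch of uniform-cost search using a priority queue ordered by the path
cost g(n); the graph and step costs are illustrative, not from the notes:

import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]   # (path cost, node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for child, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(child, float("inf")):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None

# Note how the cheap first step toward A does not fool the search.
graph = {"S": [("A", 1), ("B", 5)], "A": [("G", 10)], "B": [("G", 1)]}
print(uniform_cost_search(graph, "S", "G"))  # (6, ['S', 'B', 'G'])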

Depth-First-Search:
Depth-first search always expands the deepest node in the current fringe of the search
tree. The search proceeds immediately to the deepest level of the search tree, where the nodes
have no successors. As those nodes are expanded, they are dropped from the fringe, so then
the search "backs up" to the next shallowest node that still has unexplored successors.
This strategy can be implemented by TREE-SEARCH with a last-in-first-out (LIFO)
queue, also known as a stack.

Algorithm:
function DEPTH-FIRST-SEARCH(problem) returns a solution or failure
return TREE-SEARCH(problem, LIFO-QUEUE())

EXAMPLE:

Backtracking Search:
A variant of depth-first search called backtracking search uses less memory: only one
successor is generated at a time rather than all successors, so only O(m) memory is needed
rather than O(bm).

Properties of Depth-first Search:

Completeness : No
Time complexity : O(b^m)
Space complexity : O(bm)
Optimality : No

Advantages:
If more than one solution exists (or) number of levels is high then DFS is best because
exploration is done only in a small portion of the whole space.

Drawback of Depth-first-search:
The drawback of depth-first-search is that it can make a wrong choice and get stuck
going down very long(or even infinite) path when a different choice would lead to solution near
the root of the search tree. For example ,depth-first-search will explore the entire left subtree
even if node C is a goal node.

Depth-Limited-Search:
Definition: A cut off (maximum level of the depth) is introduced in this search technique
to overcome the disadvantage of depth first search. The cutoff value depends on the number of
states.
The problem of unbounded trees can be alleviated by supplying depth-first-search with a
pre-determined depth limit, i.e., nodes at depth l are treated as if they have no successors. This
approach is called depth-limited-search. The depth limits solves the infinite path problem.
Recursive implementation of Depth-limited-search:
function DEPTH-LIMITED-SEARCH(problem, limit) returns a solution, or failure/cutoff
return RECURSIVE-DLS(MAKE-NODE(INITIAL-STATE[problem]), problem, limit)

function RECURSIVE-DLS(node, problem, limit) returns a solution, or failure/cutoff
cutoff-occurred? <- false
if GOAL-TEST[problem](STATE[node]) then
return SOLUTION(node)
else if DEPTH[node] = limit then return cutoff
else for each successor in EXPAND(node, problem) do
result <- RECURSIVE-DLS(successor, problem, limit)
if result = cutoff then cutoff-occurred? <- true
else if result ≠ failure then return result
if cutoff-occurred? then return cutoff
else return failure

Example:

Properties of Depth-limited Search:
Completeness : Yes, provided the solution lies within the depth limit l;
otherwise the solution is never found.
Time complexity : O(b^l)
Space complexity : O(bl)
Optimality : No, because it is not guaranteed to find the shortest solution
first in the search technique.

Advantages: A cutoff level is introduced in the DFS technique.

Disadvantages: Not guaranteed to find a solution if the cutoff is smaller than the solution depth.

Bidirectional Search:
Definition: Bidirectional search is a strategy that simultaneously search both the
directions (i.e.) forward from the initial state and backward from the goal, and stops when the
two searches meet in the middle.
The idea behind bidirectional search is to run two simultaneous searches – one forward
from the initial state and the other backward from the goal.
Example:

Properties of Bidirectional Search:
Completeness : yes, guaranteed to find the solution if it exists.
Time and space complexity : The forward and backward searches, done at the same time,
will lead to the solution in O(2b^(d/2)) = O(b^(d/2)) steps, because each search goes only halfway. If
the two searches meet at all, the nodes of at least one of them must all be retained in memory,
which requires O(b^(d/2)) space.
Optimality : yes, because the order of expansion of states is done in
both the directions.
Advantages: Time and space complexity is reduced
Disadvantages: If the forward and backward searches meet at all, complexity arises in the
search technique. In backward search calculating predecessor is difficult task. If more than one
goal state exists then explicit, multiple state searches are required.

Iterative Deepening Depth-First Search


Definition: Iterative deepening search is a strategy that sidesteps the issue of choosing
the best depth limit by trying all possible depth limits.
Iterative deepening search (or iterative-deepening-depth-first-search) is a general
strategy often used in combination with depth-first-search, that finds the better depth limit. It

does this by gradually increasing the limit – first 0, then 1,then 2, and so on – until a goal is
found. This will occur when the depth limit reaches d,the depth of the shallowest goal node.
Iterative deepening combines the benefits of depth-first and breadth-first search.
Like depth-first search, its memory requirements are modest: O(bd) to be precise.
Like breadth-first search, it is complete when the branching factor is finite and optimal
when the path cost is a nondecreasing function of the depth of the node.
Example:

The goal state G can be reached from A in four ways. They are:

Iterative deepening search algorithm:


function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution, or failure
inputs: problem, a problem
for depth <- 0 to ∞ do
result <- DEPTH-LIMITED-SEARCH(problem, depth)
if result ≠ cutoff then return result

Properties of Iterative deepening Search:

Completeness : Yes, guaranteed to find the solution if it exists.
Time complexity : O(b^d)
Space complexity : O(bd)
Optimality : Yes, because the order of expansion of states is
similar to breadth-first search.
Advantages: This method is preferred for large state spaces when the depth of the solution is not
known.

Disadvantages: Many states are expanded repeatedly; e.g. every state at depth 1 is re-expanded
at each subsequent limit.
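A minimal Python sketch of depth-limited search and iterative deepening over an explicit
tree (the tree and node names are illustrative, not from the notes):

CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited_search(tree, node, goal, limit):
    """Recursive DLS: returns a path, 'cutoff', or 'failure'."""
    if node == goal:
        return [node]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for child in tree.get(node, []):
        result = depth_limited_search(tree, child, goal, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return [node] + result
    return CUTOFF if cutoff_occurred else FAILURE

def iterative_deepening_search(tree, start, goal, max_depth=50):
    # Try limits 0, 1, 2, ... until the shallowest goal depth d is reached.
    for depth in range(max_depth + 1):
        result = depth_limited_search(tree, start, goal, depth)
        if result != CUTOFF:
            return result

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
print(iterative_deepening_search(tree, "A", "G"))  # ['A', 'C', 'G']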

4.Explain the following search strategies.

(i) Best-First search [Nov/Dec ‘12]

(ii) A* search [Apr/May ‘08]

(OR) Write in detail about any two informed search strategies.[May/June ’09 & ‘15] (16
Marks) (OR) Explain the A* search and give the proof of optimality of A* [Apr/May ‘10] (16
Marks).

An informed search strategy uses problem-specific knowledge beyond the definition of


the problem itself and it can find solutions more efficiently than an uninformed strategy.The
general approach is best first search that uses an evaluation function in TREE-SEARCH or
GRAPH-SEARCH.

Best First Search:

Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH


algorithm in which a node is selected for expansion based on an evaluation function f(n). The
node with the lowest evaluation is selected for expansion, because the evaluation measures
distance to the goal.

The root node is expanded first, then all other nodes are generated by the root node are
expanded and so on.

function BEST-FIRST-SEARCH(problem, EVAL-FN) returns a solution sequence

inputs: problem, a problem
        EVAL-FN, an evaluation function
QUEUEING-FN <- a function that orders nodes by EVAL-FN
return TREE-SEARCH(problem, QUEUEING-FN)
The key component of these algorithms is a heuristic function, denoted h(n), where h(n) =
estimated cost of the cheapest path from node n to a goal node.

The two types of evaluation functions are:

(i) Expand the node closest to the goal state using estimated cost as the evaluation is
called greedy best first search.
(ii) Expand the node on the least-cost solution path, using estimated cost plus actual
cost as the evaluation function; this is called A* search.

Greedy Best first search:

Definition: A best first search that uses h(n) to select next node to expand is called
greedy search. Greedy best-first-search tries to expand the node that is closest to the goal.

Evaluation function: The estimated cost to reach the goal state, denoted by the letter
h(n).

f (n) = h (n)
Algorithm:

function GREEDY-BEST-FIRST-SEARCH(problem) returns a solution or failure

return BEST-FIRST-SEARCH(problem, h)
Properties of greedy search:

Completeness : No, because it can start down an infinite path and
never return to try other possibilities.

Optimality : No, because the next node for expansion is selected
based only on the estimated cost and not the actual cost.

Time Complexity : O(b^m), but a good heuristic can give dramatic improvement.

Space Complexity : O(b^m) - keeps all nodes in memory

Greedy search resembles depth first search, since it follows one path to the goal state,
backtracking occurs when it finds a dead end.

Example:

From the given graph and estimated costs, the goal state is identified as B from A. Apply
the evaluation function h(n) to find a path from A to B.

(iii) S is selected for next level of expansion, since h(n) is minimum from S, when
comparing to T and Z.

(iv) F is selected for next level of expansion, since h(n) is minimum from F.

From F, goal state B is reached. Therefore the path from A to B using greedy search is A - S - F -
B = 450 (i.e. 140 + 99 + 211).

A* search and the proof of optimality of A*.[Nov/Dec ‘07] [May/Jun ‘10]

A* search:

A* search is the most widely-known form of best-first search. Expand the node on the
least cost solution path using estimated cost and actual cost as the evaluation function is called
A* search.

The evaluation function f (n) is obtained by combining

f (n) = g(n) + h(n)


Where f(n) = estimated cost of the cheapest solution through n.

g (n) = the cost to reach the node .

h (n) = the cost to get from the node to the goal.

A* search is both complete and optimal.

Algorithm:

function A*-SEARCH(problem) returns a solution or failure

return BEST-FIRST-SEARCH(problem, g + h)

Example:

From the given graph and estimated costs, the goal state is identified as B from A. Apply
the evaluation function f(n) = g(n) + h(n) to find a path from A to B.

(iii) S is selected for next level of expansion, since f(s) is minimum from S, when comparing to T
and Z.

(iv)R is selected for next level of expansion, since f(R) is minimum when comparing to A and F.

(v)P is selected for next level of expansion, since f(P) is minimum.

From P, goal state B is reached. Therefore the path from A to B using A* search is A - S - R - P
- B = 418 (i.e. 140 + 80 + 97 + 101); note that this path cost is less than the greedy search path cost.

Properties of A* search:

Time and Space complexity: Time complexity depends on the heuristic function and the
admissible heuristic value. Space complexity remains in the exponential order.
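For illustration (not part of the original notes), a compact Python sketch of A*; the graph,
step costs, and heuristic values are assumed to match the worked route-finding example above:

import heapq

def a_star(graph, h, start, goal):
    """Best-first search ordered by f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier,
                               (new_g + h[child], new_g, child, path + [child]))
    return None

graph = {"A": [("S", 140), ("T", 118)], "S": [("R", 80), ("F", 99)],
         "R": [("P", 97)], "F": [("B", 211)], "P": [("B", 101)], "T": []}
h = {"A": 366, "S": 253, "T": 329, "R": 193, "F": 176, "P": 100, "B": 0}
print(a_star(graph, h, "A", "B"))  # (418, ['A', 'S', 'R', 'P', 'B'])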

THE BEHAVIOR OF A* SEARCH:

1) Monotonicity:

In a search tree, along any path from the root the f-cost never decreases. This condition is true for
almost all admissible heuristics. A heuristic which satisfies this property is called monotonic.

Example:

Monotonic vs. non-monotonic heuristic (figure not reproduced):

To reach the node n the f-cost value is 7; from there, to reach the node n' the value of the f-cost has
to increase according to the monotonicity property. But the above example does not satisfy this
property, so it is called a non-monotonic heuristic. Non-monotonicity can be avoided using the
path-max equation:

f(n') = max( f(n), g(n') + h(n') )

2) Optimality:

The optimality of A* search is derived with two approaches. They are:

a. A* used with Tree-search


b. A* used with Graph-search
 A* used with Tree- search:
A* is optimal with TREE-SEARCH if h(n) is an admissible heuristic, one that never overestimates
the cost to reach the goal.

Proof:

A* using TREE-SEARCH is optimal if h(n) is admissible.

i. Suppose a suboptimal goal node G2 appears on the fringe.

ii. Let the cost of the optimal solution be C*.
iii. G2 is a goal node, therefore h(G2) = 0.

iv. f(G2) = g(G2) + h(G2) = g(G2) > C*
v. Consider a fringe node n on an optimal solution path; h(n) is an
admissible heuristic, so
f(n) = g(n) + h(n) <= C*

vi. From steps (iv) and (v),

f(n) <= C* < f(G2), so G2 will not be expanded and A* must return an optimal solution.

 A* used with Graph- search:


Suboptimal solutions can be returned because GRAPH-SEARCH can discard the optimal
path to a repeated state if it is not the first one generated. Two methods are proposed to avoid
this problem.

i. Extend GRAPH-SEARCH to discards the more expensive of any two paths found to
the same node.
ii. To ensure that the optimal path to any repeated state is always the first one followed-
as is the case with uniform-cost search i.e. ensure consistency or monotonicity on
h(n).
3) Consistency:

A heuristic h(n) is consistent if, for every node n and every successor n' of n generated by
any action a, the estimated cost of reaching the goal from n is no greater than the step cost of
getting to n' plus the estimated cost of reaching the goal from n':

h(n) <= c(n, a, n') + h(n')


This is a form of the general triangle inequality, which stipulates that each side of
a triangle cannot be longer than the sum of the other two sides. Here, the triangle is formed by n,
n', and the goal closest to n.

Proof:

A* using GRAPH-SEARCH is optimal if h(n) is consistent.

If h(n) is consistent, then the values of f(n) along any path are nondecreasing.

Suppose n' is a successor of n; then

g(n') = g(n) + c(n, a, n')

for some action a, and we have

f(n') = g(n') + h(n')
      = g(n) + c(n, a, n') + h(n')
      ≥ g(n) + h(n) = f(n).

The sequence of nodes expanded by A* using GRAPH-SEARCH is in nondecreasing order of f(n).
Hence, the first goal node selected for expansion must be an optimal solution, since all later
nodes will be at least as expensive.

A* is optimally efficient for any given heuristic function. That is, no other optimal algorithm is
guaranteed to expand fewer nodes than A*.

4) Completeness:

A* search expands nodes in order of increasing f, i.e. f(n) < f*. This condition may fail when an
infinite number of nodes exists with f(n) < f*. An infinite number of nodes is generated by the
search technique when

i. There is a node with an infinite branching factor, or

ii. There is a path with a finite path cost but an infinite number of nodes along it.
Thus A* is complete on locally finite graphs (graphs with a finite branching factor)
provided there is some constant d such that every operator costs at least d.

5.Discuss about constraint satisfaction problem. [May/June ’15 & ‘14] Give the algorithm
for solving CSP by local search. [May/June ‘13] (OR) What are Constraint Satisfaction
Problems? How can you formulate them as search problems? [May/June ‘07] [Nov/Dec
‘13] (OR) Discuss the various issues associated with the backtracking search for CSPs.
How are they addressed? (16 Marks) [May/Jun ‘15]

CSP:

A Constraint Satisfaction Problem is a special kind of problem satisfies some additional


structural properties beyond the basic requirements for problems in general. A constraint
satisfaction problem (or CSP) is defined by a set of variables, X1, X2, ..., Xn, and a set of
constraints, C1, C2, ..., Cm. Each constraint Ci involves some subset of the variables and
specifies the allowable combinations of values for that subset.

General example for CSP:

i. The n-queens problem


ii. A crossword puzzle
iii. A map coloring problem
iv. The Boolean satisfiability problem
v. A crypt arithmetic problem

A state of the problem is defined by an assignment of values to some or all of the
variables, {Xi = vi, Xj = vj, ...}. An assignment that does not violate any constraints is called a
consistent or legal assignment.

Example:

Map coloring problem

 The map defines the variables to be the regions: WA, NT, Q, NSW, V, SA, and T.
 The domain of each variable is the set {red, green, blue}.
 The constraints require neighboring regions to have distinct colors; the allowed pairs are
{(red, green), (red, blue), (green, red), (green, blue), (blue, red), (blue, green)}.
 There are many possible solutions, such as {WA = red, NT = green, Q = red, NSW =
green, V = red, SA = blue, T = red}.

Constraint Graph:

It is helpful to visualize a CSP as a constraint graph. The nodes of the graph correspond to
variables of the problem and the arcs correspond to constraints.

Example:

CSP can be given an incremental formulation as a standard search problem as follows:

 Initial state: the empty assignment { }, in which all variables are unassigned.
 Successor function: a value can be assigned to any unassigned variable, provided that
it does not conflict with previously assigned variables.
 Goal test: the current assignment is complete.
 Path cost: a constant cost (e.g., 1) for every step.

Varieties of CSP’s

(i) Discrete Variables


Finite domains

The simplest kind of CSP involves variables that are discrete and have finite domains.
Map-coloring problems are of this kind. If the maximum domain size of any variable in a CSP is
d, then the number of possible complete assignments is O(d^n) - that is, exponential in the number
of variables. Finite-domain CSPs include Boolean CSPs, whose variables can be either true or
false.

Infinite domains

Discrete variables can also have infinite domains - for example, the set of integers or the set
of strings. For example, when scheduling construction jobs onto a calendar, each job's start date
is a variable and the possible values are integer numbers of days from the current date. With
infinite domains, it is no longer possible to describe constraints by enumerating all allowed
combinations of values.

(ii) Continuous Variables:


Constraint satisfaction problems with continuous domains are very common in the real
world and are widely studied in the field of operations research. For example, the scheduling of
experiments on the Hubble Space Telescope requires very precise timing of observations.

Varieties of constraints

 The simplest type is the unary constraint, which restricts the value of a single variable.
Ex: SA ≠ green

 A binary constraint relates two variables.

Ex: SA ≠ WA

 Higher-order constraints involve three or more variables. A familiar example is provided


by cryptarithmetic puzzles.

Backtracking search for CSPs

Backtracking search is a depth-first search that chooses values for one variable at a time
and backtracks when a variable has no legal values left to assign. The branching factor at the
top level is n·d, because any of d values can be assigned to any of n variables, so we would
generate a tree with n!·d^n leaves, even though there are only d^n possible complete assignments.

Algorithm:

A simple backtracking algorithm for constraint satisfaction problems; the algorithm is
modeled on recursive depth-first search (see the sketch below).
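The pseudocode figure is not reproduced here. As a minimal sketch, assuming the standard
recursive formulation, with map-coloring data matching the Australia example above:

def backtracking_search(variables, domains, neighbors, assignment=None):
    """Assign one variable at a time; backtrack when no legal value is left."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                       # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)  # static ordering
    for value in domains[var]:
        # Consistent if no already-assigned neighbor has the same color.
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtracking_search(variables, domains, neighbors, assignment)
            if result is not None:
                return result
            del assignment[var]                 # backtrack
    return None                                 # failure

variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
print(backtracking_search(variables, domains, neighbors))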

Figure : Part of search tree generated by simple backtracking for the map coloring problem

Variable and value ordering:

The backtracking algorithm contains the line

var <- SELECT-UNASSIGNED-VARIABLE(VARIABLES[csp], assignment, csp)

By default, SELECT-UNASSIGNED-VARIABLE simply selects the next unassigned variable in


the order given by the list VARIABLES [csp]. This static variable ordering seldom results in the
most efficient search.

The intuitive idea - choosing the variable with the fewest "legal" values - is called the
minimum remaining values (MRV) heuristic. It has also been called the "most constrained
variable" or "fail-first" heuristic, the latter because it picks the variable that is most likely to
cause a failure soon, thereby pruning the search tree.

Propagating information through constraints:

By propagating the constraints earlier in the search, or even before the search has started,
we can drastically reduce the search space.

Forward Checking:

One way to make better use of constraints during search is called forward checking.
Whenever a variable X is assigned, the forward checking process looks at each unassigned
variable Y that is connected to X by a constraint and deletes from Y's domain any value that is
inconsistent with the value chosen for X.

The progress of a map-coloring search with forward checking (figure not reproduced): WA = red
is assigned first; then forward checking deletes red from the domains of the neighboring variables
NT and SA. After Q = green, green is deleted from the domains of NT, SA and NSW. After V = blue,
blue is deleted from the domains of NSW and SA, leaving SA with no legal values.
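A minimal sketch of the forward-checking step in Python, reusing the map-coloring names
from the backtracking example above (illustrative, not the textbook's code):

def forward_check(var, value, domains, neighbors, assignment):
    """Delete `value` from each unassigned neighbor's domain.

    Returns the pruned domains, or None if some neighbor is left
    with no legal values (so `value` should be rejected for `var`)."""
    pruned = {v: list(d) for v, d in domains.items()}  # copy, don't mutate
    for n in neighbors[var]:
        if n not in assignment and value in pruned[n]:
            pruned[n].remove(value)
            if not pruned[n]:        # domain wiped out: dead end detected early
                return None
    return pruned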

6.Explain Memory bounded heuristic search in detail (OR) Explain any two heuristic
search in detail. [Nov/Dec ‘03] (OR) What is meant by heuristic search? Explain.
[May/June ‘06] (OR) Describe a state space in which iterative deepening search performs
much worse than depth-first search. [May/June ‘12] Give the algorithm for recursive best
first search. [Apr/May ‘13] [Nov/Dec‘14](OR) Explain the Heuristic function with examples.
[May/June '16](6 Marks)
Memory bounded heuristic search

The memory requirements of A* are reduced by combining the heuristic function with

iterative deepening, resulting in the Iterative-deepening A* (IDA*) algorithm.

Iterative-deepening A* search (IDA*):

 Iterative improvement algorithms keep only a single state in memory, but can get stuck on
local maxima. In this algorithm each iteration is a DFS just as in regular iterative
deepening.
 The depth first search is modified to use an f-cost limit rather than a depth limit. Thus
each iteration expands all nodes inside the contour for the current f-cost.
Properties of IDA*:

 Completeness : yes, because it implies A* search.


 Optimality : yes, because it implies A* search.
 Time complexity : Depends on the number of different values that the heuristic
function takes on.
 Space complexity : Proportional to the longest path of exploration.

Disadvantage:

It will require more storage space in complex domains.


Memory-bounded algorithms:

The two more recent memory-bounded algorithms are

 Recursive best-first search (RBFS)


 Memory bounded A* search (MA*)

Algorithm for recursive best first search. [Apr/May ‘13] [Nov/Dec‘14]


Recursive best-first search (RBFS):

 RBFS is a simple recursive algorithm that attempts to mimic the operation of standard
best-first search, but using only linear space.
 It is similar to that of a recursive depth-first search, but rather than continuing indefinitely
down the current path, it keeps track of the f-value of the best alternative path available
from any ancestor of the current node. If the current node exceeds this limit, the recursion
unwinds back to the alternative path and replaces the f -value of each node along the
path with the best f -value of its children.
 RBFS remembers the f -value of the best leaf in the forgotten subtree and can therefore
decide whether it's worth re-expanding the subtree at some later time.
 RBFS is somewhat more efficient than IDA*, but still suffers from excessive node
regeneration.
Algorithm for RBFS:
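The algorithm figure is not reproduced here. As a minimal sketch following the standard
recursive formulation (the graph, heuristic values, and helper names are assumptions):

import math

def rbfs(graph, h, node, goal, g, f_node, f_limit):
    """Recursive best-first search; returns (solution path or None, new f-value)."""
    if node == goal:
        return [node], f_node
    succs = []
    for child, step in graph.get(node, []):
        # Inherit the parent's f-value if it is larger (path-max).
        f_child = max(g + step + h[child], f_node)
        succs.append([f_child, g + step, child])
    if not succs:
        return None, math.inf
    while True:
        succs.sort()                              # best (lowest f) first
        best = succs[0]
        if best[0] > f_limit:
            return None, best[0]                  # unwind; remember best leaf f
        alternative = succs[1][0] if len(succs) > 1 else math.inf
        result, best[0] = rbfs(graph, h, best[2], goal, best[1],
                               best[0], min(f_limit, alternative))
        if result is not None:
            return [node] + result, best[0]

graph = {"A": [("S", 140), ("T", 118)], "S": [("R", 80), ("F", 99)],
         "R": [("P", 97)], "F": [("B", 211)], "P": [("B", 101)], "T": []}
h = {"A": 366, "S": 253, "T": 329, "R": 193, "F": 176, "P": 100, "B": 0}
print(rbfs(graph, h, "A", "B", 0, h["A"], math.inf)[0])  # ['A', 'S', 'R', 'P', 'B']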

Time and space complexity: RBFS is an optimal algorithm if the heuristic function h(n) is
admissible.

Space complexity : Linear in the depth of the deepest optimal solution.

Time complexity : Rather difficult to characterize; it depends both on the
accuracy of the heuristic function and on how often the best
path changes as nodes are expanded.

Search Techniques:

Two algorithms that use all available memory are

 MA* (memory-bounded A*)


 SMA* (simplified Memory-bounded A*)
SMA* (simplified Memory-bounded A*):

SMA* algorithm can make use of all available memory to carry out the search.

Properties of Simplified Memory - bounded A* (SMA*) search :

 It will utilize whatever memory is made available to it.


 It avoids repeated states as far as its memory allows.
Completeness : It is complete if the available memory is sufficient to
store the shallowest path.
Optimality : It is optimal if enough memory is available to store the
shallowest optimal solution path. Otherwise it returns
the best solution that can be reached with the available
memory.
Space and Time complexity: Depends on the available number of nodes.

Advantage:

SMA* uses only available memory

Disadvantage:

If enough memory is not available, it leads to a suboptimal solution.

7.(i) Explain about the steepest ascent hill climbing technique. [May/June ‘13] (8 Marks)
(OR)Write the algorithm for generate and test simple Hill Climbing.[May/June '16](10
Marks)

Hill Climbing Search (Greedy Local Search)

The hill-climbing search algorithm is simply a loop that continually moves in the direction
of increasing value. It terminates when it reaches a "peak" where no neighbor has a higher
value. The algorithm does not maintain a search tree, so the current node data structure need
only record the state and its objective function value. At each step the current node is replaced
by the best neighbor.

Hill-climbing search algorithm:

function HILL-CLIMBING(problem) returns a state that is a local maximum

inputs: problem, a problem
local variables: current, a node; neighbor, a node

current <- MAKE-NODE(INITIAL-STATE[problem])
loop do
neighbor <- a highest-valued successor of current
if VALUE[neighbor] <= VALUE[current] then return STATE[current]
current <- neighbor

Algorithm: Simple Hill Climbing
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue
with the initial state as the current state.

2. Loop until a solution is found or until there are no operators left to be applied in the current
state:
(a) Select an operator that has not yet been applied to the current state and apply it to
produce a new state.

(b) Evaluate the new state.


(i) If it is a goal state, then return it and quit.

(ii) If it is not a goal state, but it is better than the current state, then make it the
current state.

(iii) If it is not better than the current state, then continue

Steepest Ascent Hill Climbing


A useful variation on simple hill climbing considers all the moves from the current state
and selects the best one as the next one. This method is called steepest-ascent hill climbing
or gradient search. This contrasts with the basic method, in which the first state that is better than
the current state is selected.
In simple hill climbing, the first order node is chosen whereas in steepest ascent hill
climbing all successors are compared and the closest to the solution is chosen. Both forms fail if
there is no closer node. This may happen if there are local maxima in the search space which
are not solutions. Steepest ascent hill climbing is similar to best first search but the latter tries all
possible extensions of the current path in order, whereas steepest ascent only tries one.
Algorithm: Steepest-Ascent Hill Climbing
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue
with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to current state:
a. Let SUCC be a state such that any possible successor of the current state will be better
than SUCC.

b. For each operator that applies to the current state do:


(i) Apply the operator and generate a new state.
(ii) Evaluate the new state. If it is a goal state, then return it and quit. If not,
compare it to SUCC. If it is better, then set SUCC to this state. If it is not better, leave
SUCC alone.
c. If the SUCC is better than the current state, then set current state to SUCC.

Usually there is a trade-off between the time required to select a move (usually longer for
steepest-ascent hill climbing) and the number of moves required to get to a solution (usually
longer for basic hill climbing) that must be considered when deciding which method will work
better for a particular problem. Both basic and steepest-ascent hill climbing may fail to find a
solution. Either algorithm may terminate not by finding a goal state but by getting to a state from
which no better states can be generated. This will happen if the program has reached either a
local maximum, a plateau, or a ridge.
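For illustration (not part of the original notes), a minimal Python sketch of steepest-ascent
hill climbing; the objective function and neighborhood are assumed:

def steepest_ascent_hill_climbing(start, value, neighbors):
    """Move to the best neighbor until no neighbor improves on the current state."""
    current = start
    while True:
        succ = max(neighbors(current), key=value, default=None)
        if succ is None or value(succ) <= value(current):
            return current            # local maximum (or plateau/ridge) reached
        current = succ

# Toy example: maximize f(x) = -(x - 3)^2 over the integers, stepping by 1.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(steepest_ascent_hill_climbing(0, value, neighbors))  # -> 3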

8.Explain the process of problem definition with example. [May/June ‘14] (8 Marks) (OR)
Explain the components necessary to define a problem. [Nov/Dec‘14] (8 Marks)

Problems are typically defined in terms of state, and solution corresponds to goal states.
Problem solving using search technique performs two sequence of steps:
(i) Define the problem - the given problem is identified with its required initial and goal states.
(ii) Analyze the problem - the best search technique for the given problem is chosen
from the different AI search techniques, i.e. one which derives one or more goal states in a
minimum number of steps.

Problem solving Agents:

Problem solving agent is one kind of goal based agent, where the agent decides what to
do by finding sequence of actions that lead to desirable states.

The sequence of steps done by the intelligent agent to maximize the performance
measure:

i) Goal formulation - based on current situation is the first step in problem solving. Actions that
result to a failure case can be rejected without further consideration.
ii) Problem formulation - is the process of deciding what actions and states to consider and
follows goal formulation.

iii) Search - is the process of finding different possible sequence of actions that lead to state
of known value, and choosing the best one from the states.
iv) Solution - a search algorithm takes a problem as input and returns a solution in the form of
action sequence.
v) Execution phase - if the solution exists, the action it recommends can be carried out.

A simple problem solving agent

function SIMPLE-PROBLEM-SOLVING-AGENT(p) returns an action

inputs: p, a percept
static: s, an action sequence, initially empty
        state, some description of the current world state
        g, a goal, initially null
        problem, a problem formulation

state <- UPDATE-STATE(state, p)
if s is empty then
g <- FORMULATE-GOAL(state)
problem <- FORMULATE-PROBLEM(state, g)
s <- SEARCH(problem)
action <- RECOMMENDATION(s, state)
s <- REMAINDER(s, state)
return action

Components to define a problem:


The four components of a problem are,
 An initial state – It is the state in which the agent starts.
 A description of possible actions – It is the description of possible actions which
are available to the agent.
 Goal test – It is the test that determines whether a given state is goal (final) state.
 Path cost – It is the function that assigns a numeric cost(value) to each path. The
problem-solving agent is expected to choose a cost-function that reflects its own
performance measure.
Example: 8-puzzle Problem

The 8-puzzle problem consists of a 3 x 3 board with eight numbered tiles and a blank
space. A tile adjacent to the blank space can slide into the space. The object is to reach a
specified goal state.

 States: A state description specifies the location of each of the eight tiles and the blank in
one of the nine squares.
 Initial state: Any state can be designated as the initial state.
 Successor function: This generates the legal states that result from trying the four
actions (blank moves Left, Right, Up, or Down).
 Goal test: This checks whether the state matches the goal configuration (Other goal
configurations are possible.)
 Path cost: Each step costs 1, so the path cost is the number of steps in the path.
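
To make the four components concrete, here is a minimal Python sketch of the 8-puzzle formulation; the flat-tuple state encoding (0 standing for the blank) is an assumption of this sketch, and any goal configuration could be substituted.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # one possible goal configuration

def successors(state):
    """Successor function: the legal states reachable by sliding the
    blank (0) Left, Right, Up or Down on the 3 x 3 board."""
    i = state.index(0)                # index of the blank square
    row, col = divmod(i, 3)
    swaps = []
    if col > 0: swaps.append(i - 1)   # blank moves Left
    if col < 2: swaps.append(i + 1)   # blank moves Right
    if row > 0: swaps.append(i - 3)   # blank moves Up
    if row < 2: swaps.append(i + 3)   # blank moves Down
    states = []
    for j in swaps:
        s = list(state)
        s[i], s[j] = s[j], s[i]       # slide the adjacent tile into the blank
        states.append(tuple(s))
    return states

def goal_test(state):
    return state == GOAL

def path_cost(path):
    return len(path) - 1              # each step costs 1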

9.Explain the structure of agents with suitable diagram. [May/June '10]

AGENTS
Agent = perceive + act
 Thinking
 Reasoning
 Planning

Agent: an entity in a program or environment capable of generating action. An agent uses
perception of the environment to make decisions about actions to take. The perception capability
is usually called a sensor. The actions can depend on the most recent perception or
on the entire history (the percept sequence).

Definition:
An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon the environment through actuators.

Example:
 Robotic agent
 Human agent

Agents interact with the environment through sensors and actuators.

A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth,
and other body parts for actuators.
A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and sending
network packets.

Percept
We use the term percept to refer to the agent's perceptual inputs at any given instant.

Percept Sequence
An agent's percept sequence is the complete history of everything the agent has ever
perceived.

Agent function
Mathematically speaking, we say that an agent's behavior is described by the agent
function that maps any given percept sequence to an action.

Rational Agent
A rational agent is one that does the right thing - conceptually speaking, every entry in the
table for the agent function is filled out correctly. Obviously, doing the right thing is better than
doing the wrong thing. The right action is the one that will cause the agent to be most
successful.

Performance measures
A performance measure embodies the criterion for success of an agent's behavior.
When an agent is plunked down in an environment, it generates a sequence of actions
according to the percepts it receives. This sequence of actions causes the environment to go
through a sequence of states. If the sequence is desirable, then the agent has performed well.

Rationality
What is rational at any given time depends on four things:
 The performance measure that defines the criterion of success.
 The agent's prior knowledge of the environment.
 The actions that the agent can perform.
 The agent's percept sequence to date.

This leads to a definition of a rational agent: for each possible percept sequence, a rational
agent should select an action that is expected to maximize its performance measure, given the
evidence provided by the percept sequence and whatever built-in knowledge the agent has.

EXAMPLE:
A vacuum-cleaner world with just two locations.

Agent function
Partial tabulation of a simple agent function for the vacuum-cleaner world.

Agent program
function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
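
A direct Python transcription of this agent program (assuming the two locations are named 'A' and 'B', as in the example):

def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-location vacuum world.
    percept is a (location, status) pair, e.g. ('A', 'Dirty')."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:                            # location == 'B'
        return 'Left'

# e.g. reflex_vacuum_agent(('A', 'Clean')) returns 'Right'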
10.Surely animals, humans and computers cannot be intelligent - they can do only what
their constituent atoms are told to do by the laws of physics. Is the latter statement true,
and does it imply the former? [Nov/Dec '12] (or) Elaborate the approaches for AI with
example.

Artificial intelligence is defined as "the study and design of intelligent agents", where an
intelligent agent is a system that perceives its environment and takes actions which maximize its
chances of success.

The definitions of AI are categorized into four approaches:

i. Systems that act like humans
The art of creating machines that perform functions that require intelligence when
performed by people.

ii. Systems that think like humans
The exciting new effort to make computers think - machines with minds, in the full
and literal sense.

iii. Systems that think rationally
The study of mental faculties through the use of computer models.

iv. Systems that act rationally
Computational intelligence is the study of the design of intelligent agents.
The four approaches in more detail are as follows:

i. ACTING HUMANLY: The Turing Test approach
The Turing test was designed to provide a satisfactory operational definition of
intelligence. Turing defined intelligent behavior as the ability to achieve human-level
performance in all cognitive tasks, sufficient to fool an interrogator. If the machine succeeds in
this, the machine is acting humanly.

The computer would need to possess the following capabilities:
 Natural language processing - to enable it to communicate successfully in English.
 Knowledge representation - to store what it knows or hears.
 Automated reasoning - to use the stored information to answer questions and to draw
new conclusions.
 Machine learning - to adapt to new circumstances and to detect and extrapolate patterns.

Turing's test deliberately avoided direct physical interaction between the interrogator and
the computer, because physical simulation of a person is unnecessary for intelligence. However,
the so-called total Turing Test includes a video signal so that the interrogator can test the
subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical
objects "through the hatch." To pass the total Turing Test, the computer will also need:
 Computer vision - to perceive objects.
 Robotics - to manipulate objects and move about.

ii. THINKING HUMANLY: The cognitive modelling approach
To construct a program that thinks like a human, we require knowledge about the actual
workings of the human mind. If the program's input/output and timing behavior match the
corresponding human behavior, that is evidence that the program's mechanism may be working
like a human mind.
Example: the 'General Problem Solver' (GPS) - its designers were concerned with comparing
the trace of its reasoning steps to traces of human subjects solving the same problems, rather
than merely with getting right answers. The interdisciplinary field of cognitive science brings
together computer models from AI and experimental techniques from psychology to try to
construct precise and testable theories of the workings of the human mind.

iii. THINKING RATIONALLY: The laws of thought approach
"Right thinking", i.e. irrefutable reasoning processes, provided patterns for argument
structures that always yield correct conclusions when given correct premises.
For example:
Socrates is a man.
All men are mortal.
Therefore Socrates is mortal.
These laws of thought were supposed to govern the operation of the mind; their study initiated
a field called logic.
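
Such a syllogism is mechanical enough to run as code; the sketch below hard-codes the single rule man(x) -> mortal(x) purely for illustration.

# Facts are (predicate, subject) pairs; the one rule is hard-coded.
facts = {('man', 'Socrates')}

def forward_chain(facts):
    """Repeatedly apply the rule man(x) -> mortal(x) until no new
    facts can be derived (a tiny instance of logical inference)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pred, subj in list(derived):
            if pred == 'man' and ('mortal', subj) not in derived:
                derived.add(('mortal', subj))
                changed = True
    return derived

print(forward_chain(facts))  # contains ('mortal', 'Socrates')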

iv. ACTING RATIONALLY: The rational agent approach
An agent is something that acts. Computer agents are not mere programs, but are
expected to have the following attributes as well:
(a) operating under autonomous control,
(b) perceiving their environment,
(c) persisting over a prolonged time period, and
(d) adapting to change.

A rational agent is one that acts so as to achieve the best outcome.

The study of rational agents has two advantages:
 Correct inference is selected and applied.
 It is more amenable to scientific development than approaches based on human
behavior or human thought.