TWO MARKS:
1. Mention how the search strategies are evaluated. [Nov/Dec '15] (OR) Evaluate the performance of a problem-solving method based on the depth-first search algorithm. [Dec '10]
Search strategies are evaluated along the following dimensions:
Completeness – whether the strategy is guaranteed to find a solution when one exists.
Time complexity – how long it takes to find a solution, e.g. O(b^(d+1)) for breadth-first search.
Space complexity – how much memory is needed, e.g. O(b^(d+1)) for breadth-first search.
Optimality – whether the strategy finds the lowest-cost solution.
2. Define admissible and consistent heuristics. [Nov/Dec '15]
Consistent heuristics:
In path-finding problems in artificial intelligence, a heuristic function is said to be consistent, or monotone, if its estimate is always less than or equal to the estimated distance from any neighboring vertex to the goal, plus the step cost of reaching that neighbor.
Admissible heuristics:
A heuristic is admissible if it never overestimates the cost of reaching the goal. Every consistent heuristic is also admissible.
4. How much knowledge would be required by a perfect program for the problem of playing chess? Assume that unlimited computing power is available. [May/June '16]
Only the rules for determining legal moves and a simple control mechanism that implements an appropriate search procedure. Additional knowledge about such things as good strategy and tactics could, of course, help considerably by constraining the search and speeding up the execution of the program.
7. How will you measure problem-solving performance? [Apr/May '10] (OR) List the criteria to measure the performance of a search strategy. [May/June '14]
The algorithm's performance can be measured in four ways:
Completeness – is the algorithm guaranteed to find a solution when there is one?
Optimality – does the strategy find the optimal solution?
Time complexity – how long does it take to find a solution?
Space complexity – how much memory is needed to perform the search?
11. What are software agents? [April/May '13]
A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and sending
network packets.
12. What are the four components to define a problem? Define them. [May/June '13]
The four components of a problem are:
An initial state – the state in which the agent starts.
A description of possible actions – the actions which are available to the agent.
Goal test – the test that determines whether a given state is a goal (final) state.
Path cost – the function that assigns a numeric cost (value) to each path. The problem-solving agent is expected to choose a cost function that reflects its own performance measure.
What is the advantage of heuristic functions? [Apr/May '08, Nov/Dec '09] (OR) State the significance of using heuristic functions. [Nov/Dec '11, Nov/Dec '12]
Heuristics are used by informed search algorithms such as greedy best-first search and A* to choose the best node to explore.
Heuristic functions reduce the search cost.
Optimal solutions for the problem can be derived.
18. What is artificial intelligence? [April/May '03] [April/May '04] [April/May '08] [Nov/Dec '08]
Artificial intelligence is the exciting new effort to make computers think: machines with minds, in the full and literal sense. Artificial intelligence systematizes and automates intellectual tasks and is therefore potentially relevant to any sphere of human intellectual activity.
An agent that services a database can run when the database is down, allowing the agent to start up or shut down the database.
Intelligent agents can independently perform administrative job tasks at any time, without active participation by the administrator.
Agents can autonomously detect and react to events, allowing them to monitor the system and execute a fix-it job to correct problems without the intervention of the administrator.
20. What do you mean by local maxima with respect to search techniques? [April/May '11]
A local maximum is a peak that is higher than each of its neighbouring states but lower than the global maximum, i.e. a local maximum is a small hill on the surface whose peak is not as high as the main peak (the optimal solution). Hill climbing fails to find the optimal solution when it encounters a local maximum: any small move from there makes things worse (at least temporarily), so all further search effort is wasted. It acts like a dead end.
21. State the reason why hill climbing often gets stuck. [Apr/May '10]
A local maximum is the state where the hill-climbing algorithm is sure to get stuck. A local maximum is a peak that is higher than each of its neighbouring states but lower than the global maximum. At a local maximum all further search effort is wasted; it acts like a dead end.
23. How does one characterize the quality of a heuristic? (OR) Define branching factor with example. [May/June '09]
One way to characterize the quality of a heuristic is the effective branching factor b*. A well-designed heuristic would have a value of b* close to 1. If the total number of nodes generated by A* for a particular problem is N, and the solution depth is d, then b* is the branching factor that a uniform tree of depth d would have to have in order to contain N + 1 nodes. Thus,
N + 1 = 1 + b* + (b*)^2 + … + (b*)^d
For example, if the depth d = 5 and N = 52, then b* = 1.91.
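The value of b* can be checked numerically by solving the polynomial above for b. A minimal sketch using simple bisection (the function name and tolerance are illustrative):

```python
def effective_branching_factor(n_generated, depth, tol=1e-6):
    """Solve sum_{i=1..d} b^i = N for b by bisection,
    i.e. find the b* with N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d."""
    def total(b):
        return sum(b ** i for i in range(1, depth + 1))
    lo, hi = 1.0, float(n_generated)   # total(1) = d <= N is assumed
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_generated:
            lo = mid                    # need a larger branching factor
        else:
            hi = mid
    return (lo + hi) / 2
```

For d = 5 and N = 52 this gives a value close to the 1.91 quoted in the example.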
26. List the capabilities that a computer should possess for conducting a Turing test. (OR) What are the requirements for a computer to pass a Turing test? [April/May '03]
The capabilities that a computer should possess to pass a Turing test are:
Natural language processing;
Knowledge representation;
Automated reasoning;
Machine learning.
27. List the measures to determine an agent's behavior. (OR) What is the role of the agent program? [April/May '04]
The measures to determine an agent's behavior are:
Performance measure,
Rationality,
Omniscience,
Learning and Autonomy.
28. What is the use of online search agents in unknown environments? [Nov/Dec '07]
Online search agents are well suited to the following domains:
Dynamic or semi-dynamic domains
Stochastic domains
Online search is a necessary idea for an exploration problem, where the states and actions are unknown to the agent.
29. Differentiate blind search and heuristic search. (OR) Define any two search strategies. [April/May '03] (OR) What is the difference between informed and uninformed search? [Nov/Dec '08]
Uninformed or Blind Search:
No information about the number of steps or the path cost from the current state to the goal state.
Less effective.
Only the information given in the problem definition is used to solve the problem.
e.g. (search strategies) Breadth-first search, Uniform-cost search, Depth-first search, Depth-limited search, Iterative deepening search.

Informed or Heuristic Search:
Uses problem-specific knowledge beyond the definition of the problem itself.
More effective.
Additional information can be added as assumptions to solve the problem.
e.g. (search strategies) Best-first search, Greedy search, A* search.
31. What are the factors that a rational agent should depend on at any given time?
The factors that a rational agent should depend on at any given time are:
The performance measure that defines the criterion of success;
The agent's prior knowledge of the environment;
The actions that the agent can perform;
The agent's percept sequence to date.
16 MARKS:
{ (i, j) : i = 0, 1, 2, 3, 4 and j = 0, 1, 2, 3 }
'i' represents the number of litres of water in the 4-litre jug and 'j' represents the number of litres of water in the 3-litre jug. The initial state is (0, 0), i.e. no water in either jug. The goal state is (2, n) for any value of 'n'.
To solve this we have to make some assumptions not mentioned in the problem.
The various operators (production rules) available to solve this problem may be stated as given in the following figure.
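Since the production-rule figure is not reproduced here, the operators can instead be written directly in code. A breadth-first sketch of the state space above, assuming the standard fill, empty, and pour rules:

```python
from collections import deque

def water_jug_path(cap4=4, cap3=3, target=2):
    """BFS over states (i, j): i litres in the 4-litre jug, j in the
    3-litre jug. Goal: any state (target, n). Operators: fill, empty, pour."""
    def successors(i, j):
        yield (cap4, j)                              # fill the 4-litre jug
        yield (i, cap3)                              # fill the 3-litre jug
        yield (0, j)                                 # empty the 4-litre jug
        yield (i, 0)                                 # empty the 3-litre jug
        t = min(i, cap3 - j); yield (i - t, j + t)   # pour 4-litre into 3-litre
        t = min(j, cap4 - i); yield (i + t, j - t)   # pour 3-litre into 4-litre
    start = (0, 0)
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == target:                       # goal test: (2, n)
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(*state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None
```

Running `water_jug_path()` returns a shortest sequence of states from (0, 0) to a state with 2 litres in the 4-litre jug.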
2. What is an agent? Discuss the different types of agent programs. [Nov/Dec '14] (OR) Explain the structure of a simple reflex agent and a utility-based agent with an example. [Nov/Dec '11] (OR) Outline the components and functions of any one of the basic kinds of agent programs. [Nov/Dec '08, May/June '16] (OR) Explain goal-based and model-based reflex agents with an example. [Apr/May '12] (OR) Define agents. Specify the PAGE descriptions for intelligent agent design with examples and explain the basic types of agents. [Nov/Dec '15]
Agent programs
The agent programs all have the same skeleton: they take the current percept as input from the
sensors and return an action to the actuators.
Notice the difference between the agent program, which takes the current percept as
input, and the agent function, which takes the entire percept history.
The agent program takes just the current percept as input because nothing more is
available from the environment; if the agent's actions depend on the entire percept sequence, the
agent will have to remember the percepts.
Types Of Agent:
• Table-driven agents use a percept sequence/action table in memory to find the next action.
They are implemented by a (large) lookup table.
• Simple reflex agents are based on condition-action rules, implemented with an appropriate
production system. They are stateless devices which do not have memory of past world states.
• Agents with memory have internal state, which is used to keep track of past states of the
world.
• Agents with goals are agents that, in addition to state information, have goal information that
describes desirable situations. Agents of this kind take future events into consideration.
• Utility-based agents base their decisions on classic axiomatic utility theory in order to act
rationally.
Characteristics (of simple reflex agents):
They only work if the environment is fully observable.
Lacking history, they easily get stuck in infinite loops.
One solution is to randomize their actions.
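A minimal sketch of a simple reflex agent for the two-square vacuum world, written as condition-action rules (the percept format and the action names here are illustrative assumptions):

```python
def simple_reflex_vacuum_agent(percept):
    """Condition-action rules for a two-square vacuum world.
    percept = (location, status); the agent has no memory of past percepts."""
    location, status = percept
    if status == "Dirty":      # rule 1: dirty square -> clean it
        return "Suck"
    if location == "A":        # rule 2: clean at A -> move right
        return "Right"
    return "Left"              # rule 3: clean at B -> move left
```

Note that the agent's choice depends only on the current percept, which is exactly the statelessness described above.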
First, we need some information about how the world evolves independently of the agent.
Second, we need some information about how the agent's own actions affect the world.
This knowledge about "how the world works" – whether implemented in simple Boolean circuits or in complete scientific theories – is called a model of the world. An agent that uses such a model is called a model-based agent.
3) Goal-based agents
Knowing about the current state of the environment is not always enough to decide what to do. In other words, as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable – for example, being at the passenger's destination. The agent program can combine this with information about the results of possible actions (the same information as was used to update internal state in the reflex agent) in order to choose actions that achieve the goal.
4) Utility-based agents
Goals alone are not really enough to generate high-quality behavior in most
environments. Goals just provide a crude binary distinction between "happy" and "unhappy"
states, whereas a more general performance measure should allow a comparison of different
world states according to exactly how happy they would make the agent if they could be
achieved. Because "happy" does not sound very scientific, the customary terminology is to say
that if one world state is preferred to another, then it has higher utility for the agent.
A model-based, utility-based agent.
It uses a model of the world, along with a utility function that measures its preferences
among states of the world. Then it chooses the action that leads to the best expected utility,
where expected utility is computed by averaging over all possible outcome states, weighted by
the probability of the outcome.
Learning Agents
All agents can improve their performance through learning.
A learning agent can be divided into four conceptual components i.e.
Learning agent
Performance element
Critic
Problem generator
The most important distinction is between the learning element, which is responsible for
making improvements, and the performance element, which is responsible for selecting
external actions.
The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions.
The learning element uses feedback from the critic on how the agent is doing and
determines how the performance element should be modified to do better in the future.
The last component of the learning agent is the problem generator. It is responsible for
suggesting actions that will lead to new and informative experiences.
But if the agent is willing to explore a little, it might discover much better actions for the
long run.
The problem generator’s job is to suggest these exploratory actions. This is what
scientists do when they carry out experiments.
(i) Breadth-First Search (OR) Give an example of a problem for which BFS would work better than DFS. [Nov/Dec '08]
Example: Route Finding problem
Properties of BFS:
Completeness: Yes.
Time: b + b^2 + b^3 + … + b^d + (b^(d+1) − b) = O(b^(d+1))
Space: O(b^(d+1)) (keeps every node in memory)
Optimality: Yes, provided the path cost is a non-decreasing function of the depth of the node.
Advantages: Guaranteed to find a solution at the shallowest depth level.
Disadvantages: Suitable only for small problem instances, because of its memory requirements.
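A minimal BFS sketch for a route-finding style problem (the adjacency-list graph format and node names below are illustrative assumptions):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """BFS on an adjacency-list graph; returns the shallowest path to goal.
    The frontier is a FIFO queue of paths, so shallow nodes expand first."""
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in explored:
                explored.add(neighbour)
                frontier.append(path + [neighbour])
    return None                 # no solution
```

On the graph {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}, `breadth_first_search(graph, 'A', 'D')` finds the shallowest path `['A', 'B', 'D']`.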
Uniform-Cost Search :
Breadth-first search is optimal when all step costs are equal, because it always expands the shallowest unexpanded node. By a simple extension, we can find an algorithm that is optimal with any step-cost function: uniform-cost search expands the node n with the lowest path cost.
If all step costs are equal, this is identical to breadth-first search.
Uniform-cost search does not care about the number of steps a path has, but only about
their total cost.
Example: Route Finding problem
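Uniform-cost search can be sketched with a priority queue ordered by path cost g(n); the weighted graph format below is an illustrative assumption:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Always expand the frontier node with the lowest path cost g(n).
    graph: {node: [(neighbour, step_cost), ...]}."""
    frontier = [(0, start, [start])]          # (g, node, path)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue                          # stale queue entry, skip it
        for neighbour, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g, neighbour, path + [neighbour]))
    return None
```

Because only total cost matters, a two-step path of cost 2 beats a one-step path of cost 5, exactly as described above.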
Depth-First-Search:
Depth-first search always expands the deepest node in the current fringe of the search tree. The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors. As those nodes are expanded, they are dropped from the fringe, so the search "backs up" to the next shallowest node that still has unexplored successors.
This strategy can be implemented by TREE-SEARCH with a last-in-first-out (LIFO) queue, also known as a stack.
Algorithm:
function DFS(problem) return a solution or failure
TREE-SEARCH(problem, LIFO-QUEUE())
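The TREE-SEARCH with a LIFO queue above can be sketched with an explicit stack (the adjacency-list graph format is an illustrative assumption):

```python
def depth_first_search(graph, start, goal):
    """DFS with an explicit LIFO stack; always expands the deepest node."""
    stack = [[start]]
    while stack:
        path = stack.pop()                    # last-in, first-out
        node = path[-1]
        if node == goal:
            return path
        # push in reverse so the first-listed neighbour is expanded first
        for neighbour in reversed(graph.get(node, [])):
            if neighbour not in path:         # avoid cycles along this path
                stack.append(path + [neighbour])
    return None
```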
EXAMPLE:
Backtracking Search:
A variant of depth-first search called backtracking search uses less memory and only one
successor is generated at a time rather than all successors.; Only O(m) memory is needed
rather than O(bm).
Advantages:
If more than one solution exists, or the number of levels is high, then DFS is best because exploration is done in only a small portion of the whole space.
Drawback of Depth-first-search:
The drawback of depth-first search is that it can make a wrong choice and get stuck going down a very long (or even infinite) path when a different choice would lead to a solution near the root of the search tree. For example, depth-first search will explore the entire left subtree even if node C is a goal node.
Depth-Limited-Search:
Definition: A cutoff (maximum depth level) is introduced in this search technique to overcome the disadvantage of depth-first search. The cutoff value depends on the number of states.
The problem of unbounded trees can be alleviated by supplying depth-first search with a predetermined depth limit l, i.e., nodes at depth l are treated as if they have no successors. This approach is called depth-limited search. The depth limit solves the infinite-path problem.
Recursive implementation of Depth-limited-search:
function DEPTH-LIMITED-SEARCH(problem, limit)
returns a solution, or failure/cutoff
return RECURSIVE-DLS(MAKE-NODE(INITIAL-STATE [problem]), problem, limit)
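The body of RECURSIVE-DLS is not reproduced above. A sketch of the recursive implementation under the usual conventions (it returns a path on success, the marker "cutoff" when the depth limit was hit, or None for outright failure; the graph format is an illustrative assumption):

```python
def depth_limited_search(graph, node, goal, limit):
    """Recursive DLS: returns a path, 'cutoff', or None (failure)."""
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                       # depth limit reached
    cutoff_occurred = False
    for neighbour in graph.get(node, []):
        result = depth_limited_search(graph, neighbour, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True            # a deeper solution may exist
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None
```

Distinguishing "cutoff" from failure is what lets iterative deepening know whether raising the limit could still help.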
Example:
Properties of Depth limited Search:
Completeness: Yes, provided the depth limit l is at least d, the depth of the shallowest goal.
Time complexity: O(b^l)
Space complexity: O(bl), since only the nodes along one expanded direction have to be stored.
Optimality: No, because it is not guaranteed to find the shortest solution first.
Bidirectional Search:
Definition: Bidirectional search is a strategy that simultaneously searches in both directions, i.e. forward from the initial state and backward from the goal, and stops when the two searches meet in the middle.
The idea behind bidirectional search is to run two simultaneous searches – one forward
from the initial state and the other backward from the goal.
Example:
Properties of Bidirectional Search:
Completeness: Yes, guaranteed to find the solution if it exists.
Time and space complexity: The forward and backward searches, done at the same time, lead to the solution in O(2b^(d/2)) = O(b^(d/2)) steps, because each search only goes halfway. If the two searches are to meet at all, the nodes of at least one of them must all be retained in memory, which requires O(b^(d/2)) space.
Optimality: Yes, because the expansion of states is done in order in both directions.
Advantages: Time and space complexity are reduced.
Disadvantages: Checking whether the frontiers of the two searches meet adds complexity to the search technique. In the backward search, calculating predecessors is a difficult task. If more than one goal state exists, explicit multiple-state searches are required.
Iterative deepening search finds the best depth limit. It does this by gradually increasing the limit – first 0, then 1, then 2, and so on – until a goal is found. This will occur when the depth limit reaches d, the depth of the shallowest goal node.
Iterative deepening combines the benefits of depth-first and breadth-first search.
Like depth-first search, its memory requirements are modest: O(bd) to be precise.
Like breadth-first search, it is complete when the branching factor is finite and optimal when the path cost is a non-decreasing function of the depth of the node.
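The limit-increasing loop above can be sketched as follows (adjacency-list graph assumed; `max_depth` is an illustrative safety bound):

```python
def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Run depth-limited search with limit 0, 1, 2, ... until the goal is found."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None                       # depth limit reached
        for neighbour in graph.get(node, []):
            if neighbour not in path:         # avoid cycles along this path
                found = dls(neighbour, limit - 1, path + [neighbour])
                if found:
                    return found
        return None
    for limit in range(max_depth + 1):        # gradually increase the limit
        result = dls(start, limit, [start])
        if result:
            return result
    return None
```

Because the first limit at which the goal appears is d, the path returned is a shallowest one, mirroring breadth-first behaviour with depth-first memory use.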
Example:
The goal state G can be reached from A in four ways. They are:
(OR) Write in detail about any two informed search strategies.[May/June ’09 & ‘15] (16
Marks) (OR) Explain the A* search and give the proof of optimality of A* [Apr/May ‘10] (16
Marks).
The root node is expanded first, then all the nodes generated by the root node are expanded, and so on.
(i) Expanding the node closest to the goal state, using the estimated cost as the evaluation function, is called greedy best-first search.
(ii) Expanding the node on the least-cost solution path, using estimated cost plus actual cost as the evaluation function, is called A* search.
Definition: A best-first search that uses h(n) to select the next node to expand is called greedy search. Greedy best-first search tries to expand the node that is closest to the goal.
Evaluation function: the estimated cost to reach the goal state, denoted by h(n):
f(n) = h(n)
Algorithm:
Completeness: No, because it can start down an infinite path and never return to try other possibilities.
Optimality: No, because the node selected for expansion at the next level depends only on the estimated cost and not on the actual cost.
Greedy search resembles depth-first search, since it follows one path towards the goal state; backtracking occurs when it finds a dead end.
Example:
From the given graph and estimated costs, the goal state is identified as B from A. Apply the evaluation function h(n) to find a path from A to B.
(iii) S is selected for the next level of expansion, since h(n) is minimum from S when compared to T and Z.
(iv) F is selected for the next level of expansion, since h(n) is minimum from F.
From F, goal state B is reached. Therefore the path from A to B using greedy search is A - S - F - B = 450 (i.e. 140 + 99 + 211).
A* search and the proof of optimality of A*.[Nov/Dec ‘07] [May/Jun ‘10]
A* search:
A* search is the most widely-known form of best-first search. Expand the node on the
least cost solution path using estimated cost and actual cost as the evaluation function is called
A* search.
Algorithm:
Example:
From the given graph and estimated costs, the goal state is identified as B from A. Apply the evaluation function f(n) = g(n) + h(n) to find a path from A to B.
(iii) S is selected for the next level of expansion, since f(S) is minimum from S when compared to T and Z.
(iv) R is selected for the next level of expansion, since f(R) is minimum when compared to A and F.
(v) P is selected for the next level of expansion, since f(P) is minimum.
From P, goal state B is reached. Therefore the path from A to B using A* search is A - S - R - P - B = 418 (i.e. 140 + 80 + 97 + 101), so the path cost is less than the greedy search path cost.
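The worked example can be reproduced in code. The edge costs below are the ones used in the text (140, 118, 75, 99, 80, 97, 211, 101); the heuristic values are straight-line distances to B assumed from the standard version of this example:

```python
import heapq

# Graph of the worked example; edge costs as quoted in the text.
GRAPH = {
    'A': [('S', 140), ('T', 118), ('Z', 75)],
    'S': [('A', 140), ('F', 99), ('R', 80)],
    'T': [('A', 118)],
    'Z': [('A', 75)],
    'F': [('S', 99), ('B', 211)],
    'R': [('S', 80), ('P', 97)],
    'P': [('R', 97), ('B', 101)],
    'B': [],
}
# Assumed straight-line-distance heuristic to the goal B.
H = {'A': 366, 'S': 253, 'T': 329, 'Z': 374,
     'F': 176, 'R': 193, 'P': 100, 'B': 0}

def a_star(graph, h, start, goal):
    """Best-first search with f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]       # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(nbr, float('inf')):
                best_g[nbr] = new_g
                heapq.heappush(frontier,
                               (new_g + h[nbr], new_g, nbr, path + [nbr]))
    return None
```

With these values the search expands S, then R, F, P, and returns the cost-418 path A - S - R - P - B, matching the text. Replacing `new_g + h[nbr]` with `h[nbr]` alone turns this into greedy best-first search, which instead returns the cost-450 path through F.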
Properties of A* search:
Time and Space complexity: Time complexity depends on the heuristic function and the
admissible heuristic value. Space complexity remains in the exponential order.
1) Monotonicity:
In the search tree, along any path from the root, the f-cost never decreases. This condition is true for almost all admissible heuristics. A heuristic which satisfies this property is said to be monotonic.
Example:
To reach the node n the f-cost value is 7; from there, to reach the node n', the value of the f-cost has to increase as per the monotonicity property. But the above example does not satisfy this property, so it is called a non-monotonic heuristic. Non-monotonicity can be avoided using the path-max equation:
f(n') = max(f(n), g(n') + h(n'))
2) Optimality:
Proof:
iv. f(G2) = g(G2) + h(G2) = g(G2) > C*
v. Consider a fringe node n on an optimal solution path; since h(n) is an admissible heuristic,
f(n) = g(n) + h(n) <= C*
i. Extend GRAPH-SEARCH to discard the more expensive of any two paths found to the same node.
ii. Ensure that the optimal path to any repeated state is always the first one followed, as is the case with uniform-cost search, i.e. ensure consistency (monotonicity) of h(n).
3) Consistency:
A heuristic h(n) is consistent if, for every node n and every successor n' of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n':
h(n) <= c(n, a, n') + h(n')
Proof:
If h(n) is consistent, then the values of f(n) along any path are non-decreasing: if n' is a successor of n, then g(n') = g(n) + c(n, a, n') for some a, and we have
f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') >= g(n) + h(n) = f(n)
A* is optimally efficient for any given heuristic function. That is, no other optimal algorithm is guaranteed to expand fewer nodes than A*.
4) Completeness:
A* search expands nodes in order of increasing f, i.e. f(n) < f*. This condition may fail when an infinite number of nodes exists with f(n) < f*. An infinite number of nodes can be generated when there is a node with an infinite branching factor, or a path with a finite path cost but an infinite number of nodes along it.
5. Discuss constraint satisfaction problems. [May/June '15 & '14] Give the algorithm for solving CSPs by local search. [May/June '13] (OR) What are Constraint Satisfaction Problems? How can you formulate them as search problems? [May/June '07] [Nov/Dec '13] (OR) Discuss the various issues associated with backtracking search for CSPs. How are they addressed? (16 Marks) [May/Jun '15]
CSP:
A constraint satisfaction problem (or CSP) is defined by a set of variables, X1, X2, . . . , Xn, and a set of constraints, C1, C2, . . . , Cm. Each constraint Ci involves some subset of the variables and specifies the allowable combinations of values for that subset.
Example:
The map defines the variables to be the regions: WA, NT, Q, NSW, V, SA, and T.
The domain of each variable is the set {red, green, blue}.
The constraints require neighboring regions to have distinct colors: {(red, green), (red, blue), (green, red), (green, blue), (blue, red), (blue, green)}.
There are many possible solutions, such as {WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = red}.
Constraint Graph:
It is helpful to visualize a CSP as a constraint graph. The nodes of the graph correspond to
variables of the problem and the arcs correspond to constraints.
Example:
Initial state: the empty assignment { }, in which all variables are unassigned.
Successor function: a value can be assigned to any unassigned variable, provided that
it does not conflict with previously assigned variables.
Goal test: the current assignment is complete.
Path cost: a constant cost (e.g., 1) for every step.
Varieties of CSPs
The simplest kind of CSP involves variables that are discrete and have finite domains. Map-coloring problems are of this kind. If the maximum domain size of any variable in a CSP is d, then the number of possible complete assignments is O(d^n), that is, exponential in the number of variables. Finite-domain CSPs include Boolean CSPs, whose variables can be either true or false.
Infinite domains
Discrete variables can also have infinite domains – for example, the set of integers or the set of strings. For example, when scheduling construction jobs onto a calendar, each job's start date is a variable and the possible values are integer numbers of days from the current date. With infinite domains, it is no longer possible to describe constraints by enumerating all allowed combinations of values.
Varieties of constraints
The simplest type is the unary constraint, which restricts the value of a single variable.
Ex: SA ≠ green
Backtracking search is a depth-first search that chooses values for one variable at a time and backtracks when a variable has no legal values left to assign. The naive branching factor at the top level is nd, because any of d values can be assigned to any of n variables; this generates a tree with n! · d^n leaves, even though there are only d^n possible complete assignments.
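A minimal backtracking sketch for the map-coloring CSP described earlier (the variable order is fixed here for simplicity; a heuristic ordering such as MRV could be substituted):

```python
# Map-colouring CSP from the text: Australian regions, three colours.
NEIGHBOURS = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'],
    'Q': ['NT', 'SA', 'NSW'], 'NSW': ['Q', 'SA', 'V'],
    'V': ['SA', 'NSW'], 'T': [],
}
COLOURS = ['red', 'green', 'blue']

def backtracking_search(assignment=None):
    """Assign one variable at a time; backtrack when no legal value remains."""
    assignment = assignment if assignment is not None else {}
    if len(assignment) == len(NEIGHBOURS):
        return assignment                     # goal test: assignment complete
    var = next(v for v in NEIGHBOURS if v not in assignment)
    for value in COLOURS:
        # value is legal if no already-coloured neighbour uses it
        if all(assignment.get(n) != value for n in NEIGHBOURS[var]):
            assignment[var] = value
            result = backtracking_search(assignment)
            if result:
                return result
            del assignment[var]               # undo the assignment, backtrack
    return None
```

`backtracking_search()` returns a complete colouring in which no two neighbouring regions share a colour.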
Algorithm:
Figure : Part of search tree generated by simple backtracking for the map coloring problem
Variable and value ordering:
The intuitive idea of choosing the variable with the fewest "legal" values is called the minimum remaining values (MRV) heuristic. It has also been called the "most constrained variable" or "fail-first" heuristic, the latter because it picks the variable that is most likely to cause a failure soon, thereby pruning the search tree.
By propagating the constraints earlier in the search, or even before the search has started, we can drastically reduce the search space.
Forward Checking:
One way to make better use of constraints during search is called forward checking.
Whenever a variable X is assigned, the forward checking process looks at each unassigned
variable Y that is connected to X by a constraint and deletes from Y's domain any value that is
inconsistent with the value chosen for X.
The progress of a map-coloring search with forward checking: WA = red is assigned first; then forward checking deletes red from the domains of the neighboring variables NT and SA. After Q = green, green is deleted from the domains of NT, SA and NSW. After V = blue, blue is deleted from the domains of NSW and SA, leaving SA with no legal values.
6. Explain memory-bounded heuristic search in detail. (OR) Explain any two heuristic searches in detail. [Nov/Dec '03] (OR) What is meant by heuristic search? Explain. [May/June '06] (OR) Describe a state space in which iterative deepening search performs much worse than depth-first search. [May/June '12] Give the algorithm for recursive best-first search. [Apr/May '13] [Nov/Dec '14] (OR) Explain the heuristic function with examples. [May/June '16] (6 Marks)
Memory bounded heuristic search
Iterative improvement algorithms keep only a single state in memory, but can get stuck on local maxima. In iterative deepening A* (IDA*), each iteration is a depth-first search just as in regular iterative deepening, but the depth-first search is modified to use an f-cost limit rather than a depth limit. Thus each iteration expands all nodes inside the contour for the current f-cost.
Properties of IDA*:
Disadvantage:
RBFS is a simple recursive algorithm that attempts to mimic the operation of standard
best-first search, but using only linear space.
It is similar to a recursive depth-first search, but rather than continuing indefinitely down the current path, it keeps track of the f-value of the best alternative path available from any ancestor of the current node. If the current node's f-value exceeds this limit, the recursion unwinds back to the alternative path and replaces the f-value of each node along the path with the best f-value of its children.
RBFS remembers the f -value of the best leaf in the forgotten subtree and can therefore
decide whether it's worth re-expanding the subtree at some later time.
RBFS is somewhat more efficient than IDA*, but still suffers from excessive node
regeneration.
Algorithm for RBFS:
Optimality: RBFS is an optimal algorithm if the heuristic function h(n) is admissible.
SMA* (Simplified Memory-bounded A*):
The SMA* algorithm can make use of all available memory to carry out the search.
Advantage:
Disadvantage:
7.(i) Explain about the steepest ascent hill climbing technique. [May/June ‘13] (8 Marks)
(OR)Write the algorithm for generate and test simple Hill Climbing.[May/June '16](10
Marks)
The hill-climbing search algorithm is simply a loop that continually moves in the direction
of increasing value. It terminates when it reaches a "peak" where no neighbor has a higher
value. The algorithm does not maintain a search tree, so the current node data structure need
only record the state and its objective function value. At each step the current node is replaced
by the best neighbor.
Algorithm: Simple Hill Climbing
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until there are no operators left to be applied in the current state:
(a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
(b) Evaluate the new state:
(i) If it is a goal state, then return it and quit.
(ii) If it is not a goal state, but it is better than the current state, then make it the current state.
(iii) If it is not better than the current state, then continue in the loop.
Usually there is a trade-off between the time required to select a move (usually longer for
steepest-ascent hill climbing) and the number of moves required to get to a solution (usually
longer for basic hill climbing) that must be considered when deciding which method will work
better for a particular problem. Both basic and steepest-ascent hill climbing may fail to find a
solution. Either algorithm may terminate not by finding a goal state but by getting to a state from
which no better states can be generated. This will happen if the program has reached either a
local maximum, a plateau, or a ridge.
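The steepest-ascent loop described above can be sketched generically; the state representation, neighbour function and objective value are supplied by the caller (the names and the toy objective below are illustrative assumptions):

```python
def hill_climbing(initial, neighbours, value):
    """Steepest-ascent hill climbing: move to the best neighbour until no
    neighbour improves on the current state (a peak, possibly only local)."""
    current = initial
    while True:
        best = max(neighbours(current), key=value, default=current)
        if value(best) <= value(current):
            return current                    # peak reached: stop here
        current = best
```

For example, maximising the toy objective -(x - 3)^2 over the integers with neighbours x ± 1 climbs from 0 up to the peak at x = 3. Note that on a multi-peaked objective the same loop would stop at whichever local maximum it reaches first.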
8. Explain the process of problem definition with an example. [May/June '14] (8 Marks) (OR) Explain the components necessary to define a problem. [Nov/Dec '14] (8 Marks)
Problems are typically defined in terms of states, and a solution corresponds to a goal state.
Problem solving using search techniques performs two sequences of steps:
(i) Define the problem – the given problem is identified with its required initial and goal states.
(ii) Analyze the problem – the best search technique for the given problem is chosen from the different AI search techniques, one which derives one or more goal states in a minimum number of states.
Problem solving agent is one kind of goal based agent, where the agent decides what to
do by finding sequence of actions that lead to desirable states.
The sequence of steps done by the intelligent agent to maximize the performance
measure:
i) Goal formulation – based on the current situation, this is the first step in problem solving. Actions that result in a failure case can be rejected without further consideration.
ii) Problem formulation – the process of deciding what actions and states to consider; it follows goal formulation.
iii) Search – the process of finding the different possible sequences of actions that lead to a state of known value, and choosing the best one.
iv) Solution – a search algorithm takes a problem as input and returns a solution in the form of an action sequence.
v) Execution phase – if the solution exists, the actions it recommends can be carried out.
The 8-puzzle problem consists of a 3 x 3 board with eight numbered tiles and a blank
space. A tile adjacent to the blank space can slide into the space. The object is to reach a
specified goal state.
States: A state description specifies the location of each of the eight tiles and the blank in
one of the nine squares.
Initial state: Any state can be designated as the initial state.
Successor function: This generates the legal states that result from trying the four
actions (blank moves Left, Right, Up, or Down).
Goal test: This checks whether the state matches the goal configuration (Other goal
configurations are possible.)
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
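The successor function above can be made concrete. This is a sketch under the assumption that a state is a 9-tuple read row by row, with 0 standing for the blank:

```python
# Blank moves: Left/Right shift the blank's index by 1, Up/Down by a full row of 3.
MOVES = {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}

def successors(state):
    """Yield (action, new_state) pairs for every legal blank move in the 8-puzzle."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    results = []
    for action, delta in MOVES.items():
        # Reject moves that would slide the blank off the 3 x 3 board.
        if action == 'Left' and col == 0:
            continue
        if action == 'Right' and col == 2:
            continue
        if action == 'Up' and row == 0:
            continue
        if action == 'Down' and row == 2:
            continue
        target = blank + delta
        new = list(state)
        new[blank], new[target] = new[target], new[blank]
        results.append((action, tuple(new)))
    return results

# A corner blank has 2 legal moves; a center blank has all 4.
print(len(successors((0, 1, 2, 3, 4, 5, 6, 7, 8))))   # 2
print(len(successors((1, 2, 3, 4, 0, 5, 6, 7, 8))))   # 4
```

Since each step costs 1, the path cost of any solution returned by a search over this successor function is simply the number of moves.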
A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth,
and other body parts for actuators.
A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.
A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and sending
network packets.
Percept
We use the term percept to refer to the agent's perceptual inputs at any given instant.
Percept Sequence
An agent's percept sequence is the complete history of everything the agent has ever
perceived.
Agent function
Mathematically speaking, we say that an agent's behavior is described by the agent
function that maps any given percept sequence to an action.
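The agent function can be tabulated explicitly. A minimal sketch, keying the table by a single (location, status) percept rather than a full percept sequence; the entries mirror the vacuum-cleaner world used later in these notes:

```python
# Tabulated agent function: percept -> action.
agent_table = {
    ('A', 'Clean'): 'Right',
    ('A', 'Dirty'): 'Suck',
    ('B', 'Clean'): 'Left',
    ('B', 'Dirty'): 'Suck',
}

def table_driven_agent(percept):
    """Look the current percept up in the table and return the mapped action."""
    return agent_table[percept]

print(table_driven_agent(('A', 'Dirty')))   # Suck
```

In general the table is indexed by the entire percept sequence, which makes it astronomically large; compact agent programs exist precisely to avoid storing it.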
Rational Agent
A rational agent is one that does the right thing; conceptually speaking, every entry in the
table for the agent function is filled out correctly. Obviously, doing the right thing is better than
doing the wrong thing. The right action is the one that will cause the agent to be most
successful.
Performance measures
A performance measure embodies the criterion for success of an agent's behavior.
When an agent is plunked down in an environment, it generates a sequence of actions
according to the percepts it receives. This sequence of actions causes the environment to go
through a sequence of states. If the sequence is desirable, then the agent has performed well.
Rationality
Rational at any given time depends on four things:
The performance measure that defines the criterion of success.
The agent's prior knowledge of the environment.
The actions that the agent can perform.
The agent's percept sequence to date.
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.
EXAMPLE:
A vacuum-cleaner world with just two locations.
Agent function
A partial tabulation of the agent function for this world (each percept is a
[location, status] pair):
[A, Clean] - Right
[A, Dirty] - Suck
[B, Clean] - Left
[B, Dirty] - Suck
Agent Program
function REFLEX-VACUUM-AGENT([location, status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
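The pseudocode translates directly into a runnable sketch; the percept is assumed to be a (location, status) pair as in the agent program above:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-location vacuum world."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    elif location == 'B':
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))   # Suck
print(reflex_vacuum_agent(('B', 'Clean')))   # Left
```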
10."Surely animals, humans and computers cannot be intelligent - they can do only what
their constituent atoms are told to do by the laws of physics." Is the latter statement true,
and does it imply the former? [Nov/Dec ‘12] (or) Elaborate the approaches for AI with
example.
Artificial intelligence defined as "the study and design of intelligent agents", where an
intelligent agent is a system that perceives its environment and takes actions which maximize its
chances of success.
The definitions of AI are categorized into four approaches
i.Systems that act like humans
The art of creating machines that perform functions that require intelligence when
performed by people.
ii.Systems that think like humans
The exciting new effort to make computers think ... machines with minds, in the full
and literal sense.
iii.Systems that think rationally
The study of mental faculties through the use of computer models.
iv.Systems that act rationally
Computational intelligence is the study of the design of intelligent agents.
The four approaches in more detail are as follows
i.ACTING HUMANLY: The Turing Test Approach
The Turing test was designed to provide a satisfactory operational definition of
intelligence. Turing defined intelligent behavior as the ability to achieve human-level
performance in all cognitive tasks, sufficient to fool an interrogator. If the machine succeeds in
this, it is acting humanly.
The computer would need to possess the following capabilities:
Natural language processing
To enable it to communicate successfully in English.
Knowledge representation
To store what it knows or hears
Automated reasoning
To use the stored information to answer questions and to draw new conclusions.
Machine learning
To adapt to new circumstances and to detect and extrapolate patterns
Turing's test deliberately avoided direct physical interaction between the interrogator and
the computer, because physical simulation of a person is unnecessary for intelligence. However,
the so-called total Turing Test includes a video signal so that the interrogator can test the
subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical
objects "through the hatch." To pass the total Turing Test, the computer will need