
Department Of Computer Science and Engineering

CS2351 ARTIFICIAL INTELLIGENCE


2-MARK QUESTIONS

UNIT 1: PROBLEM SOLVING


1. What is artificial intelligence?
The exciting new effort to make computers think: machines with minds, in the full and literal
sense. Artificial intelligence systematizes and automates intellectual tasks and is therefore potentially
relevant to any sphere of human intellectual activity.
2. List down the characteristics of intelligent agent.
Intelligent Agents are autonomous because they function without requiring that the
Console or Management Server be running.
An Agent that services a database can run when the database is down, allowing the Agent to start up
or shut down the database.
The Intelligent Agents can independently perform administrative job tasks at any time, without
active participation by the administrator.
Similarly, the Agents can autonomously detect and react to events, allowing them to monitor the
system and execute a fixit job to correct problems without the intervention of the administrator.
3. What do you mean by local maxima with respect to search technique?
A local maximum is a peak in the state-space landscape that is higher than each of its
neighboring states but lower than the global maximum. A hill-climbing search can reach a local
maximum and then make no further progress, because no single move improves the current state,
even though a better solution exists elsewhere in the state space.
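The idea can be illustrated with a minimal Python sketch of greedy hill climbing over a hypothetical 1-D landscape (the `landscape` values are invented for this example):

```python
def hill_climb(values, start):
    """Greedy hill climbing over a 1-D landscape: repeatedly move to the
    best neighbour until no neighbour improves on the current position."""
    i = start
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(values)]
        best = max(neighbours, key=lambda j: values[j])
        if values[best] <= values[i]:
            return i  # no neighbour is better: a (possibly local) maximum
        i = best

# Invented landscape: local maximum at index 2, global maximum at index 6.
landscape = [1, 3, 5, 2, 4, 7, 9, 6]
```

Starting at index 0 the search halts at index 2 (value 5), a local maximum, and never reaches the global maximum at index 6 (value 9).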
4. Define Turing test.
The Turing test proposed by Alan Turing was designed to provide a satisfactory operational
definition of intelligence. Turing defined intelligent behavior as the ability to achieve human- level
performance in all cognitive tasks, sufficient to fool an interrogator.
5. List the capabilities that a computer should possess for conducting a Turing Test?
The capabilities that a computer should possess for conducting a Turing Test are,
Natural Language Processing;
Knowledge Representation;
Automated Reasoning;
Machine Learning.
6. Define an agent.
An agent is anything that can be viewed as perceiving its environment through Sensors and
acting upon the environment through effectors.

7. Define rational agent.


A rational agent is one that does the right thing. Here the right thing is the one that will
cause the agent to be more successful. That leaves us with the problem of deciding how and
when to evaluate the agent's success.
8. Define an Omniscient agent.
An omniscient agent knows the actual outcome of its action and can act
accordingly; but omniscience is impossible in reality.
9. What are the factors that a rational agent should depend on at any given time?
The factors that a rational agent should depend on at any given time are,
The performance measure that defines the criterion of success;
The agent's prior knowledge of the environment;
The actions that the agent can perform;
The agent's percept sequence to date.
10. List the measures to determine an agent's behavior.
The measures to determine an agent's behavior are,
Performance measure,
Rationality,
11. List the various types of agent programs.
The various types of agent programs are,
Simple reflex agent program;
Agent that keep track of the world;
Goal based agent program;
Utility based agent program.
12. List the components of a learning agent?
The components of a learning agent are,
Learning element;
Performance element;
Critic;
Problem generator.
13. List out some of the applications of Artificial Intelligence.
Some of the applications of Artificial Intelligence are,
Autonomous planning and scheduling;
Game playing;
Autonomous control;
Diagnosis;
Logistics planning;
Robotics.
14. What is depth-limited search?
Depth-limited search avoids the pitfalls of DFS by imposing a cutoff on the maximum
depth of a path. This cutoff can be implemented by a special depth-limited search algorithm
or by using the general search algorithm with operators that keep track of the depth.
15. Define breadth-first search.
The breadth-first search strategy is a simple strategy in which the root-node is
expanded first, and then all the successors of the root node are expanded, then their
successors and so on. It is implemented using TREE-SEARCH with a fringe that
is a FIFO queue, ensuring that the nodes that are visited first will be expanded first.
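The strategy can be sketched in Python with a FIFO queue as the fringe (the graph used here is invented for illustration):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Expand the shallowest unexpanded node first, using a FIFO queue
    as the fringe; returns a shortest path from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None

# Invented graph for illustration.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Because the fringe is FIFO, all depth-1 nodes are expanded before any depth-2 node, so the first path found to the goal is a shallowest one.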
16. Define problem formulation.
Problem formulation is the process of deciding what actions and
states to consider for a goal that has been developed in the first step of
problem solving.

17. List the four components of a problem?


The four components of a problem are,
An initial state;
Actions;
Goal test;
Path cost.
18. Define iterative deepening search.
Iterative deepening is a strategy that sidesteps the issue of choosing the best depth
limit by trying all possible depth limits: first depth 0, then depth 1, then depth 2, and so on.
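The strategy can be sketched in Python as repeated depth-limited searches with increasing limits (the example graph is invented for illustration):

```python
def depth_limited_search(graph, node, goal, limit):
    """Depth-first search that refuses to go deeper than `limit` edges."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for succ in graph.get(node, []):
        result = depth_limited_search(graph, succ, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=10):
    """Try depth limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result
    return None

# Invented graph for illustration.
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
```

Each pass repeats the work of the previous one, but since most nodes of a tree are at the deepest level, the overhead is modest and the strategy keeps the optimality of BFS with the memory use of DFS.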
19. Mention the criteria for the evaluation of a search strategy.
The criteria for the evaluation of a search strategy are,
1.Completeness;
2.Time complexity;
3.Space complexity;
4.Optimality.
20. Define the term percept.
The term percept refers to the agent's perceptual inputs at any given instant. An
agent's percept sequence is the complete history of everything that the agent has perceived.
21. Define Constraint Satisfaction Problem (CSP).
A constraint satisfaction problem is a special kind of problem that satisfies some
additional structural properties beyond the basic requirements for problems in general. In a
CSP, the states are defined by the values of a set of variables and the goal test specifies a
set of constraints that the values must obey.
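A CSP can be solved by backtracking over variable assignments; this minimal Python sketch uses a hypothetical three-region map-colouring problem (regions and domains invented for illustration):

```python
def backtrack(assignment, variables, domains, neighbours):
    """Assign values one variable at a time; the constraint is that no
    two neighbouring variables may share a value."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbours[var]):
            result = backtrack({**assignment, var: value},
                               variables, domains, neighbours)
            if result is not None:
                return result
    return None

# Invented three-region map colouring: all regions are mutual neighbours.
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
solution = backtrack({}, variables, domains, neighbours)
```

The goal test (a complete, consistent assignment) is checked incrementally, so inconsistent partial assignments are pruned early.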
22. List some of the uninformed search techniques.
Some of the uninformed search techniques are,
Breadth-First Search(BFS);
Depth-First Search(DFS);
Uniform Cost Search;
Depth Limited Search;
Iterative Deepening Search;
Bidirectional Search.
23.Define Artificial Intelligence formulated by Haugeland.
The exciting new effort to make computers think: machines with minds, in
the full and literal sense.
24. Define Artificial Intelligence in terms of human performance.
The art of creating machines that perform functions that require intelligence
when performed by people.
25. Define Artificial Intelligence in terms of rational acting.
A field of study that seeks to explain and emulate intelligent behaviors in terms
of computational processes (Schalkoff). The branch of computer science that is concerned
with the automation of intelligent behavior (Luger & Stubblefield).
26. Define Artificial Intelligence in terms of rational thinking.
The study of mental faculties through the use of computational models (Charniak &
McDermott). The study of the computations that make it possible to perceive,
reason and act (Winston).
27. What does Turing test mean?
The Turing test proposed by Alan Turing was designed to provide a satisfactory
operational definition of intelligence. Turing defined intelligent behavior as the ability to
achieve human-level performance in all cognitive tasks, sufficient to fool an interrogator.
28. Define Architecture.
The action program will run on some sort of computing device which is called as

Architecture.
29. List the various type of agent program.
Simple reflex agent program.
Agent that keep track of the world.
Goal based agent program.
Utility based agent program.
30. State the various properties of environment.
Accessible Vs Inaccessible:
If an agent's sensing apparatus gives it access to the complete state of the
environment, then we can say the environment is accessible to the agent.
Deterministic Vs Non deterministic:
If the next state of the environment is completely determined by the
current state and the actions selected by the agent, then the environment
is deterministic.
Episodic Vs Non episodic:
In this, the agent's experience is divided into episodes. Each episode
consists of the agent perceiving and then acting. The quality of the action depends
on the episode itself, because subsequent episodes do not depend
on what actions occurred in previous episodes.
Discrete Vs Continuous:
If there is a limited number of distinct, clearly defined percepts and actions, we
say that the environment is discrete.
31. What are the phases involved in designing a problem solving agent?
The three phases are:
1. Problem formulation;
2. Searching for a solution;
3. Execution.
32. What are the different types of problem?
1. Single state problem,
2. Multiple state problem,
3. Contingency problem,
4. Exploration problem.
33. Define problem.
A problem is really a collection of information that the agent will use to
decide what to do.
34. List the basic elements that are to be included in a problem definition.
Initial state, operator, successor function, state space, path, goal test,
path cost.
35. What is called materialism?
An alternative to dualism is materialism, which holds that the entire world
operates according to physical law. Mental processes and consciousness are therefore part of
the physical world.
36. Define an agent.
An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon the environment through effectors.
37. What is an agent function?
An agent's behavior is described by the agent function that maps any given percept
sequence to an action.
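A simple agent function can be sketched directly in Python; the two-square vacuum world used here is a standard illustrative example, and the rule set is an invented assumption:

```python
def reflex_vacuum_agent(percept):
    """An agent function: maps the current percept directly to an action.
    The percept is a (location, status) pair for a two-square world."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"
```

This particular function is a simple reflex agent: the action depends only on the current percept, not on the percept history.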

38. Differentiate an agent function and an agent program.

Agent Function: an abstract mathematical description.

Agent Program: a concrete implementation, running on the agent
architecture.

39. What is a task environment? How it is specified?


Task environments are essentially the "problems" to which rational agents are the "solutions."
A Task environment is specified using PEAS (Performance, Environment, Actuators,
Sensors) description.

40. What can AI do today?
AI systems today can perform autonomous planning and scheduling, game playing,
autonomous control, diagnosis, logistics planning and robotics.

UNIT 2: LOGICAL REASONING

1. What factors determine the selection of forward or backward


reasoning approach for an AI problem?
A search procedure must find a path between initial and goal states. There are
two directions in which a search process could proceed: reasoning forward from the initial
states, or reasoning backward from the goal states. To reason forward, the initial states form
the root of the search tree; generate the next level of the tree by finding all the rules whose
left sides match the root node, and use their right sides to generate the siblings; repeat the
process until a configuration that matches the goal state is generated.
2. What are the limitations in using propositional logic to represent the knowledge
base?
Consider the sentences: "Al is small", "Ted is small", "Someone is small", "Everyone is
small", "No-one is not small". Propositional logic would represent each of these as a different
proposition, so the five sentences might be represented by P, Q, R, S and T; the logical
relationships among them (for example, that "Al is small" entails "Someone is small") cannot
be captured.
3. What are Logical agents?
Logical agents apply inference to a knowledge base to derive
new information and make decisions.
4.What is first-order logic?
The first-order logic is sufficiently expressive to represent a good deal
of our commonsense knowledge. It also either subsumes or forms the foundation of
many other representation languages.
5. What is a symbol?
The basic syntactic elements of first-order logic are the
symbols, which stand for objects, relations and functions.
6. What are the types of Quantifiers?
The types of Quantifiers are,
Universal Quantifiers;
Existential Quantifiers.
7. What are the three kinds of symbols?
The three kinds of symbols are,
Constant symbols standing for objects;
Predicate symbols standing for relations;
Function symbols standing for functions.
8. What is Logic?
Logic consists of:
A formal system for describing states of affairs, consisting
of (a) syntax and (b) semantics;
A proof theory: a set of rules for deducing the entailments of a set
of sentences.
9. Define a Sentence?
Each individual representation of facts is called a sentence.
The sentences are expressed in a language called a knowledge
representation language.
10. Define a Proof.
A sequence of applications of inference rules is called a proof. Finding a proof is
exactly like finding a solution to a search problem. If the successor function is defined to
generate all possible applications of inference rules, then the search algorithms can be
applied to find proofs.
11. Define Interpretation
Interpretation specifies exactly which objects, relations and
functions are referred to by the constant, predicate, and function symbols.
12. What are the three levels in describing a knowledge-based agent?
The three levels in describing a knowledge-based agent are,

Logical level;
Implementation level;
Knowledge level or epistemological level.
13. Define Syntax?
Syntax is the arrangement of words. The syntax of a knowledge representation
language describes the possible configurations that can constitute sentences, that is,
how to make sentences.
14. Define Semantics
The semantics of the language defines the truth of each sentence with respect to
each possible world. With this semantics, when a particular configuration exists within an
agent, the agent believes the corresponding sentence.
15. Define Modus Ponens rule in Propositional logic?
The standard pattern of inference that can be applied to derive chains of
conclusions that lead to the desired goal is said to be the Modus Ponens rule:
from α ⇒ β and α, infer β.
16. Define a knowledge Base.
The knowledge base is the central component of a knowledge-based agent; it is
described as a set of representations of facts about the world.
17. Define an inference procedure.
An inference procedure reports whether or not a sentence α is entailed by a
knowledge base, given the knowledge base and the sentence α. An inference procedure i
can be described by the sentences that it can derive. If i can derive α from the knowledge
base KB, we write KB ⊢i α, read "α is derived from KB by i" or "i derives α from KB".
18. What are the basic Components of propositional logic?
The basic Components of propositional logic
Logical Constants (True, False)
Propositional symbols (P, Q)
Logical Connectives (¬, ∧, ∨, ⇒, ⇔)
19. Define AND Elimination rule in propositional logic.
The AND-elimination rule states that from a given conjunction it is
possible to infer any of the conjuncts:
α1 ∧ α2 ∧ … ∧ αn
------------------
αi
20. Define AND-Introduction rule in propositional logic.
The AND-Introduction rule states that from a list of sentences
we can infer their conjunction:
α1, α2, …, αn
------------------
α1 ∧ α2 ∧ … ∧ αn
21. What is forward chaining?
A deduction to reach a conclusion from a set of antecedents is called forward
chaining. In other words, the system starts from a set of facts, and a set of rules, and tries to
find the way of using these rules and facts to deduce a conclusion or come up with a
suitable course of action.
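Forward chaining over propositional if-then rules can be sketched in Python (the facts and rules below are invented for illustration):

```python
def forward_chain(facts, rules):
    """Repeatedly fire every rule whose antecedents are all known facts,
    adding its conclusion, until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            if set(antecedents) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Invented rule base: P and Q imply R; R implies S.
rules = [(["P", "Q"], "R"), (["R"], "S")]
derived = forward_chain({"P", "Q"}, rules)
```

The process is data-driven: starting from the known facts P and Q, it derives R, and from R it derives S.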
22. What is backward chaining?
In backward chaining, we start from a conclusion, which is the hypothesis we
wish
to prove, and we aim to show how that conclusion can be reached from the rules and facts in the
database.
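Backward chaining can be sketched in Python as a goal-driven recursive proof search; the facts and rules below are invented for illustration, and the sketch assumes the rule base is acyclic:

```python
def backward_chain(goal, facts, rules):
    """Try to prove `goal`: it is either a known fact, or the conclusion
    of some rule all of whose antecedents can themselves be proved.
    Assumes the rule base is acyclic."""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(a, facts, rules) for a in antecedents)
               for antecedents, conclusion in rules)

# Invented rule base and facts.
facts = {"P", "Q"}
rules = [(["P", "Q"], "R"), (["R"], "S")]
```

To prove S, the search works backward: S needs R, R needs P and Q, and both are known facts.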

23. Define Logic.
Logic consists of:
i. A formal system for describing states of affairs, consisting
of (a) syntax and (b) semantics;
ii. A proof theory: a set of rules for deducing the entailments of a set of sentences.
24. What is entailment?
The relation between sentences is called entailment. The formal
definition of entailment is this:
KB ⊨ α if and only if, in every model in which KB is true, α is also true;
in other words, if KB is true then α must also be true.
25. What is truth preserving?
An inference algorithm that derives only entailed sentences is called
sound or truth preserving.
26.Define a Proof
A sequence of applications of inference rules is called a proof. Finding
a proof is exactly like finding a solution to a search problem. If the successor function is
defined to generate all possible applications of inference rules, then the
search algorithms can be applied to find proofs.
Define a complete inference procedure.
A complete inference procedure can derive any sentence that is entailed by the
knowledge base.
27.Define Interpretation
Interpretation specifies exactly which objects, relations and functions
are referred to by the constant, predicate, and function symbols.
28.Define Validity of a sentence
A sentence is valid or necessarily true if and only if it is true under all
possible interpretations in all possible worlds.
29. Define satisfiability of a sentence.
A sentence is satisfiable if and only if there is some interpretation in
some world for which it is true.
30.Define true sentence
A sentence is true under a particular interpretation if the state of
affairs it represents is the case.
31. What are the basic components of propositional logic?
i. Logical constants (True, False);
ii. Propositional symbols (P, Q);
iii. Logical connectives (¬, ∧, ∨, ⇒, ⇔).
32. Define OR-Introduction rule in propositional logic.
The OR-Introduction rule states that from a sentence, we can infer its disjunction
with anything:
αi
------------------
α1 ∨ α2 ∨ … ∨ αn
33.How Knowledge is represented?
A variety of ways of representing knowledge (facts) have been exploited in AI programs.
Facts : truths in some relevant world. These are things we want to represent.
34.What is propositional logic?
It is a way of representing knowledge. In logic and mathematics, a propositional calculus or
logic is a formal system in which formulae representing propositions can be formed by
combining atomic propositions using logical connectives.

35.What are the elements of propositional logic?


Simple sentences which are true or false are basic propositions. Larger and more
complex sentences are constructed from basic propositions by combining them with connectives.
Thus propositions and connectives are the basic elements of propositional logic. Though there
are many connectives, we are going to use the following five basic connectives here:
NOT, AND, OR, IF_THEN (or IMPLY), IF_AND_ONLY_IF.
They are also denoted by the symbols ¬, ∧, ∨, ⇒ and ⇔, respectively.

36.What is inference?
Inference is deriving new sentences from old.
37. What is Modus Ponens?
There are standard patterns of inference that can be applied to derive chains of
conclusions that lead to the desired goal. These patterns of inference are called inference
rules. The best-known rule is called Modus Ponens and is written as follows:
α ⇒ β,    α
------------------
β

38. What is entailment?
Propositions tell about the notion of truth, and this can be applied to logical reasoning. We
can have logical entailment between sentences: a sentence follows logically from another
sentence. In mathematical notation we write KB ⊨ α to mean that the sentence α is
entailed by the knowledge base KB.

39.What are knowledge based agents?


The central component of a knowledge-based agent is its knowledge base, or KB.
Informally, a knowledge base is a set of sentences. Each sentence is expressed in a language
called a knowledge representation language and represents some assertion about the world.

40.What are quantifiers?


There is a need to express properties of entire collections of objects, instead of
enumerating the objects by name. Quantifiers let us do this.
FOL contains two standard quantifiers called
a) Universal (∀) and
b) Existential (∃)

UNIT 3: PLANNING
1. Define partial order planner.
Basic idea: search in plan space and use least commitment, when possible.
Plan space search: the search space is the set of partial plans.
A plan is a tuple <A, O, B>:
A: set of actions, of the form (ai : Opj);
O: set of orderings, of the form (ai < aj);
B: set of bindings, of the form (vi = C), (vi ≠ C), (vi = vj) or (vi ≠ vj).
Initial plan: <{start, finish}, {start < finish}, {}>
start has no preconditions; its effects are the initial state.
finish has no effects; its preconditions are the goals.
2. What are the differences and similarities between problem solving and
planning?
We put these two ideas together to build planning agents. At the most abstract level, the task
of planning is the same as problem solving. Planning can be viewed as a type of problem
solving in which the agent uses beliefs about actions and their consequences to search for a
solution over the more abstract space of plans, rather than over the space of situations.
3. Define state-space search.
The most straightforward approach is to use state-space search. Because the
descriptions of actions in a planning problem specify both preconditions and effects, it is
possible to search in either direction: either forward from the initial state or backward
from the goal.
4. What are the types of state-space search?
The types of state-space search are,
Forward state space search;
Backward state space search.
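Forward state-space search can be sketched in Python with hypothetical STRIPS-style actions (the `shop`/`cook` actions and their precondition, add and delete sets are invented for illustration):

```python
from collections import deque

# Invented STRIPS-style actions: name -> (preconditions, add list, delete list).
actions = {
    "shop": ({"have_money"}, {"have_food"}, {"have_money"}),
    "cook": ({"have_food"}, {"meal"}, {"have_food"}),
}

def forward_plan(initial, goal, actions, max_len=4):
    """Breadth-first search forward from the initial state: apply any
    action whose preconditions hold until all goal literals are true."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        if len(plan) >= max_len:
            continue
        for name, (pre, add, delete) in actions.items():
            if pre <= state:
                new_state = frozenset((state - delete) | add)
                if new_state not in visited:
                    visited.add(new_state)
                    frontier.append((new_state, plan + [name]))
    return None

plan = forward_plan({"have_money"}, {"meal"}, actions)
```

Because both preconditions and effects are specified, the same action descriptions could equally drive a backward (regression) search from the goal.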
5.What is Partial-Order Planning?
A partial-order planner maintains a set of actions that make up the steps of the plan. These
are taken from the set of actions in the planning problem. The empty plan contains just the
Start and Finish actions. Start has no preconditions and has as its effect all the literals in
the initial state of the planning problem. Finish has no effects and has as its preconditions
the goal literals of the planning problem.

6. What are the advantages and disadvantages of Partial-Order Planning?


Advantage: Partial-order planning has a clear advantage in
being able to decompose problems into sub problems.
Disadvantage: it does not represent states directly, so it is harder to
estimate how far a partial-order plan is from achieving a goal.
7. What is a Planning graph?
A Planning graph consists of a sequence of levels that correspond to time steps in
the plan where level 0 is the initial state. Each level contains a set of literals and a set of
actions.
8. What is Conditional planning?
Conditional planning is also known as contingency planning; it deals with
incomplete information by constructing a conditional plan that accounts for
each possible situation or contingency that could arise.
9. What is action monitoring?
The process of checking the preconditions of each action as it is executed, rather
than checking the preconditions of the entire remaining plan. This is called action monitoring.
10. Define planning.
Planning can be viewed as a type of problem solving in which the agent
uses beliefs about actions and their consequences to search for a
solution.
11. List the features of an ideal planner?
The features of an ideal planner are,
The planner should be able to represent the states, goals
and actions;
The planner should be able to add new actions at any time;
The planner should be able to use Divide and Conquer method
for solving very big problems.
12. What are the components that are needed for representing an action?
The components that are needed for representing an action are,
Action description;
Precondition;
Effect.
13. Define a solution.
A solution is defined as a plan that an agent can execute and that
guarantees the achievement of goal.
14. Define complete plan and consistent plan.
A complete plan is one in which every precondition of every step is
achieved by some other step.
A consistent plan is one in which there are no contradictions in the
ordering or binding constraints.
15. What are Forward state-space search and Backward state-space search?
Forward state-space search: It searches forward from the
initial situation to the goal situation.
Backward state-space search: It searches backward from the
goal situation to the initial situation.
16. What is Induction heuristics? What are the different types of induction heuristics?
Induction heuristics is a method which enables procedures to learn
descriptions from positive and negative examples.
There are two different types of induction heuristics. They are:
Require-link heuristics.
Forbid-link heuristics.

17. Define Reification.


The process of treating something abstract and difficult to talk about as
though it were concrete and easy to talk about is called reification.

18. What is reified link?


The elevation of a link to the status of a describable node is a kind of
reification. When a link is so elevated then it is said to be a reified link.
19. Define action monitoring.
The process of checking the preconditions of each action as it is executed, rather
than checking the preconditions of the entire remaining plan. This is called action monitoring.
20. What is meant by Execution monitoring?
Execution monitoring is related to conditional planning in the following way. An
agent that builds a plan and then executes it while watching for errors is, in a sense, taking
into account the possible conditions that constitute execution errors.
21. What are the components that are needed for representing a plan?
The components that are needed for representing a plan are,
A set of plan steps;
A set of ordering constraints;
A set of variable binding constraints;
A set of causal links.
22.What are the different types of planning?
The different types of planning are as follows:
Situation space planning;
Progressive planning;
Regressive planning;
Partial order planning;
Fully instantiated planning.


23. What are the ways in which incomplete and incorrect information can be handled
in planning?
They can be handled with the help of two planning agents namely,
Conditional planning agent.
Replanning agent.
24. Define a solution.
A solution is defined as a plan that an agent can execute and that
guarantees the achievement of the goal.
25. Define a complete plan.
A complete plan is one in which every precondition of every step is
achieved by some other step.
26. Define a consistent plan.
A consistent plan is one in which there are no contradictions in the
ordering or binding constraints.
27. Define conditional planning.
Conditional planning is a way in which the incompleteness of information is
incorporated, in terms of adding a conditional step, which involves if-then rules.
28.Give the classification of learning process.
The learning process can be classified as:
Process which is based on coupling new information to previously acquired
knowledge
a. Learning by analyzing differences.
b. Learning by managing models.
c. Learning by correcting mistakes.
d. Learning by explaining experience.

29. What is Induction heuristics?

Induction heuristics is a method which enables procedures to learn
descriptions from positive and negative examples.
30. What are the different types of induction heuristics?
There are two different types of induction heuristics. They are:
i. Require-link heuristics.
ii. Forbid-link heuristics.

31. What are the principles that are followed by any learning procedure?
The wait and see principle;
The no-altering principle;
Martin's law.
32.State the wait and see principle.
The principle states that, "When there is doubt about what to do, do nothing."
33. State the no altering principle.
The principle states that, "When an object or situation known to be an example fails to
match a general model, create a special-case exception model."
34. State Martins law.
The law states that, You cannot learn anything unless you almost know it already.
35.Define Similarity nets.
Similarity net is an approach for arranging models. A similarity net is a
representation in which nodes denote models, links connect similar models, and
links are tied to difference descriptions.
36.Define Reification.
The process of treating something abstract and difficult to talk about as though it
were concrete and easy to talk about is called reification.
37.What is reified link?
The elevation of a link to the status of a describable node is a kind of reification.
When a link is so elevated then it is said to be a reified link.
38. What is a decision tree?
A decision tree takes as input an object or situation described by a set of attributes and
returns a decision: the predicted output value for the input.

39. Define Goal test. (A.U. MAY/JUNE 2010)

The goal test checks whether the state satisfies the goal of the planning problem.
40. Define Step cost.
The step cost of each action is 1, although it would be easy to allow different
costs for different actions.

UNIT 4: UNCERTAIN KNOWLEDGE AND REASONING


1. List down two applications of temporal probabilistic models.
A suitable way to deal with problems of reasoning over time is to identify a temporal causal
model that may effectively explain the patterns observed in the data. Probabilistic models
provide a convenient framework to represent and manage underspecified information; in
particular, the class of Causal Probabilistic Networks (CPN) is widely used. Two typical
applications are speech recognition and the monitoring of a patient's changing state over time.
2. Define Dempster-Shafer theory.
The Dempster-Shafer theory (DST) is a mathematical theory of evidence. It allows one to
combine evidence from different sources and arrive at a degree of belief (represented by a belief
function) that takes into account all the available evidence. The theory was first developed by
Arthur P. Dempster and Glenn Shafer.
3. Define Uncertainty.
Uncertainty means that many of the simplifications that are possible with
deductive inference are no longer valid.
4. State the reasons why first-order logic fails to cope with a domain like medical
diagnosis.
Three reasons:
Laziness: it is too hard to list the complete set of antecedents or consequents
needed to ensure an exceptionless rule.
Theoretical ignorance: medical science has no complete theory for the
domain.
Practical ignorance: even if we know all the rules, we may be uncertain
about a particular case because all the needed information is not known.
5. What is the need for probability theory in uncertainty?
Probability provides a way of summarizing the uncertainty that comes from our
laziness and ignorance. Probability statements are made with respect to the agent's state
of knowledge (the available evidence), not with respect to the actual world.
6. What is the need for utility theory in uncertainty?
Utility theory says that every state has a degree of usefulness, or utility, to an agent,
and that the agent will prefer states with higher utility. We use utility theory to represent and
reason with preferences.
7. What Is Called As Decision Theory?
Preferences, as expressed by utilities, are combined with probabilities in the
general theory of rational decisions called decision theory. Decision theory = probability
theory + utility theory.
8. Define conditional probability?
Once the agent has obtained some evidence concerning the previously unknown
propositions making up the domain, conditional or posterior probabilities, with the notation
P(A|B), are used. It is important to note that P(A|B) can only be used when B is all that is known.
9. When is a probability distribution used?
If we want to have the probabilities of all the possible values of a random
variable, a probability distribution is used.
E.g.:
P(Weather) = (0.7, 0.2, 0.08, 0.02). This type of notation simplifies many equations.
10. What is an atomic event?
An atomic event is an assignment of particular values to all the variables; in
other words, the complete specification of the state of the domain.

11. Define joint probability distribution.


A joint probability distribution completely specifies an agent's probability
assignments to all propositions in the domain. The joint probability distribution
P(X1, X2, ..., Xn) assigns probabilities to all possible atomic events, where
X1, X2, ..., Xn are the variables.
12. What is meant by belief network?
A belief network is a graph in which the following holds:
A set of random variables makes up the nodes of the network;
A set of directed links or arrows connects pairs of nodes;
Each node has a conditional probability table that quantifies the effects of its parents;
The graph has no directed cycles.
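How a belief network defines a joint distribution as a product of conditional probabilities can be sketched in Python with a hypothetical two-node network (Rain → WetGrass, probabilities invented for illustration):

```python
# Invented two-node network: Rain -> WetGrass.
p_rain = 0.3                                  # P(Rain = true)
p_wet_given_rain = {True: 0.9, False: 0.1}    # P(WetGrass = true | Rain)

def joint(rain, wet):
    """P(Rain, WetGrass) factorises as P(Rain) * P(WetGrass | Rain)."""
    pr = p_rain if rain else 1 - p_rain
    pw = p_wet_given_rain[rain] if wet else 1 - p_wet_given_rain[rain]
    return pr * pw

# The four atomic events must sum to 1, as any joint distribution does.
total = sum(joint(r, w) for r in (True, False) for w in (True, False))
```

Each node contributes one factor (its conditional probability table entry), so the full joint distribution never has to be stored explicitly.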
13. What are called as Poly trees?
The algorithms that work only on singly connected networks are known as
polytree algorithms; in a polytree, at most one undirected path exists between any two nodes.
14. What is a multiple connected graph?
A multiple connected graph is one in which two nodes are connected by more
than one path.
15. List the three basic classes of algorithms for evaluating multiply connected graphs.
The three basic classes of algorithms for evaluating multiply connected graphs
Clustering methods;
Conditioning methods;
Stochastic simulation methods.
16. What is called as principle of Maximum Expected Utility (MEU)?
The basic idea is that an agent is rational if and only if it chooses the action that
yields the highest expected utility, averaged over all the possible outcomes of the action. This is
known as MEU
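The MEU principle can be sketched in Python; the actions, probabilities and utilities below are invented for illustration:

```python
def expected_utility(action, outcomes):
    """Average utility over the action's possible outcomes."""
    return sum(p * u for p, u in outcomes[action])

def meu_action(outcomes):
    """Pick the action with Maximum Expected Utility."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Invented (probability, utility) pairs per action: rain vs no rain.
outcomes = {
    "take_umbrella":  [(0.3, 70), (0.7, 80)],
    "leave_umbrella": [(0.3, 0), (0.7, 100)],
}
best = meu_action(outcomes)
```

Taking the umbrella yields expected utility 0.3 x 70 + 0.7 x 80 = 77, versus 70 for leaving it, so a rational agent takes the umbrella even though the no-rain outcome would have been better without it.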
17. What is meant by deterministic nodes?
A deterministic node has its value specified exactly by the values of its parents,
with no uncertainty.
18. What are all the uses of a belief network?
The uses of a belief network are,
Making decisions based on probabilities in the network and on the
agent's utilities;
Deciding which additional evidence variables should be observed in
order to gain useful information;
Performing sensitivity analysis to understand which aspects of the
model have the greatest impact on the probabilities of the query
variables (and therefore must be accurate);
Explaining the results of probabilistic inference to the user.
19. Give the Baye's rule equation
W.K.T P(A ∧ B) = P(A|B) P(B) ---- (1)
P(A ∧ B) = P(B|A) P(A) ---- (2)
Dividing by P(A), we get
P(B|A) = P(A|B) P(B) / P(A)
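A quick numerical check of Bayes' rule (all probability values here are made up; P(A) is obtained by total probability):

```python
# P(B|A) = P(A|B) * P(B) / P(A), with assumed values.
p_a_given_b = 0.9
p_b = 0.2
# Total probability: P(A) = P(A|B)P(B) + P(A|~B)P(~B), assuming P(A|~B) = 0.05.
p_a = p_a_given_b * p_b + 0.05 * 0.8

p_b_given_a = p_a_given_b * p_b / p_a
print(round(p_b_given_a, 3))  # 0.818
```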
20. What is called as Markov Decision problem?
The problem of calculating an optimal policy in an accessible, stochastic
environment with a known transition model is called a Markov Decision Problem(MDP).
21. Define Dynamic Belief Network.
A belief network with one node for each state and sensor variable for each
time step is called a Dynamic Belief Network (DBN).

22. Define Dynamic Decision Network?


A decision network is obtained by adding utility nodes, decision nodes for action in
DBN. DDN calculates the expected utility of each decision sequence.
23.Define the term utility?
The term utility is used in the sense of "the quality of being useful". The utility
of a state is relative to the agent whose preferences the utility function is
supposed to represent.
24.What is meant by belief network?
A belief network is a graph in which the following holds:
A set of random variables makes up the nodes of the network.
A set of directed links or arrows connects pairs of nodes.
Each node has a conditional probability table that quantifies the effects of its parents.
The graph has no directed cycles.
25. What are the ways in which one can understand the semantics of a belief
network?
There are two ways: to see the network as a representation of the joint probability
distribution, or to view it as an encoding of a collection of conditional independence statements.
26.What is the basic task of a probabilistic inference?
The basic task is to reason in terms of prior probabilities of conjunctions, but for the most part,
we will use conditional probabilities as a vehicle for probabilistic
inference.
27. What are called as Poly trees?
Singly connected networks, in which there is at most one undirected path
between any two nodes, are known as polytrees. Some inference algorithms work only on polytrees.
28. Define causal support
E+X is the causal support for X: the evidence variables "above" X that are
connected to X through its parents.

29.Define decision theory .


Decision theory=probability theory+utility theory
30.What is random sampling?
Random sampling is a process for Bayesian networks that generates events
from a network that has no evidence associated with it.

31.Define rejection sampling.


Rejection sampling is a method for producing samples from a hard-to-sample
distribution given an easy-to-sample distribution: samples that are inconsistent with the evidence are rejected.
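A minimal sketch of rejection sampling (the two-node Rain/WetGrass network and its probabilities are assumed for illustration): sample from the prior, discard samples that contradict the evidence, and estimate the query from the survivors.

```python
import random

random.seed(0)

def prior_sample():
    # Assumed network: Rain -> WetGrass, with made-up CPT values.
    rain = random.random() < 0.2
    wet = random.random() < (0.9 if rain else 0.1)
    return rain, wet

# Keep only samples consistent with the evidence WetGrass = true.
kept = [rain for rain, wet in (prior_sample() for _ in range(100_000)) if wet]
estimate = sum(kept) / len(kept)  # approximates P(Rain=true | WetGrass=true)
print(round(estimate, 2))  # close to the exact answer 0.18 / 0.26 ≈ 0.69
```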
32.Define clustering
Clustering joins individual nodes of the network to form cluster nodes in such a way that
the resulting network is a polytree.
33.What is likelihood weighting?
Likelihood weighting generates only events that are consistent with the evidence,
and it produces consistent probability estimates.
34.What is product rule?
P(a ∧ b) = P(a|b) P(b).
35.What are the two ways to understand the semantics of Bayesian networks?
1.Representing the full joint distribution.
2.Method for constructing Bayesian network.
36.Construct a parse tree for the sentence Every agent smells a Wumpus
37.What are the models regarding burglary?
Diagnostic model, causal model
38.What are the types of random variables?
1)Boolean random variables 2)discrete random variables 3)continuous
random variables
39. Define Turing machine?
The Turing machine showed that you could use a recursive system to program any
algorithmic task.
40. What are Bayesian networks?
A Bayesian network is a directed acyclic graph in which each node represents a random
variable, each link expresses a direct influence between variables, and each node has a
conditional probability table quantifying the effect of its parents.

UNIT 5: LEARNING
1. Explain the concept of learning from example.
In learning from examples, the learner is given specific input/output pairs and induces a general
function or rule that predicts the output for unseen inputs.

2. What is meant by learning?


Learning is a goal-directed process of a system that improves the knowledge or the
Knowledge representation of the system by exploring experience and prior knowledge.
3. How statistical learning method differs from reinforcement learning method?
Statistical learning methods calculate the probability of each hypothesis given the data and
make predictions on that basis, whereas reinforcement learning is learning what to do (how to
map situations to actions) so as to maximize a numerical reward signal.
4. Define informational equivalence and computational equivalence.
Two representations are informationally equivalent if a transformation from one to the other
causes no loss of information: each can be constructed from the other. They are computationally
equivalent if, in addition, the same inferences can be drawn from each with comparable ease.
5. Define knowledge acquisition and skill refinement.
knowledge acquisition (example: learning physics)learning new
symbolic information coupled with the ability to apply that information in
an effective manner
skill refinement (example: riding a bicycle, playing the piano)occurs at a
subconscious level by virtue of repeated practice
6. What is Explanation-Based Learning?
In Explanation-Based Learning, the background knowledge is sufficient to explain the
hypothesis. The agent does not learn anything factually new from the instance; it extracts
general rules from single examples by explaining the examples and generalizing the explanation.
7. Define Knowledge-Based Inductive Learning.
Knowledge-Based Inductive Learning finds inductive hypotheses that explain
set of observations with the help of background knowledge.
8. What is truth preserving?
An inference algorithm that derives only entailed sentences is called sound or
truth preserving.
9. Define Inductive learning. How the performance of inductive learning algorithms can be
measured?
Learning a function from examples of its inputs and outputs is called inductive
learning.
It is measured by their learning curve, which shows the prediction accuracy as a
function of the number of observed examples.
10. List the advantages of Decision Trees
The advantages of Decision Trees are,
It is one of the simplest and successful forms of learning algorithm.
It serves as a good introduction to the area of inductive learning and is
easy to implement.
11. What is the function of Decision Trees?
A decision tree takes as input an object or situation described by a set of properties,
and outputs a yes/no decision. Decision trees therefore represent Boolean functions.
12. List some of the practical uses of decision tree learning.
Some of the practical uses of decision tree learning are,

Designing oil platform equipment


Learning to fly
13.What is the task of reinforcement learning?
The task of reinforcement learning is to use rewards to learn a successful agent
function.
14. Define Passive learner and Active learner.
A passive learner watches the world going by, and tries to learn the utility of being
in various states.
An active learner acts using the learned information, and can use its problem
generator to suggest explorations of unknown portions of the environment.
15. State the factors that play a role in the design of a learning system.
The factors that play a role in the design of a learning system are,
Learning element
Performance element
Critic
Problem generator
16. What is memoization?
Memoization is used to speed up programs by saving the results of computation.
The basic idea is to accumulate a database of input/output pairs; when the function is called, it
first checks the database to see if it can avoid solving the problem from scratch.
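The idea can be sketched directly with Python's standard-library cache decorator (the Fibonacci function is just an illustrative workload):

```python
from functools import lru_cache

# The decorator keeps a database of input/output pairs; repeated calls with
# the same argument return the cached result instead of recomputing it.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040 -- fast, because each subproblem is solved only once
```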
17. Define Q-Learning.
The agent learns an action-value function giving the expected utility of taking a
given action in a given state. This is called Q-Learning.
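A minimal sketch of the Q-learning update rule (a generic temporal-difference formulation, not code from these notes; the state names, actions, and learning-rate values are assumed):

```python
from collections import defaultdict

Q = defaultdict(float)   # action-value table, default 0 for unseen pairs
alpha, gamma = 0.5, 0.9  # learning rate and discount factor (assumed values)

def q_update(s, a, r, s2, actions):
    # After taking action a in state s, observing reward r and next state s2,
    # nudge Q(s, a) toward r + gamma * max over a' of Q(s2, a').
    best_next = max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

q_update("s0", "right", 1.0, "s1", ["left", "right"])
print(Q[("s0", "right")])  # 0.5 after one update: 0.5 * (1.0 + 0.9*0 - 0)
```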
18. Define supervised learning & unsupervised learning.
Any situation in which both inputs and outputs of a component can be perceived is
called supervised learning.
Learning when there is no hint at all about the correct outputs is called unsupervised learning.
19. Define Bayesian learning.
Bayesian learning simply calculates the probability of each hypothesis, given the
data, and makes predictions on that basis. That is, the predictions are made by using all the
hypotheses, weighted by their probabilities, rather than by using just a single best
hypothesis.
20. What is utility-based agent?
A utility-based agent learns a utility function on states and uses it to select actions
that maximize the expected outcome utility.
21. What is reinforcement learning?
Reinforcement learning refers to a class of problems in machine learning which
postulate an agent exploring an environment in which the agent perceives its current state and
takes actions. The environment, in return, provides a reward (which can be positive or negative).
Reinforcement learning algorithms attempt to find a policy for maximizing cumulative reward for
the agent over the course of the problem.
22. What is the important task of reinforcement learning?
The important task of reinforcement learning is to use rewards to learn a successful
agent function.
23. What is machine learning?
Machine learning
Unlike a human learning from past experiences, a computer does not have
experiences. Instead, a computer system learns from data, which represent some past
experiences of an application domain.
Objective of machine learning: learn a target function that can be
used to predict the values of a discrete class attribute, e.g.,
approved or not-approved, and high-risk or low-risk.
The task is commonly called supervised learning, classification, or
inductive learning.
24. Define the term utility? (A.U.NOV/DEC 2009)
The term utility is used in the sense of "the quality of being useful". The utility of a state is
relative to the agent whose preferences the utility function is supposed to represent.
25. What is the need for probability theory in uncertainty? (A.U.MAY/JUNE 2009)
Probability provides a way of summarizing the uncertainty that comes
from our laziness and ignorance. Probability statements are made with
respect to the evidence available, not with respect to the actual world.
26. What is the need for utility theory in uncertainty? (A.U.MAY/JUN 2009)
Utility theory says that every state has a degree of usefulness, or utility,
to an agent, and that the agent will prefer states with higher utility. An agent
can use utility theory to represent and reason with preferences.
27. Define conditional probability?
Once the agent has obtained some evidence concerning the previously
unknown propositions making up the domain, conditional or posterior
probabilities, written P(A|B), are used. P(A|B) can only be used when
all that is known is B.
28. Define probability distribution: (A.U. MAY/JUNE ,2012)
If we want to have probabilities of all the possible values of a random variable
probability distribution is used.
Eg.
P(weather) = (0.7,0.2,0.08,0.02). This type of notations simplifies many equations.
29 . Define joint probability distribution
This completely specifies an agent's probability assignments to all propositions in
the domain. The joint probability distribution P(X1, X2, ..., Xn) assigns probabilities to all
possible atomic events, where X1, X2, ..., Xn are the variables.
30. Give the Baye's rule equation(A.U APR/ MAY2011)
W.K.T P(A ∧ B) = P(A|B) P(B) ---- (1)
P(A ∧ B) = P(B|A) P(A) ---- (2)
Dividing by P(A), we get
P(B|A) = P(A|B) P(B) / P(A)
31. Define Supervised learning .
Supervised learning is a machine learning technique for learning a function from
training data. The training data consist of pairs of input objects (typically vectors),
and desired outputs.
32. Compare Supervised learning and unsupervised learning..
Supervised vs. unsupervised learning
Supervised learning: classification is seen as supervised learning from examples.
The data (observations, measurements, etc.) are labeled with pre-defined classes;
it is as if a teacher gives the classes (supervision).
Unsupervised learning: the class labels of the training data are not given; the
learner must discover structure in the data on its own, e.g., by clustering.

33. What is a decision tree? (A.U. MAY/JUNE ,2012)


A decision tree takes as input an object or situation described by a set of
attributes and returns a decision, the predicted output value for the input.
A decision tree reaches its decision by performing a sequence of tests.
Example: HOW TO manuals (for car repair).
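A decision tree is just a sequence of attribute tests; as a hedged sketch, a tiny hand-written tree for a restaurant "WillWait" decision (the attributes and thresholds are invented for illustration) can be expressed as nested conditionals:

```python
def will_wait(patrons: str, wait_estimate: int) -> bool:
    # First test: how many patrons are in the restaurant?
    if patrons == "None":
        return False
    if patrons == "Some":
        return True
    # patrons == "Full": fall through to the next attribute test.
    return wait_estimate <= 10

print(will_wait("Some", 0))   # True
print(will_wait("Full", 30))  # False
```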
34. Give an example of decision tree induction from examples. (A.U APR/MAY 2013)
An example for a Boolean decision tree consists of a vector of input attributes, X,
and a single Boolean output value y. A set of examples (X1, y1), ..., (Xn, yn) is
shown in Figure 18.3. The positive examples are the ones in which the goal
WillWait is true (X1, X3, ...); the negative examples are the ones in which it is
false (X2, X5, ...). The complete set of examples is called the training set.

35. How the performance of learning algorithm is assessed?


Assessing the performance of the learning algorithm
A learning algorithm is good if it produces hypotheses that do a good job of predicting the
classifications of unseen examples.
36. What is Ensemble Learning? (A.U APR/ MAY2011), (A.U APR/MAY 2013)
Ensemble means a group producing a single effect.
Ensemble learning is the process by which multiple models, such as classifiers or experts, are
strategically generated and combined to solve a particular computational intelligence problem.
Ensemble learning is primarily used to improve the (classification, prediction, function
approximation, etc.) performance of a model, or to reduce the likelihood of an unfortunate
selection of a poor one.
37. What is Explanation based Learning?(A.U APR/MAY 2010), (A.U NOV/DEC 2010)
Explanation-based learning is a method for extracting general rules from individual
observations. The basic idea behind EBL is to first construct an explanation of the observation
using prior knowledge.
EXAMPLE: A caveman toasting a lizard on the end of a pointed stick.

38. What is Relevance Based Learning? (A.U. MAY/JUNE ,2012)


Relevance-based learning (RBL) uses prior knowledge in the form of
determinations to identify the relevant attributes, thereby generating a reduced hypothesis
space and speeding up learning. RBL also allows deductive generalizations from single examples.
39.What is reinforcement learning? (A.U NOV/DEC 2011)
Reinforcement learning refers to a class of problems in machine learning which
postulate an agent exploring an environment in which the agent perceives its current
state and takes actions.
40. What is active reinforcement learning?
In passive reinforcement learning, the utilities of states and the transition
probabilities are learned under a fixed policy and can be plugged into the Bellman
equations. An active reinforcement learner must additionally decide which actions to
take: it learns a complete transition model and must balance exploration against exploitation.

SIXTEEN MARKS
UNIT 1: PROBLEM SOLVING
1. Explain Agents in detail.
An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.
Percept
We use the term percept to refer to the agent's perceptual inputs at any given instant.
Percept Sequence
An agent's percept sequence is the complete history of everything the agent has ever
perceived.
Agent function
Mathematically speaking, we say that an agent's behavior is described by the
agent function
Properties of task environments
Fully observable vs partially observable;
Deterministic vs stochastic;
Episodic vs sequential;
Static vs dynamic;
Discrete vs continuous;
Single agent vs multi agent.
2. Explain uninformed search strategies.
Uninformed search strategies have no additional information about states beyond
that provided in the problem definition. Strategies that know whether one non-goal state is
more promising than another are called informed search or heuristic search strategies.
There are five uninformed search strategies, as given below.
given below.
Breadth-first search;
Uniform-cost search;
Depth-first search;

Depth-limited search;
Iterative deepening search.
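The first of these, breadth-first search, can be sketched on an explicit graph (the state space below is made up for illustration): expand the shallowest unexpanded node first, using a FIFO queue.

```python
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])  # FIFO queue of paths from the start node
    explored = set()
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path              # shallowest solution found
        if node in explored:
            continue
        explored.add(node)
        for nxt in graph.get(node, []):
            frontier.append(path + [nxt])
    return None                      # no path exists

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A", "D"))  # ['A', 'B', 'D']
```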
3. Explain informed search strategies.
Informed search strategy is one that uses problem-specific knowledge beyond the
definition of the problem itself. It can find solutions more efficiently than uninformed strategy.
Best-first search;
Heuristic function;
Greedy-Best First Search(GBFS);
A* search;
Memory Bounded Heuristic Search.
4. Explain CSP in detail.
A constraint satisfaction problem is a special kind of problem that satisfies some
additional structural properties beyond the basic requirements for problems in general. In a
CSP, the states are defined by the values of a set of variables, and the goal test specifies a set
of constraints that the values must obey.
CSP can be viewed as a standard search problem as follows:
Initial state: the empty assignment {}, in which all variables
are unassigned.
Successor function: a value can be assigned to any unassigned
variable, provided that it does not conflict with previously assigned
variables.
Goal test: the current assignment is complete.
Path cost: a constant cost for every step.
Varieties of CSPs:
Discrete variables.
CSPs with continuous domains.
Varieties of constraints :
Unary constraints involve a single variable.
Binary constraints involve pairs of variables.
Higher order constraints involve 3 or more variables.
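The CSP-as-search formulation above can be sketched as simple backtracking search (the three-region map-colouring problem and its binary inequality constraints are assumed for illustration):

```python
def backtrack(assignment, variables, domains, conflicts):
    if len(assignment) == len(variables):
        return assignment                    # goal test: assignment is complete
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Successor function: only assign values consistent with prior choices.
        if not any(conflicts(var, value, v2, val2)
                   for v2, val2 in assignment.items()):
            result = backtrack({**assignment, var: value},
                               variables, domains, conflicts)
            if result is not None:
                return result
    return None                              # dead end: backtrack

# Binary constraints: neighbouring regions must receive different colours.
neighbours = {("WA", "NT"), ("NT", "SA"), ("SA", "WA")}
def conflicts(v1, val1, v2, val2):
    return val1 == val2 and ((v1, v2) in neighbours or (v2, v1) in neighbours)

regions = ["WA", "NT", "SA"]
solution = backtrack({}, regions,
                     {v: ["red", "green", "blue"] for v in regions}, conflicts)
print(solution)  # e.g. {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```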

UNIT 2: LOGICAL REASONING


1. Explain Reasoning patterns in propositional logic with example.
Modus ponens;
AND elimination;
OR elimination;
AND introduction;
Resolution;
Unit resolution;
Double negation.
2. Explain in detail about knowledge engineering process in FOL.
Identify the task;
Assemble the relevant knowledge;
Decide on a vocabulary of predicates, constraints and functions;
Encode general knowledge about the domain;
Encode a description of a specific problem;
Pose queries;
Debug the knowledgebase.

3. Discuss in detail about unification and lifting.


Unification;
Generalized modus ponens;
Lifting.
4. Explain in detail about forward and backward chaining with example.
Example;
Efficient forward chaining;
Incremental forward chaining;
Backward chaining;
Logic programming.
5. What is resolution? Explain it in detail.
Definition;
Conjunctive Normal Form;
Resolution interference rule.

UNIT 3: PLANNING
1. Explain partial order planning.
Partial-Order Planning;
A partial-order planning example;
Heuristics for partial-order planning.
2. Discuss about planning graphs in detail.
Planning graphs for heuristic estimation;
The GRAPHPLAN algorithm;
Termination of GRAPHPLAN.
3. Explain planning with State-Space Search in detail.
Forward State-Space Search;
Backward State-Space Search;
Heuristics for State-Space Search.
4. Describe Hierarchical Task Network Planning in detail.
Representing action decomposition;
Modifying the planner for decompositions.
5. Explain conditional planning in detail.
Conditional planning in fully observable environment;
Conditional planning in partially observable environment.

UNIT 4: UNCERTAIN KNOWLEDGE AND REASONING


1. Explain Bayesian Network in detail.
Semantics of Bayesian network;
Representing the full joint distribution,
Conditional independence relation in Bayesian networks.
Exact inference in Bayesian networks;
Inference by enumeration,
The variable elimination algorithm,
The complexity of exact inference,
Clustering algorithms.
Approximate inference in Bayesian networks;
Direct sampling methods,
Inference by Markov chain simulation.
2. Discuss about Hidden Markov Model in detail.
The Hidden Markov Model or HMM is a temporal probabilistic model in which
the state of the process is described by a single discrete random variable. The possible
values of the variable are the possible states of the world. The HMM is used in speech recognition.
Simplified matrix algorithms.
3. Explain inference in Temporal Model.
Filtering and prediction;
Smoothing;
Finding the most likely sequence.
4. Discuss in detail about uncertainty.
Acting under uncertainty;
Basic probability notation;
The axioms of probability;
Inference using Full Joint Distribution;
Independence;
Bayes Rule and its use;
The Wumpus World Revisited.
5. Explain Basic Probability Notation and Axioms of Probability in detail.
Basic Probability Notation;
Propositions,
Atomic events,
Prior probability,
Conditional probability.
Axioms of probability;
Using the axioms of probability,
Why the axioms of probability are reasonable.

UNIT 5: LEARNING
1. Explain about Learning Decision Trees in detail.
Decision tree as performance elements;
Expressiveness of decision trees;
Inducing decision trees from examples;
Choosing attribute tests;
Assessing the performance of the learning algorithm;
Noise and over fitting;
Broadening the applicability of decision trees.
2. Explain Explanation Based learning in detail.
In Explanation-Based Learning, the background knowledge is sufficient to explain the
hypothesis. The agent does not learn anything factually new from the instance. It extracts
general rules from single examples by explaining the examples and generalizing the
explanation.
Extracting general rules from examples;
Improving efficiency.
3. Explain Learning with complete data in detail.
Discrete model;
Naive Bayes models;
Continuous model;
Bayesian parameter learning;
Learning Bayes net structures.
4. Explain neural networks in detail.
Units in neural networks;
Network structures;
Single layer feed-forward neural networks;

Multilayer feed-forward neural networks;


Learning neural network structures.
5. Explain Reinforcement learning in detail.
Reinforcement learning refers to a class of problems in machine learning which postulate an
agent exploring an environment in which the agent perceives its current state and takes
actions. The environment, in return, provides a reward (which can be positive or
negative). Reinforcement learning algorithms attempt to find a policy for maximizing
cumulative reward for the agent over the course of the problem.
Passive reinforcement learning;
Active reinforcement learning;
Generalization in reinforcement learning;
Policy search.
