SHORT ANSWERS
1) What are the applications of AI?
ANS:
Gaming − AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc., where a
machine can think of a large number of possible positions based on heuristic knowledge.
Natural Language Processing − It is possible to interact with a computer that understands
natural language spoken by humans.
Expert Systems − There are some applications which integrate machine, software, and special
information to impart reasoning and advising. They provide explanations and advice to the users.
Vision Systems − These systems understand, interpret, and comprehend visual input on the
computer. For example,
A spying aeroplane takes photographs, which are used to figure out spatial information or a map of
the area.
Police use computer software that can recognize the face of a criminal with the stored portrait
made by a forensic artist.
2. Write about the tic-tac-toe diagram?
ANS:
3.Define intelligent systems?
ANS:
The ability of a system to calculate, reason, perceive relationships and analogies, learn
from experience, store and retrieve information from memory, solve problems,
comprehend complex ideas, use natural language fluently, classify, generalize, and adapt
to new situations.
ANS:
Below are the top 5 programming languages in the field of Artificial Intelligence:
1. PYTHON
Python is focused on DRY (don’t repeat yourself) and RAD (rapid application
development). Python is one of the most widely used programming languages in the AI
field of Artificial Intelligence thanks to its simplicity.
2. LISP
Lisp is one of the oldest (developed in 1958) and most prominent languages, created by Dr. John
McCarthy, who coined the term 'Artificial Intelligence'.
3. JAVA
This language stays alongside Lisp when we talk about development in the AI field, owing to its
portability and large ecosystem of libraries.
4. PROLOG
The features provided by it include efficient pattern matching, tree-based data structuring, and
automatic backtracking.
5. C++
C++ is one of the fastest programming languages. Its ability to talk at the hardware level
enables developers to improve their program execution time. C++ is extremely useful for AI
projects which are time-sensitive. Search engines, for example, can utilize C++ extensively.
--------------------------------------------------------------------------------------------------
UNIT-I
LONG ANSWERS
1. Define Artificial Intelligence. Explain the techniques of A.I. Also describe the
characteristics of Artificial Intelligence?
ANS:
1.Adaptability:
By far the most commonly expressed attribute of intelligences adaptability, which for us means
the speed and scope of adaptability to unforeseen situations, including recognition (of the
unforeseen situation), assessment, proposals(for reacting to it), selection (of an activity), and
execution.
Accurate prediction of effects is even better (and more successful), but we save that one for a
later section.
2. Learning:
Another common attribute of intelligence is learning, which for us is the rate of effective
learning of observations, behavior patterns, facts, tools, methods, etc.
There is an enormous literature on learning in humans and animals, but our interest here is
mainly on the measurements for computing systems that can learn.
3. Predictive Modeling:
An important way to be less surprised at environmental phenomena is predictive
modeling, which for us means accurate modeling and prediction of the relevant external
environment.
This kind of modeling includes the ability to make more effective abstractions (which are treated
below in a later section).
4. Problem Identification:
The best way to respond to problems quickly is to identify them quickly, which requires
speed and clarity of problem identification and formulation.
---------------------------------------------------------------------------------------------------
2.Discuss the areas of application of Artificial Intelligence?
Applications of AI :
AI has been dominant in various fields such as −
Gaming − AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc.,
where a machine can think of a large number of possible positions based on heuristic
knowledge.
Expert Systems − There are some applications which integrate machine, software, and
special information to impart reasoning and advising.
Vision Systems − These systems understand, interpret, and comprehend visual input on
the computer. For example,
o A spying aeroplane takes photographs, which are used to figure out spatial
information or a map of the area.
o Police use computer software that can recognize the face of a criminal with the
stored portrait made by a forensic artist.
Speech Recognition − Some intelligent systems are capable of hearing and comprehending
language in terms of sentences and their meanings while a human talks to them. They can
handle different accents, slang words, noise in the background, change in a human's voice
due to cold, etc.
Handwriting Recognition –
1. Handwriting recognition software reads the text written on paper by a pen or on a screen
by a stylus.
2. It can recognize the shapes of the letters and convert them into editable text.
Intelligent Robots − Robots are able to perform the tasks given by a human.
They have sensors to detect physical data from the real world such as light, heat,
temperature, movement, sound, bump, and pressure.
They have efficient processors, multiple sensors and huge memory, to exhibit
intelligence.
In addition, they are capable of learning from their mistakes and they can adapt to the
new environment.
ANS:
The ability of a system to calculate, reason, perceive relationships and analogies, learn from
experience, store and retrieve information from memory, solve problems, comprehend complex
ideas, use natural language fluently, classify, generalize, and adapt to new situations.
Types of Intelligence:
As described by Howard Gardner, an American developmental psychologist, the Intelligence
comes in multifold −
You can say a machine or a system is artificially intelligent when it is equipped with at least
one, and at most all, of these intelligences.
Intelligence Composed of:
The intelligence is intangible. It is composed of −
Reasoning
Learning
Problem Solving
Perception
Linguistic Intelligence
Reasoning − It is the set of processes that enables us to provide a basis for judgment,
making decisions, and prediction. There are broadly two types − inductive reasoning and
deductive reasoning.
Inductive Reasoning:
o It conducts specific observations to make broad general statements.
o Even if all of the premises are true in a statement, inductive reasoning allows for the
conclusion to be false.
o Example − "Nita is a teacher. Nita is studious. Therefore, all teachers are studious."
Deductive Reasoning:
o It starts with a general statement and examines the possibilities to reach a specific,
logical conclusion.
o If something is true of a class of things in general, it is also true for all members of
that class.
o Example − "All women of age above 60 years are grandmothers. Stalin is 65 years old.
Therefore, Stalin is a grandmother."
Problem Solving − It is the process in which one perceives and tries to arrive at a
desired solution from a present situation by taking some path, which is blocked by
known or unknown hurdles.
Problem solving also includes decision making, which is the process of selecting the
best suitable alternative out of the multiple alternatives available to reach the desired
goal.
Linguistic Intelligence − It is one’s ability to use, comprehend, speak, and write the
verbal and written language. It is important in interpersonal communication.
Perception − Humans can figure out a complete object even if some part of it is missing or
distorted, whereas machines cannot do this correctly.
ANS:
In the real world, knowledge has some unwelcome properties.
An AI technique is a method to organize and use knowledge efficiently, in such a way that it
elevates the speed of execution of the complex program it is equipped with.
• AI Techniques
Even apparently radically different AI systems (such as rule based expert systems
and neural networks) have many common techniques.
Four important ones are:
1. Knowledge Representation: Knowledge needs to be represented somehow – perhaps as
a series of if-then rules, as a frame-based system, as a semantic network, or in the
connection weights of an artificial neural network.
2. Learning: Automatically building up knowledge from the environment – such as
acquiring the rules for a rule-based expert system, or determining the appropriate
connection weights in an artificial neural network.
3. Rule Systems: These could be explicitly built into an expert system by a knowledge
engineer, or implicit in the connection weights learnt by a neural network.
4. Search: This can take many forms – perhaps searching for a sequence of states that
leads quickly to a problem solution, or searching for a good set of connection weights
for a neural network by minimizing a fitness function.
UNIT-II
State space search is a process used in the field of computer science, including artificial
intelligence(AI), in which successive configurations or states of an instance are
considered, with the intention of finding a goal state with a desired property.
Problems are often modelled as a state space, a set of states that a problem can be in.
The set of states forms a graph where two states are connected if there is an operation
that can be performed to transform the first state into the second.
State space search often differs from traditional computer science search methods
because the state space is implicit: the typical state space graph is much too large to
generate and store in memory.
Instead, nodes are generated as they are explored, and typically discarded thereafter.
A solution to a combinatorial search instance may consist of the goal state itself, or of a
path from some initial state to the goal state.
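This on-demand generation of nodes can be sketched with the classic water-jug problem; the jug sizes (4 and 3 litres) and the goal (2 litres in the big jug) are an illustrative assumption, not taken from the text above:

```python
from collections import deque

def successors(state):
    """Generate successor states of a (4-litre, 3-litre) jug state on demand."""
    a, b = state  # litres currently in the 4-litre and 3-litre jugs
    moves = {(4, b), (a, 3), (0, b), (a, 0)}               # fill or empty either jug
    pour = min(a, 3 - b); moves.add((a - pour, b + pour))  # pour big jug into small jug
    pour = min(b, 4 - a); moves.add((a + pour, b - pour))  # pour small jug into big jug
    moves.discard((a, b))                                  # drop the no-op move
    return moves

def state_space_search(start, is_goal):
    """Breadth-first search; nodes are generated as explored, never stored as a full graph."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for s in successors(path[-1]):
            if s not in seen:
                seen.add(s)
                frontier.append(path + [s])
    return None

path = state_space_search((0, 0), lambda s: s[0] == 2)
print(path)  # a shortest sequence of jug states ending with 2 litres in the 4-litre jug
```

Note that only the states actually reached are ever held in memory, which is the point made above about implicit state spaces.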
Heuristic search is an AI search technique that employs heuristics for its moves.
A heuristic is a rule of thumb that probably leads to a solution. Heuristics play a major role
in search strategies because of the exponential nature of most problems.
Heuristics help to reduce the number of alternatives from an exponential number to a
polynomial number.
In Artificial Intelligence , heuristic search has a general meaning, and a more
specialized technical meaning.
In a general sense, the term heuristic is used for any advice that is often effective, but is
not guaranteed to work in every case.
Within the heuristic search architecture, however, the term heuristic usually refers to the
special case of a heuristic evaluation function .
Example: 8-puzzle
1 2 3
8 _ 4
7 6 5
h = 3
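The h value annotated above is a heuristic estimate. A common choice for the 8-puzzle is the number of misplaced tiles, sketched below; the goal layout is assumed to be the 1-2-3 / 8-blank-4 / 7-6-5 arrangement shown above, and the sample state is made up:

```python
# Goal configuration of the 8-puzzle (0 denotes the blank).
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def h_misplaced(state):
    """Heuristic: number of non-blank tiles not in their goal position."""
    return sum(1
               for i in range(3) for j in range(3)
               if state[i][j] != 0 and state[i][j] != GOAL[i][j])

state = ((2, 8, 3),
         (1, 6, 4),
         (7, 0, 5))
print(h_misplaced(state))  # → 4  (tiles 2, 8, 1 and 6 are out of place)
```

Since h never overestimates the number of moves still needed, it is admissible and can safely guide A* or IDA*.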
Just as iterative deepening solved the space problem of breadth-first search, iterative
deepening A* (IDA*) eliminates the memory constraints of the A* search algorithm
without sacrificing solution optimality.
Each iteration of the algorithm is a depth-first search that keeps track of the cost, f(n)
= g(n) + h(n), of each node generated.
As soon as a node is generated whose cost exceeds a threshold for that iteration, its path
is cut off, and the search backtracks before continuing.
The cost threshold is initialized to the heuristic estimate of the initial state, and in each
successive iteration is increased to the total cost of the lowest-cost node that was pruned
during the previous iteration.
The algorithm terminates when a goal state is reached whose total cost does not exceed
the current threshold.
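The threshold-update loop described above can be sketched generically; the small graph and heuristic table at the bottom are illustrative assumptions:

```python
import math

def ida_star(start, goal, neighbors, h):
    """Iterative-deepening A*: repeated depth-first searches bounded by f = g + h."""
    threshold = h(start)                      # initial bound: heuristic of the start state
    while True:
        next_threshold = math.inf
        def dfs(node, g, path):
            nonlocal next_threshold
            f = g + h(node)
            if f > threshold:                 # cost exceeds bound: cut the path off
                next_threshold = min(next_threshold, f)   # remember cheapest pruned f
                return None
            if node == goal:
                return path
            for succ, cost in neighbors(node):
                if succ not in path:          # avoid cycles on the current path
                    found = dfs(succ, g + cost, path + [succ])
                    if found:
                        return found
            return None
        result = dfs(start, 0, [start])
        if result:
            return result
        if next_threshold is math.inf:
            return None                       # nothing was pruned: goal unreachable
        threshold = next_threshold            # raise bound to cheapest pruned cost

# Tiny illustrative graph: edges with costs, plus an admissible heuristic table.
graph = {'A': [('B', 1), ('C', 3)], 'B': [('D', 1)], 'C': [('D', 1)], 'D': []}
hvals = {'A': 2, 'B': 1, 'C': 1, 'D': 0}
print(ida_star('A', 'D', lambda n: graph[n], lambda n: hvals[n]))  # → ['A', 'B', 'D']
```

Only the current depth-first path is stored, which is why IDA* avoids A*'s memory problem.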
Example: Towers of Hanoi
[Figure: two disks, a little disk on top of a big disk, on Peg 1; Pegs 2 and 3 are empty.]
Goal: Move both disks onto Peg 3.
Rule: Never put the big disk on top of the little disk.
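A minimal recursive solution to the two-disk puzzle above (peg names 1, 2, 3 are assumed, with disk 1 the little disk and disk 2 the big disk):

```python
def hanoi(n, source, spare, target):
    """Move n disks from source to target, never placing a bigger disk on a smaller one."""
    if n == 0:
        return []
    return (hanoi(n - 1, source, target, spare)        # park n-1 smaller disks on the spare peg
            + [(n, source, target)]                    # move the biggest disk directly
            + hanoi(n - 1, spare, source, target))     # bring the smaller disks back on top

for disk, frm, to in hanoi(2, 1, 2, 3):
    print(f"move disk {disk} from peg {frm} to peg {to}")
# → move disk 1 from peg 1 to peg 2
#   move disk 2 from peg 1 to peg 3
#   move disk 1 from peg 2 to peg 3
```

The recursion is exactly the problem-reduction idea discussed next: the n-disk problem decomposes into sub-problems that must all be solved.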
4. Distinguish between problem reduction and game playing?
When a problem can be divided into a set of sub problems, where each sub problem can
be solved separately and a combination of these will be a solution, AND-OR graphs or
AND - OR trees are used for representing the solution.
The decomposition of the problem, or problem reduction, generates AND arcs.
One AND arc may point to any number of successor nodes, all of which must be solved
for the arc to represent a solution.
Several arcs may emerge from a single node, indicating several possible solutions.
Hence the graph is known as an AND-OR graph instead of simply an AND graph. The
figure shows an AND-OR graph.
An algorithm to find a solution in an AND-OR graph must handle AND arcs
appropriately. The A* algorithm cannot search AND-OR graphs efficiently. This can be
understood from the given figure.
UNIT-II
LONG ANSWERS
ANS:
HEURISTIC INFORMATION:
In order to solve larger problems, domain-specific knowledge must be added to
improve search efficiency.
Information about the problem includes the nature of states, the cost of transforming
from one state to another, and the characteristics of the goals.
This information can often be expressed in the form of a heuristic evaluation
function, say f(n,g), a function of the nodes n and/or the goals g.
Following is a list of heuristic search techniques:
1. Pure Heuristic Search
2. A* algorithm
3. Iterative-Deepening A*
4. Depth-First Branch-And-Bound
5. Heuristic Path Algorithm
6. Recursive Best-First Search
• The classic example is finding the route by road between two cities given the
straight line distances from each road intersection to the goal city. In this case, the
nodes are the intersections, and we can simply use the straight line distances as
h(n).
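That road-map use of h(n) can be sketched with A*; the cities, road distances and straight-line estimates below are made up for illustration:

```python
import heapq

def a_star(start, goal, edges, h):
    """A* search: always expand the open node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in edges.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):   # found a cheaper route to succ
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float('inf')

# Hypothetical road map: edge weights are road distances,
# h is the straight-line distance from each intersection to 'Goal'.
roads = {'Start': [('A', 4), ('B', 3)], 'A': [('Goal', 4)], 'B': [('Goal', 6)]}
sld = {'Start': 7, 'A': 4, 'B': 5, 'Goal': 0}
path, cost = a_star('Start', 'Goal', roads, lambda n: sld[n])
print(path, cost)  # → ['Start', 'A', 'Goal'] 8
```

Because straight-line distance never overestimates road distance, the heuristic is admissible and the returned path is optimal.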
3. Iterative-Deepening A*:
Just as iterative deepening solved the space problem of breadth-first
search,iterative deepening A* (IDA*) eliminates the memory
constraints ofA* search algorithm without sacrificing solution
optimality.
Each iteration of the algorithm is a depth-first search that keeps track
of the cost, f(n) = g(n) + h(n), of each node generated.
As soon as a node is generated whose cost exceeds a threshold for
that iteration, its path is cut off, and the search backtracks before
continuing.
The cost threshold is initialized to the heuristic estimate of the initial
state, and in each successive iteration is increased to the total cost of
the lowest-cost node that was pruned during the previous iteration.
The algorithm terminates when a goal state is reached whose total
cost does not exceed the current threshold.
4. Depth-First Branch-And-Bound:
For many problems, the maximum search depth is known in advance or the search
is finite.
For example, consider the traveling salesman problem (TSP) of visiting each of
the given set of cities and returning to the starting city in a tour of shortest total
distance.
The most natural problem space for this problem consists of a tree where the root
node represents the starting city, the nodes at level one represent all the cities that
could be visited first, the nodes at level two represent all the cities that could be
visited second, etc.
In this tree, the maximum depth is the number of cities, and all candidate solutions
occur at this depth. In such a space, a simple depth-first search guarantees finding
an optimal solution using space that is only linear with respect to the number of
cities.
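A depth-first branch-and-bound sketch for the TSP described above; the 4-city distance matrix is an illustrative assumption:

```python
import math

def tsp_branch_and_bound(dist):
    """Depth-first search over partial tours, pruning any branch whose cost
    already meets or exceeds the best complete tour found so far."""
    n = len(dist)
    best = {'cost': math.inf, 'tour': None}

    def dfs(tour, cost):
        if cost >= best['cost']:          # bound: this branch cannot improve
            return
        if len(tour) == n:                # all cities visited: close the tour
            total = cost + dist[tour[-1]][tour[0]]
            if total < best['cost']:
                best['cost'], best['tour'] = total, tour + [tour[0]]
            return
        for city in range(n):
            if city not in tour:
                dfs(tour + [city], cost + dist[tour[-1]][city])

    dfs([0], 0)                            # fix city 0 as the starting city
    return best['tour'], best['cost']

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour, cost = tsp_branch_and_bound(dist)
print(tour, cost)  # → [0, 1, 3, 2, 0] 18
```

Space usage is linear in the number of cities, since only the current partial tour is kept on the recursion stack.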
5. Heuristic Path Algorithm:
An early approach to this problem was the heuristic path algorithm (HPA).
Heuristic path algorithm is a best-first search algorithm, where the figure of merit of node
n is f(n) = (1-w)*g(n)+w*h(n).
Varying w produces a range of algorithms from uniform-cost search (w=0) through A*
(w=1/2) to pure heuristic search (w=1).
The trade off is often quite favorable, with small increases in solution cost yielding huge
savings in computation.
Moreover, it shows that the solutions found by this algorithm are guaranteed to be no
more than a factor of w/(1-w) greater than optimal, but often are significantly better.
The memory limitation of the heuristic path algorithm can be overcome simply by
replacing the best-first search with IDA* search using the same weighted evaluation
function, with w >= 1/2.
IDA* search is then no longer a best-first search, since the total cost of a child can be less
than that of its parent, and thus nodes are not necessarily expanded in best-first order.
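The weighting scheme can be written out directly; the g and h values below are illustrative:

```python
def hpa_f(g, h, w):
    """Heuristic Path Algorithm figure of merit: f(n) = (1 - w) * g(n) + w * h(n)."""
    return (1 - w) * g + w * h

g, h = 6, 4
print(hpa_f(g, h, 0.0))  # → 6.0   w = 0   : uniform-cost search (f = g)
print(hpa_f(g, h, 0.5))  # → 5.0   w = 1/2 : A* ordering (f proportional to g + h)
print(hpa_f(g, h, 1.0))  # → 4.0   w = 1   : pure heuristic search (f = h)
```

Increasing w trades optimality for speed, which is the favourable trade-off the text describes.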
6. Recursive Best-First Search:
Recursive Best-First Search (RBFS) is an alternative algorithm; it is a best-first search
that runs in space that is linear with respect to the maximum search depth, regardless of
the cost function used. Even with an admissible cost function, RBFS generates fewer
nodes than IDA*, and is generally superior to IDA*, except for a small increase in the
cost per node generation.
It works by maintaining on the recursion stack the complete path to the current node
being expanded as well as all immediate siblings of nodes on that path, along with the
cost of the best node in the sub-tree explored below each sibling. Whenever the cost of
the current node exceeds that of some other node in the previously expanded portion of
the tree, the algorithm backs up to their deepest common ancestor, and continues the
search down the new path. In effect, the algorithm maintains a separate threshold for each
sub-tree diverging from the current search path.
ANS:
Here we are going to go through a generalized run through of the algorithm. It may help your
understanding to use a diagram for reference early in the document.
For each node visited we will assign storage for the path taken to backtrack to that node, a value
called Alpha and a value called Beta, as well as the current score for that node.
Set the value of Alpha at the initial node to -Limit and Beta to +Limit, because initially
these are the extreme values that Alpha or Beta could possibly obtain.
o Search down the tree to the given depth.
o Once reaching the bottom, calculate the evaluation for this node (i.e., its
utility).
o If the current score is less than the score stored at the parent, replace the score at
the parent with this and store the path from the bottom and the value of Beta in
the parent.
o If the score at the parent is now less than Alpha stored at that parent, ignore any
further children of this parent and backtrack the parent's value of Alpha and Beta
up the tree.
If the score at the parent is greater than Alpha, set the Alpha value of the parent to this
score and proceed with the next child, sending Alpha and Beta down. If no children
exist, propagate Alpha and Beta up the tree and propagate the value of Alpha up as the
min score.
o If the current score is more than the score stored at the parent, replace the score at
the parent with this and store the path from the bottom and the value of Alpha in
the parent.
o If the score at the parent is now more than Beta stored at that parent, ignore any
further children of this parent and backtrack the parent's value
of Alpha and Beta up the tree.
o If the score at the parent is less than Beta, set the Beta value of the parent to this
score and proceed with the next child, sending Alpha and Beta down. If no
children exist, propagate Alpha and Beta up the tree and propagate the value
of Beta up as the max score.
4. When the search is complete, the Alpha value at the top node gives the minimum score
that the player is guaranteed to attain if using the path stored at the top node.
Let’s make the above algorithm clear with an example.
The initial call starts from A. The value of alpha here is -INFINITY and the value of beta
is +INFINITY. These values are passed down to subsequent nodes in the tree. At A the
maximizer must choose max of B and C, so A calls B first
At B, the minimizer must choose the min of D and E, and hence calls D first.
At D, it looks at its left child which is a leaf node. This node returns a value of 3. Now
the value of alpha at D is max( -INF, 3) which is 3.
To decide whether it's worth looking at its right node or not, it checks the condition
beta <= alpha. This is false since beta = +INF and alpha = 3, so it continues the search.
D now looks at its right child, which returns a value of 5. At D, alpha = max(3, 5), which is
5. Now the value of node D is 5.
D returns a value of 5 to B. At B, beta = min(+INF, 5), which is 5. The minimizer is now
guaranteed a value of 5 or less. B now calls E to see if it can get a lower value than 5.
At E, the values of alpha and beta are not -INF and +INF but instead -INF and 5
respectively, because the value of beta was changed at B and that is what B passed down
to E.
Now E looks at its left child, which is 6. At E, alpha = max(-INF, 6), which is 6. Here the
condition becomes true: beta is 5 and alpha is 6, so beta <= alpha is true. Hence it breaks,
and E returns 6 to B.
Note how it did not matter what the value of E's right child is. It could have been +INF
or -INF; it still wouldn't matter. We never even had to look at it, because the minimizer was
guaranteed a value of 5 or less. So as soon as the maximizer saw the 6, it knew the
minimizer would never come this way, because it can get a 5 on the left side of B. This
way we didn't have to look at that 9, and hence saved computation time.
E returns a value of 6 to B. At B, beta = min(5, 6), which is 5. The value of node B is also
5.
So far this is how our game tree looks. The 9 is crossed out because it was never computed.
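The walkthrough above can be sketched as code. The game tree below mirrors the example (A is a max node, B a min node, D and E max nodes with leaves [3, 5] and [6, 9]); the values of the C subtree are an assumption added so the tree is complete:

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning over a nested-list game tree."""
    if isinstance(node, (int, float)):      # leaf: return its utility
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if beta <= alpha:               # minimizer already has a better option: prune
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:                   # maximizer already has a better option: prune
            break
    return value

# A (max) -> B, C (min) -> D, E (max) -> leaves, with D = [3, 5] and E = [6, 9].
tree = [[[3, 5], [6, 9]],      # B: returns min(5, 6) = 5; E's 9 is pruned, as in the text
        [[1, 2], [0, 7]]]      # C: an assumed second subtree
print(alphabeta(tree, True))   # → 5
```

Running it reproduces the result of the walkthrough: the root value is 5 and the 9 under E is never evaluated.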
ANS:
• Breadth First Search (BFS):
BFS expands the leaf node with the lowest path cost so far, and keeps going until a goal node
is generated. If the path cost simply equals the number of links, we can implement this as a
simple queue (“first in, first out”).
• This is guaranteed to find an optimal path to a goal state. It is memory intensive if the state
space is large. If the typical branching factor is b, and the depth of the shallowest goal state is
d – the space complexity is O(b^d), and the time complexity is O(b^d).
• BFS is an easy search technique to understand. The algorithm is presented below.
breadth_first_search ()
{
    add the initial state to queue Q ;
    while (Q not exhausted) {
        current_state = remove front of Q ;
        if (goal) return success ; else add successors to Q and continue ;
    }
}
• The algorithm is illustrated using the bridge components configuration problem. The initial
state is PDFG, which is not a goal state; hence set it as the current state. Generate another
state DPFG (by swapping 1st and 2nd position values) and add it to the list. That is not a goal
state, hence; generate next successor state, which is FDPG (by swapping 1st and 3rd position
values). This is also not a goal state; hence add it to the list and generate the next successor
state GDFP.
• Only three states can be generated from the initial state. Now the queue Q will have three
elements in it, viz., DPFG, FDPG and GDFP. Now take DPFG (first state in the list) as the
current state and continue the process, until all the states generated from this are evaluated.
Continue this process, until the goal state DGPF is reached.
• The 14th evaluation gives the goal state. It may be noted that all the states at one level in
the tree are evaluated before the states in the next level are taken up; i.e., the evaluations are
carried out breadth-wise. Hence, the search strategy is called breadth-first search.
• Depth First Search (DFS):
DFS expands the leaf node with the highest path cost so far, and keeps going until a goal
node is generated. If the path cost simply equals the number of links, we can implement this
as a simple stack (“last in, first out”).
• This is not guaranteed to find an optimal path to a goal state (in an infinite space it
may find none at all). It is memory efficient even if the state space is large. If the
typical branching factor is b, and the maximum depth of the tree is m – the space
complexity is O(bm), and the time complexity is O(b^m).
• In DFS, instead of generating all the states below the current level, only the
first state below the current level is generated and evaluated recursively. The
search continues till a further successor cannot be generated.
• Then it goes back to the parent and explores the next successor. The algorithm is given
below.
depth_first_search (current_state)
{
    if (current_state is a goal state) return success ;
    for each successor s : depth_first_search (s) ;
    else continue ;
}
• Since DFS stores only the states in the current path, it uses much less memory during the
search compared to BFS.
• The probability of arriving at goal state with a fewer number of evaluations is higher
with DFS compared to BFS. This is because, in BFS, all the states in a level have to be
evaluated before states in the lower level are considered. DFS is very efficient when many
acceptable solutions exist, so that the search can be terminated once the first acceptable
solution is obtained.
• BFS is advantageous in cases where the goal lies at a shallow depth; when the tree is very
deep, its memory demands become prohibitive.
• An ideal search mechanism is to combine the advantages of BFS and DFS.
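The two strategies differ only in how the frontier is managed, a queue versus a stack, which the sketch below makes explicit (the small graph is an illustrative assumption; states are marked visited when generated, which suits finite graphs like this one):

```python
from collections import deque

def search(start, goal, successors, strategy='bfs'):
    """Generic search: a FIFO frontier gives BFS, a LIFO frontier gives DFS."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        # BFS takes the oldest path (queue); DFS takes the newest (stack).
        path = frontier.popleft() if strategy == 'bfs' else frontier.pop()
        if path[-1] == goal:
            return path
        for succ in successors.get(path[-1], []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': ['F'], 'E': ['F']}
print(search('A', 'F', graph, 'bfs'))  # → ['A', 'B', 'D', 'F']
print(search('A', 'F', graph, 'dfs'))  # → ['A', 'C', 'E', 'F']
```

BFS holds every path at the current depth in memory, while DFS holds only one branch, matching the complexity comparison above.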
ANS:
One of the points of logic is that you can reason about statements even when you don't
know what those statements mean.
So, for example, you can say "It's raining and I'm wet," which is a representation as
characters describing an utterance in natural language.
In logic, we might represent this like so:
(a ∧ b)
In this case the proposition "It's raining" is represented by a and "I'm wet" is
represented by b.
The ∧ symbol stands for "and."
ANS:
Predicate Logic deals with predicates, which are propositions containing variables.
A predicate with variables can be made a proposition by either assigning a value to the
variable or by quantifying the variable.
Quantifiers
There are two types of quantifier in predicate logic − Universal Quantifier and
Existential Quantifier.
Universal Quantifier
Universal quantifier states that the statements within its scope are true for every value of
the specific variable.
It is denoted by the symbol ∀.
∀x P(x) is read as: for every value of x, P(x) is true.
Example − "Man is mortal" can be transformed into the propositional form
∀x P(x), where P(x) is the predicate which denotes that x is mortal and the universe of
discourse is all men.
ANS:
In mathematics, an axiomatic system is any set of axioms from which some or all
axioms can be used in conjunction to logically derive theorems.
A theory consists of an axiomatic system and all its derived theorems. An axiomatic
system that is completely described is a special kind of formal system.
A formal theory typically means an axiomatic system, for example formulated within
model theory.
A formal proof is a complete rendition of a mathematical proof within a formal system.
Stating definitions and propositions in a way such that each new term can be formally
eliminated by previously introduced terms requires primitive notions (axioms) to avoid
infinite regress.
This way of doing mathematics is called the axiomatic method.
A common attitude towards the axiomatic method is logicism.
In their book Principia Mathematica, Alfred North Whitehead and Bertrand Russell
attempted to show that all mathematical theory could be reduced to some collection of
axioms.
More generally, the reduction of a body of propositions to a particular collection of
axioms underlies the mathematician's research program.
This was very prominent in the mathematics of the twentieth century, in particular in
subjects based around homological algebra.
UNIT-III
LONG ANSWERS
1. What is predicate logic? Explain the predicate logic representation with reference to
suitable example?
ANS:
Predicate logic:
The world is described in terms of elementary propositions and their logical
combinations.
In predicate logic the formalism of propositional logic is extended and is made it
more finely build than propositional logic.
Thus it is possible to present more complicated expressions of natural language
and use them in formal inference.
A predicate with variables can be made a proposition by either assigning a value to the
variable or by quantifying the variable.
Quantifiers
The variable of predicates is quantified by quantifiers. There are two types of quantifier
in predicate logic − Universal Quantifier and Existential Quantifier.
Universal Quantifier
Universal quantifier states that the statements within its scope are true for every value of
the specific variable.
It is denoted by the symbol ∀.
∀x P(x) is read as: for every value of x, P(x) is true.
Example –
"Man is mortal" can be transformed into the propositional form ∀x P(x), where
P(x) is the predicate which denotes that x is mortal and the universe of discourse is all men.
Existential Quantifier
Existential quantifier states that the statements within its scope are true for some values
of the specific variable. It is denoted by the symbol ∃.
∃x P(x) is read as: for some values of x, P(x) is true.
Example –
"Some people are dishonest" can be transformed into the propositional
form ∃x P(x), where P(x) is the predicate which denotes that x is dishonest and the
universe of discourse is some people.
Nested Quantifiers
If we use a quantifier that appears within the scope of another quantifier, it is called
nested quantifier.
Example − ∀x ∃y P(x, y) is a nested quantifier, read as: for every x there exists a y such
that P(x, y) is true.
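Over a finite universe of discourse, the two quantifiers correspond directly to Python's all and any; the universe and predicates below are illustrative assumptions:

```python
universe = [1, 2, 3, 4, 5]           # a finite universe of discourse

def P(x):                            # predicate: x is positive
    return x > 0

def Q(x):                            # predicate: x is even
    return x % 2 == 0

# Universal quantifier  ∀x P(x): P holds for every value of x.
print(all(P(x) for x in universe))   # → True
# Existential quantifier ∃x Q(x): Q holds for some value of x.
print(any(Q(x) for x in universe))   # → True
# Nested quantifiers ∀x ∃y (x + y == 0), evaluated over this universe:
print(all(any(x + y == 0 for y in universe) for x in universe))  # → False
```

The nested case is False here because no two positive numbers sum to zero, showing how the inner quantifier is checked for each value of the outer variable.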
ANS:
Propositional Logic :
• The language that is used to express propositional logic is called the propositional
calculus.
• A logical system can be defined in terms of its syntax (the alphabet of symbols
and how they can be combined), its semantics (what the symbols mean), and a set of rules of
deduction that enable us to derive one expression from a set of other expressions and
thus make arguments and proofs.
• Syntax
The alphabet of the propositional calculus consists of:
o Propositional symbols: P, Q, R, . . . If we need to represent a very large number of
them, we will use the subscript notation (e.g., p1).
o Truth symbols: true, false
o Parenthesized expressions: (A)
o Negated expressions: ¬A
Semantics
The semantics of the operators of propositional calculus can be defined in terms of truth
tables.
The beauty of this representation is that it is possible for a computer to reason about them
in a very general way, without needing to know much about the real world.
In other words, if we tell a computer, “I like ice cream, and I like chocolate,” it might
represent this statement as A ∧ B, which it could then use to reason with, and, as we will see,
it can use this to make deductions.
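Defining the operators by truth tables can be sketched directly; a program can then "reason" about A ∧ B without knowing what A or B mean:

```python
import itertools

def truth_table(expr, variables):
    """Enumerate every interpretation of the variables and evaluate the expression."""
    rows = []
    for values in itertools.product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))      # one interpretation of the symbols
        rows.append((env, expr(env)))
    return rows

# A ∧ B, where A = "I like ice cream" and B = "I like chocolate".
conj = lambda env: env['A'] and env['B']
for env, result in truth_table(conj, ['A', 'B']):
    print(env['A'], env['B'], '->', result)
# → True True -> True
#   True False -> False
#   False True -> False
#   False False -> False
```

Nothing in the code knows what A and B stand for; the deduction depends only on the truth-table semantics of ∧.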
----------------------------------------------------------------------------------
ANS:
I'm taking a course in Mathematical Logic right now and we have to use semantic tableau
to find out if a formula is satisfiable (some interpretations give a value of T).
Given these examples for logical formulas A and B (Ben-Ari, Mathematical Logic for
Computer Science, Fig. 2.7):
How do I determine how to build the tree? I know that the first time you decompose the
formula you remove the conjunction, like this:
p ∧ (¬q ∨ ¬p)
↓
p, ¬q ∨ ¬p
But then I am not sure how the next decomposition happens:
p, ¬q ∨ ¬p
   /      \
p, ¬q    p, ¬p
Why did this decomposition happen?
Can someone explain to me how the tree came to be, step by step?
I read the textbook (Ben-Ari Mathematical Logic for Computer Science) and I'm still
confused at how to build the tree.
The method of semantic tableaux is an efficient decision procedure for satisfiability (and
by duality validity) in propositional logic.
The principle behind semantic tableaux is very simple: search for a model (satisfying
interpretation) by decomposing the formula into sets of atoms [e.g. propositional letters :
p,q,…] and negations of atoms.
It is easy to check if there is an interpretation for each set: a set of atoms and negations
of atoms is satisfiable iff the set does not contain an atom p and its negation ¬p.
The formula is satisfiable iff one of these sets is satisfiable.
For each formula, every step is uniquely defined, because you have to decompose the
formula according to the principal connective.
A literal is an atom or the negation of an atom.
An atom is a positive literal and the negation of an atom is a negative literal.
For any atom p,{p,¬p} is a complementary pair of literals.
Let :
A=p∧(¬q∨¬p).
The principal operator of A is conjunction, so [by truth-tables for connectives; see page
16] vI(A)=T if and only if both vI(p)=T and vI(¬q∨¬p)=T.
The principal operator of ¬q∨¬p is disjunction, so vI(¬q∨¬p)=T if and only if either
vI(¬q)=T or vI(¬p)=T.
Thus we have to apply the procedure to A with the goal of verifying whether the formula A is
satisfiable or not.
The procedure will always end, because a formula is a finite string of symbols, with only
a finite number of occurrences of connectives.
At every step we decompose the last formula of a branch according to its principal
connective, applying the above rules.
If the formula B has as principal connective the conjunction, it is B1∧B2.
Thus, according to the rules for evaluating the connectives, in order to satisfy B we have
that both B1 and B2 must be true.
Thus, we add a new node to the branch with both B1 and B2. If the formula B has as
principal connective the disjunction, it is B1∨B2.
Thus, according to the rules for evaluating the connectives, in order to satisfy B we have
that at least one of B1 and B2 must be true. Thus, we branch, one for each possibility.
Thus, for :
p∧(¬q∨¬p)
you can only apply the rule for ∧, because it is the principal connective.
In the second step :
p,¬q∨¬p
p is atomic, i.e. indecomposable. Thus you can only decompose ¬q∨¬p, applying the rule
for ∨.
After the first decomposition of the second example B you have two formulae; thus you can
choose how to continue: first decompose ¬p∧¬q and then p∨q, with the branching.
Thus, the "strategy" is: where you can (as in B), apply the "branching" rules last, in order
to obtain more "compact" trees.
I checked the book, and I think the construction of the semantic tableaux is quite well
explained, but I will try to give you some hints about the procedure.
The objective is to find a model, an interpretation (a truth assignment to the atoms p,q,...)
that satisfies the formula. You place the formula at the root (top node) of the tree, and you
decompose it, step by step starting from the main operator (connective). At each step
(node), depending on the form of the formula that you are decomposing, the tree splits
(the node has two child nodes) or not (see Table 2.8 of the book). You follow the
procedure until you only have atoms (p,q,...) or negations of atoms (¬p,¬q,...) at the
bottom (leaves) of the tree. Then you check all the leaves of the tree: if any of them does
not contain an atom together with its negation, the set of atoms of that leaf, and all
subformulas on the path to the top node, are satisfiable.
Let's take example A.
The main operator is ∧. For the formula to be T, both subformulas p and ¬q∨¬p must be
true; there is just one possibility, so the tree does not split and you write both subformulas
separated by a comma in the next node.
In this node, the first subformula is already an atom (p), so there is nothing to do with it; it
will pass unchanged to the next node(s). The other, ¬q∨¬p, is a disjunction (main operator
∨), so for it to be T, either ¬q is T or ¬p is T. This means that there are two
possibilities to satisfy the formula, so the tree must split: in one node we write ¬q and
in the other ¬p (in both cases together with the atom p, which, as I said, passes unchanged).
Now we have reached the leaves, since we only have atoms or negations of atoms. We
have two sets, one for each leaf: {p,¬q} and {p,¬p}.
If any of them is satisfiable, the original formula (and all subformulas in the path to the
top along that branch of the tree) are also satisfiable.
Obviously, {p,¬p} is not satisfiable, since both p and its negation cannot be T
simultaneously, so this path (branch) closes, and we mark it with an X at the bottom.
But {p,¬q} is satisfiable in the interpretation that assigns T to p and F to q, so we have
found an interpretation that satisfies the formula.
I hope this helps. I am sure that you will be able to do example B.
I just want to add something. I have been talking about truth values, meaning of the
logical connectives,... that is about the semantics of the propositional logic, in order to
guide you in the process.
But the rules for construction of the semantic tableaux can be given (and that is their
main objective) as an algorithm in which the rule to apply at each node of the tree only
depends on the form (syntax) of the formula there; see section 2.6.2 and algorithm 2.64
in the book.
In other words, it is a proof system which you (or a computer) can apply to find out if a
formula is satisfiable (or valid, or an inference valid, ... all semantic notions) based only
on the form (how it is made up of symbols, a syntactic notion) of the formula.
The following exercises are written to further develop an understanding of the terms and
concepts described in section 1.1.1 Introduction to Axiomatic Systems.
The theorems may not be numbered in the order you need to prove them, but make sure
you do not use circular reasoning.
Solutions for selected problems are available in the solutions section of the Chapter One
table of contents.
Exercise 1.1.
Exercise 1.2.
Exercise 1.3.
UNIT-IV
ANS:
ANS:
ANS:
This model was extensively used by Schank's students at Yale University such as
Robert Wilensky, Wendy Lehnert, and Janet Kolodner.
Schank developed the model to represent knowledge for natural language input into
computers.
Partly influenced by the work of Sydney Lamb, his goal was to make the meaning
independent of the words used in the input, i.e. two sentences identical in meaning,
would have a single representation.
The model uses basic representational tokens, such as real-world objects, real-world
actions, times, and locations.
A set of conceptual transitions then act on this representation, e.g. an ATRANS is
used to represent a transfer such as "give" or "take" while a PTRANS is used to act on
locations such as "move" or "go".
An MTRANS represents mental acts such as "tell", etc.
A sentence such as "John gave a book to Mary" is then represented as the action of an
ATRANS on two real world objects John and Mary.
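The point of a single meaning representation can be sketched in code. This is an illustrative sketch only; the dictionary shape is my own, not Schank's notation.

```python
# Illustrative sketch: encoding "John gave a book to Mary" as an ATRANS
# (transfer of possession). The dictionary layout is invented for this
# example; it is not Schank's graphical notation.

sentence_cd = {
    "primitive": "ATRANS",   # transfer of an abstract relationship (possession)
    "actor": "John",
    "object": "book",
    "from": "John",
    "to": "Mary",
}

# "Mary took a book from John" describes the same transfer, so it maps to
# the same primitive and the same from/to roles: one representation for
# two sentences identical in meaning.
paraphrase_cd = {
    "primitive": "ATRANS",
    "actor": "Mary",
    "object": "book",
    "from": "John",
    "to": "Mary",
}

print(sentence_cd["primitive"] == paraphrase_cd["primitive"])   # True
```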
ANS:
Artificial Intelligence (AI) is once again attracting everyone’s interest.
Here is a brief definition of some frequently used terms that provide context for this post:
Artificial − not human.
Intelligence − an ability to apply reasoning and inference to information (that is, data in
some context).
Language − systematic use of signs, syntax, and semantics for encoding and decoding
sentences that are comprehensible to both humans and machines.
AI LONG ANSWERS
UNIT-IV
LONG ANSWERS
ANS:
Representation of knowledge
ANS:
ANS:
A second set of building blocks is the set of allowable dependencies among the
conceptualizations described in a sentence.
4.Explain about script structure?
ANS:
SCRIPT STRUCTURE:
So, more than having a problem with a block of code, I have a question regarding code
structure and code layout.
I see that in AI scripts, coroutines are the way to go.
It isn't too hard to tell why, as my AI script began as THE "big ugly mess".
I had briefly read a few times about coroutines and figured the problems I was having
with my script would likely be fixed by a distinct separation into
separate scripts or coroutines. So now it's done and it's working great, and I came
to the problem of "hey, that's my enemy buddy, let's crash into him and push him off
course", because that is delightfully amusing.
I tried a couple of things to see if I could get relative data: a check against the
enemy tag, and if its index is not the index of this object, getting the other enemy's
data, then testing to see how close that enemy is.
This did work and I could stop my enemy, but it was very erratic.
So that's not going to be my answer.
I was thinking of running a raycast; then at least, if you're raycasting, you know
the enemy is in front of you.
Upon reading about how others implemented this idea, well, it looks really ugly,
almost like it's still not meant to be done that way and you're overstretching the
method's purpose (with raycasting into the angles).
So now I'm wondering what's going to be a good approach for this job.
I know it should not only be able to test for the enemy but also the player, and it
really only needs to be turned on when a condition somewhere else is met, so it's not
running all the time.
And I thought, why not use rectangle and triangle definitions in a class of
"enemyF.O.V" or whatever; that way it is specific and easily transferable to
different enemies and different styled F.O.V.s: make something to serve a purpose and
serve it well, solely.
So what I am asking is: in your individual experiences and opinions, what did you use
and why? What's better for performance as well?
Obviously nothing's perfect, but I'd like to see what others have to say.
Please, I want to see the theory, not too much direct code; I'll figure something out. :)
PS: as I wrote this I came up with a neat idea about this (sorry if this is not legible),
so I figured I'd throw it in: the class contains a list of rect transforms, except the
transforms are actually of type FOV-obj (I'm not sure of a name), a new class
inheriting from RectTransform that contains a type of shape; and this is added into
the Enemy class, defined in the enemy constructor, because each enemy would have
their own individual one.
Roger Schank, Robert P. Abelson and their research group, extended Tomkins' scripts
and used them in early artificial intelligence work as a method of representing
procedural knowledge.
[1] In their work, scripts are very much like frames, except the values that fill the
slots must be ordered.
A script is a structured representation describing a stereotyped sequence of events in
a particular context.
Scripts are used in natural language understanding systems to organize a knowledge
base in terms of the situations that the system should understand.
The classic example of a script involves the typical sequence of events that occur
when a person dines in a restaurant: finding a seat, reading the menu, ordering
food from the waitstaff...
In the script form, these would be decomposed into conceptual transitions, such as
MTRANS and PTRANS, which refer to mental transitions [of information] and
physical transitions [of things].
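The defining property of a script, namely that its slots are ordered, can be sketched as follows. The scene names and descriptions here are my own simplification of the restaurant example.

```python
# Illustrative sketch: a restaurant script as an ordered list of
# (scene, primitive, description) entries. Scene names are invented
# for the example; the primitives follow the ATRANS/PTRANS/MTRANS idea.

restaurant_script = [
    ("enter", "PTRANS", "customer moves to a table"),
    ("read",  "MTRANS", "menu information is transferred to the customer"),
    ("order", "MTRANS", "customer tells the order to the waitstaff"),
    ("serve", "ATRANS", "waitstaff transfers food to the customer"),
    ("pay",   "ATRANS", "customer transfers money to the restaurant"),
    ("leave", "PTRANS", "customer moves out of the restaurant"),
]

# Unlike an unordered frame, slot order matters: "pay" is expected
# to come after "order".
scenes = [scene for scene, _, _ in restaurant_script]
print(scenes.index("order") < scenes.index("pay"))   # True
```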
Schank, Abelson and their colleagues tackled some of the most difficult problems in
artificial intelligence (i.e., story understanding), but ultimately their line of work
ended without tangible success.
This type of work received little attention after the 1980s, but it was very influential in
later knowledge representation techniques, such as case-based reasoning.
Scripts can be inflexible. To deal with inflexibility, smaller modules called memory
organization packets (MOP) can be combined in a way that is appropriate for the
situation.
ANS:
Conceptual dependency theory is a model of natural language understanding used in
artificial intelligence systems.
Roger Schank at Stanford University introduced the model in 1969, in the early days of
artificial intelligence.
[1] This model was extensively used by Schank's students at Yale University such as
Robert Wilensky, Wendy Lehnert, and Janet Kolodner.
Schank developed the model to represent knowledge for natural language input into
computers.
Partly influenced by the work of Sydney Lamb, his goal was to make the meaning
independent of the words used in the input, i.e. two sentences identical in meaning, would
have a single representation.
The system was also intended to draw logical inferences.[2]
A sentence such as "John gave a book to Mary" is then represented as the action of an
ATRANS on two real world objects John and Mary.
UNIT-V
SHORT ANSWERS
ANS:
Expert System:
In artificial intelligence, an expert system is a computer system that emulates the
decision-making ability of a human expert.
Expert systems are designed to solve complex problems by reasoning through
bodies of knowledge, represented mainly as if–then rules rather than through
conventional procedural code.
The first expert systems were created in the 1970s and then proliferated in the
1980s.
Expert systems were among the first truly successful forms of artificial
intelligence (AI) software.
However, some experts point out that expert systems were not part of true
artificial intelligence since they lack the ability to learn autonomously from
external data.
Traditional system:
Traditional engineering, also known as sequential engineering, is the
process of marketing, engineering design, manufacturing, testing and
production where each stage of the development process is carried out
separately, and the next stage cannot start until the previous stage is
finished.
Therefore, the information flow is only in one direction, and it is not until the end of the
chain that errors, changes, and corrections can be relayed to the start of the sequence,
causing estimated costs to be underpredicted.
This can cause many problems; such as time consumption due to many modifications
being made as each stage does not take into account the next.
This method is hardly used today, as the concept of concurrent engineering is more
efficient.
Traditional engineering is also known as over the wall engineering as each stage blindly
throws the development to the next stage over the wall.
ANS:
ANS:
AI LONG ANSWERS
UNIT-V
LONG ANSWERS
ANS:
Expert systems (ES) are one of the prominent research domains of AI. They were
introduced by researchers at Stanford University's Computer Science Department.
Characteristics of Expert Systems
High performance
Understandable
Reliable
Highly responsive
Capabilities of Expert Systems
The expert systems are capable of −
Advising
Demonstrating
Deriving a solution
Diagnosing
Explaining
Interpreting input
Predicting results
The components of an expert system include −
Knowledge Base
Inference Engine
User Interface
Let us see them one by one briefly −
Knowledge Base
It contains domain-specific and high-quality knowledge.
Knowledge is required to exhibit intelligence. The success of any ES majorly depends upon the
collection of highly accurate and precise knowledge.
What is Knowledge?
Data is a collection of facts. Information is data organized as facts about the task
domain. Data, information, and past experience combined together are termed as knowledge.
Knowledge representation
It is the method used to organize and formalize the knowledge in the knowledge base. It is in
the form of IF-THEN-ELSE rules.
Knowledge Acquisition
The success of any expert system majorly depends on the quality, completeness, and accuracy
of the information stored in the knowledge base.
The knowledge base is formed by readings from various experts, scholars, and the Knowledge
Engineers. The knowledge engineer is a person with the qualities of empathy, quick learning,
and case analyzing skills.
He acquires information from the subject expert by recording, interviewing, and observing
him at work, etc. He then categorizes and organizes the information in a meaningful way, in
the form of IF-THEN-ELSE rules, to be used by the inference engine. The knowledge engineer
also monitors the development of the ES.
Inference Engine
Use of efficient procedures and rules by the Inference Engine is essential in deducing a
correct, flawless solution.
In case of knowledge-based ES, the Inference Engine acquires and manipulates the knowledge
from the knowledge base to arrive at a particular solution. It −
Applies rules repeatedly to the facts, which are obtained from earlier rule application.
Resolves rules conflict when multiple rules are applicable to a particular case.
To recommend a solution, the Inference Engine uses the following strategies −
Forward Chaining
Backward Chaining
Forward Chaining
It is a strategy of an expert system to answer the question, “What can happen next?”
Here, the Inference Engine follows the chain of conditions and derivations and finally deduces
the outcome. It considers all the facts and rules, and sorts them before concluding to a solution.
This strategy is followed for working on conclusion, result, or effect. For example, prediction of
share market status as an effect of changes in interest rates.
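Forward chaining as described above can be sketched as a loop that fires any rule whose conditions are already known facts. The rules and fact strings below are invented, in the spirit of the share-market example.

```python
# Illustrative sketch of forward chaining: rules are (premises, conclusion)
# pairs, and we keep firing any rule whose premises are all established
# facts until nothing new can be derived. The rule contents are invented.

rules = [
    ({"interest rates rise"}, "borrowing falls"),
    ({"borrowing falls"}, "share prices fall"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # fire the rule, recording its conclusion
                changed = True
    return facts

derived = forward_chain({"interest rates rise"}, rules)
print("share prices fall" in derived)   # True
```

Backward chaining would run the same rules in the opposite direction: start from the goal "share prices fall" and search for premises that could establish it.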
Backward Chaining
With this strategy, an expert system finds out the answer to the question, “Why this
happened?”
On the basis of what has already happened, the Inference Engine tries to find out which
conditions could have happened in the past for this result. This strategy is followed for finding
out cause or reason. For example, diagnosis of blood cancer in humans.
User Interface
The user interface provides interaction between the user of the ES and the ES itself. It
generally uses Natural Language Processing so as to be usable by a user who is well-versed
in the task domain. The user of the ES need not necessarily be an expert in Artificial
Intelligence.
It explains how the ES has arrived at a particular recommendation. The explanation may appear
in the following forms −
Its technology should be adaptable to user’s requirements; not the other way round.
ANS:
The simplest form of artificial intelligence which is generally used in industry is the rule-
based system, also known as the expert system.
Before we discuss in detail what these are, let's take a step back and point out that there
are different opinions as to what really constitutes artificial intelligence.
Some people, when they use the term AI, are referring to systems which have some
ability to learn.
That is, the system will improve its performance over time as it gains experience in
solving problems, just as a human would.
Other people, when they use the term AI, are referring just to systems which are capable
of exhibiting human-level performance in some very narrow area, but which are
incapable of learning or expanding their expertise.
Different people are always going to disagree about what AI is, but it's this fairly simple
form of AI which we want to talk about right now.
One advantage is that the human expert's knowledge then becomes available to a very large
range of people.
Another advantage is that if you can capture the expertise of an expert in a field, then any
knowledge which they might have is not lost when they retire or leave the firm.
Instead, the knowledge of the expert is captured in a set of rules, each of which encodes
a small piece of the expert's knowledge.
Each rule has a left hand side and a right hand side.
The left hand side contains information about certain facts and objects which must be true
in order for the rule to potentially fire (that is, execute).
Any rules whose left hand sides match in this manner at a given time are placed on
an agenda.
One of the rules on the agenda is picked (there is no way of predicting which one), and its
right hand side is executed, and then it is removed from the agenda.
The agenda is then updated (generally using a special algorithm called the Rete
algorithm), and a new rule is picked to execute.
A typical rule for a mortgage application might look something like this:
IF
(number-of-30-day-delinquencies > 4)
AND (number-of-30-day-delinquencies < 8)
THEN
increase mortgage rate by 1%
As you can see, a rule bears a close resemblance to an if-then-else statement, but unlike
an if-then-else statement, it stands alone and does not fire in any predetermined order
relative to other if-then-else statements.
The advantage to this type of approach, as opposed to a procedural approach, is that if the
system is designed well, then the expert's knowledge can be maintained fairly easily, just
by altering whichever rules need to be altered.
Indeed, many rule-based systems come along with a rules editor which allows for rules to
be easily maintained by non-technical people.
Rules are generally implemented in something called a rules engine, which provides a
basic framework for writing rules and then for running them in the manner described
above. In the past, it used to be very difficult to actually work with a rules engine, since
they tended to be technologies unto themselves and very hard to interface with the rest of
the IT world.
In the last couple of years, however, great strides have been made in making rules
engines much more easily compatible with other technologies.
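The match-agenda-fire cycle described above can be sketched in a few lines. The mortgage rule is the one from the text; the fact names, the naive rescanning loop, and the 1-percentage-point interpretation of "increase mortgage rate by 1%" are my own simplifications (a real engine would use the Rete algorithm rather than rescanning).

```python
# Illustrative sketch of a rule/agenda cycle. A rule's left hand side is a
# predicate over the facts; its right hand side is an action that changes
# the facts. Real engines keep the agenda updated with the Rete algorithm;
# this toy version simply rescans all rules after every firing.

import random

facts = {"number-of-30-day-delinquencies": 5, "mortgage-rate": 4.0}

rules = [
    ("delinquency-surcharge",
     # LHS: matches when delinquencies are strictly between 4 and 8
     # and the surcharge has not been applied yet.
     lambda f: 4 < f["number-of-30-day-delinquencies"] < 8
               and not f.get("surcharge-applied"),
     # RHS: increase mortgage rate by 1 (percentage point), once.
     lambda f: f.update({"mortgage-rate": f["mortgage-rate"] + 1.0,
                         "surcharge-applied": True})),
]

while True:
    # The agenda holds every rule whose left hand side currently matches.
    agenda = [rule for rule in rules if rule[1](facts)]
    if not agenda:
        break
    name, lhs, rhs = random.choice(agenda)   # firing order is not predetermined
    rhs(facts)

print(facts["mortgage-rate"])   # 5.0
```

Note that the rule fires only once: its own right hand side changes the facts so that its left hand side no longer matches, which is how rule-based programs terminate.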
ANS:
Blackboard System
Metaphor
The following scenario provides a simple metaphor that gives some insight into how a
blackboard functions: imagine a group of human specialists seated in a room with a large
blackboard, working as a team to brainstorm a solution to a problem.
The session begins when the problem specifications are written onto the blackboard.
The specialists all watch the blackboard, looking for an opportunity to apply their
expertise to the developing solution.
When someone writes something on the blackboard that allows another specialist to
apply their expertise, the second specialist records their contribution on the
blackboard, hopefully enabling other specialists to then apply their expertise.
This process of adding contributions to the blackboard continues until the problem
has been solved.
Components
A blackboard-system application consists of three major components.
The software specialist modules, which are called knowledge sources (KSs). Like the
human experts at a blackboard, each knowledge source provides specific expertise
needed by the application.
The blackboard, a shared repository of problems, partial solutions, suggestions, and
contributed information. The blackboard can be thought of as a dynamic "library" of
contributions to the current problem that have been recently "published" by other
knowledge sources.
The control shell, which controls the flow of problem-solving activity in the system.
Just as the eager human specialists need a moderator to prevent them from trampling
each other in a mad dash to grab the chalk, KSs need a mechanism to organize their use
in the most effective and coherent fashion. In a blackboard system, this is provided by
the control shell.
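The three components can be sketched together. This is an illustrative toy, not any real blackboard framework: the "problem" and the two knowledge sources are invented for the example.

```python
# Illustrative sketch of the three blackboard components: a shared
# blackboard (a dict), knowledge sources (KSs) that contribute when the
# current contents let them, and a control shell loop that offers the
# blackboard to each KS in turn. The problem itself is a toy.

def ks_split_problem(bb):
    # This KS can contribute once a problem is posted but not yet split.
    if "problem" in bb and "subtasks" not in bb:
        bb["subtasks"] = bb["problem"].split()

def ks_solve_subtasks(bb):
    # This KS can contribute only after the first KS has posted subtasks.
    if "subtasks" in bb and "solution" not in bb:
        bb["solution"] = [task.upper() for task in bb["subtasks"]]

knowledge_sources = [ks_split_problem, ks_solve_subtasks]

# The session begins when the problem specification is written on the board.
blackboard = {"problem": "parse plan act"}

# Control shell: keep letting KSs contribute until a solution appears,
# or until nobody can add anything new.
while "solution" not in blackboard:
    before = dict(blackboard)
    for ks in knowledge_sources:
        ks(blackboard)
    if blackboard == before:
        break   # no KS could contribute; the problem is stuck

print(blackboard["solution"])   # ['PARSE', 'PLAN', 'ACT']
```

The key property the sketch shows is opportunism: neither KS is called in a fixed sequence; each acts only when the blackboard's contents enable it.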
ANS:
Tools − They reduce the effort and cost involved in developing an expert system to a large
extent.
Shells − A shell is nothing but an expert system without the knowledge base. A shell
provides the developers with knowledge acquisition, inference engine, user interface,
and explanation facility. For example, a few shells are given below −
o Java Expert System Shell (JESS) that provides fully developed Java API for
creating an expert system.
o Vidwan, a shell developed at the National Centre for Software Technology,
Mumbai in 1993. It enables knowledge encoding in the form of IF-THEN rules.
Know and establish the degree of integration with the other systems and databases.
Realize how the concepts can represent the domain knowledge best.
Cater for new interfaces with other information systems, as those systems evolve.
Less Production Cost − Production cost is reasonable. This makes them affordable.
Speed − They offer great speed. They reduce the amount of work an individual puts in.
Steady response − They work steadily without getting emotional, tense, or fatigued.
UNIT-VI
Bayes’ theorem can be used to calculate the probability that a certain event will occur or
that a certain proposition is true.
The theorem is stated as follows:
P(B|A) = P(A|B) P(B) / P(A)
P(B) is called the prior probability of B. P(B|A), as well as being called
the conditional probability, is also known as the posterior probability of B.
It follows from the product rule:
P(A ∧ B) = P(A|B)P(B)
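A worked example makes the prior/posterior distinction concrete. The numbers below are invented for illustration: a diagnostic test with 90% sensitivity and a 5% false-positive rate, for a condition with a 1% prior.

```python
# Worked example of Bayes' theorem with invented numbers: B is "person has
# the condition", A is "test comes back positive".

p_b = 0.01                # prior P(B)
p_a_given_b = 0.90        # P(A|B): test positive given the condition
p_a_given_not_b = 0.05    # P(A|¬B): false-positive rate

# Total probability: P(A) = P(A|B)P(B) + P(A|¬B)P(¬B)
p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)

# Bayes' theorem: P(B|A) = P(A|B)P(B) / P(A)
p_b_given_a = p_a_given_b * p_b / p_a

print(round(p_b_given_a, 3))   # 0.154
```

Even with a fairly accurate test, the posterior is only about 15%, because the low prior dominates; this is exactly the kind of update Bayes' theorem captures.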
For years, hedge funds have been using computer algorithms to make trade decisions.
However, those algorithms were driven by static models developed and managed by data
scientists and weren’t adept at dealing with the volatilities of financial markets.
Decisions made by these algorithms yielded results that were often inferior to those made
by human discretion.
In recent years, with the emergence of machine learning and deep learning, these branches
of AI have caused a breakthrough in the creation of software and are driving new
innovations in computational trading.
Machine learning software autonomously updates itself as it ingests new data.
UNIT-VI
LONG ANSWERS
ANS:
Knowledge exists in two distinct forms −
− the objective knowledge that exists in mathematical form, and
− the subjective knowledge that exists in linguistic form, usually impossible to quantify.
Fuzzy Logic can coordinate these two forms of knowledge in a logical way.
Fuzzy Systems can handle simultaneously the numerical data and linguistic knowledge.
Fuzzy Systems provide opportunities for modeling of conditions which are inherently
imprecisely defined.
Many real world problems have been modeled, simulated, and replicated with the help of
fuzzy systems.
The applications of Fuzzy Systems are many like : Information retrieval systems,
Navigation system, and Robot vision.
Expert Systems design have become easy because their domains are inherently fuzzy and
can now be handled better;
examples : Decision-support systems, Financial planners, Diagnostic system, and
Meteorological system.
Introduction
Any system that uses Fuzzy mathematics may be viewed as a Fuzzy system.
The Fuzzy Set Theory - membership function, operations, properties and the relations
have been described in previous lectures.
These are the prerequisites for understanding Fuzzy Systems. An application of Fuzzy
set theory is Fuzzy logic, which is covered in this section.
Here the emphasis is on the design of fuzzy system and fuzzy controller in a closed–loop.
The specific topics of interest are :
• Fuzzy System
[Figure: block diagram of a fuzzy system − crisp input variables X1, X2, ..., Xn pass
through Fuzzification, a Fuzzy Rule Base, Fuzzy Inferencing, and Defuzzification to
produce crisp output variables Y1, Y2, ..., Ym.]
− Fuzzification : a process of transforming crisp values into grades of membership for
linguistic terms, such as "far", "near", "small", of fuzzy sets.
− Fuzzy Rule Base : a collection of rules over linguistic variables, of the form:
If (x is A) AND (y is B) . . . . . . THEN (z is C)
− Fuzzy Inferencing: combines the facts obtained from the Fuzzification with the rule base and
conducts the Fuzzy reasoning process.
− Defuzzification : translates the results back to real-world values.
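The four stages can be sketched end to end for one input and one output. Everything concrete here is invented for illustration: the "distance"/"braking" variables, the triangular-style membership shapes, and the representative output values used for defuzzification.

```python
# Illustrative sketch of the fuzzification -> rule base -> inferencing ->
# defuzzification pipeline. The variables, membership shapes, and output
# values are invented for the example.

def fuzzify(distance):
    """Fuzzification: a crisp distance (0..10) becomes grades of
    membership for the linguistic terms 'near' and 'far'."""
    near = max(0.0, min(1.0, (10 - distance) / 10))
    far = 1.0 - near
    return {"near": near, "far": far}

def infer(grades):
    """Rule base + inferencing:
    IF (distance is near) THEN (braking is hard)
    IF (distance is far)  THEN (braking is soft)
    Each rule fires with the strength of its antecedent."""
    return {"hard": grades["near"], "soft": grades["far"]}

def defuzzify(strengths):
    """Defuzzification: weighted average of representative crisp outputs
    (1.0 for hard braking, 0.2 for soft braking)."""
    num = strengths["hard"] * 1.0 + strengths["soft"] * 0.2
    den = strengths["hard"] + strengths["soft"]
    return num / den

# A crisp input (distance 2) goes in; a crisp braking value comes out.
print(round(defuzzify(infer(fuzzify(2))), 2))   # 0.84
```

The crisp-in, crisp-out shape is the point: fuzziness lives only inside the pipeline, between fuzzification and defuzzification.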
---------------------------------------------------------------------------------------------------------------------
---------------------
What is a Fuzzy Set ?
ANS
• The word "fuzzy" means "vagueness". Fuzziness occurs when the Boundary of a piece of
information is not clear-cut.
• Fuzzy sets were introduced by Lotfi A. Zadeh (1965) as an extension of the classical
notion of set.
• Classical set theory allows the membership of the elements in the set in binary terms, a
bivalent condition - an element either belongs or does not belong to the set.
Fuzzy set theory permits the gradual assessment of the membership of elements in a set,
described with the aid of a membership function valued in the real unit interval [0, 1].
• Example:
− For some people, age 25 is young, and for others, age 35 is young.
− Age 35 has some possibility of being young and usually depends on the context in which it
is being considered.
Introduction:
Fuzzy Set:
A Fuzzy Set is any set that allows its members to have different degrees of membership,
given by a membership function, in the interval [0, 1].
Set SMALL = {{1, 1}, {2, 1}, {3, 0.9}, {4, 0.6}, {5, 0.4}, {6, 0.3}, {7, 0.2}, ... }
Note that a fuzzy set can be defined precisely by associating with each x , its grade of
membership in SMALL.
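The set SMALL can be written directly as a membership function. The grades for 1 to 7 come from the text above; treating values outside the listed support as grade 0 is an assumption of this sketch.

```python
# The fuzzy set SMALL from the text, as a membership function. Grades for
# 1..7 are taken from the example; values outside the listed support are
# assumed to have grade 0.

small = {1: 1.0, 2: 1.0, 3: 0.9, 4: 0.6, 5: 0.4, 6: 0.3, 7: 0.2}

def mu_small(x):
    """Grade of membership of x in SMALL."""
    return small.get(x, 0.0)

# A classical (crisp) set is the special case where every grade is 0 or 1.
print(mu_small(2))    # 1.0
print(mu_small(4))    # 0.6
print(mu_small(10))   # 0.0
```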
------------------------------------------------------------------------------------
Bayesian networks are also called Belief Networks or Probabilistic Inference Networks.
---------------------------------------------------------------------------------
ANS:
Overview of linguistics:
In dealing with natural language, a computer system needs to be able to process and
manipulate language at a number of levels.
This is especially true of languages such as German, which tend to combine words
together into single words to indicate combinations of meaning.