
Science and Information Conference 2014

August 27-29, 2014 | London, UK

Artificial Intelligence Theory


(Basic concepts)

Vitaliy Yashchenko
Artificial intelligence. Institute of Mathematical Machines and
System Problems NANU, IMMSP,
Kiev, Ukraine,
vitaly.yashchenko@gmail.com

Abstract— Taking the bionic approach as a basis, the article discusses the main concepts of the theory of artificial intelligence as a field of knowledge which studies the principles of creation and functioning of intelligent systems based on multidimensional neural-like growing networks. The general theory of artificial intelligence includes the study of neural-like elements and multidimensional neural-like growing networks, temporary and long-term memory, the study of the functional organization of the "brain" of artificial intelligent systems, of the sensory system, modulating system, motor system, conditioned and unconditioned reflexes, the reflex arc (ring), motivation, purposeful behavior, of "thinking", "consciousness", and the "subconscious and artificial personality developed as a result of training and education".

Keywords—bionic approach; multidimensional neural-like growing networks; sensory system; modulating system; motor system; conditioned and unconditioned reflex; reflex arc

I. INTRODUCTION

This work briefly discusses the basic concepts of the theory of artificial intelligence based on multidimensional receptor-effector neural-like growing networks.

"Analysis of the problems in the field of artificial intelligence shows that at the present time, on the one hand, intensive division of its subfields continues, while on the other hand, one may perceive a certain integration of research in an endeavor to build a general theory. Integration of research is forced by the necessity to consolidate the whole research system in the field of artificial intelligence into a single unit, based on a universal concept or idea, aspiring to its functional prototype: the intelligent and functional human being" [1]. In artificial intelligence theory such a universal concept is represented by multidimensional receptor-effector neural-like growing networks, which aspire to their functional prototype - biological neural networks.

II. BASIC CONCEPTS OF ARTIFICIAL INTELLIGENCE

A. Artificial intelligence

Artificial intelligence is a field of knowledge which studies the structure and functioning of intelligent systems based on multidimensional receptor-effector neural-like growing networks. Artificial intelligence theory includes the study of neural-like growing elements and multidimensional neural-like growing networks, temporary and long-term memory, the study of the functional organization of the "brain" of artificial intelligent systems, of the sensory system, modulating system, motor system, conditioned and unconditioned reflexes, the reflex arc, motivation, purposeful behavior, of "reasoning", "consciousness", and the "subconscious and artificial personality developed as a result of learning and training".

Axiom 1. Artificial intelligence theory is based on the analogy with the human nervous system.

The core of human intelligence is the brain, consisting of multiple neurons interconnected by synapses. Interacting with each other through these connections, neurons create complex electric impulses, which control the functioning of the whole organism and allow recognition, learning, reasoning, and the structuring of information through its analysis, classification, the location of connections, patterns and distinctions in it, associations with similar pieces of information, etc. [2].

The functional organization of the brain. In the works of the physiologists P.K. Anohin, A.R. Luriya, E.N. Sokolov [3, 4] and others, the functional organization of the brain includes different systems and subsystems. The classical interpretation of the interactive activity of the brain can be represented by interactions of three basic functional units:

1) the information input and processing unit - sensory systems (analyzers);
2) the modulating, nervous-system-activating unit - modulating systems (limbic-reticular systems) of the brain;
3) the programming, activating and behavioral-act-controlling unit - motor systems (motion analyzer).

Brain sensory systems (analyzers). A sensory (afferent) system is activated when a certain event in the environment affects a receptor. Inside each receptor the physical factor affecting it (light, sound, heat, pressure) is converted into an action potential, a nervous impulse. An analyzer is a hierarchically structured multidimensional system. The receptor surface serves as the base of the analyzer, and cortex projection areas as its node. Each level is a set of cells whose axons extend to the next level. Coordination between sequential layers of analyzers is organized on the divergence/convergence principle.

Brain modulating systems are an instrument of regulation of the level of activity, also performing selective modulation and stressing the urgency of a certain function. The initial source of activation is the intrinsic activity and the needs of the organism.

473 | Page
www.conference.thesai.org

A second source of activation is related to environmental irritants.

Brain motor (motion) systems. Fusion of excitations of different intensity with biologically significant signals and motivational influences is characteristic of motor cortex areas. It is distinctive of them to accomplish a complete transformation of the afferent influences into a qualitatively new form of activity, directed toward the fastest output of efferent excitations to the periphery, i.e. to the instruments of realization of the final stage of behavior organization.

The core of artificial intelligence is the system "brain", representing an active, associative, homogeneous structure - a multidimensional receptor-effector neural-like growing network composed of a host of neural-like growing elements interconnected by synapses. Neural-like elements perceive, analyze, synthesize and save information, allowing the system to learn, train, reason, systematize and classify information, to locate connections, patterns and distinctions in it, and to produce signals for the control of external facilities.

B. The definitions of Artificial Intelligence

Axiom 2. The basic functional unit of the "nervous system" of intelligent systems is the artificial neuron (neural-like unit).

Definition 1. The artificial neuron is a simplified model of the biological neuron: a device (analogous to the cell body) with many excitatory and inhibitory inputs, a modulating input and one output. The output (an analogue of the axon) consists of a set of conductors and a set of endings. The input is fed information (codes, impulse bundles). The device processes the information according to the concepts of the neural-like growing network, generates codes (bundles of impulses) and simultaneously or periodically transmits them down the axon to the inputs of other neurons. Neuron inputs (synapse analogues) are receptors, reacting to or ignoring a certain piece of code fed to them, thereby increasing or decreasing the level of excitation of the neural-like element and the intensity of its feedback. The range and frequency of the signal are subject to adjustment.

Axiom 3. All data-free neural-like elements are novel neural-like units.

Axiom 4. All neural-like elements carrying (holding) a certain piece of information are equivalent neural-like elements.

Axiom 5. In the absence of information on the receptors of the novel neural-like elements, they continue in the mode of light arbitrary background excitation.

Axiom 6. Background excitation is a fluctuating arbitrary excitation value of the neural-like element.

Definition 2. Neural-like elements of emotion are the elements whose excitation threshold increases or decreases depending on the condition of the inner subsystems of the system, or on the result of the function being executed. Neural-like elements of emotion have connections with action-controlling motor neurons.

Definition 3. Temporary memory. The time required by the novel neural-like element for information analysis and maintenance. On receiving information (unknown to the system) on the receptors of the sensory area, the nearest novel neural-like elements (whose excitatory level is not high, but higher than that of the other novel neural-like elements) and the sensory area receptors establish connections, the connections being assigned weights, while the neural-like elements are assigned a certain excitatory threshold. At repeated replication of this information the excitatory threshold increases. On reaching the maximum excitement the neural-like element becomes an equivalent neural-like element and is transferred into long-term memory.

Definition 4. Long-term memory contains all of the equivalent neural-like elements.

Definition 5. A neural network is a parallel connected network of simple adaptive units, which interacts with the objects in the environment similarly to the biological nervous system.

Definition 6. A neural-like growing network is a set of interconnected neural-like units, set up for the reception, analysis and processing of information during interaction with the objects of the real world; moreover, in the process of reception and processing of information the network changes its own structure.

C. Neural-like growing networks

Neural-like growing networks (n-GN) are a new type of neural-like networks, which includes the following classes: multi-connected (receptor) neural-like growing networks (mn-GN); multi-connected (receptor) multidimensional neural-like growing networks (mmn-GN); receptor-effector neural-like growing networks (ren-GN); multidimensional receptor-effector neural-like growing networks (mren-GN).

N-GN are described as a directed graph, where the neural-like elements represent the nodes of the graph, and connections between the elements its edges.

So, the network is a parallel dynamic system with the topology of a directed graph, which performs information processing by changing its own state and structure in response to environmental stimuli.

The theory of neural-like growing networks operates with the basic concepts of structure and architecture, which demonstrate the principles of connection and interaction between the elements of the network:

the topological (dimensional) structure is a directed graph representing connections between the elements of the system;

the logical structure sets the rules and principles of arrangement of connections and network elements, as well as the logic of its operation;

the physical structure is a system of connections of the physical elements of the network (in the event of the mechanical implementation of a neural-like growing network).

The system's architecture is defined by the set of connections of the physical elements of the network and the principles of arrangement of connections and elements, as well as the logic of its operation.
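Definitions 3 and 4 describe a concrete mechanism: a novel element's excitatory threshold grows with each repetition of the same information until the element becomes equivalent and moves to long-term memory. A minimal sketch, assuming a fixed repetition count and a plain list as the long-term store (both assumptions, not values from the paper):

```python
# Sketch of Definition 3 / Definition 4: a novel neural-like element whose
# excitatory threshold grows on each replication of the same information,
# until it becomes an "equivalent" element held in long-term memory.

class NovelElement:
    MAX_EXCITATION = 5  # assumed maximum excitement level

    def __init__(self, pattern):
        self.pattern = pattern      # the piece of information carried
        self.threshold = 1          # initial excitatory threshold
        self.equivalent = False     # True once in long-term memory

    def replicate(self):
        """Repeated replication of the information raises the threshold."""
        if not self.equivalent:
            self.threshold += 1
            if self.threshold >= self.MAX_EXCITATION:
                self.equivalent = True


long_term_memory = []  # Definition 4: contains equivalent elements only


def present(element: NovelElement):
    """Present the element's information once and store it if it matured."""
    element.replicate()
    if element.equivalent and element not in long_term_memory:
        long_term_memory.append(element)
```

A single presentation leaves the element in temporary memory; only repeated replication promotes it, matching the text's distinction between the two memories.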


The theory of neural-like growing networks uses some principles of graph theory.

Directed or oriented graphs are graphs where the direction of the edges matters. An arc of a graph can be regarded as an ordered pair of vertices or as a directed edge connecting the vertices.

The vertices of a graph S = (A, D) are called adjacent if they are connected by an arc. Adjacent arcs are defined as arcs d_im, d_jm which have a common vertex a_m.

An arc is called outgoing when it is directed away from the node a_m, i.e. if the node a_m is the tail, but not the head, of the arc d_mi. An incoming arc is an arc d_im directed toward the node a_m, i.e. if the node a_m is the head of the arc d_im, and not its tail d_mi.

The topological structure of n-GN is represented by an oriented connected graph (fig. 1). Using graphs, the mn-GN theory studies the processes of information flow and storage in the network.

Fig. 1. Topological structure of mn-GN

Neural-like growing networks are formally defined in the following way:

S = (R, A, D, M, P, N), where R = {r_i}, i = 1, ..., n, is a finite set of receptors; A = {a_i}, i = 1, ..., k, is a finite set of neural-like elements; D = {d_i}, i = 1, ..., e, is a finite set of arcs connecting receptors with the neural-like elements and the neural-like elements with each other; N is the set of connection variables of the receptor areas; P = {P_i}, i = 1, ..., k, where P_i is the excitation threshold of the node a_i, P_i = f(m_i) > P_0 (P_0 being the minimum allowed excitation threshold), provided that the set of arcs D associated with the node a_i corresponds to the set of weights M = {m_i}, i = 1, ..., w, and m_i can take both positive and negative values.

Definition 7. A multiconnected (receptor) neural-like growing network is an acyclic graph where the minimal number of arcs incoming to a node of the graph a_i equals the variable n, each arc d_i associated with the node a_i corresponds to a certain weight m_i, and each node a_i is assigned a certain excitation threshold. The nodes which have no incoming arcs are called the receptors; the rest are called the neural-like elements.

Rule 1. If on receiving information excitation arises in a subset of nodes F out of the set of nodes having a direct connection with the node a_i, and |F| ≥ h, then the connections of the node a_i with the nodes of the subset F are terminated, and the network is joined by a new node a_(i+1), whose inputs are connected to the inputs of all the nodes of the subset F, and whose output is connected to one of the outputs of the node a_i, so that the incoming connections of the node a_(i+1) are assigned weights m_g corresponding to the weights of the terminated connections of the node a_i, and the node a_(i+1) is assigned an excitation threshold P_i equal to the sum of the weights of the connections incoming to the node a_(i+1), or an excitation threshold P_i equal to f(m_i) (a function of the weights of the connections incoming to the node a_(i+1)).

The outgoing edge of the node a_(i+1) is assigned the weight m_(i+1). Receptor outgoing edges are assigned the weight m_ri.

Rule 2. If on receiving information excitation arises in the subset of nodes G and |G| ≥ h, the network is joined by a new associative node a_(i+1), which is connected by incoming arcs to all the nodes of the subset G. Each of the incoming arcs is assigned the weight m_i, and the new node a_(i+1) is assigned an excitatory threshold P_(a_(i+1)) equal to the sum of the weights m_i of the incoming arcs, or an excitatory threshold P_i equal to f(m_i) (a function of the weights of the connections incoming to the node a_(i+1)). The new node a_(i+1) stays in the excited state.

Fig. 2. Topological structure of mren-GN
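Rule 2 can be sketched directly: when an excited subset G reaches the size h, a new associative node is connected to every node of G, and its threshold is set to the sum of the incoming weights (the first of the two threshold variants in the rule). The Node class and the uniform weight are illustrative assumptions:

```python
# Sketch of Rule 2: joining a new associative node a_(i+1) to every node of
# an excited subset G when |G| >= h. The threshold is the sum of the
# incoming weights; the data structures are assumptions for illustration.

class Node:
    def __init__(self, name):
        self.name = name
        self.in_arcs = {}      # predecessor node -> arc weight m_i
        self.threshold = 0.0   # excitatory threshold P


def apply_rule2(excited, h, weight=1.0):
    """Return a new associative node if the excited subset is large enough."""
    if len(excited) < h:
        return None
    new_node = Node("a_new")
    for g in excited:
        new_node.in_arcs[g] = weight  # incoming arc from each node of G
    # P equals the sum of the weights m_i of the incoming arcs
    new_node.threshold = sum(new_node.in_arcs.values())
    return new_node
```

The second variant of the rule would replace the sum by an arbitrary function f of the incoming weights; the sketch keeps the additive case for clarity.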


Definition 8. An informational dimension is the area of the neural-like growing network which consists of the set of nodes and arcs joined into a single informational structure of one of the reflections.

Definition 9. A set of interconnected acyclic graphs, representing neural-like growing networks in different dimensions of information, is called a multi-connected multidimensional neural-like growing network (mmn-GN).

Rule 3. If, on receiving information offered in different informational dimensions, excitation arises in the subset Q of endpoints, then the endpoints are connected with each other by arcs.

Receptor-effector neural-like growing networks are formally defined in the following way:

S = (R, Ar, Dr, Pr, Nr, E, Ae, De, Pe, Me, Ne), where R = {r_i}, i = 1, ..., n, is a finite set of receptors; Ar = {a_i}, i = 1, ..., k, is a finite set of neural-like elements of the receptor area; Dr = {d_i}, i = 1, ..., e, is a finite set of arcs of the receptor area; E = {e_i}, i = 1, ..., e, is a finite set of effectors; Ae = {a_i}, i = 1, ..., k, is a finite set of neural-like elements of the effector area; De = {d_i}, i = 1, ..., e, is a finite set of arcs of the effector area; Pr = {P_i}, Pe = {P_i}, i = 1, ..., k, where P_i is the excitation threshold of the node a_i^r, a_i^e, P_i = f(m_i), provided that the set of arcs Dr, De associated with the node a_i^r, a_i^e corresponds to the set of weights Mr = {m_i}, Me = {m_i}, i = 1, ..., w, and m_i can take both positive and negative values; Nr, Ne are the connectivity variables of the receptor and effector areas.

Definition 10. A receptor-effector neural-like growing network is a symmetric acyclic graph where the minimum number of arcs incoming to the newly formed nodes of the graph equals the variable n; each arc associated with the nodes of the receptor area is accorded a certain weight, and each node a certain excitatory threshold; each arc associated with the nodes of the effector area is accorded a certain weight, and each node an excitatory threshold. The nodes which have no incoming arcs are called receptors, the nodes without outgoing arcs are called effectors, and the rest of the nodes are neural-like elements [5-8].

Definition 11. A set of interconnected symmetric acyclic graphs, representing the state of an object and the actions produced by it in different informational dimensions, is called a multidimensional receptor-effector neural-like growing network.

The topological structure of the multidimensional receptor-effector neural-like growing network (mren-GN) is represented by a graph (fig. 2).

In formal terms, mren-GN are defined in the following way:

S = (R, Ar, Dr, Pr, Nr, E, Ae, De, Pe, Me, Ne);
R ⊃ Rv, Rs, Rt; Ar ⊃ Av, As, At;
Dr ⊃ Dv, Ds, Dt; Pr ⊃ Pv, Ps, Pt;
Mr ⊃ Mv, Ms, Mt; Nr ⊃ Nv, Ns, Nt;
E ⊃ Er, Ed, Ed1; Ae ⊃ Ar, Ad1, Ad2;
De ⊃ Dr, Dd1, Dd2; Pe ⊃ Pr, Pd1, Pd2;
Me ⊃ Mr, Md1, Md2; Ne ⊃ Nr, Nd1, Nd2;

here Rv, Rs, Rt is a finite set of receptors; Av, As, At a finite set of neural-like elements; Dv, Ds, Dt a finite set of arcs; Pv, Ps, Pt a finite set of excitatory thresholds of the neural-like elements of the receptor area, belonging, for example, to the informational visual, acoustic and tactile dimensions; Nr a finite set of connectivity variables of the receptor area; Er, Ed, Ed1 a finite set of effectors; Ar, Ad1, Ad2 a finite set of neural-like elements; Dr, Dd1, Dd2 a finite set of arcs of the effector area; Pr, Pd1, Pd2 a finite set of excitatory thresholds of the neural-like elements of the effector area, belonging, for example, to the informational speech dimension and the action dimension; Ne a finite set of connectivity variables in the effector area.

III. THE MATHEMATICAL APPARATUS OF THE FUNCTIONAL ORGANIZATION OF THE "BRAIN" OF THE ARTIFICIAL INTELLIGENT SYSTEMS

The theory of neural-like growing networks studies binary relations defined on a set of nodes (neural-like elements) {a_1, a_2, ..., a_i}, where a_i is a finite-dimensional Boolean vector and {a_i, a_k} is a set of pairs of these elements. The pair a_i, a_k is related by the relation R if and only if the vector a_i is related by R to the element a_k.

The basic properties of the vector pairs are based on the conjunction operation applied to the vector components, i.e.

a_i × a_k = (a(1) & b(1), a(2) & b(2), ..., a(n) & b(n)),

here & is the conjunction and × is the "vector" conjunction operation.

These basic conjunction properties of the vector pairs a, c are as follows:

1. a × c = a.   2. a × c ≠ a.
3. a × c = c.   4. a × c ≠ c.
5. a × c = 0.   6. a × c ≠ 0.

Combinations of three of the basic properties of the vector pairs give us eight mutually negating relations:

1. a R1 c ≡ (a × c = a) ∩ (a × c = c) ∩ (a × c ≠ 0).
2. a R2 c ≡ (a × c ≠ a) ∩ (a × c ≠ c) ∩ (a × c = 0).
3. a R3 c ≡ (a × c ≠ a) ∩ (a × c ≠ c) ∩ (a × c ≠ 0).
4. a R4 c ≡ (a × c ≠ a) ∩ (a × c = c) ∩ (a × c ≠ 0).
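The componentwise conjunction and the eight relations R1-R8 can be checked mechanically. A minimal sketch, assuming Boolean vectors are encoded as tuples of 0/1 (the encoding is the only assumption):

```python
# Sketch of the "vector" conjunction operation and of the relations R1..R8
# on pairs of Boolean vectors, encoded here as tuples of 0/1.

def conj(a, c):
    """Componentwise conjunction a x c."""
    return tuple(x & y for x, y in zip(a, c))


def relation(a, c):
    """Return which of R1..R8 holds for the pair (a, c)."""
    zero = (0,) * len(a)
    p = conj(a, c)
    eq_a, eq_c, nonzero = p == a, p == c, p != zero
    # each relation is a combination of three basic properties
    table = {
        (True,  True,  True):  "R1",
        (False, False, False): "R2",
        (False, False, True):  "R3",
        (False, True,  True):  "R4",
        (True,  False, True):  "R5",
        (True,  False, False): "R6",  # trivial: a is the null vector
        (False, True,  False): "R7",  # trivial: c is the null vector
        (True,  True,  False): "R8",  # trivial: both are null
    }
    return table[(eq_a, eq_c, nonzero)]
```

Enumerating the three Boolean properties confirms the paper's count: exactly eight combinations are possible, of which R6, R7 and R8 involve a null vector.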

5. a R5 c ≡ (a × c = a) ∩ (a × c ≠ c) ∩ (a × c ≠ 0).
6. a R6 c ≡ (a × c = a) ∩ (a × c ≠ c) ∩ (a × c = 0).
7. a R7 c ≡ (a × c ≠ a) ∩ (a × c = c) ∩ (a × c = 0).
8. a R8 c ≡ (a × c = a) ∩ (a × c = c) ∩ (a × c = 0).

Here ∩ is the logical AND.

Obviously, the relations R6, R7, R8 are trivial, as one or both vectors in them are equal to null. Basing on the analysis of the basic conjunctive properties of the vector pairs, let us introduce the following affirmation:

Affirmation 1. On the set of vector pairs a, a' ∈ A, five basic mutually negating relations R1, R2, R3, R4, R5 can be defined. Based on affirmation 1, the following basic operations of construction of n-GN are defined.

Let the external information coming into the receptor field be represented by the set Wr = {r_ij}, i ∈ Ir, j ∈ Jr, and the excitations coming into the effector field by the set We = {d_ij}, i ∈ Ie, j ∈ Je.

For all pairs of vectors a, a' ∈ Wr and a, a' ∈ We, where Wr is a set of vector lines of length k of the receptor area and We is a set of vector lines of length l of the effector area, let us introduce the mutually negating relations Rr_i, Re_i for the receptor and effector areas accordingly:

a Rr1 a' ≡ ∀ a_i^j, a_i^(j+1) ∈ A: (a_i^j × a_i^(j+1) = a_i^j) ∩ (a_i^j × a_i^(j+1) = a_i^(j+1)) ∩ (a_i^j × a_i^(j+1) ≠ 0),

here a_i^j × a_i^(j+1) is the conjunction of the vectors a_i^j and a_i^(j+1), and ∩ is the logical AND;

a Rr2 a' ≡ ∀ a_i^j, a_i^(j+1) ∈ A: (a_i^j × a_i^(j+1) ≠ a_i^j) ∩ (a_i^j × a_i^(j+1) ≠ a_i^(j+1)) ∩ (a_i^j × a_i^(j+1) = 0);

a Rr3 a' ≡ ∀ a_i^j, a_i^(j+1) ∈ A: (a_i^j × a_i^(j+1) ≠ a_i^j) ∩ (a_i^j × a_i^(j+1) ≠ a_i^(j+1)) ∩ (a_i^j × a_i^(j+1) ≠ 0);

a Rr4 a' ≡ ∀ a_i^j, a_i^(j+1) ∈ A: (a_i^j × a_i^(j+1) ≠ a_i^j) ∩ (a_i^j × a_i^(j+1) = a_i^(j+1)) ∩ (a_i^j × a_i^(j+1) ≠ 0);

a Rr5 a' ≡ ∀ a_i^j, a_i^(j+1) ∈ A: (a_i^j × a_i^(j+1) = a_i^j) ∩ (a_i^j × a_i^(j+1) ≠ a_i^(j+1)) ∩ (a_i^j × a_i^(j+1) ≠ 0);

a Re1 a' ≡ ∀ a_i^j, a_i^(j+1) ∈ A: (a_i^j × a_i^(j+1) = a_i^j) ∩ (a_i^j × a_i^(j+1) = a_i^(j+1)) ∩ (a_i^j × a_i^(j+1) ≠ 0);

a Re2 a' ≡ ∀ a_i^j, a_i^(j+1) ∈ A: (a_i^j × a_i^(j+1) ≠ a_i^j) ∩ (a_i^j × a_i^(j+1) ≠ a_i^(j+1)) ∩ (a_i^j × a_i^(j+1) = 0);

a Re3 a' ≡ ∀ a_i^j, a_i^(j+1) ∈ A: (a_i^j × a_i^(j+1) ≠ a_i^j) ∩ (a_i^j × a_i^(j+1) ≠ a_i^(j+1)) ∩ (a_i^j × a_i^(j+1) ≠ 0);

a Re4 a' ≡ ∀ a_i^j, a_i^(j+1) ∈ A: (a_i^j × a_i^(j+1) ≠ a_i^j) ∩ (a_i^j × a_i^(j+1) = a_i^(j+1)) ∩ (a_i^j × a_i^(j+1) ≠ 0);

a Re5 a' ≡ ∀ a_i^j, a_i^(j+1) ∈ A: (a_i^j × a_i^(j+1) = a_i^j) ∩ (a_i^j × a_i^(j+1) ≠ a_i^(j+1)) ∩ (a_i^j × a_i^(j+1) ≠ 0).

These relations are denoted as a Rr_i a'.

A. Let a_ri^1, a_ri^2, a_ri^3, ..., a_ri^k and a_ei^1, a_ei^2, a_ei^3, ..., a_ei^k be the sets of vectors for the receptor and effector areas accordingly.

B. Let us check by which of the relations Rr1, Rr2, Rr3, Rr4, Rr5 the pairs of vectors a, a' of the set of pairs of the receptor area (a_ri^1, a_ri^k), (a_ri^2, a_ri^k), (a_ri^3, a_ri^k), ..., (a_ri^(k-1), a_ri^k) are related and, simultaneously, by which of the relations Re1, Re2, Re3, Re4, Re5 the pairs of vectors a, a' of the set of pairs of the effector area (a_ei^1, a_ei^k), (a_ei^2, a_ei^k), (a_ei^3, a_ei^k), ..., (a_ei^(k-1), a_ei^k) are related; here k runs from 2 to k + g, where g is the number of the new vectors.

Receptor area. If the pair of vectors (a_ri^1, a_ri^k) is related by Rr1, Rr2, Rr3, Rr4 or Rr5, the operation Q_r1^1, Q_r1^2, Q_r1^3, Q_r1^4 or Q_r1^5, respectively, is performed:

Q_r1^1(a, a') = (a_ri^1, a_ri^k, a_ri^(k+1)): a_ri^1 := a_ri^1, a_ri^k := 0, a_ri^(k+1) := 0, m_ai^1 := b_k^1, m_ai^2 := b_k^2, P0(a_i^1) = f(m_ai^1), P0(a_i^2) = f(m_ai^2);

Q_r1^2(a, a') = (a_ri^1, a_ri^k, a_ri^(k+1)): a_ri^1 := a_ri^1, a_ri^k := a_ri^k, a_ri^(k+1) := 0, m_ai^1 := b_k^1, m_ai^2 := b_k^2, P0(a_i^1) = f(m_ai^1), P0(a_i^2) = f(m_ai^2);

Q_r1^3(a, a') = (a_ri^1, a_ri^k, a_ri^(k+1)): a_ri^1 := (a_ri^1 × a_ri^k × a_ri^(k+1)) ∪ c_rj, a_ri^k := (a_ri^1 × a_ri^k × a_ri^(k+1)) ∪ c_rj, a_ri^(k+1) := a_ri^1 × a_ri^k, m_ai^1 := b_k^1, m_ai^k := b_k^k, m_ai^(k+1) := f(P_c), P0(a_i^(k+1)) = f(m_ai^(k+1)), P0(a_i^1) = f(m_ai^1, m_c), P0(a_i^k) = f(m_ai^k, m_c);

Q_r1^4(a, a') = (a_ri^1, a_ri^k, a_ri^(k+1)): a_ri^1 := (a_ri^1 × a_ri^k × a_ri^(k+1)) ∪ c_rj, a_ri^k := a_ri^k, a_ri^(k+1) := 0, m_ai^1 := b_k^1, m_ai^k := b_k^k, m_ai^(k+1) := f(P_c), P0(a_i^k) = f(m_ai^k), P0(a_i^1) = f(m_ai^1, m_c);

Q_r1^5(a, a') = (a_ri^1, a_ri^k, a_ri^(k+1)): a_ri^1 := a_ri^1, a_ri^k := (a_ri^1 × a_ri^k × a_ri^(k+1)) ∪ c_rj, a_ri^(k+1) := 0, m_ai^1 := b_k^1, m_ai^k := b_k^k, m_ai^(k+1) := f(P_c), P0(a_i^1) = f(m_ai^1), P0(a_i^k) = f(m_ai^k, m_c).

Operations Q_r1^1, Q_r1^2, Q_r1^3, Q_r1^4 or Q_r1^5 are true if h_r ≥ n_1; otherwise, if a_ri^1 ≠ a_ri^k, then a_ri^1 := a_ri^1, a_ri^k := a_ri^k, a_ri^(k+1) := 0, m_ai^1 := b_ri^1, m_ai^k := b_ri^k, P0(a_i^1) = f(m_ai^1), P0(a_i^k) = f(m_ai^k); if a_ri^1 = a_ri^k, then a_ri^k := 0, a_ri^(k+1) := 0, m_ai^k := b_k^1, P0(a_i^1) = f(m_ai^k).

The index j of the operations Q_rj is set as follows:

j = 1, if operation Q_r1^1 was performed;
j = 2, if operation Q_r1^2, Q_r1^4 or Q_r1^5 was performed;
j = 3, if operation Q_r1^3 was performed.

If the pair of vectors (a_ri^2, a_ri^k) is related by Rr1, Rr2, Rr3, Rr4 or Rr5, the operations Q_rj^1, Q_rj^2, Q_rj^3, Q_rj^4 or Q_rj^5 are performed.


Furthermore, if the pair of vectors (a_ri^3, a_ri^k) is related by Rr1, Rr2, Rr3, Rr4 or Rr5, the operations Q_rj^1, Q_rj^2, Q_rj^3, Q_rj^4 or Q_rj^5, etc., are performed, until the set of pairs (a_ri^1, a_ri^k), (a_ri^2, a_ri^k), (a_ri^3, a_ri^k), ..., (a_ri^(k-1), a_ri^k) is exhausted.

These operations are denoted as Q_ri(a, a').

Effector area. Operations are carried out similarly to the receptor area.

So, descriptions of concepts, objects, conditions or situations and the connections between them, indicating the mutual dependence of their informational representations, i.e. the sensory system, modulating system, conditioned and unconditioned reflexes, the reflex ring, temporary and long-term memory, are all developed in the receptor area of the multidimensional ren-GN. The effector area in its turn generates action sequences and produces signals operating the executive mechanisms, i.e. the motivational system of purposeful behavior, and the motor system. The parallel functioning of these systems allows an artificial intelligent system to perceive, analyze, remember and synthesize information, learn and perform purposeful actions [5-7].

IV. THE FUNCTIONAL ORGANIZATION OF THE "BRAIN" OF THE ARTIFICIAL INTELLIGENT SYSTEMS

The "brain" of the system of artificial intelligence consists of a set of interconnected neural-like elements. Interacting with each other, neural-like elements establish controlling signals, which regulate the cognitive and reflective activity of the whole system.

A. Sensory system

Function of Perception - information from the outside world travels to the receptor area and activates the receptors, which in their turn activate the neural-like elements of different levels of information processing: the unconditioned reflex levels (primary automatism systems), the conditioned reflex levels (secondary automatism systems), and the classification, generalization and memorization levels. In formal terms, we define relations and perform the operation a Rr_i a' ⇒ Q_ri(a, a').

Unconditioned reflexes are set at the creation of the system:

(⇒ a Rr_i a' ⇒ Q_ri(a, a')) ⇒ a_i^k ⇐ (Q_ei(a, a') ⇐ a Re_i a' ⇐)

Conditioned reflexes are acquired in the process of functioning of the system. During the functioning of the system they develop as a reaction to an irritant "specific" to each of them, thereby providing for the orderly execution of the most important system functions irrespective of an arbitrary, changing environment.

a Rr_i a' ⇒ Q_ri(a, a') ⇒ a_i^k ⇒ e (UR)

Conditioned reflexes are acquired in the process of functioning of the system. Started by an indifferent irritant, excitation arises in the corresponding receptors, and impulses travel to the sensory system. Under the influence of an unconditioned irritant, there arises a specific excitation of the corresponding receptors. So two centers of excitation emerge simultaneously. A temporary reflex connection appears between the two centers. After the appearance of the temporary connection, an isolated effect of the conditioned irritant produces an unconditioned reaction.

t1: a Rr_i a' ⇒ Q_ri(a, a')
t2: a Rr_i a' ⇒ Q_ri(a, a') ⇒ a_i^k ⇒ e
...
t_n1: a Rr_i a' ⇒ Q_ri(a, a') ⇒ a_i^k ⇒ e
t_n2: a Rr_i a' ⇒ Q_ri(a, a')

Development of the conditioned reflex:

t1_1: a Rr_i a' ⇒ Q_ri(a, a') ⇒ a_i^k ⇒ e
t1_2: a Rr_i a' ⇒ Q_ri(a, a')

CR - Conditioned reflexes. Conditioned reflexes are a universal adaptation mechanism allowing for flexible behavior patterns.

Primary automatisms (AU1) - unconditioned reflexes: UR1^t1 ⇒ AU1^tn.

Secondary automatisms (AU2) - established conditioned reflexes: CR1^t1 ⇒ AU2^tn.

B. Modulating system

The modulating system regulates the level of excitation of the neural-like elements and performs selective modulation of a particular function. The initial source of activation is the priority of the inner activity of the subsystems of the main system. It is embedded at the creation of the system, similarly to unconditioned reflexes. Any deviation from vitally important system values leads to the activation (modification of the excitatory threshold) of certain subsystems and processes. A second source of activation is related to environmental irritants. The priority of a certain activity is determined in the process of the "life cycle", similarly to the development of conditioned reflexes.

Motivation is a mechanism which contributes to the satisfaction of needs: it connects the memory of a certain object (for example, a lack of energy) with the action for satisfying this need (a search for energy). Here, then, a purposeful behavior is developed, which consists of three blocks: a search for a goal, interaction with the identified goal, and rest after achieving the goal.

A purposeful behavior: motivational goal setting - excitation; actions directed at the search for an algorithm for the solution of the target task; achievement of the goal - release of excitation.

C. Motor system

Fusion of excitations of different intensity with significant signals and motivational influences is characteristic of the motor
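The development of a conditioned reflex described above (two centers of excitation arising together and a temporary connection forming between them, after which the conditioned irritant alone produces the reaction) can be sketched as a simple pairing counter; the number of pairings required is an assumption, not a value from the paper:

```python
# Sketch of conditioned-reflex development: an indifferent (conditioned)
# irritant repeatedly paired with an unconditioned irritant builds a
# temporary connection; once established, the conditioned irritant alone
# produces the reaction.

class ReflexSystem:
    PAIRINGS_NEEDED = 3  # assumed repetitions to establish the connection

    def __init__(self):
        self.connection_strength = 0

    def stimulate(self, conditioned: bool, unconditioned: bool) -> bool:
        """Return True if a reaction is produced."""
        if conditioned and unconditioned:
            # two centers of excitation: the temporary connection strengthens
            self.connection_strength += 1
            return True  # the unconditioned irritant always reacts
        if unconditioned:
            return True  # unconditioned reflex, set at the system's creation
        if conditioned:
            # an isolated conditioned irritant reacts only once CR is formed
            return self.connection_strength >= self.PAIRINGS_NEEDED
        return False
```

Before the pairings, the conditioned irritant is indifferent; after them it triggers the reaction on its own, matching the reflex-development sequence t1, t2, ..., tn in the text.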


system. It is distinctive of them to accomplish a complete transformation of the afferent influences into a qualitatively new form of activity, directed toward the fastest output of efferent excitations to the periphery, i.e. to the neuron chains at the final stage of behavior formation.

The motor system consists entirely of ensembles (chains) of neurons of the efferent (motor) type and is exposed to a constant flow of information from the afferent (sensor) area. Unlike the afferent area, in the area that launches and controls behavioral acts the activation processes flow in the top-down direction, starting at the highest levels. The chains of command neurons (motor programs) created at the highest levels then move to the neural chains of the lower motor levels and to the motor neurons, the effectors of the motor efferent impulse areas.

Function of Action – information comes out of the effector area and, through the effectors and the motor area, affects the environment: M1 ⇒ e.

Function of Motion – a sequence of actions (M) discovered accidentally (a child has learned to walk by himself) or with the help of a teacher (a child has learned to walk with the help of his parents):

M1^t1 ⇒ M1^t2 ⇒ M1^t3 ⇒ … ⇒ M1^tn.

Psychic function, or behavioral act – a sequence of automatisms carried out in a system functioning according to the reflex principle, in which the central and the receptor-effector (peripheral) areas are interrelated and whose joint activity produces an integral reaction. The system has a multidimensional structure, where each level, from the receptor to the effector formations, makes a "specific" contribution to the "nervous" activity of the system.

AU1^t1 ⇒ AU2^t2 ⇒ AU2^t3 ⇒ … ⇒ AU1^tn.

Function of Thought is an ensemble of excited neural-like elements at the subconscious level (an intrinsic model of the outside or abstract world, strengthened by the motivational function at a given moment, without exit to the outside world).

Function of Reflection is a sequential interaction of ensembles of excited neural-like elements at the subconscious level (intrinsic models), regulated by the excitation levels of the neural-like elements and strengthened or weakened by the motivational function. Information circulates in a closed circuit (sensory area; information processing levels: analysis, classification, generalization, memorization; motor area; sensory area) without exit to the external environment.

a_Rri^G ⇒ a_ri^G' ⇒ Q(a^G, a^G') ⇒ a_i^Gk ⇒ e ⇒ …

To think, to reflect is to realize. In this sense "mental uttering", i.e. cycles of transferring the internal active information to the system's input, can be viewed as a model of the artificial consciousness of the intelligent computer, while cycles of transferring the internal active information to the input of the system without turning on the "utterance" can be regarded as a model of the artificial subconscious.

Function of Consciousness is the propagation of excitation through the active ensembles of neural-like elements (intrinsic models of the outside world), strengthened by the motivational function and reflecting the most important connections in the "subject – environment" system.

Function of Subconscious is the propagation of excitation through the active ensembles of neural-like elements (intrinsic models of the outside world), weakened by the motivational function. It prepares the models for realization, recognizes acquired images and executes habitual motions.

Function of Unconscious Reaction – external information at the subconscious level produces feedback to the outside world (unconditioned and conditioned reflexes, routine actions, secondary automatisms).

Function of Conscious Reaction – external information at the conscious level produces feedback to the outside world (conscious actions at the stage of developing conditioned reflexes and secondary automatisms).

Function of Intuition – searching for new information, developing hypotheses and analogies, establishing temporary new connections, activating new ensembles of neural-like elements and producing new combinations out of them, which automatically appear in the subconscious, the most active of them later coming through to the conscious area [15 - 19].

Function of Imitation – observing the actions of other objects (such as a child watching his parents), the subject internally repeats their actions by micro movements (a subtle excitation arises in the ensembles of the neural-like elements involved in performing these actions). Further on, by multiple repetition of this sequence in play, one learns it (repetition leads to the growth of the excitation thresholds of the neural-like elements), which gives rise to behavioral stereotypes.

D. Individuality of the system

Individual distinctions of the system are revealed through its activity and behavioral functions; they are conditioned by the constructive nature of its organization, as well as by its "life" experience, gained in the process of training and functioning [20].

V. PRACTICE

A simplified virtual artificial robotic personality was produced in the project "VITROM". Technical vision was implemented in the model (Fig. 3), as was recognition of different objects (Fig. 4) and recognition of a route through the city streets with traffic on a given route (Fig. 5) [17]. The project was demonstrated at the CeBIT exhibition in Hanover in 2000-2002. A model of thinking was accomplished in the intellectual system "Dialogue" (2005).
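The "mental uttering" cycle described in this section (repeatedly transferring the system's internal active information back to its own input) can be sketched as a toy loop; the `process` function, the state encoding and the `utterance` flag are illustrative assumptions, not the paper's implementation.

```python
def think(process, state, cycles, utterance=True):
    """Re-feed internal active information to the system's own input.

    With utterance=True the intermediate states are "uttered" (a toy model
    of conscious processing); with utterance=False the same cycle runs
    silently (a toy model of subconscious processing).
    """
    uttered = []
    for _ in range(cycles):
        state = process(state)        # the output becomes the next input
        if utterance:
            uttered.append(state)     # "mental uttering" of the new state
    return state, uttered

# Toy internal process: strengthen the active model by one step per cycle.
final, log = think(lambda s: s + 1, state=0, cycles=3)
print(final, log)                     # 3 [1, 2, 3]
silent, log2 = think(lambda s: s + 1, state=0, cycles=3, utterance=False)
print(silent, log2)                   # 3 []
```

In both modes the internal state evolves identically; only the "uttered" trace that could reach an external observer differs, which mirrors the conscious/subconscious distinction drawn above.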
Fig. 3. Model of technical vision

Fig. 4. Detection in real time

Fig. 5. Traffic on a designated route

The system "Dialogue" implements perception, analysis and synthesis of information, as well as thinking, logical deduction, etc. The system was demonstrated at the international conference "Knowledge-Dialogue-Solution 2007" in Varna and at the international sci-tech multi-conference "Contemporary problems of computer information technology, mechatronics and robotics 2009" in Russia [8-22].

REFERENCES
[1] A.I. Shevchenko, Contemporary Problems of Artificial Intelligence. – Kiev: IAIP «Science and Education», 2003. – 228 p.
[2] Nervous System. – Access mode: galactic.org.ua.
[3] E.N. Sokolov, The principle of vector coding in psychophysiology // Moscow University Messenger. Series 14: Psychology. – 1995. – № 4. – P. 3 – 13.
[4] A.R. Luria, Neuropsychology Basics. – Moscow, 1973. – 173 p.
[5] V.A. Yashchenko, Receptor-effector neural-like growing networks – an effective tool for modeling intelligence. I // Cybernetics and Systems Analysis. – 1995. – № 4. – P. 54 – 62.
[6] V.A. Yashchenko, Receptor-effector neural-like growing networks – an effective tool for modeling intelligence. II // Cybernetics and Systems Analysis. – 1995. – № 5. – P. 94 – 102.
[7] V.A. Yashchenko, Receptor-effector neural-like growing network – an efficient tool for building intelligence systems // Proc. of the Second International Conference on Information Fusion (California, July 6–8, 1999). – Sunnyvale Hilton Inn, Sunnyvale, California, USA, 1999. – Vol. II. – P. 1113 – 1118.
[8] V.A. Yashchenko, Secondary automatisms of intelligent systems // Artificial Intelligence. – 2005. – № 3. – P. 432 – 447.
[9] V.A. Yashchenko, A.I. Shevchenko, Can computer think? // Artificial Intelligence. – 2005. – № 4. – P. 476 – 489.
[10] V.A. Yashchenko, Thinking computers // Mathematical Machines and Systems. – 2006. – № 1. – P. 49 – 59.
[11] V.A. Yashchenko, A.I. Shevchenko, From artificial intelligence to artificial personality // Artificial Intelligence. – 2009. – № 3. – P. 492 – 505.
[12] V.A. Yashchenko, Some aspects of the «nervous activity» of intelligent systems and robots // Artificial Intelligence. – 2009. – № 4. – P. 504 – 511.
[13] V.A. Yashchenko, A.I. Shevchenko, Aspects of development of artificial personality // Intern. Sci.-Tech. Multi-conf. «Contemporary problems of computer information technology, mechatronics and robotics – 2009» (CITMR-2009) (Divnomorsk, Russia, 28 Sep – 3 Oct, 2009). – Divnomorsk, Russia, 2009. – Report theses. – P. 10 – 17.
[14] V.A. Yashchenko, Some aspects of the «nervous activity» of intelligent systems and robots // Intern. Sci.-Tech. Multi-conf. «Contemporary problems of computer information technology, mechatronics and robotics – 2009» (CITMR-2009).
[15] V.A. Yashchenko, From multidimensional receptor-effector neural-like growing networks to an electronic brain of robots // Mathematical Machines and Systems. – 2013. – № 4. – P. 14 – 19. (in Russian)
[16] V.A. Yashchenko, Some aspects of the «nervous activity» of intelligent systems and robots // Intern. Sci.-Tech. Multi-conf. «Contemporary problems of computer information technology, mechatronics and robotics – 2009» (CITMR-2009) (Divnomorsk, Russia, 28 Sep – 3 Oct, 2009). (in Russian)
[17] V.A. Yashchenko, On perception and pattern recognition in artificial intelligence systems // Mathematical Machines and Systems. – 2012. – № 1. – P. 16 – 27. (in Russian)
[18] V.A. Yashchenko, Reflecting computers // Intern. Conf. KDS 2007, Knowledge – Dialogue – Solution (Varna, Bulgaria, June 18–24, 2007). – P. 673 – 678. (in Russian)
[19] V.A. Yashchenko, A.I. Shevchenko, Can computer think? // Intern. Sci.-Tech. Conf. «Intelligent and Multiprocessor Systems – 2005» (IMS-2005). – Report theses. – Divnomorsk, Russia, 26 Sep – 1 Oct, 2005. (in Russian)
[20] V.A. Yashchenko, A.I. Shevchenko, From artificial intelligence to artificial personality // Artificial Intelligence. – 2009. – № 3. – P. 492 – 505. (in Russian)
[21] A.A. Morozov, V.A. Yashchenko, Situation centers – information technologies of the future. – Kiev: SP «Intertekhnodruk», 2008. – 332 p. (in Russian)
[22] V.A. Yashchenko, Artificial intelligence. Theory. Modeling. Application. – Kiev: Logos, 2013. – 289 p. – Bibliogr.: p. 283 – 289. (in Russian)