
Biologically Inspired Algorithm

Prof M S Prasad
Invited Talk
Centre for Artificial Intelligence & Robotics
DRDO, Bangalore, 2016
Swarms, and how they might inspire us
There are some interesting things that come to mind when we
think of swarms (flocks, schools, etc …):
• A swarm sometimes seems to behave as if it is an individual organism.
Ants or wasps on a hunt for food, or on the attack, behave as if with a single
mind, co-ordinating different actions with different parts of the swarm.

• A swarm of ants/bees/locusts/etc. often exhibits behaviours that seem clearly
more intelligent than any of its individual members.

• The way in which swarms in some species change direction is astoundingly
well co-ordinated.

• The way in which swarms in some species avoid obstacles seems to be
extremely well choreographed.
Other puzzling things that swarms do
• Termites build huge nests – how? Is an individual
termite clever enough to do this?

• Bees build hives, with complex internal structure
– same question.
What is a flock?
• One definition: a group of birds or
mammals assembled or herded together
Why does flocking/swarming occur
so much in nature?
Energy savings: geese in a V formation have around a 70% greater
range than when flying individually. Individuals can fly around 25%
faster (why?).
Frightening and confusing predators; avoiding being “picked off”.

Helping to catch prey: e.g. tuna school in a crescent-shaped flock
with the concave part forward. This is thought to help channel
their prey to the “focus” and
stop them from escaping.
It may also help with migration.
If we can assume that:
– An individual has an idea, but not a perfect one, of where to
go … e.g. by itself it may go a few degrees off course.
– The “errors” of individuals are not correlated (i.e. they’re all
wrong in a randomly different way)
– An emergent result of the flocking is that the flock’s
direction is the average of its members’ directions.
Then: basic statistics can show that the error in the flock’s
direction is probably very small. About 1/sqrt(n) of
the typical error of one of the n individuals.
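The 1/sqrt(n) claim is easy to check numerically. Below is a minimal Python sketch; the 10-degree individual error, the trial count and the seed are illustrative assumptions, not values from the argument above:

```python
import random
import statistics

def flock_heading_error(n, sigma=10.0, trials=2000, seed=0):
    """Empirical standard deviation of a flock's mean heading error when
    each of n birds has an independent Gaussian heading error of sigma
    degrees. The flock is assumed to fly in the average direction."""
    rng = random.Random(seed)
    means = []
    for _ in range(trials):
        errors = [rng.gauss(0.0, sigma) for _ in range(n)]
        means.append(sum(errors) / n)  # emergent flock heading error
    return statistics.pstdev(means)

single = flock_heading_error(1)    # one bird: error about sigma
flock = flock_heading_error(100)   # 100 birds: error about sigma / sqrt(100)
```

With 100 birds, the flock's heading error comes out roughly a tenth of a single bird's, matching the 1/sqrt(n) prediction.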
So …
Flocking occurs so much because it is clearly useful.
But how do they do it so well? Individual ants are
not clever enough to understand the benefits.

It comes down to this: simple behaviours of individuals
in a group can have useful emergent properties.
A theme we will continue to see a lot …
The Adaptive Culture Model
Robert Axelrod has a well-known theory, “Axelrod’s Culture Model”,
which explains how ideas spread in societies. Kennedy and Eberhart
(a social psychologist and an electrical engineer respectively) altered this
into the “Adaptive Culture Model”, which works like this:
If you think your neighbour is good, then be more like them.
More in the PSO lecture, but that’s basically it. Notice the important
words,
neighbour: you change yourself under the influence of people nearby
good: in some way your neighbour is more optimal than you,
otherwise why be like them?
more like: this is vague, so you have freedom in how you change
This is actually a very good model for how culture and ideas
spread quickly in societies. Everything from rumours to eating
habits. I only hope this works with ‘green’ behaviour …
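The rule above can be sketched in a few lines of Python. Everything concrete here – the ring of agents, the numeric “idea” values, the fitness function and the step size – is an illustrative assumption, not part of Kennedy and Eberhart's formulation:

```python
import random

def acm_sweep(values, fitness, step=0.5, rng=random):
    """One sweep of the Adaptive Culture Model rule: each agent compares
    itself with a random neighbour on a ring and, if the neighbour is
    better, moves its value part-way toward the neighbour's."""
    n = len(values)
    for i in range(n):
        j = (i + rng.choice([-1, 1])) % n            # a nearby agent
        if fitness(values[j]) > fitness(values[i]):  # neighbour is "good"
            values[i] += step * (values[j] - values[i])  # become "more like"

rng = random.Random(1)
vals = [rng.uniform(-10.0, 10.0) for _ in range(20)]
spread0 = max(vals) - min(vals)
for _ in range(50):
    acm_sweep(vals, lambda v: -abs(v - 3.0), rng=rng)  # "good" = close to 3
spread = max(vals) - min(vals)
```

After a few dozen sweeps the agents cluster around the best idea in the population: the spread of values shrinks, even though no agent knows the global picture.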
Two main things that come from swarm
inspiration:
Optimisation algorithms.
Ants seem to find the shortest path to food that may be
quite distant from their nest. They do this via “stigmergy” –
laying pheromones on their path as they move. This has
directly inspired the design of a very successful optimisation
method, called Ant Colony Optimisation.
Meanwhile, the adaptive culture model has led to a different,
and also very successful, new optimisation algorithm, called
Particle Swarm Optimisation
One other thing that comes from swarm
inspiration:
Swarm-based construction
Not yet applied much, but soon to be: we are working on it!
Swarm-based construction – how ants build their nests, bees
build hives, and beavers build dams, etc – seems to be
explainable by sets of simple rules that make use of stigmergy
(as with other emergent behaviours). But in this case, the
rules are about where the individual should put things, rather
than where the individual should go.

See section 2.1.3 of the recommended reading “Swarm Intelligence
Chapter” on my teaching site.
Autonomous Flocking
Behavior

Behavioral Algorithm
 In order to achieve a fully autonomous,
self-stabilizing automaton, we needed a
strong, stable and efficient algorithm
which will enable a UAV to move around in
solo and in group mode.

 This algorithm tries to imitate
real-life bird flocking with no leader
election.
A Few Definitions

 Each UAV has a detection range and a
separation range:
 The detection range – the distance at which a
UAV can detect other UAVs.
 The separation range – the distance at which a
UAV might steer to avoid other UAVs.

 Until something falls within a UAV's
detection range, it will not react to it.
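The two ranges translate directly into a decision rule. A minimal Python sketch, assuming 2-D position tuples per UAV and illustrative range values (the numbers are not from the talk):

```python
import math

def reaction(uav, other, detection_range=50.0, separation_range=10.0):
    """How a UAV responds to another UAV, given the two ranges defined
    above. The numeric range values are illustrative assumptions."""
    d = math.dist(uav, other)
    if d > detection_range:
        return "ignore"     # outside the detection range: no reaction at all
    if d < separation_range:
        return "separate"   # inside the separation range: steer to avoid
    return "flock"          # detected, but not too close: normal flocking
```

A UAV at (0, 0) would ignore another at (0, 100), flock with one at (0, 30), and steer away from one at (0, 5).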
Craig Reynolds and “Boids”
Craig Reynolds is a computer graphics researcher, who revolutionised
animation in games and movies with his classic paper:

Reynolds, C. W. (1987) Flocks, Herds, and Schools: A Distributed Behavioral Model,
Computer Graphics, 21(4) (SIGGRAPH '87 Conference Proceedings), pages 25-34.

The story is:
• before this paper, animations of flocks, swarms, groups, and so on,
behaved nothing at all like the real thing. Nobody knew how to make
it realistic. (we still have that problem with fire, explosions, and
realistic human movement, etc …)
• Reynolds solved the problem by trying a very simple approach,
which was inspired by a sensible view of how animals actually do it.
Reynolds’ Rules

Reynolds came up with three simple rules that solve this
problem, resulting in entirely realistic flocking behaviour.

To explain them, we first need to consider the perceptual system of
an individual (which Reynolds called a boid).

For realistic movement, you need a realistic view of perception.
E.g. a starling’s movement is not influenced at all by the flockmates
that it cannot see – such as those out of its line of sight, or too far
away.
A simple sensory system
This picture is from Reynolds’ boids page.
The green boid can see a certain amount
ahead, and is also aware of any
flockmates within limits on
either side (recall, birds tend
to have eyes on the sides
of their heads).

Two parameters, angle and distance,
define the system. So, this boid will only
be influenced by those others it can sense
according to these parameters.
Rule 1: Separation
At each iteration, a boid
makes an adjustment to its
velocity according to the
following rule:

Avoid getting too close to
local (the ones it is aware
of) flockmates.
Rule 2: Alignment
At each iteration, a boid
makes an adjustment to
match its velocity to the
average of that of its local
flockmates.
Rule 3: Cohesion
At each iteration, a boid
makes an adjustment to its
velocity towards the
centroid of its flockmates.
Steering: more complex behaviors
Craig W. Reynolds, Steering Behaviors For Autonomous Characters, Game Developers
Conference, 1999.

Pursuit and evasion; arrival; obstacle avoidance;
path following; wall following; leader following.


Boids!

Boids is an artificial life program, developed by Craig
Reynolds in 1986, to simulate the flocking behavior of birds.
“Boid” is a shortened version of “bird-oid
object”, which refers to a bird-like object.

Boid motion (1)
• Boids have a local coordinate system
• Similar to particle systems, but have orientation
• Have a geometric shape used for rendering
• Behavior-based motion
Boid motion (2)
• Flight is accomplished using a dynamic,
incremental, and rigid geometrical
transformation
• Flight path not specified in advance
• Forward motion specified as incremental
translations in local +Z direction
Boid motion (3)
• Rotation about X, Y, and Z axes for pitch,
yaw, and roll
• No notion of lift or gravity (except for
banking)
• Limits set for maximum speed and
maximum acceleration
Flocking motion
• Boids must coordinate with flockmates
• Two main desires:
– Stay close to the flock
– Avoid collisions with the flock
• Flocking seems to have evolved due to
protection from predators, higher chances of
finding food, mating, etc.
Arbitrating behaviors
• Behavioral urges produce acceleration
requests: normalized 3D vector with
importance in [0,1]
• Priority acceleration allocation is used
instead of averaging acceleration requests
• Acceleration requests are prioritized and the
most important ones are used up to a
maximum acceleration
Simulated perception
• Unrealistic for each boid to have complete
knowledge
• Flocking depends upon a localized view of
the world
• Each boid has a spherical neighborhood of
sensitivity, based upon a radius and an
exponent: 1/r^n
• Can be exaggerated in forward direction
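The 1/r^n fall-off can be written as a simple weight function. A minimal sketch; the radius and exponent values are illustrative assumptions:

```python
def neighbor_weight(r, radius=8.0, n=2):
    """Sensitivity of a boid to a flockmate at distance r: zero outside
    the spherical neighborhood, falling off as 1/r**n inside it.
    (radius=8 and n=2 are illustrative values, not from the slides.)"""
    if r <= 0.0 or r > radius:
        return 0.0
    return 1.0 / r ** n

# With n = 2, a flockmate twice as far away has a quarter of the influence.
ratio = neighbor_weight(2.0) / neighbor_weight(4.0)  # 4.0
```

A larger exponent n makes the boid effectively short-sighted, since distant flockmates contribute almost nothing.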
Scripted flocking
• More control is needed for animation (e.g.,
flocks should be near point A at time t0 and
near point B at time t1)
• Flock has a migratory urge towards a global
target
• Global target can be moving and can vary
depending on boids or other factors
Avoiding obstacles (1)
• Force field approach
– Obstacles have a field of repulsion
– Boids increasingly repulsed as they approach
obstacle
• Drawbacks:
– Approaching a force in exactly the opposite
direction
– Flying alongside a wall
Avoiding obstacles (2)
• Steer-to-avoid approach
– Boid only considers obstacles directly in front
of it
– Finds silhouette edge of obstacle closest to
point of eventual impact
– A vector is computed that will aim the boid at a
point one body length beyond the silhouette
edge
Avoiding obstacles (3)
Algorithmic considerations
• Naïve algorithm is O(N²)
• This can be significantly reduced by:
– Localizing each boid’s perception
– Parallelization
– Spatial partitioning, which makes each boid’s neighbour
query near-constant time, so a full update is roughly O(N)
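Spatial partitioning can be sketched as a uniform grid (a spatial hash). The 2-D setting and the cell size are illustrative assumptions; the idea is only that a neighbour query inspects a few cells rather than all N boids:

```python
from collections import defaultdict

CELL = 10.0  # cell size should be on the order of the perception radius

def build_grid(positions):
    """Bucket boid indices into square cells so a neighbour query only
    inspects the 9 surrounding cells instead of all N boids."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // CELL), int(y // CELL))].append(i)
    return grid

def nearby(grid, x, y):
    """Candidate neighbours of a point: boids in its cell and the 8 adjacent."""
    cx, cy = int(x // CELL), int(y // CELL)
    return [i for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            for i in grid.get((cx + dx, cy + dy), [])]

positions = [(1.0, 1.0), (2.0, 3.0), (55.0, 55.0)]
grid = build_grid(positions)
candidates = nearby(grid, 0.0, 0.0)  # only the two nearby boids
```

With a roughly uniform density of boids, each query returns a bounded number of candidates regardless of N, which is where the near-constant per-boid cost comes from.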
Notes: It’s not quite as simple as that
to get realistic behaviour
Need to define an appropriate distance for the perceptive range.
What if this is too high, what if this is too small?

Reynolds found that he had to be careful about how the vectors from
the three rules get combined. It is not ideal to simply add them.
Opposing “shouts” from two rules may cancel out, leading to
the third winning – in what scenarios might this be a problem?

Note that the cohesion rule is interesting – it leads to “bifurcating”
around obstacles – a follow-the-leader approach to flocking would not
achieve that.
The simple rules also realistically lead to “flash expansion” if started
too close together.
Data: A group of boids.
Result: Simulates flocking behaviour with an animation.
foreach frame do
    foreach boid do
        separation(boid);
        cohesion(boid);
        alignment(boid);
    end
    foreach boid do
        boid.x ← boid.x + cos(boid.course) × boid.velocity × dTime;
        boid.y ← boid.y + sin(boid.course) × boid.velocity × dTime;
        draw(boid);
    end
end
Cohesion: steer to move toward the average position of local
flockmates. This is the rule that keeps the flock together.

goal ← (0,0);
neighbours ← getNeighbours(boid);
foreach nBoid in neighbours do
    goal ← goal + positionOf(nBoid);
end
goal ← goal / neighbours.size();
steerForward(goal, boid);
Separation: steer to avoid crowding local flockmates.

goal ← (0,0);
neighbours ← getNeighbours(boid);
foreach nBoid in neighbours do
    goal ← goal + positionOf(boid) - positionOf(nBoid);
end
goal ← goal / neighbours.size();
steerForward(goal, boid);
Alignment: steer towards the average heading of local flockmates.
This rule tries to make the boids mimic each other's course and
speed. The course and velocity of the boid are updated:

dCourse ← 0;
dVelocity ← 0;
neighbours ← getNeighbours(boid);
foreach nBoid in neighbours do
    dCourse ← dCourse + getCourse(nBoid) - getCourse(boid);
    dVelocity ← dVelocity + getVelocity(nBoid) - getVelocity(boid);
end
dCourse ← dCourse / neighbours.size();
dVelocity ← dVelocity / neighbours.size();
boid.addCourse(dCourse);
boid.addVelocity(dVelocity);
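The three rule sketches above can be combined into one runnable frame update. This is a minimal Python sketch using a velocity-vector (vx, vy) representation rather than the course/speed form of the pseudocode; the detection radius, separation distance and rule weights are illustrative assumptions:

```python
import math

def step(boids, dt=0.1, radius=25.0, sep_dist=2.0,
         w_coh=0.01, w_sep=0.05, w_ali=0.05):
    """One frame: compute cohesion, separation and alignment urges for
    every boid from its local flockmates, then integrate positions.
    Each boid is a dict with keys x, y, vx, vy (assumed representation)."""
    accel = []
    for b in boids:
        nbrs = [o for o in boids if o is not b
                and math.dist((b["x"], b["y"]), (o["x"], o["y"])) < radius]
        if not nbrs:
            accel.append((0.0, 0.0))
            continue
        n = len(nbrs)
        # cohesion: toward the centroid of local flockmates
        cx = sum(o["x"] for o in nbrs) / n - b["x"]
        cy = sum(o["y"] for o in nbrs) / n - b["y"]
        # separation: away from flockmates that are too close
        close = [o for o in nbrs
                 if math.dist((b["x"], b["y"]), (o["x"], o["y"])) < sep_dist]
        sx = sum(b["x"] - o["x"] for o in close)
        sy = sum(b["y"] - o["y"] for o in close)
        # alignment: toward the average velocity of local flockmates
        ax = sum(o["vx"] for o in nbrs) / n - b["vx"]
        ay = sum(o["vy"] for o in nbrs) / n - b["vy"]
        accel.append((w_coh * cx + w_sep * sx + w_ali * ax,
                      w_coh * cy + w_sep * sy + w_ali * ay))
    # apply all urges after computing them, so updates don't interfere
    for b, (dax, day) in zip(boids, accel):
        b["vx"] += dax
        b["vy"] += day
        b["x"] += b["vx"] * dt
        b["y"] += b["vy"] * dt

boids = [{"x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0},
         {"x": 10.0, "y": 0.0, "vx": 0.0, "vy": 0.0}]
for _ in range(50):
    step(boids)
gap = math.dist((boids[0]["x"], boids[0]["y"]), (boids[1]["x"], boids[1]["y"]))
```

With two stationary boids 10 units apart, cohesion draws them together while alignment damps their relative velocity, so the gap shrinks without them flying through each other.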
The Vicsek Model (T. Vicsek)

In a 2-dimensional box of side length L with periodic boundary
conditions, at t = 0, N particles are distributed at random
positions x_i. The speed of each particle is constrained to a
constant value v, and the initial direction θ_i of each particle's
velocity is randomized. At each time step, position and direction
follow

    x_i(t+1) = x_i(t) + v_i(t)Δt  and  θ_i(t+1) = ⟨θ(t)⟩_r + Δθ_i

where ⟨θ(t)⟩_r is the average direction of the velocities of particles
within a distance r of the i-th particle. Particles further away have no
influence on the given particle. Δθ_i is a random number uniformly
chosen between –η/2 and +η/2. This creates fluctuations in velocity
direction, representing the effect of finite temperature.
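The update equations can be implemented directly. A minimal Python sketch of one Vicsek time step with Δt = 1; the particle count, box size and parameter values below are illustrative:

```python
import math
import random

def vicsek_step(pos, theta, v=0.03, r=1.0, eta=0.5, L=5.0, rng=random):
    """One Vicsek update: each particle takes the mean direction of all
    particles within distance r (itself included), plus uniform noise in
    [-eta/2, +eta/2], in a periodic box of side L."""
    new_theta = []
    for xi, yi in pos:
        sx = sy = 0.0
        for (xj, yj), tj in zip(pos, theta):
            dx = (xj - xi + L / 2) % L - L / 2  # minimum-image distance
            dy = (yj - yi + L / 2) % L - L / 2
            if dx * dx + dy * dy <= r * r:
                sx += math.cos(tj)
                sy += math.sin(tj)
        mean_dir = math.atan2(sy, sx)                 # <theta(t)> within r
        new_theta.append(mean_dir + rng.uniform(-eta / 2, eta / 2))
    # positions advance with the velocity at time t (old directions)
    new_pos = [((x + v * math.cos(t)) % L, (y + v * math.sin(t)) % L)
               for (x, y), t in zip(pos, theta)]
    return new_pos, new_theta

rng = random.Random(0)
pos = [(rng.uniform(0.0, 5.0), rng.uniform(0.0, 5.0)) for _ in range(30)]
theta = [rng.uniform(-math.pi, math.pi) for _ in range(30)]
for _ in range(100):
    pos, theta = vicsek_step(pos, theta, rng=rng)

# order parameter: magnitude of the mean heading vector, 0 = disorder, 1 = aligned
phi = math.hypot(sum(math.cos(t) for t in theta),
                 sum(math.sin(t) for t in theta)) / len(theta)
```

Lowering the noise amplitude η drives the order parameter toward 1 (all particles aligned), which is the phase transition the model is famous for.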
Tu Model

In Tu's model, the boids are initially placed at random within a
box of some fixed size.

The net force on boid i combines a cohesion force f_b and an
alignment force f_a, plus a noise term η, where v_j is the velocity
of boid j and r_ij is a unit vector along the line of interaction
between boids i and j.
AGENT BASED MODEL
The Need for Agent-based Modeling

We live in an increasingly complex world: systems are more
complex, and new tools, toolkits, and modeling approaches are
available.

Data
– Data are now organized into databases at finer levels of
granularity (micro-data), and can now support micro-simulations.

Computational power
– Computational power is advancing, and can now support
micro-simulations.
Agent definitions
• “Most often, when people use the term ‘agent’
they refer to an entity that functions continuously
and autonomously in an environment in which
other processes take place and other agents exist.”
(Shoham, 1993)
• “An agent is an entity that senses its environment
and acts upon it” (Russell, 1997)
• What is an agent?
– A discrete entity with its own goals and behaviors
– Autonomous, with a capability to adapt and modify its behaviors
• Assumptions
– Some key aspect of behaviors can be described.
– Mechanisms by which agents interact can be described.
– Complex social processes and a system can be built “from the
bottom up.”
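A minimal sense/act agent along the lines of these definitions can be sketched in Python. The one-dimensional "resource" environment and the greedy move rule are illustrative assumptions, not from the definitions above:

```python
class Agent:
    """A minimal agent per the definitions above: it senses its
    environment and acts upon it, keeping its own state."""

    def __init__(self, pos):
        self.pos = pos

    def sense(self, env):
        """Read the resource level at the agent's current cell."""
        return env.get(self.pos, 0)

    def act(self, env):
        """Behavior: move to the adjacent cell with the most resource."""
        best = max((self.pos - 1, self.pos, self.pos + 1),
                   key=lambda p: env.get(p, 0))
        self.pos = best

env = {0: 1, 1: 5, 2: 2}  # cell -> resource level
a = Agent(0)
a.act(env)  # the agent moves to cell 1, the richest neighbouring cell
```

A full agent-based model would run many such agents against a shared environment, letting system-level behaviour emerge from their local decisions.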
Agents are diverse and heterogeneous.

Agent-based simulation is a new field grounded in the biological,
social, and other sciences.
Agents characteristics

• act on behalf of a user or another program
• autonomous
• sense the environment and act upon it (reactivity)
• purposeful action (pro-activity)
• function continuously (persistent software)
• mobility?

• Goals, rationality
• Reasoning, decision making cognitive
• Learning/adaptation
• Interaction with other agents - social dimension
Other basis for intelligence?
No central authority or controller exists for:
– How the system operates
– How the system is modeled
– How the system/model moves from state to state
• “Optimization” can be done for the system as a whole
Multi-agent systems
Many entities (agents) in a common environment.

(Figure: agents in a shared environment, each with an influence
area, interacting with one another.)

MAS – many agents in the same environment
• Interactions among agents
– high-level interactions
• Interactions for:
– coordination
– communication
– organization
Coordination
 collectively motivated / interested
 self-interested
– own goals / indifferent
– own goals / competition / competing for the same resources
– own goals / competition / contradictory goals
– own goals / coalitions
Communication
 communication protocol
 communication language
- negotiation to reach agreement
- ontology
Organizational structures
 centralized vs decentralized
 hierarchical/ markets
"cognitive agent" approach

SWARM INTELLIGENCE
Swarm intelligence (SI) is the collective behavior of
decentralized, self-organized systems, natural or artificial. The
concept is employed in work on artificial intelligence. The
expression was introduced by Gerardo Beni and Jing
SI systems consist typically of a population of simple agents or
boids interacting locally with one another and with their
environment. The inspiration often comes from nature, especially
biological systems.

The agents follow very simple rules, and although there is no
centralized control structure dictating how individual agents
should behave, local, and to a certain degree random, interactions
between such agents lead to the emergence of “intelligent” global
behavior, unknown to the individual agents.