
BRAIN MACHINE INTERFACE

The bioport
 Remember the movie “The Matrix”, in which the rebels plugged computer cords into a port at the back of the neck.
 The bioport (that’s what the technology was called in the movie) was a way of giving the Matrix computers full access to the information channels of the brain.
 The rebels used the bioport to load new skills into their colleagues' brains, writing directly into permanent memory.
 Imagine all this turning to reality.
 All your exam time problems vanishing.
 You being able to memorize your course books with just a tap of a button.
 Futurists and science-fiction writers also
speculate about a time when brain activity
will merge with computers.
 Moving to reality, much research is actually under way to explore the possibility of a man-machine merger.
 Trials of the implanted-chip technology have been very successful in monkeys, which have learned to control a computer game with their brains.
 Scientists are finding different ways of restoring senses: people who have lost a sense, such as sight or touch, are fitted with an artificial sensor.
 Scientists at the Max Planck Institute have developed "neuron
transistors" that can detect the firing of a nearby neuron, or
alternatively, can cause a nearby neuron to fire, or suppress it
from firing
The First Implant
 Researchers at the University of California,
Berkeley, have demonstrated how rhesus
monkeys with electrodes implanted in their
brains used their thoughts to control a computer
cursor.
 Once the animals had mastered the task, they
could repeat it proficiently day after day.
 It reflects a major finding by the scientists: A
monkey’s brain is able to develop a motor
memory for controlling a virtual device in a
manner similar to the way it creates such a
memory for the animal’s body
 The Berkeley researchers implanted arrays of microelectrodes in the primary motor cortex, about 2 to 3 millimeters deep into the brain, tapping 75 to 100 neurons.
 The procedure was similar to that of other groups. The
difference was that here the scientists carefully
monitored the activity of these neurons using software
that analyzed the waveform and timing of the signals.
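The article does not describe that software in detail. A minimal sketch of the kind of waveform analysis it implies, detecting spike times by threshold crossings on a recorded voltage trace, might look like the following; the function, threshold values and synthetic data are illustrative assumptions, not the Berkeley group's code.

```python
import numpy as np

def detect_spikes(voltage, fs, threshold_sd=4.0, refractory_ms=1.0):
    """Return spike times (s) where the voltage crosses a negative threshold
    set at a multiple of the estimated noise level."""
    noise = np.median(np.abs(voltage)) / 0.6745          # robust noise estimate
    thresh = threshold_sd * noise
    # downward threshold crossings
    crossings = np.where((voltage[1:] < -thresh) & (voltage[:-1] >= -thresh))[0] + 1
    # enforce a refractory period so one spike is not counted twice
    min_gap = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -min_gap
    for c in crossings:
        if c - last >= min_gap:
            spikes.append(c)
            last = c
    return np.asarray(spikes) / fs

# Example: one second of synthetic noise sampled at 30 kHz
fs = 30_000
trace = np.random.default_rng(0).normal(0.0, 10e-6, fs)
print(detect_spikes(trace, fs)[:5])
```

Real spike sorting would additionally cluster the waveform shapes so that each electrode's signal can be attributed to individual neurons.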
 While monitoring the neurons, the scientists placed the monkey’s right arm inside a robotic exoskeleton that kept track of its movement.
 On a screen, the monkey saw a cursor whose position corresponded to
the location of its hand. The task consisted of moving the cursor to the
center of the screen, waiting for a signal, and then dragging the cursor
onto one of eight targets in the periphery. Correct maneuvers were
rewarded with sips of fruit juice.

 While the animal played, the researchers recorded two data sets: the brain signals and the corresponding cursor positions.
 During manual control, the monkey maneuvers the computer cursor while the researchers record the neuronal activity, which is used to create a decoder.
 Under brain control, the researchers feed the neuronal signals into the decoder, which then controls the cursor.
 This determined whether the animal could perform the same task using only its
brain.
 The decoder translates brain activity into cursor movement.
 The decoder is a set of equations that multiply the firing rates of the neurons by certain numbers, or weights. When the weights have the right values, you can plug the neuronal data into the equations and they will spill out the cursor position. To determine the right weights, the researchers only had to correlate the two data sets they had recorded, as sketched below.
 Next the scientists immobilized the monkey’s arm and fed the neuronal signals measured
in real time into the decoder. Initially, the cursor moved spastically. But over a week of
practice, the monkey’s performance climbed to nearly 100 percent and remained there for
the next two weeks. For those later sessions, the monkey didn’t have to undergo any
retraining—it promptly recalled how to skillfully maneuver the cursor.
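The exact decoding equations are not given in the text. The sketch below shows one common way to realize such a decoder: fit the weights by least-squares correlation of recorded firing rates with cursor positions, then apply them to new neural data. All sizes and data here are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Training data recorded during manual control -------------------------
# rates: firing rates of ~90 neurons in 1000 time bins
# cursor: the (x, y) cursor position in the same bins
n_bins, n_neurons = 1000, 90
true_weights = rng.normal(size=(n_neurons, 2))
rates = rng.poisson(5, size=(n_bins, n_neurons)).astype(float)
cursor = rates @ true_weights + rng.normal(0, 0.5, size=(n_bins, 2))

# --- Fit the decoder: weights that best map firing rates to cursor position
# Least-squares solution of rates @ W ~= cursor
W, *_ = np.linalg.lstsq(rates, cursor, rcond=None)

# --- Brain control: feed newly measured firing rates through the decoder --
new_rates = rng.poisson(5, size=(1, n_neurons)).astype(float)
predicted_xy = new_rates @ W
print("decoded cursor position:", predicted_xy)
```

In the closed-loop stage described above, the second step would run continuously on the live neural signals while the monkey's arm is immobilized.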
Medical Field

 Scientists are finding different ways of restoring senses: people who have lost a sense, such as sight or touch, wear an artificial sensor.
 This might be a video camera, or a
touch sensitive glove. Then, electrical
pulses which encode the sense are
sent to brain via a strip on their
tongue
 REMOTE CONTROL: BrainGate technology is designed to read brain signals associated with controlling movement, which a computer could translate into instructions for moving a computer cursor or controlling a variety of assistive devices.

 Plugging a sensor into the human brain's motor cortex could turn the thoughts of paralysis victims into action. A team of Brown University scientists has expanded its efforts to developing technology that reconnects the brain to lifeless limbs.
 BrainGate Neural Interface includes a baby aspirin–size brain
sensor containing 100 electrodes
 Sensor connects to the surface of the motor cortex (the part
of the brain that enables voluntary movement), registers
electrical signals from nearby neurons, and transmits them
through gold wires to a set of computers, processors and
monitors.

 BrainGate can assist those suffering from spinal cord injuries, muscular dystrophy, brain stem stroke, amyotrophic lateral sclerosis (ALS), and other motor neuron diseases.
 One researcher, Peter Fromherz, a director at the Max Planck Institute of Biochemistry in Germany, has been studying possible connections between silicon electronics and biological cells.
 Fromherz first grew neurons from the medicinal leech on silicon chips and persuaded the two parties to talk to each other.
 A field-effect transistor records the signal from the neuron.
 The electronic stimulation of the
neuron arises from a voltage pulse
applied to a capacitor
 Fromherz and his coworkers established that an
ordinary silicon chip, with the outermost 20 nm
oxidized, is an ideal substrate to cultivate neurons
on.
 The silicon oxide layer insulates the two sides and
stops any electrochemical charge transfer, which
might damage the chip or the cell.
 Instead, there is only a capacitative connection,
established by a so-called planar core-coat
conductor. Proteins sticking out of the lipid
membrane ensure that there is a thin (50-100 nm)
conducting layer between lipid and silicon oxide,
which constitutes the core of the conductor.
 In the neuron-to-chip experiment, the current generated by
the neuron has to flow through the thin electrolyte layer
between cell and chip.
 This layer's resistance creates a voltage, which a transistor
inside the chip can pick up as a gate voltage that will
modify the transistor current. In the reverse signal transfer,
a capacitative current pulse is transmitted from the
semiconductor through to the cell membrane, where it
decays quickly, but activates voltage-gated ion channels
that create an action potential.
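As a rough, back-of-the-envelope illustration of that signal path, the voltage the transistor senses can be estimated as the neuron's junction current times the seal resistance of the thin electrolyte layer. The numbers below are assumed order-of-magnitude values, not figures from Fromherz's work.

```python
# Illustrative estimate of the neuron-to-chip junction voltage.
membrane_current = 1e-9        # assumed peak current into the junction (~1 nA)
resistivity = 0.7              # approx. resistivity of physiological saline (ohm*m)
cleft_thickness = 70e-9        # assumed electrolyte cleft height (70 nm)

sheet_resistance = resistivity / cleft_thickness        # ohm per square, ~1e7
seal_resistance = 1e6          # assumed effective seal resistance of the cleft (ohm)

junction_voltage = membrane_current * seal_resistance   # Ohm's law across the cleft
print(f"sheet resistance ~ {sheet_resistance:.1e} ohm/sq")
print(f"junction voltage ~ {junction_voltage * 1e3:.1f} mV")   # ~1 mV at the gate
```

Millivolt-scale junction voltages of this kind are what the chip's transistor must resolve, which is why the thinness and resistance of the cleft matter so much.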
 The next challenge was to move upwards from
one neuron communicating with one stimulator
or sensor to more complex neuro-electronic
architectures, with the distant goal of getting
entire neuronal networks plugged into
electronics in a way that would allow their function to be studied in detail, or their use in computational devices.
 For this first hybrid circuit, they used
neurons from snails.
 As a substrate to grow the cells on, the
researchers designed a specific chip
with 14 two-way junctions (i.e., areas
that can both send signals to neurons
and receive signals back) arranged in a
circle of about 200 μm diameter
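Just to put those dimensions in perspective, simple geometry gives the spacing between neighbouring junctions on that ring; this is a trivial calculation, not a detail reported in the text.

```python
import math

# Spacing of 14 two-way junctions arranged on a circle of 200 um diameter
n_junctions = 14
diameter_um = 200
spacing_um = math.pi * diameter_um / n_junctions
print(f"~{spacing_um:.0f} um between neighbouring junctions")   # roughly 45 um
```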
 The researchers then turned their attention to the rat hippocampus, a brain region associated with long-term memory.
 It is known that in this part of the rat
brain, a region known as CA3
stimulates the CA1 to which it is
connected by extensive wiring.
 Brain slices can be prepared such that
the cut runs alongside the CA3 to CA1
connection and makes this entire
communications channel accessible to
experiments.
 Using such slices, Hutzler and
Fromherz demonstrated that their chip
can (via its capacitor) stimulate the
CA3 region such that these brain cells
pass on the signal to CA1, where it can
be recorded with the chip's transistors.
 With a relatively simple chip device, the
spatial resolution remained low, but in
principle, it can be improved to the size of
features on commercial microchips, currently
standing somewhere near 100 nm.
 A CMOS (complementary metal-oxide-semiconductor) chip packs an array of 128 × 128 sensors for neural recording into one square millimeter.
 The chip can practically generate a movie of neurons in action: it delivers 16 kilopixels at 2000 frames per second.
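Those figures imply a substantial raw data stream. A quick calculation, assuming 8 bits per pixel sample (a depth the text does not specify), gives:

```python
# Data rate of the 128 x 128 CMOS recording array described above.
pixels = 128 * 128            # 16,384 sensors ("16 kilopixels")
frame_rate = 2000             # frames per second
bits_per_sample = 8           # assumed digitization depth (not stated in the text)

samples_per_second = pixels * frame_rate
bytes_per_second = samples_per_second * bits_per_sample / 8

print(f"{pixels} pixels x {frame_rate} fps = {samples_per_second:,} samples/s")
print(f"~ {bytes_per_second / 1e6:.1f} MB/s of raw data")   # about 33 MB/s
```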
Applications on the
horizon
 Sensors like the 16-kilopixel CMOS chip will enable researchers to fill the gap between studies involving only a few cells and those operating at larger scales, like magnetic resonance imaging. Processes like associative memory could be studied in detail using similar non-invasive devices.
 Prosthetic devices to restore vision, hearing
or limb control might be the next step.
 Further in the future, the real dream would be the realization of the “brain-in-computer and chip-in-brain” arrangement.
 Gamers will soon be able to interact with the virtual world using their thoughts and emotions alone.
 A neuro-headset which interprets the interaction of neurons in the brain will go on sale later this year.
 It picks up electrical activity from the brain and sends wireless signals to a computer.
 It allows the user to manipulate a game or virtual environment naturally and intuitively.
 The brain is made up of about 100 billion nerve cells, or neurons, which emit an electrical impulse when interacting. The headset uses a technology known as non-invasive electroencephalography (EEG) to read the neural activity.
 It is a brain-computer interface that reads electrical impulses in the brain and translates them into commands that a video game can accept, controlling the game dynamically (see the sketch after this list).
 The headset can detect more than 30 different expressions, emotions and actions.
 Gamers are able to move objects in
the world just by thinking of the
action
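The headset's actual detection algorithms are proprietary and not described here. The sketch below only illustrates the general EEG-to-command idea: extract band power from a window of samples and map a thresholded feature to a game command. The band limits, threshold and the "push" command are assumptions for illustration, not the product's API.

```python
import numpy as np

def band_power(window, fs, low, high):
    """Mean spectral power of an EEG window in the [low, high] Hz band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def decode_command(window, fs, threshold=1.5):
    """Toy decoder: emit a 'push' command when beta-band (13-30 Hz) power
    stands out against the broadband (1-45 Hz) average, otherwise 'idle'."""
    beta = band_power(window, fs, 13, 30)
    broadband = band_power(window, fs, 1, 45)
    return "push" if beta > threshold * broadband else "idle"

# Example: one second of synthetic EEG sampled at 128 Hz with a 20 Hz rhythm
fs = 128
t = np.arange(fs) / fs
eeg = 0.5 * np.sin(2 * np.pi * 20 * t) + np.random.default_rng(2).normal(0, 0.1, fs)
print(decode_command(eeg, fs))      # prints "push" for this strong beta rhythm
```

A real system would use many electrodes, per-user calibration, and a trained classifier rather than a single fixed threshold.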
The Challenges and Future
Implications
 Thus the major challenge lies in the fact that the wiring of the spinal cord is basically unknown. At best, researchers have been able to hook into the optic nerves of cats to see what a cat sees. And in blind people, we can stimulate a handful of pixels in their brain, but that's about it. The brain is still a black box.
 A successful BrainGate2 trial could open up a number of new possibilities. Although the technology is similar to what was used in the original testing, the researchers are looking to enlist up to 15 patients this time and gather more information that will help them better understand brain signals, as well as the method by which they decode them.
 One such possibility is the use of a second sensor to stimulate both sides of the motor cortex. Researchers thus far have implanted the sensor in the side of the brain that controls a patient's dominant side: the left cortex for righties and the right cortex for lefties.
 BrainGate2 is part of a larger mission to help
paralysis victims regain control of their bodies. They
want to reconnect the brain back to the muscles and
eventually back to the entire limb. They are
attempting to recreate parts of the nervous system
that have been disconnected from the brain.
 Nanobot-based virtual reality (the kind using neuron transistors) is not yet feasible in size and cost, but researchers have made a good start in understanding the encoding of sensory signals.
 For example, Lloyd Watts and his colleagues have
developed a detailed model of the sensory coding
and transformations that take place in the auditory
processing regions of the human brain. We are at an
even earlier stage in understanding the complex
feedback loops and neural pathways in the visual
system
 Brain-computer interfacing will become a profoundly transforming technology by 2030. By then, nanobots (robots
the size of human blood cells or smaller, built with key
features at the multi-nanometer—billionth of a meter—scale
made using neuron transistors) will provide fully immersive,
totally convincing virtual reality in the following way. The
nanobots will take up positions in close physical proximity to
every interneuron connection coming from all of our senses
(e.g., eyes, ears, skin). When we want to experience real
reality, the nanobots would just stay in position (in the
capillaries) and do nothing. If we want to enter virtual
reality, they would suppress all of the inputs coming from
the real senses, and replace them with the signals that would
be appropriate for the virtual environment.
 Ultimately, we will merge our own biological intelligence with
our own creations as a way of continuing the exponential
expansion of human knowledge and creative potential.
