
Handbook of Science and Technology Convergence

DOI 10.1007/978-3-319-04033-2_19-1
# Springer International Publishing Switzerland 2014

Nanotechnology-Neuroscience Convergence
Jo-Won Lee (a, *) and Moonkyung Mark Kim (b)
(a) Department of Convergence Nanoscience, Hanyang University, Seoul, South Korea
(b) School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA
*Email: jowon@hanyang.ac.kr

Abstract
Roco et al. (2013) introduced the convergence of nanotechnology with biotechnology, information
technology, and cognitive technology (NBIC) as a main trend in science and technology. They also
provided a list of 20 visionary ideas for the next 10–30 years. According to these ideas, within the next
20 years we can expect humanlike intelligent robots, smartphones with real-time language translation,
and pocket-sized supercomputers, all through advances in NBIC. To pave the way, future computing
systems will need to be flexible, mobile, self-programmable, real time, and even self-learning. However,
as the miniaturization trend continues to follow Moore’s law, applying current nanoelectronics to future
computing systems will become impractical because of enormous energy consumption and technological
limits. Accordingly, the architecture and functions of the transistors used in present computing systems
need to be rethought, taking inspiration from the human brain. Unfortunately, it remains unclear how
neural activities in the human brain give rise to cognitive processes such as learning and reasoning.
Nevertheless, the convergence of neuroscience with nanotechnology is expected to bring us closer to
building neuro-inspired chips for neurocomputers by exploiting the clues we do have about neural activity
and structure. In this chapter, we present the main scientific problems and challenges in realizing
neuro-inspired chips.

Introduction
According to Cisco estimates (Cisco’s Visual Networking Index Forecast Projects, May 29, 2013), the
number of Internet users already exceeded 2.3 billion as of 2012 and will reach 3.6 billion in 2017. In
addition, smartphones have become an essential part of our lives (see Fig. 1, inauguration photographs of
President Obama taken in 2009 and 2013). Smartphones empowered by the Internet offer us access to
information such as maps, weather, and traffic and help us manage our schedules. This dramatic change in
our lives is mainly attributable to the evolution of the computer, the most epoch-making invention of the
twentieth century and possibly of human history.

Fig. 1 Inauguration photographs of President Obama taken in 2009 and 2013
Computers now assist us in many routine activities involving banks, hospitals, government, and schools
and solve many of our complex tasks, such as scientific problems, weather forecasting, and virtual nuclear
testing. Our lives are thus already deeply influenced by computers, and as time goes by, we will become
ever more dependent upon them.
However, computers are still inconvenient to use because of their deficiencies in reasoning and
recognition, so the demand for machine intelligence is on the rise. The best way to overcome the current
shortcomings of computers is the convergence of state-of-the-art nanotechnology with neuroscience,
which will allow computers to operate more like humans.
Roco et al. (2013) recommended the following primary R&D areas for government in the convergence
of knowledge, technology, and society (CKTS):




– Cognitive society and lifelong wellness
– Distributed NBIC manufacturing
– Increasing human potentials
– Biomedicine-centered convergence
– Sustainable Earth systems

These R&D areas, closely related to pursuing human welfare, would be unattainable without the
progress of machine intelligence.

Limits of Present Computing Systems


Apple’s Siri and Google’s Now are the best-known examples of voice recognition systems, a concept
first popularized in 1968 by the movie 2001: A Space Odyssey. However, they have clear limitations,
stemming from their inability to deal with the continuously changing context of the mobile environment.
For example, to make travel reservations using Siri, the smartphone would need to identify the best flight
and hotel for you and the status of the current reservation based on concurrent data. Can your current
smartphone easily execute these tasks? Smartphones do not have all the expert knowledge necessary to
perform them. In addition, the inability to acquire concurrent data is another limitation, since the
computational power and data capacity of the supercomputers in data centers cannot match the demand
from users all over the world. Notwithstanding these
limitations, voice recognition in conjunction with wearable computers, smartphones, self-driving cars,
and the Internet of Things (IoT) will be the major application area of artificial intelligence (AI) utilizing
human-made software.
The advancement of computer performance shocked the world in 1997 when the IBM supercomputer
Deep Blue, operating at 11.38 gigaFlops (floating-point operations per second, a measure of computer
performance), defeated world chess champion Garry Kasparov. In 2011, IBM surprised us again when its
Watson supercomputer, at 80 teraFlops, beat the champions of the quiz show Jeopardy!. These two
remarkable episodes show how machine intelligence could be realized in real life. It would be very
convenient to have such intelligent systems at hand, with the capability of learning, recognition, and
decision-making. However, systems like Deep Blue and Watson cannot be applied in other areas because
they are immobile; they cannot, for instance, drive unmanned cars. Furthermore, they are incapable of
understanding semantic contexts or of engaging in reasoning and decision-making based on data
collected from various environments.
Robots are among the finest applications of AI. Robot scientists have mainly focused on programming
the logic for the reasoning and recognition carried out by humans. They believed it would be possible for
robots to think like humans if the robots were equipped with sufficient information about various human
tasks and with the logical reasoning power to process that information. Over the past several decades,
however, these attempts to apply AI in robots have been generally unsuccessful, except for specific tasks
such as the chess and quiz games played by supercomputers mentioned earlier. Some may consider
everyday activities such as cooking to be easy for robots, but culinary art requires a great deal of creativity
along with an instinct for taste. This instinct has been passed down through human generations since the
appearance of Homo sapiens 200,000 years ago. Robots lack such instinct because of differences in how
input data are processed: humans learn through their experiences, but machines do not have this ability.
If a science fiction writer such as Isaac Asimov were to come back to life, he would be disappointed to
find that humanlike robot servants have not yet been developed.
Today we know how difficult it is to build intelligent systems comparable in performance to human
beings. In the past, however, the AI community was overly optimistic about the possibilities of AI.
Herbert Simon, a pioneer of AI and a Nobel laureate, made two predictions in 1957: (i) most psychological
theories would come to be expressed in the form of computer programs, and (ii) a computer would win the
world chess championship by 1967. Of the two, only the latter claim was realized, 30 years late, in 1997.
In 1965, he claimed again that we would have humanlike machines within the next 20 years, which has
yet to be accomplished. Marvin Minsky, another AI pioneer at MIT’s AI laboratory, predicted in 1970 that
within just 3–8 years machines would have the general intelligence of an average human being. Since
then, Minsky has dramatically changed his stance and has become critical of the AI community. He stated
that “today you’ll find students excited over robots that play basketball or soccer and make funny faces at
you. But, they’re not making them smarter” (Michalowski 2013).
Robots are limited because they only carry out programmed work or input commands; moreover, they
lack learning and reasoning abilities. In 1956, AI pioneers, including Simon and Minsky, organized the
Dartmouth Conference to discuss how to build AI. They thought that, using mathematical modeling and
programming, it would be possible to develop intelligent systems that mimic human brains. At that time,
solving mathematical problems and playing chess were considered among the most complex and difficult
processes the human brain handles. The AI pioneers were thus of the opinion that everyday tasks might
readily be tackled once these harder problems had been solved with modeled rules. In the beginning, the
pioneers’ AI model seemed to work well, as demonstrated by the computer’s superior performance over
humans in solving complex mathematical problems. Completing mathematical tasks, however, requires
Boolean processing rather than advanced intelligence; the computer may therefore excel at them but not
at other everyday tasks. In reality, realizing AI is not as simple as the AI pioneers originally thought.


Table 1 Obstacles blocking further miniaturization of the MOSFET

Technical obstacle | Physical reason | When the limit is reached
Variation in properties and size | Unacceptable for transistors below 5 nm | After 2020
Heat dissipation | Associated with leakage current, cell density, switching energy, and the von Neumann architecture | Already reached
EUV lithography below 14 nm not ready | Lack of a viable light source to reach the necessary throughput | Delayed several times as of 2014; replacements for EUV being sought

Nevertheless, over the past few decades, some advances toward building an AI system have been
achieved in related disciplines. In neuroscience, scientists have gained considerable knowledge about how
the interactions of neurons and synapses lead to learning and reasoning. Computer scientists, using
supercomputers, have modeled and simulated neural mechanisms to obtain information on the behavior
of neurons. Engineers have made remarkable progress in nanoelectronics toward emulating neurons and
synapses with higher density and lower power consumption.
Although current machines are not as intelligent as humans, they can complete specific tasks using
supercomputers and software, as mentioned above. This success was possible only because of the
advances in semiconductor technology following Moore’s law over the last 50 years. However, this
step-by-step miniaturization trend cannot continue, owing to technological difficulties foreseeable within
the next 5–10 years.
There are three major obstacles to the miniaturization needed to increase the density and speed of chips
(see Table 1). The first is the unacceptable variation in property parameters and size as transistors are
scaled down further; it is generally accepted that mass production of integrated chips will not be possible
at transistor sizes below 5 nm. The second obstacle is nanofabrication, specifically lithography. Extreme
ultraviolet (EUV) lithography is considered the strongest, and perhaps only, candidate for next-generation
lithography below 14 nm, but it is not yet ready for mass production owing to the lack of a viable light
source. To make matters worse, there is no next-generation lithography after EUV when both the
minimum pattern size and the throughput required for mass production are considered. Finally, as
transistor size decreases, the chip temperature increases dramatically. Within 10 years or so, without any
management of heat dissipation, the surface temperature of chips would reach that of the sun’s surface,
about 6,500 K.
A similar situation took place in the 1980s. At that time, the bipolar transistor was perceived to consume
too much energy, and it was replaced by CMOS to reduce power consumption. The reduction of power
consumption is likewise a major challenge for future computing systems; in other words, energy
efficiency fundamentally limits the processing power of computers. No solution to these barriers has yet
been found, whatever form it may take under the name of “beyond CMOS.”
In addition, the length of switching devices is physically constrained by the Heisenberg uncertainty
principle. Specifically, 1.5 nm has been found to be the physical limit for a switching device based on any
material, owing to thermal and quantum perturbations (Zhirnov et al. 2003).
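This limit can be sketched in a few lines by combining the minimum switching energy at temperature T
(the Landauer bound) with the Heisenberg uncertainty relation, in the spirit of Zhirnov et al. (2003); the
room-temperature numbers below are our own back-of-the-envelope values:

\[
E_{\mathrm{bit}} \ge k_{B} T \ln 2 \approx 2.9 \times 10^{-21}\,\mathrm{J} \quad (T = 300\,\mathrm{K}),
\]
\[
x_{\min} \approx \frac{\hbar}{\sqrt{2 m_{e} E_{\mathrm{bit}}}}
\approx \frac{1.05 \times 10^{-34}\,\mathrm{J\,s}}{\sqrt{2 \times 9.1 \times 10^{-31}\,\mathrm{kg} \times 2.9 \times 10^{-21}\,\mathrm{J}}}
\approx 1.5\,\mathrm{nm}.
\]

An electron localized more tightly than x_min would tunnel past the barrier, which is why no choice of
material circumvents the bound.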

A Necessity for Neuro-Inspired Nanoelectronics for Intelligent Systems


Digital computers compute input data serially at high speed, but their main shortcoming lies in their
large power consumption. For example, a supercomputer operating at several tens of petaFlops (10^15
Flops) needs several tens of megawatts, and the exaFlops (10^18 Flops) computer expected in 2019 would
require 1.3 GW (McMorrow 2013). As a rule of thumb, 1 MW is equivalent to the
power necessary for 1,000 households. Thus, 1.3 GW equals the power for 1,300,000 households,
roughly 1.5 times the number of households in Los Angeles, CA.
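The household figure is simple scaling of the rule of thumb just quoted (our own arithmetic):

\[
1.3\ \mathrm{GW} \times \frac{1{,}000\ \mathrm{households}}{1\ \mathrm{MW}} = 1.3 \times 10^{6}\ \mathrm{households}.
\]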
This large power consumption is related to the von Neumann architecture inherent in the digital
computer. In this architecture, the memory storing the program and data is physically separated from the
CPU, and the program and data must continuously move back and forth between memory and CPU for
each calculation. This creates the von Neumann bottleneck. Although multi-core computers can process
multiple data streams at a time, they still operate on the same von Neumann principle. It should be noted
that the synapses in our brains operate at only about 100 Hz. Yet if all of the human brain’s roughly 10^15
synapses participated in a calculation, about 10^17 operations per second could be performed, roughly
10 times faster than the present fastest supercomputer in the world. In addition, computation with CMOS
ICs is very inefficient at human tasks such as pattern recognition, adaptation, and error tolerance. Thus,
emulating the function of the human brain in hardware could be the first prerequisite for successfully
building an AI system with small power consumption.
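To make the contrast concrete, the sketch below implements the standard leaky integrate-and-fire neuron
model that neuromorphic hardware typically emulates; it is a generic textbook model, and the parameter
values are illustrative assumptions, not the circuit of any particular chip:

def lif_step(v, i_syn, dt=1e-3, tau=20e-3, v_rest=-65e-3,
             v_thresh=-50e-3, v_reset=-70e-3, r_m=1e7):
    """One Euler step of a leaky integrate-and-fire neuron.

    v: membrane potential (V); i_syn: synaptic input current (A).
    Returns (new_v, spiked).
    """
    v = v + (-(v - v_rest) + r_m * i_syn) * dt / tau  # leak toward rest plus input drive
    if v >= v_thresh:            # threshold crossed: emit a spike
        return v_reset, True     # reset the membrane after the spike
    return v, False

# Drive the neuron with a constant 2 nA input for 100 ms of simulated time.
v, n_spikes = -65e-3, 0
for _ in range(100):
    v, spiked = lif_step(v, 2e-9)
    n_spikes += spiked
print("spikes in 100 ms:", n_spikes)

A neuron of this kind communicates only by spikes, which is why hardware built around it can sit at
near-zero power between events, unlike a clocked von Neumann pipeline.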
A number of key differences exist between the computer and the human brain. At present, the highest
memory density of NAND flash is 128 Gb, around 10^11 cells/cm^2. The synapse in the human brain is a
kind of memory, and its density is four orders of magnitude higher than that of NAND. The neuron (soma)
is the counterpart of logic, and its density is three orders of magnitude larger than that of 22 nm CMOS
chips (10^8 devices/cm^2). Roughly calculated, building 10^9 CMOS devices/cm^2 would require reducing
the dimensions of the transistor to about one-tenth (2.2 nm), a scenario that is impossible to implement
owing to the technological difficulties described above. The operating frequency of our brain is 4–30 Hz,
while a desktop computer reaches the 2 GHz level, making the computer about 100 million times faster.
However, our brain consumes only about 20 W, whereas the power consumption of a desktop computer is
around 350 W. In some tasks, such as calculation, the computer surpasses our brain, but no computer in
the world can reason like humans. Presumably, a robot as smart as a human (100 petaFlops, as described
above) would require 10–20 MW if built with conventional CMOS, the output of a small hydroelectric
plant.
The computer is built from silicon and metal, while the human brain is composed of hydrocarbons and
aqueous solutions. As mentioned above, the computer is energy hungry yet poor at pattern recognition,
learning, and decision-making. This inferior performance arises from the computer’s lack of understanding
of context, and from its computation being deterministic, driven only by human inputs, with synchronous
operation in a clock-driven mode. The most fatal shortcoming of the digital computer is its serial
processing of data as 0s and 1s, which is the main cause of its large power consumption.
In contrast, the human brain is very slow but energy efficient, with distributed data processing and
learning ability, and it operates asynchronously in an event-driven and/or stochastic mode. Furthermore,
our brain works in a parallel mode using analog data. These inherent differences between the computer
and the human brain bring about their performance differences: the computer is good at mathematics but
poor at adaptation and learning; the human brain, in contrast, is good at adaptation and learning but
comparatively poor at mathematics.
Humans have long believed that the capacity for consciousness is the essential, defining characteristic
of our species, and the computer is the culmination of human efforts to emulate that capacity. Accordingly,
the ultimate goal of computer science has been to endow the machine with intelligence, enabling it to
function in a manner akin to human logical reasoning and thought. It can be said that God created human
beings in his own image, while human beings invented the computer to mimic their own performance. If
so, why are computers still inferior to humans at skills such as reasoning and recognition? The failure of
computers to compete with humans at these tasks indicates that some aspects of human intelligence
currently lie beyond the reach of computers. This perspective, however, hints at an intriguing question: if
we can understand the logic underlying information processing in
humans and develop corresponding computer hardware, could it be possible to build a computer that
functions similarly to humans?
Two fundamental questions about the human brain remain unresolved. First, we do not fully
comprehend how the brain processes and memorizes data at the molecular level. Second, we do not know
how this processed information results in our capacities for recognition and inference. To grasp these
behaviors, we need a detailed map revealing the activity of neurons and neural networks throughout the
brain. The neural system of C. elegans, a roundworm with 302 neurons and about 5,000 synapses, has
been characterized over several decades, yet neuroscientists do not understand even the basic functional
details of human neurological structures. To map the function of the human brain, scientists have
sectioned brains into slices as thin as possible and then stained the sections for observation by optical
microscopy, SEM, TEM, and so on; the observed images are then reconstructed into 3-D by computer.
From these images, the neural connectivity has been found to form a rectilinear 3-D grid with no diagonal
connections (Wedeen et al. 2012). This observation yields a very valuable insight: the grid structure can
be developed into a 3-D layered crossbar array to create “electronic synapses” in neuro-inspired
(neuromorphic) chips. It must be noted, however, that imaging studies of sectioned brain tissue are only
possible with cadavers, not living human subjects. In addition to the natural deterioration of the brain after
death, information can also be lost through tissue damage during sectioning.
An alternative method is to use fMRI to map the brain, but its resolution is limited to around 1 mm.
Higher-resolution optical imaging of intact tissue has, in turn, been hindered because fat molecules – the
lipid components of the neurons themselves – block the light, making the tissue opaque. Recently, Chung
et al. (2013) at Stanford University made a major breakthrough, enhancing the resolution to 500 nm –
2,000 times better than fMRI – using a hydrogel-based technique known as CLARITY. CLARITY
preserves the physical structure of the brain while making it more visible by removing the lipids.
Neuroscientists have monitored neuronal activity by implanting electrodes in the brain, but the
coverage of these electrodes is confined to one or a few neurons. The mean area of a neuron in the
hippocampus varies from 250 to 300 µm^2 (Zaidel et al. 1997), and the tip size of a nanoelectrode for
intracellular recording is less than 0.6 µm^2 (Lin et al. 2014). Accordingly, only the activity of a single
neuron and the interaction of one neuron with another have been well characterized (Houweling et al.
2009). Is it reasonable to contend that we can understand the information processing of a hundred billion
neurons merely by comprehending the activity of one isolated neuron? This is akin to arguing that, from
one or a few television pixels depicting the glint of light on a wine glass, we could reconstruct the several
million pixels that make up a complete episode of Downton Abbey, thereby revealing all potential spoilers
for the upcoming season!
A neuron largely consists of four parts: soma, axon, synapses, and dendrites. Axons and dendrites are
unique to neurons and are not present in other cells. The soma integrates incoming signals and generates
outgoing signals that travel along the axon. The axon sends the outgoing signal to the dendrites of a
neighboring neuron. A gap of around 20 nm exists between the axon terminal and the dendrite of the
data-receiving neuron. This is called the synaptic gap, and the synapse is thus the connection between
neurons. Synapses are known to be responsible for the robustness and massive parallelism of the
computations of the mammalian brain. The synapse was first postulated in the late nineteenth century by
Nobel laureates Camillo Golgi and Santiago Ramón y Cajal on the basis of optical microscopy. However,
their interpretations of what they observed were diametrically opposed. Golgi insisted that two neurons
must be physically connected, while Cajal advocated that no connection is present between the neurons.
Their respective positions remained locked in a bitter tug-of-war until after the invention of electron
microscopy in 1931 by Nobel laureate Ernst Ruska. In the end, the absence of a connection was found to
be correct. This episode powerfully demonstrates that the higher the magnification at which we can
resolve a structure, the greater the potential for new results to overturn previously accepted
results. In this respect, we believe that nanotechnology can contribute profoundly to a better
understanding of neural activity within the brain that has so far remained unseen. Recently, the
importance of the convergence of nanotechnology with neuroscience was exemplified by the first 3-D
model of the synapse, established by combining observations from electron microscopy, super-resolution
light microscopy, and mass spectrometry (Wilhelm et al. 2014). It is generally true that we can better
understand any particular phenomenon in 3-D than in 2-D.
The computer will remain inferior to the brain at some tasks even if we miniaturize the transistor all the
way to its physical limits, so a different approach to circuit architecture is clearly in order. Accordingly,
many attempts have been made to emulate our neuronal structures. We could begin to fabricate intelligent
machines if we possessed even a few rudimentary clues about how neurons process information. In a
sense, the soma is analogous to digital logic, synapses to a kind of reconfigurable memory, and dendrites
and axons to the metal interconnects within a chip. Furthermore, if we examine the layered structure of the
cerebral cortex, we realize that it can be emulated by a layered crossbar array, allowing neuro-inspired
nanoelectronics to achieve a density comparable to that of the human brain.
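To illustrate why the crossbar geometry is attractive, the short sketch below shows how a crossbar of
programmable conductances computes a neural weighted sum in a single physical step, with each
cross-point acting as one synapse; the array size and conductance range are arbitrary assumptions for the
example, not values from any fabricated chip:

import numpy as np

# An m-by-n crossbar stores a conductance matrix G (siemens); each
# cross-point device plays the role of one synapse.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # assumed memristive conductances

# Input spikes arrive as voltages on the m row wires (0 V = no spike).
v_in = np.array([0.2, 0.0, 0.2, 0.2])     # volts

# Kirchhoff's current law sums the per-device currents on each column,
# so the n column currents are the weighted sums, with no fetch/execute
# loop between a separate memory and CPU.
i_out = G.T @ v_in                        # amperes, shape (3,)
print(i_out)

Because the weights are stored where the multiplication happens, a physical crossbar performs the whole
matrix-vector product in one parallel step, which is precisely what the von Neumann bottleneck prevents.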
As recently as 10 years ago, it was very difficult to fabricate a brain-emulating neuromorphic chip,
owing to the brain’s complexity and extremely high density of neurons, even with state-of-the-art
semiconductor fabrication technology. Specifically, CMOS-based neuromorphic chips faced major
difficulties caused by variability in parameters such as gate length and threshold voltage, and these
variations are large enough to make the chips impossible to control with human-made algorithms. As a
consequence, R&D on CMOS-based neuromorphic chips declined, while interest in digital neurons
simulated on supercomputers grew. Although this latter approach requires enormous power, it has the
advantage of producing results with easier programmability.
During the last 10 years, nanotechnology has advanced tremendously, especially in nanoelectronics.
We now firmly believe that nanotechnology has the potential to make the goals of artificial intelligence a
reality – that we can emulate brain functions such as perception, cognition, and inference. Through
miniaturization and 3-D integration, nanotechnology can produce neuromorphic chips with a density
relatively close to that of human neurons, while approaching the low power consumption and compact
size of the human brain. Furthermore, the spike-timing-dependent plasticity (STDP) of the synapse,
definitively demonstrated by Markram et al. (1997), is known to be a mechanism of learning in the brain.
The realization of STDP is therefore the starting point for the design of electronic synapses in any
neuromorphic chip. Fortunately, this STDP characteristic has been emulated in many nonvolatile
memories, such as phase change memory (PCM), resistive random access memory (RRAM), and
conductive bridge random access memory (CBRAM) (Kuzum et al. 2013). High-density neuromorphic
chips with the STDP property show great promise for building a computer capable of learning and
reasoning, since such chips could offer the very high synapse density and low power consumption
essential for electronic synapses. The result is called a brain-inspired computer, a neuromorphic computer,
or a neurocomputer, which by definition lies within the scope of AI. At present, the development of the
neurocomputer is still at an early stage of research.
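To make the STDP rule concrete, the sketch below implements the standard pair-based exponential STDP
update; the amplitudes and time constants are generic textbook values, not parameters of any device
reported by Markram et al. (1997) or Kuzum et al. (2013):

import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.

    t_pre, t_post: spike times in ms. A presynaptic spike that
    precedes the postsynaptic spike strengthens the synapse
    (potentiation); the reverse order weakens it (depression).
    """
    dt = t_post - t_pre
    if dt >= 0:   # pre before post: long-term potentiation
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)  # long-term depression

print(stdp_dw(10.0, 15.0))  # pre leads post by 5 ms -> positive change
print(stdp_dw(15.0, 10.0))  # post leads pre by 5 ms -> negative change

In an electronic synapse, this weight change corresponds to a small increase or decrease in the device
conductance, triggered by the relative timing of voltage pulses on its two terminals.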

Uploading Your Brain into a Computer


What developments, previously only dreamed of in science fiction, might await humanity in the
not-too-distant future if neurocomputers were implanted into the human brain or body and utilized as a
platform for brain-machine interfaces (BMI) or brain-computer interfaces (BCI)? One obvious inspiration
arises from Star Trek: The Next Generation. In that series, Chief Engineer Geordi La Forge, who was born
blind, is equipped with a VISOR that feeds a very dense stream of information directly into his cerebral
cortex, enabling him to “see” not only visible light but radiation spanning the entire known
electromagnetic spectrum. The first steps toward this future have been taken in the past 10 years, with the
development of prosthetic implants that directly stimulate retinal cells and restore to otherwise blind
patients at least a rudimentary ability to detect large shapes, such as the edge of a road or an oncoming
vehicle. As BMI technologies become more advanced, we can envision a future where it becomes
cost-effective for any person to download an app directly into their brain, to gather and process
specialized information, and to guide their work. Brain signals from thinking could potentially control any
machine in the world and even yield “telepathic” person-to-person communication by directly connecting
both parties to a common neural network.
Storing human memories after death, or even downloading all the information from a human brain into
a computer for a technologically based “eternal life,” has attracted particular attention. This theme
appeared in the 2009 movie Avatar, one of that year’s biggest blockbusters. The film portrays humans
developing software to engineer actual souls for individuals within a local tribe of the Na’vi, a humanoid
species. By implanting this software in the Na’vi, genetically matched humans could remotely control
hybrids of Na’vi bodies and human spirits, called Avatars. A similar concept was depicted more recently
in the 2014 movie Transcendence, in which a human consciousness is uploaded into a computer before
death. Through the computer hardware and the Internet, the uploaded consciousness expands its
knowledge continuously, 24 h a day, without need for food or sleep. As a result, this computer gains
enormous computational power, transcending ordinary human ability and harboring an ambition to
dominate all living humans.
Although the ideas underlying Transcendence may appear at present to lie squarely within the realm of
science fiction, their realization may arrive sooner than we think. In February 2011, the “2045 Initiative”
was announced by Dmitry Itskov, a Russian billionaire, with the aim of achieving artificial immortality
through a hologram of the kind conceived in Avatar, as shown in Fig. 2 (Upload Your Brain Into
A Hologram: Project Avatar 2045, 2013).

Fig. 2 2045 Avatar Project

Major Neuroscience Initiatives Sponsored by the EU and the US


It is often declared that the last frontiers of science are to uncover the secrets of the birth of the universe
that surrounds us and to understand the neural activity of the brain located between our own two ears. In
this regard, the EU and the US undertook large, visionary 10-year initiatives in 2013 to investigate the
neural activity of the brain. The examination of brain function is expected to follow two broad, parallel
approaches. The first, embraced by the EU’s Human Brain Project (HBP), is to construct a digital brain
through reverse engineering at the molecular level. The second is to map the various neural functions
across the brain, which is the goal of the US BRAIN (Brain Research through Advancing Innovative
Neurotechnologies) Initiative. The HBP, with total funding of one billion euros, aims to model and
simulate neural networks on a supercomputer, based on all currently existing data about neural activity.
Meanwhile, BRAIN intends to study how individual neurons interact at the molecular level using various
probing tools. The two initiatives take quite different approaches but share the same ultimate goal: to
understand the neural functions of the brain.
Mapping the whole brain is absolutely necessary to find the mechanisms underlying its various
functions. This comprehensive map of neuronal connections in the brain is known as the “connectome.”
A thorough study of the connectome is indispensable to fully understand the mechanisms that underlie
learning, recognition, and reasoning. However, mapping the entire human brain is a most formidable
challenge because of the colossal amounts of data produced by its 10^11 neurons and roughly 10^14–10^15
synapses. Around 2,000 TB (2 × 10^15 bytes) of data storage is necessary for the electron microscopy
information from one cubic millimeter of brain tissue (Abbott 2013). Thus, more than 1 ZB (10^21 bytes)
is calculated to be the absolute minimum required to store all the mapping information on the human
brain. For comparison, the total digital data produced by every computer in the world in 2012 was
estimated at approximately 2.8 ZB (2.8 × 10^21 bytes), meaning that the storage requirement of even a
single human connectome is roughly equal to 6 months’ worth of data generated by the entire human
race.
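The zettabyte figure follows from scaling the per-cubic-millimeter estimate to the whole brain; as a
back-of-the-envelope check, assuming a brain volume on the order of 1.2 × 10^6 mm^3 (our assumption,
not a figure from Abbott 2013):

\[
2 \times 10^{15}\ \frac{\mathrm{bytes}}{\mathrm{mm}^{3}} \times 1.2 \times 10^{6}\ \mathrm{mm}^{3}
\approx 2.4 \times 10^{21}\ \mathrm{bytes} = 2.4\ \mathrm{ZB},
\]

comfortably above the 1 ZB quoted as the absolute minimum.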

Conclusion and Future Perspective


It is anticipated that we will be living in a green, intelligent society within the next 10 years (Fig. 3). All
of our common devices and their parts will be smart enough to be operated by our voices, gestures, and
even brain waves. One major obstacle to attaining this future is that our computer systems are built on the
von Neumann architecture, meaning that computations are executed in serial mode, unlike in our highly
parallel brain. With this architecture, computer technology will never realize the intelligent society that
we dream of; implementing parallel processing through neuro-inspired chips is the only workable
solution.

Fig. 3 Green and intelligent society after 10–20 years
We certainly understand the neural activity of the brain to some degree, but our knowledge is nowhere
near complete enough to emulate the brain by building a neurocomputer. The human brain has evolved
over the last 200,000 years to meet the demands of basic survival: finding food, avoiding predators,
breeding, and so on. In contrast, only about 70 years have passed since the first digital computer (ENIAC)
was born in 1946. Nevertheless, the authors are of the opinion that the best inspiration for creating a
neurocomputer is the approach embraced by the Wright brothers, who built the world’s first airplane on
the physics of two fundamental aspects of bird flight: how to gain lift and how to steer.
Neurocomputing is very difficult to realize at present but exciting to look forward to. Looking ahead,
the neurocomputer may well make errors in calculation, much as humans do. In this view, complex
calculations would continue to be performed by digital computers, while recognition and inference would
be carried out by neurocomputers. Within the next several decades, a formidable neurocomputer could
become available to us, with the speed of today’s cutting-edge supercomputers and a chip density close to
that of the human brain. We can barely begin to imagine what could be done with such awe-inspiring
performance. For this dream to come true, systematic approaches are necessary to combine all the
nanotechnology-based disciplines with the knowledge derived from neuroscience, computer science, and
cognitive science.
Achieving this neuro-inspired computing system will be a long-term marathon, not a 100 m sprint. In
the early 1960s, scarcely anyone believed that humans could reach the moon. Yet humans did precisely
that, leaving their footprints on the moon in 1969.

References
Abbott A (2013) Solving the brain. Nature 499:272–274
Chung K, Wallace J, Kim SY, Kalyanasundaram S, Andalman AS, Davidson TJ, Mirzabekov JJ,
Zalocusky KA, Mattis J, Denisin AK, Park S, Bernstein H, Ramakrishnan C, Grosenick L,
Gradinaru V, Deisseroth K (2013) Structural and molecular interrogation of intact biological systems.
Nature 497:332–337
Houweling AR, Doron G, Voigt BC, Herfst LJ, Brecht M (2009) Nanostimulation: manipulation of single
neuron activity by juxtacellular current injection. J Neurophysiol 103:1696–1704
Kuzum D, Yu S, Wong H-SP (2013) Synaptic electronics: materials, devices and applications.
Nanotechnology 24:382001–382022
Lin ZC, Xie C, Osakada Y, Cui Y, Cui B (2014) Iridium oxide nanotube electrodes for sensitive and
prolonged intracellular measurement of action potentials. Nat Commun 5:3206
Markram H, Lübke J, Frotscher M, Sakmann B (1997) Regulation of synaptic efficacy by coincidence of
postsynaptic APs and EPSPs. Science 275:213–215
McMorrow D (2013) Technical challenges of exascale computing. The MITRE Corporation, McLean
Michalowski J (2013) Baroque tomorrow. Xlibris Corporation, Bloomington, IN, USA
Roco MC, Bainbridge WS (eds) (2003) Converging technologies for improving human performance.
Kluwer, Norwell, MA, USA
Roco MC, Bainbridge WS, Tonn B, Whitesides G (eds) (2013) Convergence of knowledge, technology
and society. Springer, New York, NY, USA
Upload your brain into a hologram: project Avatar 2045 – a new era for humanity or scientific madness?.
http://www.messagetoeagle.com/projectavatar2045.php. Accessed 2 Nov 2013
Wedeen VJ, Rosene DL, Wang R, Dai G, Mortazavi F, Hagmann P, Kaas JH, Tseng W-YI (2012) The
geometric structure of the brain fiber pathways. Science 335:1628–1634
Wilhelm BG, Mandad S, Truckenbrodt S, Kröhnert K, Schäfer C, Rammner B, Koo SJ, Claßen GA,
Krauss M, Haucke V, Urlaub H, Rizzoli SO (2014) Composition of isolated synaptic boutons reveals
the amounts of vesicle trafficking proteins. Science 344:1023–1028
Zaidel DW, Esiri MM, Harrison PJ (1997) Size, shape and orientation of neurons in the left and right
hippocampus: investigation of normal asymmetries and alterations in schizophrenia. Am J Psychiatry
154:812–818
Zhirnov VV, Cavin RK III, Hutchby JA, Bourianoff GI (2003) Limits to binary logic switching scale – a
gedanken model. Proc IEEE 91:1934–1939
