A computer is a general-purpose device that can be programmed to carry out a set of arithmetic or logical operations automatically.


By Chris Herzog Sunday, Nov. 26, 2006
In the past twenty years there has been a dramatic increase in the processing speed of computers, in network capacity and in the speed of the Internet. These advances have paved the way for revolutions in fields such as quantum physics, artificial intelligence and nanotechnology, and they will have a profound effect on the way we live and work: the virtual reality we see in movies like The Matrix may actually come true in the next decade or so.
NANOCOMPUTERS
Scientists are trying to use nanotechnology to make very tiny chips, electrical conductors and logic gates. Using nanotechnology, chips could be built up one atom at a time, wasting no space and enabling much smaller devices to be built. With this technology, logic gates would be composed of just a few atoms, electrical conductors (called nanowires) would be merely an atom thick, and a data bit would be represented by the presence or absence of an electron.
A component of nanotechnology, nanocomputing will give rise to four types of
nanocomputers:
Electronic nanocomputers
Chemical and Biochemical nanocomputers
Mechanical nanocomputers
Quantum nanocomputers
Electronic nanocomputers
Electronic nanocomputers are created using nanolithography to make microscopic circuits. [Nanocomputers]
Chemical and Biochemical nanocomputers
The interaction between different chemicals and their structures is used to store and
process information in chemical nanocomputers. In order to create a chemical
nanocomputer, engineers need to be able to control individual atoms and molecules
so that these atoms and molecules can be made to perform controllable calculations
and data storage tasks.
Mechanical nanocomputers
A mechanical nanocomputer uses tiny mobile components called nanogears to
encode information. Some scientists predict that such mechanical nanocomputers will
be used to control nanorobots.
Quantum nanocomputers
A quantum nanocomputer stores data in the form of atomic quantum states or spin.
Single-electron memory (SEM) and quantum dots are examples of this type of
technology.
Humanizing Nanocomputers
Apart from this, scientists aim to use nanotechnology to create nanorobots that will serve as programmable antibodies. This would help protect humans against pathogenic bacteria and viruses, which keep mutating and so render many remedies ineffective against new strains. Nanorobots could overcome this problem by being selectively reprogrammed to destroy the new pathogens. Nanorobots are predicted to be part of the future of human medicine.


SPRAY-ON NANOCOMPUTERS
Research is being done at the University of Edinburgh to create "spray-on computers" the size of a grain of sand that could transform information technology. The research team aims to achieve this goal within four years.
When these nanocomputers are sprayed onto the chests of coronary patients, the tiny cells record the patient's health data and transmit it back to a hospital computer. This would enable doctors to monitor heart patients who are living at home.
QUANTUM COMPUTERS
A quantum computer uses quantum mechanical phenomena, such as entanglement and superposition, to process data. Quantum computation aims to use the quantum properties of particles to represent and structure data, and quantum mechanics is used to understand how to perform operations with this data.
The quantum mechanical properties of atoms or nuclei allow these particles to work
together as quantum bits, or qubits. These qubits work together to form the
computer's processor and memory. Qubits can interact with each other while being
isolated from the external environment and this enables them to perform certain
calculations much faster than conventional computers.
By computing many different numbers simultaneously and then interfering the
results to get a single answer, a quantum computer can perform a large number of
operations in parallel and ends up being much more powerful than a digital computer
of the same size.
"In the tiny spaces inside atoms, the ordinary rules of reality ... no longer hold.
Defying all common sense, a single particle can be in two places at the same time.
And so, while a switch in a conventional computer can be either on or off,
representing 1 or 0, a quantum switch can paradoxically be in both states at the
same time, saying 1 and 0.... Therein lies the source of the power." Whereas three
ordinary switches could store any one of eight patterns, three quantum switches can
hold all eight at once, taking "a shortcut through time." [ScientificAmerican.com]
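The "all eight at once" claim can be made concrete with a minimal sketch (illustrative only, using NumPy on an ordinary computer rather than real quantum hardware): three qubits prepared in an equal superposition are described by a state vector with 2^3 = 8 amplitudes, one for each classical pattern, whereas three ordinary switches hold exactly one of those patterns at a time.

    import numpy as np

    # One qubit starts in |0> = [1, 0]; a Hadamard gate puts it into an
    # equal superposition of |0> and |1>.
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)
    zero = np.array([1.0, 0.0])

    # Three qubits: the joint state is the tensor (Kronecker) product of the
    # single-qubit states, so its vector has 2**3 = 8 amplitudes.
    state = np.kron(np.kron(H @ zero, H @ zero), H @ zero)

    print(state.shape)         # (8,)  -- one amplitude per classical pattern 000..111
    print(np.round(state, 3))  # all eight amplitudes equal 1/sqrt(8), about 0.354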
Quantum computers could prove to be useful for running simulations of quantum
mechanics. This would benefit the fields of physics, chemistry, materials science,
nanotechnology, biology and medicine because currently, advancement in these
fields is limited by the slow speed of quantum mechanical simulations.
Quantum computing is ideal for tasks such as cryptography, modeling and indexing
very large databases. Many government and military funding agencies are supporting
quantum computing research to develop quantum computers for civilian and national
security purposes, such as cryptanalysis.
ARTIFICIAL INTELLIGENCE
The term Artificial Intelligence was coined in 1956 by John McCarthy at Dartmouth College. It is a branch of computer science that aims to make computers behave like humans. [Artificial Intelligence] Artificial Intelligence includes programming computers to make decisions in real-life situations (for example, expert systems that help physicians diagnose diseases based on symptoms); programming computers to understand human languages (natural language processing); programming computers to play games such as chess and checkers (game playing); programming computers to hear, see and react to other sensory stimuli (robotics); and designing systems that mimic human intelligence by attempting to reproduce the types of physical connections between neurons in the human brain (neural networks).
Natural-language processing would allow ordinary people who don't have any knowledge of programming languages to interact with computers.
So what does the future of computer technology look like after these developments? Through nanotechnology, computing devices are becoming progressively smaller and more powerful, and increasingly fast computers can be embedded into everyday devices, making connected objects a reality.

This has led to the idea of pervasive computing, which aims to integrate software and hardware into all man-made and some natural products. It is predicted that almost any item, such as clothing, tools, appliances, cars, homes, coffee mugs and even the human body, will be embedded with chips that connect it to an infinite network of other devices. [Pervasive Computing]
Hence, in the future, network technologies will be combined with wireless computing, voice recognition, Internet capability and artificial intelligence to create an environment where device connectivity is always available yet never inconvenient or outwardly visible. In this way, computer technology will saturate almost every facet of our lives. What seems like virtual reality at the moment will become the human reality in the future of computer technology.
http://www.geeks.com/techtips/2006/techtips-26nov06.htm
What Is the Future of Computers?

By Natalie Wolchover | September 11, 2012 9:17 AM







[Image: Integrated circuit from an EPROM memory microchip showing the memory blocks and supporting circuitry]
In 1958, a Texas Instruments engineer named Jack Kilby cast a pattern onto the surface of an 11-millimeter-long "chip" of semiconducting germanium, creating the first-ever integrated circuit. Because the circuit contained a single transistor (a sort of miniature switch), the chip could hold one "bit" of data: either a 1 or a 0, depending on the transistor's configuration.
Since then, and with unflagging consistency, engineers have managed to double the
number of transistors they can fit on computer chips every two years. They do it by regularly
halving the size of transistors. Today, after dozens of iterations of this doubling and halving
rule, transistors measure just a few atoms across, and a typical computer chip holds 9
million of them per square millimeter. Computers with more transistors can perform more
computations per second (because there are more transistors available for firing), and are
therefore more powerful. The doubling of computing power every two years is known as
"Moore's law," after Gordon Moore, the Intel engineer who first noticed the trend in 1965.
Moore's law renders last year's laptop models defunct, and it will undoubtedly make next
year's tech devices breathtakingly small and fast compared to today's. But consumerism
aside, where is the exponential growth in computing power ultimately headed? Will
computers eventually outsmart humans? And will they ever stop becoming more powerful?
The singularity
Many scientists believe the exponential growth in computing power leads inevitably to a
future moment when computers will attain human-level intelligence: an event known as the
"singularity." And according to some, the time is nigh.
Inventor, author and self-described "futurist" Ray Kurzweil has predicted that computers will be on par with humans within two decades. He told Time Magazine last year that
engineers will successfully reverse-engineer the human brain by the mid-2020s, and by the
end of that decade, computers will be capable of human-level intelligence.
The conclusion follows from projecting Moore's law into the future. If the doubling of
computing power every two years continues to hold, "then by 2030 whatever technology
we're using will be sufficiently small that we can fit all the computing power that's in a
human brain into a physical volume the size of a brain," explained Peter Denning,
distinguished professor of computer science at the Naval Postgraduate School and an
expert on innovation in computing. "Futurists believe that's what you need for artificial
intelligence. At that point, the computer starts thinking for itself." [How to Build a Human
Brain]
What happens next is uncertain and has been the subject of speculation since the dawn
of computing.
"Once the machine thinking method has started, it would not take long to outstrip our feeble
powers," Alan Turing said in 1951 at a talk entitled "Intelligent Machinery: A heretical
theory," presented at the University of Manchester in the United Kingdom. "At some stage
therefore we should have to expect the machines to take control." The British
mathematician I.J. Good hypothesized that "ultraintelligent" machines, once created, could
design even better machines. "There would then unquestionably be an 'intelligence
explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent
machine is the last invention that man need ever make," he wrote.
Buzz about the coming singularity has escalated to such a pitch that there's even a book
coming out next month, called "Singularity Rising" (BenBella Books), by James Miller, an
associate professor of economics at Smith College, about how to survive in a post-
singularity world. [Could the Internet Ever Be Destroyed?]
Brain-like processing
But not everyone puts stock in this notion of a singularity, or thinks we'll ever reach it. "A lot
of brain scientists now believe the complexity of the brain is so vast that even if we could
build a computer that mimics the structure, we still don't know if the thing we build would be
able to function as a brain," Denning told Life's Little Mysteries. Perhaps without sensory
inputs from the outside world, computers could never become self-aware.
Others argue that Moore's law will soon start to break down, or that it has already. The
argument stems from the fact that engineers can't miniaturize transistors much more than
they already have, because they're already pushing atomic limits. "When there are only a
few atoms in a transistor, you can no longer guarantee that a few atoms behave as they're
supposed to," Denning explained. On the atomic scale, bizarre quantum effects set in.
Transistors no longer maintain a single state represented by a "1" or a "0," but instead
vacillate unpredictably between the two states, rendering circuits and data storage
unreliable. The other limiting factor, Denning says, is that transistors give off heat when they
switch between states, and when too many transistors, regardless of their size, are
crammed together onto a single silicon chip, the heat they collectively emit melts the chip.
For these reasons, some scientists say computing power is approaching its zenith. "Already
we see a slowing down of Moore's law," the theoretical physicist Michio Kaku said in a
BigThink lecture in May.
But if that's the case, it's news to many. Doyne Farmer, a professor of mathematics at
Oxford University who studies the evolution of technology, says there is little evidence for
an end to Moore's law. "I am willing to bet that there is insufficient data to draw a conclusion
that a slowing down [of Moore's law] has been observed," Farmer told Life's Little Mysteries.
He says computers continue to grow more powerful as they become more brain-like.
Computers can already perform individual operations orders of magnitude faster than
humans can, Farmer said; meanwhile, the human brain remains far superior at parallel
processing, or performing multiple operations at once. For most of the past half-century,
engineers made computers faster by increasing the number of transistors in their
processors, but they only recently began "parallelizing" computer processors. To work
around the fact that individual processors can't be packed with extra transistors, engineers
have begun upping computing power by building multi-core processors, or systems of chips
that perform calculations in parallel. "This controls the heat problem, because you can slow
down the clock," Denning explained. "Imagine that every time the processor's clock ticks,
the transistors fire. So instead of trying to speed up the clock to run all these transistors at
faster rates, you can keep the clock slow and have parallel activity on all the chips." He
says Moore's law will probably continue because the number of cores in computer
processors will go on doubling every two years.
And because parallelization is the key to complexity, "In a sense multi-core processors
make computers work more like the brain," Farmer told Life's Little Mysteries.
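A toy example may make the multi-core idea concrete. This is only a sketch, using Python's standard multiprocessing module rather than anything chip-specific: the same computation is split into chunks that separate cores can execute side by side, so throughput rises without raising the clock rate.

    # Split one large computation across several cores instead of one fast core.
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        n, cores = 10_000_000, 4
        step = n // cores
        chunks = [(i * step, (i + 1) * step) for i in range(cores)]

        with Pool(cores) as pool:               # each chunk can run on its own core
            total = sum(pool.map(partial_sum, chunks))

        print(total == sum(i * i for i in range(n)))  # same answer, computed in parallel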
And then there's the future possibility of quantum computing, a relatively new field that
attempts to harness the uncertainty inherent in quantum states in order to perform vastly
more complex calculations than are feasible with today's computers. Whereas conventional
computers store information in bits, quantum computers store information in qubits:
particles, such as atoms or photons, whose states are "entangled" with one another, so that
a change to one of the particles affects the states of all the others. Through entanglement, a
single operation performed on a quantum computer theoretically allows the instantaneous
performance of an inconceivably huge number of calculations, and each additional particle
added to the system of entangled particles doubles the performance capabilities of the
computer.
If physicists manage to harness the potential of quantum computers (something they are still struggling to do), Moore's law will certainly hold far into the future, they say.
Ultimate limit
If Moore's law does hold, and computer power continues to rise exponentially (either
through human ingenuity or under its own ultraintelligent steam), is there a point when the
progress will be forced to stop? Physicists Lawrence Krauss and Glenn Starkman say "yes."
In 2005, they calculated that Moore's law can only hold so long before computers actually
run out of matter and energy in the universe to use as bits. Ultimately, computers will not be
able to expand further; they will not be able to co-opt enough material to double their
number of bits every two years, because the universe will be accelerating apart too fast for
them to catch up and encompass more of it.
So, if Moore's law continues to hold as accurately as it has so far, when do Krauss and Starkman say computers must stop growing? Their projections indicate that computers will encompass the entire reachable universe, turning every bit of matter and energy into part of their circuitry, in about 600 years' time.
That might seem very soon. "Nevertheless, Moore's law is an exponential law," Starkman, a physicist at Case Western Reserve University, told Life's Little Mysteries. You can only double the number of bits so many times before you require the entire universe.
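Some rough arithmetic shows the scale of that statement. The figures below are assumptions chosen for illustration, not numbers from Krauss and Starkman's paper: even starting from a generous guess at today's worldwide storage, only a couple of hundred doublings separate us from a commonly cited rough cap on the bits the observable universe could hold, which at two years per doubling works out to a few hundred years, the same order of magnitude as the 600-year figure above.

    import math

    # Rough, assumption-laden arithmetic behind "you can only double the number
    # of bits so many times"; both estimates below are order-of-magnitude guesses.
    bits_today = 1e23        # assumed: roughly the world's digital storage today, in bits
    bits_universe = 1e90     # assumed: a commonly cited rough cap for the observable universe
    years_per_doubling = 2   # the Moore's-law cadence used throughout the article

    doublings = math.log2(bits_universe / bits_today)
    print(f"{doublings:.0f} doublings -> about {doublings * years_per_doubling:.0f} years")
    # ~223 doublings, i.e. a few hundred years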
Personally, Starkman thinks Moore's law will break down long before the ultimate computer
eats the universe. In fact, he thinks computers will stop getting more powerful in about 30
years. Ultimately, there's no telling what will happen. We might reach the singularity (the point when computers become conscious, take over, and then start to self-improve). Or
maybe we won't. This month, Denning has a new paper out in the journal Communications
of the ACM, called "Don't feel bad if you can't predict the future." It's about all the people
who have tried to do so in the past, and failed.
http://news.yahoo.com/future-computers-131739358.html


Vision as well as sound, oh my! When British telecommunications giant BT imagined the future of communication technology, from videoconferencing to high-definition document transmission, they made their most conceptually innovative proposition, the notion of telecommuting, with a kind of facetiousness most ironic in the context of today's remote-everything workplace.
In 1980, a TV segment entitled Telefuture envisioned a world of television-based information services. While at its core lies a fascinating and, in retrospect, remarkably accurate exploration of the exponential progression of technology, including transmedia experiences that even modernity can't get quite right, like Internet TV, the excitement and language used to describe technologies we now find primitive is a disarming source of amusement. We held it together quite admirably, until the vintage-voiced man described basic 8-bit diversions as "incredibly complex games"; at that point, through tears of laughter, we wondered how his vocabulary of superlatives would hold up against the latest Halo 3 or Guitar Hero.
