
"I mean, being a robot's great; but

we don't have emotions and


sometimes that makes me very
sad."

Super-Intelligent Machines

Jared Schmidt
In This Presentation…
• Super-intelligent machines
• Beginnings of A.I.
• Technological singularity
• Where we stand today
• Notable predictions about future A.I.
• Moral / ethical implications of developing
intelligent machines
Deep Blue
• Deep Blue (1997, IBM) – First chess-playing machine to beat the reigning world chess champion
– Successor to Deep Thought (1989)
– 259th-fastest supercomputer at the time
– Evaluated 200 million positions per second
• Brute-force search – fast, but less than intelligent
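A brute-force game search of this kind can be sketched as plain minimax over a toy game tree. This is a hypothetical Python illustration of the idea only; Deep Blue itself ran a heavily optimized alpha-beta search on custom chess hardware.

```python
def minimax(node, maximizing):
    """Exhaustively score a game tree: leaves are numeric position
    evaluations, internal nodes are lists of child positions."""
    if isinstance(node, (int, float)):
        return node  # leaf: static evaluation of the position
    scores = [minimax(child, not maximizing) for child in node]
    # The side to move picks its best option; the opponent picks our worst.
    return max(scores) if maximizing else min(scores)

# Toy 2-ply tree: we choose a branch, then the opponent replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # → 3
```

Searching every branch this way is what makes the approach "brute force": the machine wins by sheer speed of evaluation, not by anything resembling understanding.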


But Computers with Brains?
• Some prominent figures and computer
scientists believe it will be possible.

• Human brain estimated to process ≈ 15–25 petaflops
• IBM's Sequoia = 16.3 petaflops (TOP500)
– 1.57 million cores
– 1.6 PB memory
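The comparison above is simple arithmetic; using the slide's own figures (the brain estimate is a rough, contested number), Sequoia's throughput already falls inside the estimated range:

```python
# Figures from the slide; the brain estimate is a rough, contested number.
brain_low, brain_high = 15.0, 25.0   # estimated brain throughput, petaflops
sequoia = 16.3                       # IBM Sequoia (TOP500), petaflops

within = brain_low <= sequoia <= brain_high
print(within)  # → True
```

Raw flops, of course, say nothing about whether the computation resembles cognition.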
Technological Singularity
• The hypothetical future emergence of greater-than-human super-intelligence through technological means.
• An intellectual event horizon

Singularity Summit 12 (6th annual, Oct 13)


Hindrances of Super-Intelligent Machines
• Limited understanding of how the human brain works
• Advancements in technology and other fields could overcome such barriers
Advancements in Nanotechnology
• Carbon nanotubes showing promise for many
medical applications.
– Kanzius Machine
• Nanobots in R&D stage
– Potential medical breakthroughs
– Interaction with biological systems

• Possible future use to reverse-engineer the brain


Raymond Kurzweil
• American inventor and futurist
• Predicts machines will be more
intelligent than humans in the near
future

The Singularity is Near (2005)


2020s: Nanobot use in medical field
2029: First computer passes the Turing test
2045: Singularity
Bill Joy
• Cofounder of Sun Microsystems

Why the Future Doesn’t Need Us (2000)


• Possible dangers of rapidly advancing technologies
– genetics, nanotechnology, and robotics
• Argues machines will eventually become
smarter than us, creating a dystopia
Existential Risk
• The first super-intelligent machine would be programmed by error-prone humans.

"We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question."
– Nick Bostrom, "Existential Risks"
Robots are People Too?
• If machines think and act like a human, should
they be considered human?
• Is it moral to give machines human values and
emotions?
• If a super-intelligent robot committed a crime,
is the programmer to blame?
Morally Sound A.I.
• Artificial intelligence projects that help us live our lives better or more efficiently are morally sound
• Overcoming disease, genetics, and famine
with technology
In Conclusion
• Less than super-intelligent A.I. to date
• Theory of technological singularity
• Advancements in nanoscience could advance
neuroscience
• Risks of developing super-intelligent machines
• Moral / Ethical Issues
• Morally sound A.I.
Sources
Bostrom, Nick. "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards." Journal of Evolution and Technology 9 (2002). www.nickbostrom.com.
Georges, Thomas M. Digital Soul: Intelligent Machines and Human Values. Boulder, CO: Westview, 2003. Print.
Joy, Bill. "Why the Future Doesn't Need Us." Wired 8.04 (2000). www.wired.com/wired/archive/8.04/joy
Kurzweil, Raymond. The Singularity Is Near. New York: Viking, 2005. Print.
