
Neuromorphic Technologies for
Next-Generation Cognitive Computing

Geoffrey W. Burr
IBM Research Almaden
October 11, 2016

Revising our assumptions about computing

Conventional computing requires that ALL the devices work correctly.
This is becoming difficult to guarantee across billions of devices
as voltages & device sizes scale down.
Several recent trends in computing modify this assumption in different ways:

1) Quantum computing
   much more sensitive to noise (needs low temperatures),
   but much more functionality PER device (qubit)

2) Approximate/stochastic computing: redundancy through design

3) Brain-inspired computing: redundancy through learning
   a. Deep Neural Networks
      apply abundant resources to the problem, yet still get a result;
      it can be OK if some of those resources are unreliable
   b. Even-more-neuromorphic computing: machine intelligence
      use sparsity in time & space to reduce overall computing power

Deep Machine Learning vs. Machine Intelligence

Brain-inspired computing:
(1940s understanding of the human brain) -> Deep Machine Learning
(modern understanding of the human brain) -> Machine Intelligence

Deep Machine Learning:
solving a specific task on labeled data by
defining & optimizing an objective function

PRO:
- can follow gradient descent through backpropagation
  -> convergence to good solutions
- maps to matrix manipulation -> GPUs!!
- great progress in ML thanks to competitions:
  many datasets created, focus on quantifying performance
- the algorithm is scalable: more resources -> better performance

CON:
- we're sure the brain doesn't do backpropagation
- can only handle static, labelled data
- the insistence on quantifying performance
  may now be stifling innovation

Machine Intelligence:
flexible systems that continuously learn from
unlabeled data, and that perform (motor)
actions, predict consequences of those
actions, and then plan ahead to reach goals

PRO:
- we're sure this is what the brain does
- MI should be able to handle unlabelled & temporal data
- MI should enable continuous learning
- can learn from only a few examples
  (less critical for companies to hoard data)

CON:
- we don't know (yet) how the brain guarantees
  robust, stable convergence in learning
- we have to figure out how to appropriately
  quantify performance


Deep Learning on GPUs

1) Input data (images, raw speech data, etc.)
   are fed into the neural network
2) Classification results are compared to labels
3) Corrections are backpropagated
   & all weights are updated

Combine 100-1000 input vectors into an input matrix (a mini-batch),
multiply by the current weight matrix -> excitation into the next
hidden neurons.

All steps can be mapped to matrix multiplications
-> can run very fast on GPUs.
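The mini-batch trick above amounts to one matrix multiplication per layer. A minimal NumPy sketch (the layer sizes and the logistic activation are illustrative assumptions, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

batch, n_in, n_hidden = 128, 784, 250    # assumed sizes (mini-batch of 128)
X = rng.standard_normal((batch, n_in))   # input matrix: one example per row
W = rng.standard_normal((n_in, n_hidden)) * 0.01  # current weight matrix

def f(z):
    """Neuron activation (logistic sigmoid, one common choice)."""
    return 1.0 / (1.0 + np.exp(-z))

# A single matrix multiply computes the excitations of ALL hidden
# neurons for ALL examples in the mini-batch at once -- this is the
# step that maps so well onto GPU hardware.
H = f(X @ W)
print(H.shape)
```

The same pattern, applied layer after layer (and again for the backpropagated corrections), is what keeps a GPU busy during training.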

Multiply-accumulate: in GPU matrix-mult, but then move data

[Figure: input neurons x_1^A ... x_528^A feed through weights w_ij
into a neuron of layer B, computing x_j^B = f( sum_i x_i^A * w_ij )]

The GPU spends time & energy transferring data to & from
its on-board DRAM.

Multiply-accumulate: NVM compute w/ physics, at the data

[Figure: crossbar array of NVM devices (with selector devices) between
neuron layers N1 ... Nn and M. Each weight w_ij is stored as a
conductance pair (G+, G-); each device contributes a current
I = G+ * V(t) or I = G- * V(t), and the column wires sum them:
I = sum_i G_i+ * V_i  -  sum_i G_i- * V_i,
so that x_j^B = f( sum_i x_i^A * w_ij ) is computed in the analog domain]

By reading all the NVM devices along a column (or a row)
in parallel, we perform the multiply-accumulate AT the data
-> analog, in-memory, neuromorphic computing.
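The column-parallel read can be simulated in software. A minimal sketch, assuming each signed weight is encoded as the difference of two non-negative conductances (the encoding and array sizes are illustrative; real devices add noise and nonlinearity):

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out = 528, 250           # crossbar dimensions (as in the figure)
W = rng.standard_normal((n_in, n_out)) * 0.1   # target signed weights

# Encode each signed weight as a pair of non-negative conductances,
# w_ij ~ G+_ij - G-_ij (one simple encoding choice).
G_plus = np.maximum(W, 0.0)
G_minus = np.maximum(-W, 0.0)

x = rng.standard_normal(n_in)    # neuron activations applied as voltages

# Ohm's law + Kirchhoff's current law: the current collected on each
# column is I_j = sum_i (G+_ij - G-_ij) * V_i -- the multiply-accumulate
# happens in the analog domain, AT the data.
I = x @ G_plus - x @ G_minus

# Sanity check against the ordinary digital matrix-vector product:
assert np.allclose(I, x @ W)
```

In hardware the two sums are physical currents on two wires, so no weight ever has to be moved to a separate compute unit.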

IBM TrueNorth chip
- 4096 cores (of 256 axons x 256 neurons)
- 1 million neurons, 256 million synapses
- 5.4 billion transistors
- highly modular
- asynchronous spikes for communication
- 70 mW power -> very power efficient!!
- chip exists!
- performs forward evaluation of DNNs, not training
- needs (at least one) killer app

NVM-for-on-chip-learning
- killer app: compete against the GPU for deep-NN learning
- chip does not exist:
  we're trying to decide if it makes sense to build one (anytime soon)
- we need to be sure it would be
  lower power than a GPU,
  faster than a GPU,
  with the SAME accuracy as a GPU

Backpropagation:
a local learning rule within a global architecture

[Figure: network with inputs x_1^A ... x_528^A, hidden neurons
x_1^B ... x_250^B, weights w_ij, and 10 outputs. The NN results y_k
are compared to the correct answers g_k, giving output errors
d_k = y_k - g_k  (k = 1 ... 10)]

The weight update is local:
Dw_ij = h * x_i * d_j    (h = learning rate)

But we want something closer to what the brain does.
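The local update Dw_ij = h * x_i * d_j is just an outer product of the upstream activations with the output errors. A minimal sketch for the output layer (the linear output, one-hot target, and learning rate are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

n_hidden, n_out = 250, 10            # sizes from the slides' figure
eta = 0.01                           # learning rate (assumed value)
W = rng.standard_normal((n_hidden, n_out)) * 0.01

x = rng.random(n_hidden)             # activations of the upstream neurons
y = x @ W                            # network outputs (linear, for brevity)
g = np.zeros(n_out); g[3] = 1.0      # one-hot "correct answer"

delta = y - g                        # error at each output: d_k = y_k - g_k

# The update for EVERY weight uses only its two endpoints:
#   Dw_ij = -eta * x_i * delta_j   (local rule, written as an outer product)
W -= eta * np.outer(x, delta)
```

Locality is what makes the rule attractive for crossbar hardware: each device sees only the activation on its row and the error on its column.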

Spike-Timing Dependent Plasticity: a brain-inspired
local learning rule, but where's the global architecture?

[Figure: the same network as on the backpropagation slide, but the
path from the output errors d_k = y_k - g_k back to the weights w_ij
is unclear ("??" in place of the backpropagated corrections); instead,
the change in synaptic conductance DG depends on relative spike timing]
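STDP is commonly modeled with an exponential timing window: potentiate when the presynaptic spike precedes the postsynaptic one, depress otherwise. The slides give no formula, so the window shape and constants below are assumptions (a standard textbook model):

```python
import numpy as np

# dt = t_post - t_pre. If dt >= 0 (pre fires first), potentiate;
# if dt < 0 (post fires first), depress.
A_plus, A_minus = 0.01, 0.012      # assumed amplitudes
tau_plus, tau_minus = 20.0, 20.0   # assumed time constants (ms)

def delta_G(dt_ms):
    """Change in synaptic conductance vs. spike-timing difference (ms)."""
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

# Causal pairing (pre 10 ms before post) strengthens the synapse;
# anti-causal pairing weakens it.
print(delta_G(10.0), delta_G(-10.0))
```

Note that this rule is entirely local in time and space, which is exactly why the slide asks where the global architecture comes from.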

Machine Intelligence based on sequences of
Sparse Distributed Representations

INPUT: spatial-temporal data streams of any kind
OUTPUTS: predictions, context, stable concepts (SDRs), motor commands

Context-Aware Learning:
a potential path to handling temporal, unlabelled data
(inspired by Jeff Hawkins' Hierarchical Temporal Memory)
Maybe a path to machine intelligence?

Contact: winfriedwilcke@us.ibm.com

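A defining property of Sparse Distributed Representations is that similarity is measured by the overlap of a few active bits, and that this overlap survives noise. A minimal sketch (the vector size and ~2% sparsity are typical HTM-style choices, assumed here, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)

n_bits, n_active = 2048, 40      # assumed SDR size and sparsity (~2%)

def random_sdr():
    """A sparse binary vector with exactly n_active bits set."""
    v = np.zeros(n_bits, dtype=bool)
    v[rng.choice(n_bits, size=n_active, replace=False)] = True
    return v

a = random_sdr()
b = random_sdr()

# Overlap (count of shared active bits) is the SDR similarity measure;
# two unrelated SDRs overlap in almost no bits.
overlap_ab = int(np.count_nonzero(a & b))

# A noisy copy of `a` (drop 5 active bits, add 5 random ones) still
# overlaps strongly with the original -- SDRs are robust to noise.
noisy = a.copy()
noisy[rng.choice(np.flatnonzero(noisy), size=5, replace=False)] = False
noisy[rng.choice(np.flatnonzero(~noisy), size=5, replace=False)] = True
overlap_noisy = int(np.count_nonzero(a & noisy))

print(overlap_ab, overlap_noisy)
```

This robustness to unreliable bits is the same "redundancy through learning" theme from the opening slide.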

Von Neumann architecture: aspects we're likely to miss (a LOT)

1) Programmable -> adaptable
   [Figure: CPU <-> BUS <-> Memory]

2) Great co$t model:
   design 1 piece of hardware,
   sell it to LOTS of people for vastly different purposes

3) Modularity of design
   [Figure: nested Specifications -> Requirements blocks]

Some imperfections are OK -> great!
But basic engineering, like identifying how many would be
too many imperfections, will not be easy.

[Figure: zoo of deep-learning architectures trained on big data sets;
www.datasciencecentral.com/profiles/blogs/concise-visual-summary-of-deep-learning-architectures]

Opportunities in brain-inspired computing

Deep Machine Learning:
solving a specific task on labeled data by
defining & optimizing an objective function

- Forward inference engines
  on smartphones
  for TrueNorth? (spikes for low-power communication)
- On-chip learning
  if the peripheral circuitry supports massive parallelism -> speed
  if the NVM devices support linear conductance change -> accuracy

Machine Intelligence:
flexible systems that continuously learn from
unlabeled data, and that perform (motor)
actions, predict consequences of those
actions, and then plan ahead to reach goals

- Brain-inspired computing
  for helping to understand the brain
  for computation:
    STDP (e.g., spikes for learning): needs a global architecture
    HTM/CAL (e.g., based on Sparse Distributed Representations)
  Need to show: generalization, scalability, (at least one) killer app
