
The New Intel: How Nvidia Went From Powering Video Games To Revolutionizing Artificial Intelligence


Aaron Tilley, Forbes Staff, December 20, 2016
http://www.forbes.com/sites/aarontilley/2016/11/30/nvidia-deep-learning-ai-intel/

Nvidia cofounder Chris Malachowsky is eating a sausage omelet and sipping burnt coffee in a
Denny's off the Berryessa overpass in San Jose. It was in this same dingy diner in April 1993 that
three young electrical engineers--Malachowsky, Curtis Priem and Nvidia's current CEO, Jen-Hsun
Huang--started a company devoted to making specialized chips that would generate faster and
more realistic graphics for video games. East San Jose was a rough part of town back then--the
front of the restaurant was pocked with bullet holes from people shooting at parked cop cars--and no one could have guessed that the three men drinking endless cups of coffee were laying
the foundation for a company that would define computing in the early 21st century in the same
way that Intel did in the 1990s.
"There was no market in 1993, but we saw a wave coming," Malachowsky says. "There's a
California surfing competition that happens in a five-month window every year. When they see
some type of wave phenomenon or storm in Japan, they tell all the surfers to show up in
California, because there's going to be a wave in two days. That's what it was. We were at the
beginning."
The wave Nvidia's cofounders saw coming was the nascent market for so-called graphics
processor units, or GPUs. These chips, typically sold as cards that video gamers plug into a PC's
motherboard, provide ultrafast 3-D graphics. Marketed under testosterone-drenched labels like
"Titan X" or "GeForce GTX 1080," these cards can cost up to $1,200, and two decades later they
still produce more than half of Nvidia's $5 billion in revenues.
But despite the surprising resilience of PC gaming (at Nvidia the segment grew 63% year-on-year
in its most recent quarter, even as the broader market for PCs tanked), it's not video games that
have Wall Street salivating over the firm. It's artificial intelligence. In a fascinating bit of silicon
serendipity, it turns out that the same technology that can conjure up a gorgeous alien
landscape or paint a picture-perfect explosion is also nearly optimal for the hottest area of AI:
deep learning. Deep learning enables a computer to learn by itself, without programmers having
to code everything by hand, and it's leading to unparalleled levels of accuracy in areas like
image and speech recognition.

Tech giants like Google, Microsoft, Facebook and Amazon are buying ever larger quantities of
Nvidia's chips for their data centers. Institutions like Massachusetts General Hospital are using
Nvidia chips to spot anomalies in medical images like CT scans. Tesla recently announced it
would be installing Nvidia GPUs in all of its cars to enable autonomous driving. Nvidia chips
provide the horsepower underlying virtual reality headsets, like those being brought to market by
Facebook and HTC.
"At no time in the history of our company have we been at the center of such large markets,"
Huang says in the company's headquarters in Santa Clara, California, where he's clad in his
trademark all-black outfit: black leather shoes, black jeans, black polo shirt and black leather
jacket. "This can be attributed to the fact that we do one thing incredibly well--it's called GPU
computing."
There are an estimated 3,000 AI startups worldwide, and many of them are building on Nvidia's
platform. They're using Nvidia's GPUs to put AI into apps for trading stocks, shopping online and
navigating drones. There's even an outfit called June that's using Nvidia's chips to make an AI-powered oven.
"We've been investing in a lot of startups applying deep learning to many areas, and every single
one effectively comes in building on Nvidia's platform," says Marc Andreessen of venture capital
firm Andreessen Horowitz. "It's like when people were all building on Windows in the '90s or all
building on the iPhone in the late 2000s.
"For fun," adds Andreessen, "our firm has an internal game of what public companies we'd invest
in if we were a hedge fund. We'd put all our money into Nvidia."
Nvidia's dominance of the GPU sector--it has more than a 70% share--and its expansion into
these new markets have sent its stock soaring. Its shares are up almost 200% in the past 12
months, and more than 500% in the past five years. Nvidia's market cap of $50 billion brings its
trailing earnings multiple to more than 40 times, among the highest in the industry. That
performance has generated a $2.4 billion fortune for Huang (fellow cofounder Malachowsky is
semiretired, while the third cofounder, Priem, left the company in 2003).
Those skyrocketing shares are part of the reason Nvidia is at the top of the semiconductor
industry on the inaugural Just 100 ranking of America's best corporate citizens. Developed in
conjunction with Just Capital, a firm founded by billionaire hedge fund investor and philanthropist
Paul Tudor Jones II, the Just 100 is the product of a survey of 50,000 Americans evaluating some
1,000 public companies based on how they treat their employees, customers and shareholders.
Of the ten areas considered, Nvidia performed significantly above average in worker pay and
benefits, product attributes and environmental impact. Its employee-friendly policies--including
generous vacation time, flex work hours and stress-management programs--have caused Nvidia
to score very well against its peers on Glassdoor, the anonymous workplace-review website
favored by job-hopping Silicon Valley techies. In a famously homogeneous industry Nvidia has
formal programs to increase the number of women and minorities in core engineering positions.
"I think about the company like it's a person, like it's a being," Huang says. "The culture of a
company is the genetic code of the company or the operating system of the company. If there's
anything I've learned at all about building companies, it's that culture is the single most
important thing."
HUANG ALWAYS KNEW his graphics chips had a lot more potential than just powering the latest
video games, but he didn't anticipate the shift to deep learning. Deep learning techniques (more
traditionally referred to as neural networks) are loosely inspired by how the brain works with
neurons and synapses. They've been around in academia since at least the 1960s, with major
advances made in the 1980s and 1990s. But there were always two factors preventing them
from taking off: the amount of data needed to train the algorithms and access to cheap, pure
computing horsepower.
The Internet solved the first problem--all of a sudden practically unlimited piles of data were at
everyone's fingertips. But the computing power was still out of reach.

Starting in 2006, Nvidia released a programming tool kit called CUDA that allowed coders to
easily program each individual pixel on a screen. A GPU simulates thousands of tiny computers
operating simultaneously to render each pixel. These computers perform a lot of low-level math
to render shadows, reflections, lighting and transparency. Before CUDA was released,
programming a GPU was an incredibly painful process for coders, who had to write a lot of low-level machine code. CUDA, which Nvidia spent years developing, brought the ease of
programming a high-level language like Java or C++ to GPUs. Using CUDA, researchers could
develop their deep learning models much more quickly and cheaply.
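To make that shift concrete, here is a minimal, hypothetical CUDA sketch (not drawn from the article, and not Nvidia's own code) of the kind of C-style program the toolkit made possible: a kernel that adds two arrays, with each of thousands of GPU threads handling a single element--the same many-tiny-computers-at-once model the article describes for rendering pixels. The names and sizes are illustrative, and it assumes a standard CUDA toolchain (nvcc).

// Illustrative only: a minimal CUDA program, not Nvidia's or the article's code.
// Each GPU thread adds one element of two arrays--thousands of tiny computations
// running simultaneously, the same pattern that powers deep learning math.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's global index
    if (i < n) c[i] = a[i] + b[i];                  // one thread, one element
}

int main() {
    const int n = 1 << 20;                 // about a million elements
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays.
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check it (expect 3.0).
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

Compiled with nvcc and run on any CUDA-capable card, the same pattern scales from adding arrays to the matrix arithmetic at the heart of deep learning models.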
"Deep learning is almost like the brain," Huang says. "It's unreasonably effective. You can teach it
to do almost anything. But it had a huge handicap: It needs a massive amount of computation.
And there we were with the GPU, a computing model almost ideal for deep learning."
A pivotal moment for the mass adoption of deep learning came at a 2010 dinner in a Japanese
restaurant in Palo Alto. Andrew Ng, a soft-spoken Stanford University professor, was there to
meet with Google (now Alphabet) CEO Larry Page and Sebastian Thrun, the genius computer
scientist who was then the head of Google X. Two years earlier Ng had written one of the first
academic papers on the effectiveness of applying GPUs to deep learning models. "Deep learning
was very unpopular in 2008," Ng says. "It was much sexier to think about algorithm tricks."
Thrun, who had developed some of the first practical self-driving vehicles, shared an office wall
with Ng at Stanford and the two scientists pitched Page on the idea of creating a deep learning
research group at Google. It made sense: Google's massive computing infrastructure was perfect
for building the world's largest neural network. Page agreed to the idea, and Google Brain was
born. The deep learning work done at Google Brain now pervades nearly every Google product,
especially search, speech and image recognition.
While Google was starting Google Brain, another researcher more than 2,500 miles away was
also tinkering with deep learning. In 2012 Alex Krizhevsky, then a Ph.D. student at the University
of Toronto, submitted some stunning research to the ImageNet competition, which teams enter
from all over the world to see how accurately their software can recognize objects and scenes in
images. From his bedroom, Krizhevsky had plugged 1.2 million images into a deep learning
neural network powered by two Nvidia GeForce gaming cards. His model was able to achieve
image-recognition accuracy never before seen, with an error rate of only 15%--a giant leap from
the previous year's rate of around 25%. Not only did Krizhevsky easily win the ImageNet
competition, but his results were also an instant hit in academia. (Krizhevsky and his former
University of Toronto professor both now work at Google.)
With such results coming out, deep learning started spreading like wildfire. In addition to Google,
forward-looking deep learning research projects started popping up at Microsoft, Facebook and
Amazon. Nvidia's decision to invest heavily in the underlying software ecosystem with CUDA was
a key enabler in this shift. "It was a substantial investment for many years," says Ian Buck, who
led the development of CUDA at Nvidia. "We're now clearly reaping the benefit from this long-term vision. Jen-Hsun committed to it for many years."
Nvidia has increasingly optimized its hardware for deep learning. It has taken its latest server
chip, the Tesla P100, and put eight of them into the DGX-1, a 3-foot-long, 5-inch-thin rectangular
container that Nvidia calls "the world's first AI supercomputer in a box." The $130,000 machine
delivers 170 teraflops of performance--on par with 250 conventional servers. In August Huang
personally delivered the first unit to Elon Musk and his San Francisco AI nonprofit, OpenAI.
HUANG'S COMPETITIVE SPIRIT has been evident since his earliest years. Born in Taiwan in 1963,
he ended up at a tiny boarding school for troubled youth in rural eastern Kentucky at the age of
10 while his parents were in the process of immigrating to the United States. It was a rough
place. His roommate was 17 years old, seven years Huang's senior, and was recovering from a
fight that had left him with seven stab wounds. Huang found an escape and an obsession in Ping-Pong. In 1978 he placed third in junior doubles at the U.S. Table Tennis Open championship, at
age 15.

Huang also fell in love with computers in high school and ended up studying computer science
and chip design at Oregon State University. There he met his future wife, Lori. After graduation,
they moved to Silicon Valley, where Huang started his first job designing processing chips at Intel
rival AMD. He continued his education, earning a master's in electrical engineering at Stanford in
1992. While at his next gig, at chipmaker LSI Corp., he met Malachowsky and Priem, both of
whom were working at Sun Microsystems.
Huang had just turned 30, and the three of them started fantasizing about starting a graphics
chip company. They saw a huge opportunity for advancements to be made in the rudimentary
graphics then available on PCs.
Nvidia's first chip, the NV1, was released in 1995 and cost $10 million to develop--money was
raised from Sequoia Capital and Sutter Hill Ventures. Unfortunately, the chip tried to do too many
things, and it failed to win many paying customers. Nvidia, just two years old at the time, nearly
went bankrupt and was forced to lay off half its employees, keeping about 40. But its third chip,
the RIVA 128, which was released in 1997, proved to be a breakout success. It was up to 400%
faster than any other graphics processor, and the company's survival was assured.
Still, that cycle--"me too" chips followed by ones that broke performance records--was the story
of the industry and Nvidia for the next decade and a half. Of the 70 GPU companies that existed
in the late 1990s, only Nvidia and AMD have survived.
In that time, Huang has managed to create a very happy workforce, reflected in its ranking on
the Just 100. Huang worries incessantly about his workers. Following a speech he gave during a
2015 conference on workplace diversity, he spoke to a handful of female Nvidia employees to
figure out what challenges they were facing in moving up the ladder. Parental leave was a big
one. Huang made moves to improve. Now new parents can take up to 22 weeks of fully paid
leave and another eight weeks of flexible time to transition back.
Huang ascribes a lot of employee happiness to the kind of work Nvidia is doing. Moving into
areas like deep learning has reinvigorated its workforce. "There has to be a connection between
the work you do and benefits to society," Huang says. "The work we do has to benefit society at
some level that's almost science fiction. We want to be able to advance the discovery for the
cure for cancer. That sounds incredible."
NVIDIA'S SUCCESS HAS not gone unnoticed. Virtually every major power in chips is suddenly
chasing the AI dream. A slew of startups are emerging with new types of deep learning chip
architecture. And the chip players aren't the only ones excited. Deep learning is so vital to the
future of the tech business that one of Nvidia's most important customers--which has never
before made its own chips--is now also a competitor: Google.
In May at its annual developer conference, Google announced it had built a custom chip called
the Tensor Processing Unit, which is tailor-made for TensorFlow, its deep learning framework.
Google said it had been equipping its data centers with these chips to improve its maps and
search results.
Similarly, another Nvidia customer, Microsoft, is now making its own chips for its data centers: a
custom chip called a field-programmable gate array (or FPGA), which can be reprogrammed after
it's manufactured and has proven useful for AI apps.
The semiconductor incumbent, Intel, seems especially terrified of the progress Nvidia is making.
After missing the boat on smartphones, it's desperate not to miss the next shift into deep
learning. Lacking its own state-of-the-art AI research, it has embarked on an acquisition spree,
recently buying two AI chip startups: Nervana, for a reported $400 million-plus in August, and
Movidius, for an undisclosed amount a month later. Last year, Intel plunked down $16.7 billion for
FPGA-maker Altera.
Intel is protective of its most profitable cash cow: the data center, where it has an almost
complete monopoly with 99% market share. Nvidia's current chips can't replace those Intel
processors--they simply accelerate them. But Intel would obviously prefer its customers use its
hardware exclusively. In 2017 Intel plans to launch a server chip optimized for deep learning, the new Xeon Phi processor. And with the technology acquired from the Nervana team, Intel boldly
claims, it can accelerate deep learning networks 100 times by 2020.
Nvidia's advantage is that it has a big head start on Intel, AMD and other rivals. But it can't relax.
For many years, it was alone in the field. Now the market is swarming. "I think Nvidia is in a good
position, and the odds are in its favor, but I wouldn't give it to them just yet," veteran tech
analyst Jon Peddie says. "There are too many people looking at the area."
"AI computing is the future of computing," Huang says. "So long as we continue to make our
platform the best platform for AI computing, I think we're going to have a good shot of winning
lots of business. GPUs will be all over companies."
Still, Huang has inherited a certain amount of the philosophy that Intel's longtime boss Andy
Grove extolled in his 1990s bestseller, Only the Paranoid Survive.
"I always think we're 30 days from going out of business," Huang says. "That's never changed.
It's not a fear of failure. It's really a fear of feeling complacent, and I don't ever want that to
settle in."
