Photo caption: Elon Musk’s Tesla is developing its own self-driving chip rather than using someone else’s, such as Nvidia’s Drive Xavier.
Bombshells are a common occurrence in Tesla’s quarterly analyst calls, and the latest was no exception. As soon as CEO Elon Musk introduced the call, he turned the microphone over to members of his Autopilot team who announced that Tesla had spent three years developing a custom “neural network accelerator” chip that is now nearly ready to power its upcoming autonomous hardware suite.
According to Pete Bannon, Tesla’s director of Autopilot hardware engineering, Tesla already has drop-in chip replacements for the Model S, X, and 3. “The chips are up and working,” he says. “All have been driven in the field.”
If you are not neck-deep in the world of autonomous vehicle chip design, this may not seem like a big deal. But for people in the know, such as executives at chip makers Nvidia and Intel, Tesla’s announcement makes it clear the company thinks it can make bigger advances in self-driving cars on its own.
Tesla’s development of a new chip specifically for its self-driving hardware is the latest example of the firm’s commitment to vertical integration: the company makes many of its components in its own factories, down to the seats. Currently Tesla uses Nvidia Drive PX2 boards in its vehicles. Just two years ago, Musk hailed Nvidia’s boards as “basically a supercomputer in a car.”
“Nvidia’s complete platform is of course a powerful system, built to automotive grade, but it may not be perfect for what Tesla wants to use it for,” says Mike Ramsey, automotive research director at Gartner. “Probably more important, Elon and Tesla feel like they need to own this technology. If they think the chip vendors are slowing them down, or locking them into a certain architecture or into a long-term design from which they cannot easily escape, then building your own chip makes some sense.”
Bannon says the new chip is “a bottom-up design” optimized for the neural net algorithms that Tesla uses in its Autopilot driver-assistance system and in its long-promised “full self-driving” option. The chip is the third iteration of its Autopilot hardware, which his team designed.
By building the chip itself, Bannon says, Tesla can create self-driving hardware that is “dramatically more efficient and has dramatically more performance than what you can buy today.”
It’s unclear whether Tesla’s performance benchmarks are directly comparable with the rest of the industry’s. Musk says the new chip can process 2,000 frames of sensor data a second, versus 200 frames a second for the current Nvidia chip.
Nvidia says those claims are not fair. Danny Shapiro, senior director of automotive for Nvidia, says Musk is comparing Tesla’s chip with Nvidia’s 3-year-old Drive PX2.
“A more accurate comparison would have been to our current generation, Drive Xavier, which was designed from the ground up to be an autonomous vehicle processor,” Shapiro says.
Nvidia has become the leading source of processors for self-driving neural net algorithms, which run more efficiently on the firm’s graphics processing unit, or GPU, architecture than traditional central processing units, or CPUs. The company supplies Toyota, Volkswagen, Volvo, BMW, Daimler, Honda, Renault-Nissan, Bosch, Baidu and others.
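The efficiency edge GPUs hold here comes down to parallelism: a neural-net layer is dominated by large matrix multiplications, and each output row can be computed independently. A toy Python sketch (not code from any company mentioned here, with hypothetical layer sizes) shows the multiply-accumulate structure that GPUs and dedicated accelerators parallelize:

```python
# Toy illustration: a single fully connected neural-net layer is
# just a matrix-vector multiply plus a bias. Sizes are hypothetical.

def dense_layer(weights, bias, x):
    """y = W @ x + b, written out to expose the multiply-accumulate work."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# A 4-input, 3-output layer needs 4 * 3 = 12 multiply-accumulates.
# Every output row is independent of the others, which is why a GPU
# with thousands of cores computes them all at once while a CPU
# works through them a few at a time.
W = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 1]]
b = [0.5, 0.5, 0.5]
print(dense_layer(W, b, [1, 2, 3, 4]))  # [1.5, 2.5, 7.5]
```

Real networks stack thousands of such layers with far larger matrices, which is what the trillions-of-operations-per-second figures below are measuring.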
Musk said Tesla figured out what was slowing down the Nvidia Drive PX2 board: There was a bottleneck between the CPU and GPU.
But Shapiro said Nvidia already figured that out and saw a tenfold improvement in data processing when it tested the Drive Xavier boards. Bandwidth improved from 2 gigabytes per second to 20 gigabytes per second. Xavier is also Nvidia’s most efficient automated driving board to date, achieving 30 trillion operations per second, or TOPS, with just 30 watts, compared with Drive PX2’s peak of 24 TOPS at 150 watts. Tesla’s custom version of the PX2 produces between 8 and 10 TOPS.
Next year, Nvidia will make publicly available a board called Drive Pegasus, which combines two Xavier chips, each with an integrated current-generation Volta GPU, with two next-generation discrete GPUs and two deep-learning accelerators, for a staggering 320 TOPS at 500 watts.
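Shapiro’s efficiency claim can be sanity-checked from the figures quoted here: dividing throughput by power draw gives TOPS per watt, a common back-of-envelope efficiency metric for AI accelerators. A short sketch using only the numbers in this article:

```python
# Back-of-envelope efficiency from the figures quoted in this article.
# TOPS = trillions of operations per second.
boards = {
    "Drive PX2 (peak)": (24, 150),   # 24 TOPS at 150 watts
    "Drive Xavier": (30, 30),        # 30 TOPS at 30 watts
    "Drive Pegasus": (320, 500),     # 320 TOPS at 500 watts
}

efficiency = {name: tops / watts for name, (tops, watts) in boards.items()}
for name, tops_per_watt in efficiency.items():
    print(f"{name}: {tops_per_watt:.2f} TOPS per watt")
# Xavier comes out most efficient (1.00 TOPS/W, vs. 0.16 for PX2
# and 0.64 for Pegasus), consistent with Shapiro's claim.
```

Pegasus trades some efficiency for raw throughput, which fits Nvidia’s pitch of flexibility and headroom over per-watt optimization.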
“Our performance has gone up by more than a factor of 10, generation over generation,” Shapiro says.
Just as importantly, Shapiro says, Nvidia has been making sure that its performance gains don’t come at the expense of flexibility.
“Development of these neural nets is so new and is changing so rapidly … if you lock in a particular type of neural network you have no flexibility to take advantage of these innovations,” Shapiro said.
Nvidia’s archrival Intel takes an approach closer to Tesla’s, co-developing processors that are optimized for integrated software applications.
“We do software-hardware co-design,” says Jack Weast, Intel’s chief systems architect of autonomous driving solutions. “We let the needs of the software algorithm drive what goes into the hardware. You can do a much, much more efficient implementation of portions of an algorithm if you know what that algorithm is in advance.”
Intel’s recent acquisition of the Israeli automotive computer vision company Mobileye, whose EyeQ3 chip powered Tesla’s first generation of Autopilot hardware, gives Intel a significant head start, Weast says.
“Unlike some companies who are delivering their first deep-learning accelerator chip to market, we’re actually on our third generation,” he says. The latest chip, called EyeQ5, will start appearing in cars on the road in 2019. A recent Reuters report said the chip will be in as many as 8 million automated vehicles starting in 2021.
Intel and Nvidia’s different approaches highlight how divergent autonomous vehicle development strategies can be, with some automakers seeking an efficiently optimized hardware-software package like Intel’s, and others preferring the raw power and flexibility of Nvidia’s chips and boards.
The history of Tesla’s relationships with the two companies suggests it bridles at both approaches: it has publicly complained about the limitations of Mobileye’s relatively mature products as well as the relative inefficiency of Nvidia’s.
Perhaps the biggest question about Tesla’s move toward more specialized silicon is whether it has really reached a point of software maturity where it makes sense to start optimizing its hardware. And even if it has, there are questions about its ability to keep pace with the powerhouse firms that dedicate massive R&D budgets to continuously improving their offerings.
“Tesla is not a giant chip company,” Ramsey says. “Nvidia is spending billions of dollars investing in this technology, mostly subsidized by its incredibly healthy video game business. Intel, similarly, can pour massive resources into the design and validation of the chips. They both either own or have good relationships with huge chip manufacturers. Tesla is unlikely to save money and could produce a product that doesn’t perform as well in the field.”
Some scrappier startups are rethinking the way chips are placed in the vehicle, putting deep-learning chips near the sensors rather than in the centralized stack. Orr Danon, founder and CEO of one such company, Hailo Technologies, sees great opportunities for “fresh thinking about how we imagine a computer operating” in the autonomous vehicles of the future. But, he warns, there are challenges in trying to commercialize rapidly changing cutting-edge technologies prematurely.
“This is an exciting and essential step, but we all have to be aware that the road ahead to a stable technology is long, and do our best to understand how to make the overall path as smooth as possible,” he says.