Chips
Nvidia has firmly staked a claim to a big part of the self-driving car market of the future. With the company's new Tegra X1 mobile superchip, Nvidia has the potential to bring deep neural network technology to your vehicle.
Nvidia CEO Jen-Hsun Huang took the stage at CES on January 4 to unveil the company's new mobile chip, the Tegra X1. With 256 GPU cores and eight CPU cores, Huang touts it as the first mobile "superchip." Nvidia plans for the X1 to serve as the graphics and artificial-intelligence engine for self-driving cars.
The Tegra X1 includes a 256-core “Maxwell” GPU, the same architecture that Nvidia launched last year. Maxwell powers the GTX 980 and GTX 970 chips that the company launched this fall. But the new X1 also includes an 8-core, 64-bit Denver CPU. All told, the new X1 can process 4K video at 60 frames per second, using either the H.265 or VP9 video codecs.
The Tegra X1 is the first teraflop mobile supercomputer, equivalent to the fastest supercomputer in the world circa 2000, which boasted some 10,000 Pentium Pro processors, Huang said.
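As a rough sanity check of that teraflop figure (a hedged sketch, not an official Nvidia calculation: the 256-core count comes from the announcement, while the roughly 1 GHz clock and packed half-precision math are assumptions), the arithmetic works out to about one teraflop of FP16 throughput:

```python
# Back-of-the-envelope estimate of the X1's peak FP16 throughput.
# Assumptions (not from Nvidia's spec sheet): ~1 GHz GPU clock, one fused
# multiply-add (2 FLOPs) per core per cycle, two packed FP16 values per lane.
cuda_cores = 256           # Maxwell GPU cores, per the announcement
flops_per_fma = 2          # a fused multiply-add counts as two operations
fp16_per_lane = 2          # assumed FP16x2 packing per 32-bit lane
clock_hz = 1.0e9           # assumed ~1 GHz clock

peak_fp16 = cuda_cores * flops_per_fma * fp16_per_lane * clock_hz
print(f"~{peak_fp16 / 1e12:.2f} TFLOPS FP16")  # prints ~1.02 TFLOPS
```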
According to Darrell Boggs, a chip architect for Nvidia, the 64-bit “Denver” chip and the 32-bit version of the Tegra K1 share the same 192-core Kepler graphics core that helps give the K1 its performance. But the 64-bit Denver includes optimizations that push the number of instructions it can process per clock cycle to seven, versus just three for the 32-bit version.
Towards Self-Driving Car Intelligence
"The question is what are we going to do with all that horsepower? ... It turns out that the future car will have a massive amount of horsepower inside of it." |
Nvidia is poised to make an even harder play for the car, a platform with millions of potential upgrade targets, all waiting for better graphics and safety features that depend on intensive processing power. Eventually, those cars will drive themselves, and Nvidia wants to be the driver behind that virtual wheel.
“End to end platform all the way from the processor to the software,” Huang said of Drive CX and the Nvidia Studio software that powers it.
Huang said the Drive platform could be used to intelligently improve driver-assist features, which currently rely on radar, ultrasonic, and computer-vision technologies. Increasingly, those sensing approaches are being supplanted by camera-based systems, which are getting better and better at detecting objects in low light. Eventually, chips like the X1 will become the foundation for self-driving cars, complete with frequent software updates, Huang said.
“We imagine all these cameras around the car connected to a supercomputer inside the car,” Huang said.
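To make that idea concrete, here is a minimal, hypothetical sketch of the loop Huang describes: frames from a single camera streaming into an object detector. OpenCV stands in for the camera interface, and run_detector is a placeholder, not Nvidia's Drive PX software:

```python
# Conceptual sketch only: one camera feeding frames to a detection routine.
import cv2

def run_detector(frame):
    """Hypothetical stand-in for a neural-network object detector."""
    # A real detector would return (label, confidence, bounding_box) tuples.
    return []

cap = cv2.VideoCapture(0)          # first attached camera
while cap.isOpened():
    ok, frame = cap.read()         # grab one frame
    if not ok:
        break
    for label, confidence, box in run_detector(frame):
        print(label, confidence, box)
cap.release()
```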
Huang said the PX platform can detect and identify different kinds of objects, even different types of cars, including police cars. PX will also try to match objects—is that a pedestrian? Is that a speed sign?—against a database, Huang said.
Currently, Nvidia’s Drive PX architecture is only good enough to accurately detect about 80 percent of the objects it sees, according to the ImageNet Challenge benchmark. But Huang said Nvidia has tested the technology in the field, identifying speed-limit signs and even occluded pedestrians.
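For context, the ImageNet Challenge measures how often a network's top predictions match human-assigned labels across 1,000 object categories. A minimal sketch of that kind of classification, using a stock pretrained network and a hypothetical input image rather than anything from Nvidia's Drive stack, might look like this:

```python
# Illustrative ImageNet-style classification of a single frame.
# The model choice, file name, and preprocessing here are assumptions.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)   # stock ImageNet-trained network
model.eval()

image = Image.open("street_scene.jpg").convert("RGB")   # hypothetical frame
batch = preprocess(image).unsqueeze(0)                   # add batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch)[0], dim=0)

top5 = torch.topk(probs, 5)
for p, idx in zip(top5.values, top5.indices):
    print(f"class {idx.item()}: probability {p.item():.3f}")
```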
SOURCE PC World
By 33rd Square