I asked Huang to compare the GTC of eight years ago to the GTC of today, given how much of Nvidia’s focus has changed.
“We invented a computing model called GPU-accelerated computing, and we introduced it slightly over 10 years ago,” Huang said, noting that while AI has only recently come to dominate tech news headlines, the company was working on the foundation long before that. “And so we started evangelizing all over the world. GTC is our developers conference for that. The first year, with just a few hundred people, we were mostly focused on two areas: computer graphics, of course, and physics simulation, whether it’s finite element analysis or fluid simulations or molecular dynamics. It’s basically Newtonian physics.”
A lot can change in a decade, however, and Huang points to a few developments over the past 10 years that have shifted the landscape in which Nvidia operates.
“The first thing is that Moore’s Law has really slowed,” he said. “So as a result GPU-accelerated computing gave us life after Moore’s Law, and it extended the capability of computing so that these applications that desperately need more computing can continue to advance. Meanwhile, the reach of GPUs has gone far and wide, and it’s much more than computer graphics today. We’ve reached out into all fields — of course computer graphics, virtual reality, augmented reality — to all kinds of interesting and challenging physics simulations.”
But it doesn’t end there. Nvidia’s tech now resides in many of the world’s most powerful supercomputers, and the applications include fields that were once considered beyond the realm of modern computing capabilities. However, the train that Nvidia has been riding to great success recently, AI, was a later development still.
“AI is just the modern way of doing software.”
“Almost every supercomputer in the world today has some form of acceleration, much of it from Nvidia,” Huang told me. “And then there was quantum mechanics. The field of quantum chemistry is going quite well and there’s a great deal of research in quantum chemistry, in quantum mechanics. And then several years ago – I would say about five years ago – we saw an emergence of a new field in computer science called deep learning. And deep learning, combined with the rich amount of data that’s available, and the processing capability came together to become what people call the Big Bang of modern AI.”
This was a landscape shift that moved Nvidia from the periphery. Now, Nvidia’s graphics hardware occupies a more pivotal role, according to Huang – and the company’s long list of high-profile partners, including Microsoft, Facebook and others, bears him out.
GPUs really have become the center of the AI universe, though some alternatives, such as FPGAs, are starting to appear. At GTC, Nvidia has had many industry-leading partners onstage and off, and this year will be no exception: Microsoft, Facebook, Google and Amazon will all be present. It’s also a hub for researchers, and representatives from the University of Toronto, Berkeley, Stanford, MIT, Tsinghua University, the Max Planck Institutes and many more will also be in attendance.
GTC, in other words, has evolved into arguably the biggest developer event focused on artificial intelligence in the world. Nowhere else can you find most of the major tech companies in the world, along with academic and research organizations, under one roof. And Nvidia is also focusing on bringing a third group further into the mix: startups.
Nvidia has an accelerator program called Inception that Huang says is its AI platform for startups. About 2,000 startups participate, getting support from Nvidia in one form or another, including financing, platform access, exposure to experts and more.
Huang also notes that GTC is an event for different industry partners, including GlaxoSmithKline, Procter & Gamble and GE Healthcare. Some of these industry-side partners would previously have been out of place even at very general computing events. That’s because, unlike with the onset of smartphones, AI isn’t just changing how you present computing products to a user, but also what areas actually represent opportunities for computing innovation, according to Huang.
“AI is eating software,” Huang continued. “The way to think about it is that AI is just the modern way of doing software. In the future, we’re not going to see software that doesn’t continue to learn over time, perceive and reason, plan actions, and improve as we use it. These machine-learning approaches, these artificial intelligence-based approaches, will define how software is developed in the future. Just about every startup company does software these days, and even non-startup companies do their own software. Similarly, every startup in the future will have AI.”
Nor will this be limited to cloud-based intelligence, resident in powerful, gigantic data centers. Huang notes that we’re now able to apply computing to things where before it made no sense to do so, including to air conditioners and other relatively ‘dumb’ objects.
“You’ve got cars, you’ve got drones, you’ve got microphones; in the future, almost every electronic device will have some form of deep learning inferencing within it. We call that AI at the edge,” he said. “And eventually there’ll be a trillion devices out there: vending machines, every microphone, every camera, every house will have deep learning capability. And some of it needs a lot of performance; some of it doesn’t need a lot of performance. Some of it needs a lot of flexibility because it continues to evolve and get smarter. Some of it doesn’t have to get smarter. And we’ll have custom solutions for it all.”