Nvidia’s secret sauce is starting to worry regulators

Nvidia has had a remarkable ride in its journey to become a $3 trillion heavyweight in the AI industry. Now, authorities want to know whether it got there fairly.

French regulators are set to charge the Silicon Valley chip giant over concerns it has engaged in anti-competitive behavior, a Reuters report said, citing people familiar with the matter.

It follows developments last month involving the US Department of Justice and the Federal Trade Commission that could see Nvidia and other giants of the AI industry, such as Microsoft, face tough questions about how they wield their power in the market.

An Nvidia spokesman declined to comment for BI.

Nvidia has emerged as a dominant force in the generative artificial intelligence boom, as companies including OpenAI, Google, and Meta have all courted its billionaire CEO, Jensen Huang, to secure access to the chips the company specializes in, known as GPUs.

The demand is driven by the role of these GPUs in training AI models. In May, Nvidia gave its latest indication of how relentless demand has been as it revealed a 262% year-over-year increase in first-quarter revenue to $26 billion.

The company’s dominance was further strengthened last month as it narrowly overtook Microsoft to become the world’s most valuable company, with a market capitalization of about $3.34 trillion.

But while Nvidia’s hardware has drawn attention, regulators also seem keen to highlight the software side of its business: CUDA.

In its first opinion on the “competitive functioning” of the generative AI sector, published on Friday after launching an investigation in February, the French competition regulator raised concerns over the sector’s “dependence on Nvidia’s CUDA software.”

What is CUDA?



Nvidia’s CUDA software helps make its hardware easier to use.

Slaven Vlasic/Getty Images for The New York Times; Chelsea Jia Feng/BI



CUDA, which stands for “compute unified device architecture,” is a computing platform that Nvidia unveiled in 2006.

At the time, Nvidia’s GPUs were built to cater to the gaming market. They could process game graphics better than rivals’ chips thanks to a technique called parallel computing, which splits a job into many small pieces that the chip’s thousands of cores work on simultaneously.

But Nvidia wanted to expand the use of its GPUs beyond gaming to other types of computing tasks. This is where CUDA came in: a software layer that lets developers program those same chips for a much wider range of workloads.
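To make that concrete, here is a minimal, illustrative CUDA program (a sketch, not drawn from Nvidia’s materials) that adds two arrays on the GPU, with each of roughly a million threads handling one element; the kernel name, sizes, and values are arbitrary assumptions:

#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements: the parallel computing
// trick that CUDA exposes through ordinary C++-style code.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // about a million elements
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);       // unified memory visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(a, b, c, n);   // launch one GPU thread per element
    cudaDeviceSynchronize();                // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);            // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Developers write ordinary C++-style code like this, and CUDA’s compiler and runtime handle mapping it onto whatever Nvidia GPU is installed.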

It worked. CUDA today operates effectively as a plug-and-play system: no matter how diverse or complex a company’s AI workload is, CUDA makes Nvidia’s GPUs usable for it. How did Nvidia achieve this?

What makes Nvidia tick



Jensen Huang presents at Nvidia’s GTC conference.

Justin Sullivan/Getty Images



After Nvidia’s GTC conference in March, dubbed the “Woodstock of AI” by analysts, James Wang, a general partner at the VC firm Creative Ventures, wrote a blog post explaining why Nvidia’s unveiling of new GPUs mattered less to its success than CUDA.

He has several explanations for this.

For one, CUDA is adaptable. The software “continues to be forward and backward compatible,” even as new GPUs come out, Wang wrote in a Substack post.
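In practice, that compatibility shows up at build time: Nvidia’s nvcc compiler can pack native machine code for several GPU generations, plus PTX intermediate code that future GPUs can compile on the fly, into one “fat” binary. A minimal sketch of such a build command, with the source file name and target architectures as arbitrary assumptions:

# Native code for Volta (sm_70) and Ampere (sm_80), plus PTX
# (code=compute_80) that newer GPUs can JIT-compile at load time.
nvcc add.cu -o add \
    -gencode arch=compute_70,code=sm_70 \
    -gencode arch=compute_80,code=sm_80 \
    -gencode arch=compute_80,code=compute_80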

Wang also noted a bunch of “super nice tools,” supported by a dedicated community of CUDA developers. Simply put, these tools are designed and updated to make life easier for companies looking to use Nvidia chips.

“The reasons for Nvidia’s dominance are years and billions of dollars of investment in the CUDA ecosystem, evangelism and education of the community that builds AI,” Wang wrote.

While Huang has won credit within Silicon Valley for building a software system that gives Nvidia such a powerful competitive edge, others have tried to build rival offerings.

For example, Nvidia’s chip rival AMD, led by Huang’s cousin Lisa Su, operates a CUDA alternative called ROCm. However, it was released in 2016, 10 years after CUDA, and hasn’t gained a similar kind of traction.

For regulators, the question now is whether Nvidia achieved its dominance by unfairly locking companies that use its GPUs into CUDA.

As French regulators noted in their opinion on Friday, the software is “the only one that is 100% compatible with the GPUs that have become essential for accelerated computing.”
