Nvidia (NVDA) Earnings Review – May 22, 2024

Third-Party Content. Provided for informational purposes only. Not investment advice or a recommendation to buy or sell any security. See disclosure here.

Nvidia designs semiconductors for data center, gaming and other use cases. It’s widely considered the technology leader in chips for accelerated compute and Generative AI (GenAI) use cases. The following are important acronyms and definitions to know for this company:

  • GPU: Graphics Processing Unit. This is an electronic circuit used to process visual information and data.
  • CPU: Central Processing Unit. This is a different type of electronic circuit that carries out tasks/assignments and data processing from applications. Teachers will often call this the “computer’s brain.”
  • DGX: Nvidia’s full-stack platform combining its chipsets and software services.
  • Hopper: Nvidia’s modern GPU architecture designed for accelerated compute and GenAI. Key piece of the DGX platform.
    • H100: Its Hopper 100 Chip. (H200 is Hopper 200)
    • L40S: Another, more barebones GPU chipset based on Ada Lovelace architecture. This works best for less complex needs.
    • Ampere: The GPU architecture that Hopper replaces for a 16x performance boost.
  • Grace: Nvidia’s new CPU architecture designed for accelerated compute and GenAI. Key piece of the DGX platform.
    • GH200: Its Grace Hopper 200 Superchip with Nvidia GPUs and ARM Holdings tech.
  • InfiniBand: Interconnectivity tech providing an ultra-low latency computing network.
  • NeMo: Guided step-functions to build granular GenAI models for client-specific needs. It’s a standardized environment for model creation.
  • Cuda: Nvidia-designed computing and program-writing platform purpose-built for Nvidia GPUs. Cuda helps power things like Nvidia Inference Microservices (NIM), which guide the deployment of GenAI models (after NeMo helps build them).
    • NIM helps “run Cuda everywhere” — in both on-premise and hosted cloud environments.
  • GenAI Model Training: One of two key layers to model development. This seasons a model by feeding it specific data.
  • GenAI Model Inference: The second key layer to model development. This pushes trained models to create new insights and uncover new, related patterns. It connects data dots that we didn’t realize were related. Training comes first. Inference comes second… third… fourth etc.

Demand

  • Beat revenue estimates by 5.5% & beat guidance by 8.3%.
  • Beat data center revenue estimates by 6.6%.

Nvidia’s 3-year revenue compounded annual growth rate (CAGR) continues to accelerate: 66.2% this quarter, up from 64% last quarter and 56% two quarters ago.
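
As a quick refresher on the arithmetic, a 3-year CAGR is derived from trailing revenue like this. The figures below are illustrative placeholders, not Nvidia’s reported numbers:

```python
def three_year_cagr(revenue_now: float, revenue_3y_ago: float) -> float:
    """Compounded annual growth rate over a 3-year span."""
    return (revenue_now / revenue_3y_ago) ** (1 / 3) - 1

# Hypothetical example: revenue grows from $10B to $45.9B over 3 years,
# which works out to roughly a 66% annual growth rate.
print(f"{three_year_cagr(45.9, 10.0):.1%}")  # prints 66.2%
```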

Source: Brad Freeman – SEC Filings, Company Presentations, and Company Press Releases

Profits & Margins

  • Beat EBIT estimates by 10.8% & beat EBIT guidance by 12.9%.
  • Beat $5.18 GAAP EPS estimates by $0.80 & beat GAAP EPS guidance by $0.96.
    • GAAP and non-GAAP operating expense (OpEx) growth continued to lag revenue. Love it.
  • Beat 77% gross profit margin (GPM) estimates, and identical guidance, by 190 basis points (bps; 1 basis point = 0.01%). Rapid Y/Y GPM expansion was helped considerably by lower inventory charges.

Balance Sheet

  • $31.4B in cash & equivalents.
  • $9.7B in debt.
  • Share count is roughly flat Y/Y.
  • Raised its quarterly dividend from $0.04 to $0.10.
  • Announced a 10-for-1 stock split.

Q2 Guidance & Valuation

  • Revenue guidance beat estimates by 4.9%.
  • EBIT guidance beat estimates by 4.8%.
  • Implied EPS guidance of roughly $6.21 beat estimates by $0.21. This assumes share count is flat Q/Q. Nvidia only gives gross margin, OpEx, non-operating income and tax guidance, so we have to assume a share count to get to EPS.
  • It sees 40%-43% Y/Y OpEx growth. This compares very nicely to 86% Y/Y revenue growth expected by sell-siders.
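
Because Nvidia guides the individual line items rather than EPS itself, implied EPS has to be built up line by line with an assumed share count. A minimal sketch of that build, using made-up inputs (none of the numbers below are Nvidia’s actual guidance):

```python
def implied_eps(revenue_b: float, gross_margin: float, opex_b: float,
                non_op_income_b: float, tax_rate: float, shares_b: float) -> float:
    """Derive implied EPS from the line items a company actually guides.

    All dollar inputs are in billions; shares_b is billions of shares.
    """
    ebit = revenue_b * gross_margin - opex_b   # operating income
    pretax = ebit + non_op_income_b            # add guided non-operating income
    net_income = pretax * (1 - tax_rate)       # apply guided tax rate
    return net_income / shares_b               # assumed (flat) share count

# Hypothetical guidance: $28B revenue, 75% GPM, $4B OpEx,
# $0.3B non-operating income, 17% tax rate, 2.46B shares.
print(round(implied_eps(28.0, 0.75, 4.0, 0.3, 0.17, 2.46), 2))  # prints 5.84
```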

Nvidia trades for 36x forward EPS; EPS is set to grow by 95% Y/Y (probably closer to 100% Y/Y now). EPS is expected to grow by 26% Y/Y next year.

Call & Release

Data Center Demand:

As you can tell from the masterful results, demand for Nvidia’s GenAI infrastructure is off the charts. This can be directly seen in the data center portion of its business. Demand across all types of end customers and all geographies remains ahead of supply.

Why is demand so exceedingly strong? Because Nvidia remains materially ahead of everyone else in its ability to deliver large training, efficiency and total cost of ownership (TCO) edges. These large leads facilitate strong return on investment (ROI) for cloud providers. Specifically, every $1 spent on Nvidia’s AI infrastructure (including chips and Cuda) leads to $5 in GPU instance hosting revenue over a 4-year span for its customers. This means Nvidia chips facilitate a compelling payback period of under 1 year.
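
That payback claim is easy to sanity-check: $5 of hosting revenue per $1 of spend, spread over 4 years, recoups the dollar in well under a year. A quick sketch (the even-revenue assumption is mine, for simplicity):

```python
def payback_years(spend: float, revenue: float, span_years: float) -> float:
    """Years needed to recoup spend, assuming revenue accrues evenly."""
    revenue_per_year = revenue / span_years
    return spend / revenue_per_year

# $1 of Nvidia AI infrastructure -> $5 of GPU hosting revenue over 4 years.
print(payback_years(1.0, 5.0, 4.0))  # prints 0.8, i.e. under 1 year
```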

There’s another key edge here: vertical integration. Nvidia doesn’t just design chips on the Hopper platform or the newer Blackwell GPU platform. Its toolkit includes chips, servers, switches, networking and cutting edge software. It designs the entire next-gen data center layout so customers can enjoy the best of accelerated compute. Nvidia calls these data centers “AI factories.”

Cuda is the powerful software layer that frees Nvidia to constantly optimize every part of its stack. It’s always getting better. For example, this quarter it helped drive H100 inference performance boosts of 3x. By obsessively controlling every piece of its infrastructure, Nvidia drives faster improvement than others can. Because it doesn’t rely on third-party vendors, its overarching service is less siloed: it can observe bottlenecks in real time, optimize in real time and deliver tangible value to its clients in real time. It’s a somewhat similar idea to other software-only platforms I cover that create this same utility. Think ServiceNow, Salesforce etc.

“We know it works because we built all of it.” – CEO Jensen Huang

From every mega-cap tech name you can think of to every notable GenAI unicorn, Nvidia is dominating what it views as the “next industrial revolution.” It was instrumental in Tesla’s breakthrough autonomous driving software and in Meta’s breakthrough Llama models. The list goes on and on. Competition can try to catch up… but Nvidia’s rapid pace (or “rhythm”) of new platform iterations makes it extremely tough to catch all the way up. Everyone else is running a race from miles behind while Nvidia continues to sprint faster than they can.

Leadership was asked if this rapid improvement is demotivating clients to purchase large sums of H100 chips. If Amazon (coincidental example) knows something better is coming in 6 months, why not wait? Well, they can’t really wait. Large GenAI players are in an arms race to build out the infrastructure needed to power apps and use cases in the years ahead. According to Huang, waiting for the next iteration is all but guaranteeing you won’t lead this wave.

“We are seeing more demand for Hopper through the quarter. We expect demand to outstrip supply for quite some time as we transition to H200.” – Founder/CEO Jensen Huang

Inference continues to power 40% of overall data center revenue and should continue to represent the bulk of the opportunity. Models only need to be trained once, and then periodically seasoned with more data. Inference is constant. With this in mind, models are shifting from “information retrieval to question answering and skill generation.” This will require a lot more inference capacity. Enter Nvidia, with its unmatched ability to provide inference capacity in a cheaper and better manner. Good combo.

“We are fundamentally changing how computing works and what computers can do… We are manufacturing digital intelligence.” – Founder/CEO Jensen Huang

  • The auto industry is expected to be the largest enterprise vertical within data center revenue this year.

Chip Innovation:

Nvidia delivered the first H200 chip to OpenAI during the quarter. If you saw the OpenAI demo… none of that was possible without Nvidia. H200 has double the inference performance of H100.

DGX H200 Servers are Nvidia-built computer systems offering H200 chips, memory, storage and networking to clients in a neatly organized package. Meta is using these for its Llama models. Generally speaking, clients, on average, enjoy $7 in revenue over 4 years for every $1 spent on these servers. That means investments are profitable in a little over 6 months.

The GH200 chip is “shipping in volume.” Nine new supercomputers are using these chips for more “energy-efficient AI processing.” Nvidia is enjoying an 80% attach rate for GH200 chips in global supercomputers. Not a typo. It powers all 3 of the most efficient supercomputers in the world.

Its newest Blackwell GPU trains models 4x faster than H100, with 30x more powerful inference. It lowers TCO by a full 25x thanks to things like air cooling, and now liquid cooling at scale too. Blackwell was intentionally built to make liquid cooling systems economically feasible. Less heat means less waste means less inefficiency. Simple enough. The Blackwell platform includes its newest networking products (discussed next). Blackwell shipments will begin this quarter.

Networking Revenue:

This product category allows data centers and apps to communicate with one another more seamlessly. Networking revenue and Nvidia chipsets work together like salt & pepper. They combine best-in-class model training/inference with better interoperability to… you guessed it… power better customer outcomes.

InfiniBand and Spectrum-X are the two products to focus on here. InfiniBand demand remains exceedingly strong. Q/Q revenue did tick down, but that was due to supply availability. Sequential growth will resume in Q2 as demand continues to run “well ahead” of supply. Spectrum-X is its new networking product for large-scale, Ethernet-only AI. It delivers 60% better networking performance vs. alternatives and is making its way into several large Nvidia GPU clusters. The company sees Spectrum-X ramping from zero to a multibillion-dollar annual revenue run rate within the year.

The Demand Runway:

The current debate for Nvidia is all about the length of this demand boom. It’s obvious that Nvidia is leading the boom, but how long will it last? A key hint here is its supply/demand dynamic. If things stay supply-constrained for longer, that should mean it can sell everything it makes, with strong pricing power.

H100 chips seem to be finding easier supply to meet still-robust demand. H200 chips, GH200 chips and early Blackwell orders point to supply constraints lasting “well into next year.” That commentary is highly important. It hints at rock-solid, high-margin demand enduring for the foreseeable future.

The semiconductor industry is and always will be cyclical, so any commentary Nvidia is willing to offer on longer-term visibility is highly valuable. And it’s encouraging when that commentary is this positive. There is a reason why Nvidia only offers concrete guidance for one quarter out: modeling multi-year demand amid unpredictable cycles is wildly difficult. This shows you how confident the company is in having more demand than it knows what to do with for at least another 18 months.

China:

Nvidia designed a less powerful version of the H100 chip, called the H20, to comply with new export restrictions on China. Nvidia has not secured the licensing needed to continue selling H100s or other, more powerful chips there. Data center revenue in that nation is understandably down considerably Y/Y… Nvidia just has so much excess demand elsewhere that this hit is not presently mattering much.

Sovereign AI:

Nvidia is quickly becoming the go-to partner for governments around the globe to embrace their own AI transformation. Nvidia can guide public sector clients better than others can thanks to the aforementioned vertical integration. This, according to leadership, paves the way to an unmatched ability to train geo-specific models on a government’s own data. Italy, France, Singapore and Japan were all highlighted as governments that are leaning on Nvidia.

Gaming & Professional Visualization (ProViz):

Reception for Nvidia’s newest gaming GPU (GeForce RTX Super GPU) was called “strong.” These are equipped with Cuda Cores to enable high-quality processing and general compute tools. They also come with Tensor Cores, which are designed for high-performance compute use cases. The Tensor Cores can optimize image clarity and latency on the fly. Nvidia and Microsoft delivered a new Windows optimization; it allows large language models (LLMs) to run at 3x the speed on these GeForce RTX GPUs.

Nvidia announced new application programming interfaces (APIs) to allow clients to run digital twin simulations. This facilitates a boatload of valuable use cases for testing and tinkering with applications in a zero-stakes environment. It lets clients rapidly iterate without worrying about breaking something. These digital twins are allowing a client to cut production cycles in half, with 40% lower defect rates.

  • Gaming revenue rose 18% Y/Y.
  • ProViz revenue rose 45% Y/Y.

Take

These results are like a picturesque sunset after a long week: breathtaking, pretty, warm and relaxing. Was this its smallest beat in several quarters? Yes it was. But was this still nothing short of elite? Also yes. I find it remarkable that this company, with this many analyst eyeballs on it, can continue to deliver explosive upside vs. expectations. Quarter after quarter. What else is there to say besides “great results”… again? For as long as this cycle lasts, it is blatantly obvious that Nvidia will lead the charge.

Disclaimer: Third party content is provided for informational purposes only and should not be construed as an offer to sell or a solicitation of an offer to buy or sell any security. Third party content is not intended to serve as a recommendation to buy or sell any security and is not intended to serve as investment advice. Third party content creators are not affiliated with BBAE Holdings LLC, (“BBAE”) Redbridge Securities LLC (“Redbridge Securities”) or BBAE Advisors LLC (“BBAE Advisors”). All investments involve risk, including the possibility of total loss of principal. For additional important information, please click here.
