AMD ($AMD) – CFO Interview – June 15, 2024

Third-Party Content. Provided for informational purposes only. Not investment advice or a recommendation to buy or sell any security. See disclosure here.

If there’s one thing the semiconductor industry loves, it’s constantly renaming products, leaving us an alphabet soup of acronyms to juggle. Fun, fun. Those acronyms all fall into neat categories: chips, networking and connectivity, and software. It’s these categories and AMD’s positioning within them that matter to investors, not whether they’ve memorized what an MI325 HBM3E chip is. That’s how we’ll frame this coverage. I’ll use Nvidia as the measuring stick, given its current technological lead and the fact that every question in this interview referenced Nvidia.

  • GPU: Graphics Processing Unit. A specialized circuit originally built to render images, now the workhorse for highly parallel workloads like AI training and inference.
  • CPU: Central Processing Unit. The general-purpose processor that executes application instructions and handles everyday data processing.

Chips:

AMD leadership is adamant that it continues to lead the neural processing unit (NPU) race for AI-infused personal computing. TOPS stands for Tera Operations Per Second, a measure of chip performance; more TOPS is better. AMD’s new Ryzen AI PC has 20% more TOPS than Microsoft’s own best unit. The PC includes a superchip with AMD’s latest CPU, GPU and NPU all in one. To them, TOPS superiority is imperative for running copilots and GenAI apps on personal computers with optimal latency, hallucination rates and performance. It’s how the firm claims to be a “leader in AI inference on the PC side.” The semiconductor space is a lot like the cybersecurity space in that vendors all think their technology is better than everyone else’s. In reality, they’re probably all right for specific use cases.
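To make the TOPS claim concrete, here is a minimal sketch of what a percentage advantage means in absolute throughput. The 20% figure comes from the interview; the 40-TOPS baseline is a hypothetical stand-in for the competing unit, purely for illustration.

```python
# Illustrative only: relating a relative TOPS claim to absolute throughput.
# The 20% uplift is from the interview; the 40-TOPS baseline is hypothetical.
def tops_with_uplift(baseline_tops: float, uplift_pct: float) -> float:
    """Absolute TOPS implied by a percentage advantage over a baseline."""
    return baseline_tops * (1 + uplift_pct / 100)

baseline = 40.0  # hypothetical competitor NPU, in trillions of ops/second
amd_tops = tops_with_uplift(baseline, 20)  # "20% more TOPS"
print(amd_tops)  # 48.0
```

The takeaway is simply that a fixed percentage lead compounds in value as the baseline requirement for on-device AI rises.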

Within AI data centers, AMD’s new CPUs “extend leadership in performance per watt and dollar significantly.” GPUs will be the main workhorse for high-performance compute workloads, but CPUs still work just fine for sequential, instruction-based tasks. And when a CPU can be used instead of a GPU, cost optimization routinely favors the CPU. For this reason, AMD continues to focus on boosting core counts (adding more processing power per CPU) rather than unit volumes here. CPU unit volumes are challenged, but CPU core volumes look a lot better.
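The cost-optimization logic above can be sketched with a toy calculation. Every price and throughput number below is invented for illustration; only the reasoning mirrors the text: for workloads that don’t parallelize well, a GPU’s modest speed advantage can’t offset its much higher hourly cost.

```python
# Hypothetical comparison: all rates and throughputs are made-up numbers.
def cost_per_million_tasks(hourly_rate: float, tasks_per_hour: float) -> float:
    """Dollar cost to complete one million tasks at a given throughput."""
    return hourly_rate / tasks_per_hour * 1_000_000

# A sequential, instruction-based workload that barely benefits from a GPU:
cpu_cost = cost_per_million_tasks(hourly_rate=3.0, tasks_per_hour=500_000)
gpu_cost = cost_per_million_tasks(hourly_rate=30.0, tasks_per_hour=800_000)

# The GPU is somewhat faster, but at 10x the hourly rate the CPU wins on cost.
print(cpu_cost, gpu_cost)  # 6.0 37.5
```

Flip the assumptions (a highly parallel workload where the GPU is 50x faster) and the conclusion reverses, which is why GPUs remain the workhorse for AI training.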

GPUs (and software) are where Nvidia is thought to have the largest, most defensible lead. Like Nvidia, AMD is shortening its platform release cadence from every few years to every 12 months. Later this year, its newest MI-series chip will feature “better memory and bandwidth than the competition.” Its 2025 release will boost those metrics by 35x and will “compete with Blackwell 200.” This is great to hear, but Nvidia will ship another chip after Blackwell next year that delivers another leap in efficiency. It’s very hard to catch Nvidia here, but AMD thinks it is making some headway. That Blackwell successor will be called Rubin, and AMD thinks it will reach Rubin-level performance in 2026, which is likely when Rubin revenue will start to ramp.

“We believe we are very competitive on the GPU side.”

CFO Jean Hu

Hybrid bonding and chip-on-wafer-on-substrate (CoWoS) packaging constraints continue to ease. Some bottlenecks remain and will likely last into the second half of the year.

Networking:

Nvidia has made rapid progress with its NVLink switches and Spectrum-X networking technology, building larger, faster interconnects between GPU clusters. AMD, meanwhile, is working with Microsoft, Meta, Google, AWS, Broadcom and Cisco on a new open standard framework for linking up to 1,000 GPUs. Spectrum-X can connect 10,000 GPUs.

AMD doesn’t want to develop all of this internally. It will lean on partners for networking (including Ethernet) to match the full-service suite that Nvidia offers. As we covered in the Broadcom section, there are other very formidable players here that can help AMD close the gap without building everything itself.

Software:

AMD’s new software stack release comes with a broad suite of models, tools and MI-chip integration help. Nvidia’s CUDA software suite has become immensely popular, and CUDA applications don’t run natively on AMD hardware; porting them requires a lot of manual work, which hinders adoption of AMD’s chipsets. AMD is pushing hard to lower the friction of porting CUDA applications to its own chip framework. CUDA creates vendor lock-in for Nvidia, and AMD needs to proactively overcome that lock-in with more intuitive back-end integrations.

Disclaimer: Third party content is provided for informational purposes only and should not be construed as an offer to sell or a solicitation of an offer to buy or sell any security. Third party content is not intended to serve as a recommendation to buy or sell any security and is not intended to serve as investment advice. Third party content creators are not affiliated with BBAE Holdings LLC, (“BBAE”) Redbridge Securities LLC (“Redbridge Securities”) or BBAE Advisors LLC (“BBAE Advisors”). All investments involve risk, including the possibility of total loss of principal. For additional important information, please click here.
