Here is today’s Q&A. #QandAwithQIAOFENG #AMD #NVIDIA

Question: Does AMD’s MI300 outperform the H100 in benchmarks? And does AMD have the potential to increase its market share in the AI chip sector?

Answer: Simply put, AMD is indeed becoming a significant threat to NVIDIA, but even if that threat grows, it is unlikely to shake NVIDIA’s position in AI hardware infrastructure over the next decade. The competition may well end with AMD growing its market share from a fraction of a percent to around 5%, with NVIDIA retaining the rest.

Recent industry tests have indeed shown that the MI300, with its larger memory and higher bandwidth, runs the GPT-4 model more efficiently, especially when serving 32K context windows. But this is not because AMD’s product design is more advanced or innovative; it is simply because the MI300 is the later chip. The MI300 began shipping in volume in the third quarter of 2023, while its competitor, the H100, started a year earlier in 2022. A chip that arrives a year later naturally carries improved specifications, and that is where the difference lies. By the same logic, once NVIDIA’s next-generation H200 ships, AMD will still be selling the current MI300, and its performance will fall well behind NVIDIA’s.

When deploying large-scale data centers and compute clusters today, the decision rests on far more than a ~20% performance difference: stability, ease of use, and migration costs all weigh in. NVIDIA, with nearly two decades of investment in CUDA and tight software-hardware integration, holds a strong position. Customers also understand that the current gap exists only because AMD’s chip is newer, and they expect NVIDIA’s next-generation H200 to pull ahead again. A mere 20% performance lead is therefore not enough to win customers, given the high cost of switching.
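To see why larger on-package memory matters specifically at 32K context, consider the KV cache that transformer inference must hold per sequence. The sketch below uses hypothetical architecture numbers for an illustrative large dense model (GPT-4’s actual architecture is not public), but the arithmetic itself is standard:

```python
def kv_cache_gib(num_layers, num_heads, head_dim, context_len,
                 batch=1, bytes_per_elem=2):
    """Rough KV-cache size in GiB for a dense transformer.

    The factor of 2 covers both keys and values; bytes_per_elem=2
    assumes fp16/bf16 storage.
    """
    total_bytes = (2 * num_layers * num_heads * head_dim
                   * context_len * batch * bytes_per_elem)
    return total_bytes / 2**30

# Hypothetical 70B-class dense model: 80 layers, 64 heads of dim 128,
# fp16 cache, one sequence at 32K context.
print(kv_cache_gib(80, 64, 128, 32_768))  # -> 80.0
```

Under these assumed numbers, a single 32K-context sequence already consumes about 80 GiB of cache, which is roughly the entire 80GB HBM of an H100 but well under the MI300X’s 192GB, leaving room for weights and batching. That is the mechanical reason a bigger-memory, later chip looks better in long-context tests.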
Only if AMD’s products outperform NVIDIA’s for two or three consecutive generations, creating a stable expectation that switching is still cost-effective despite the learning costs, migration costs, and risks, will AMD capture a larger share of the compute-card market. That possibility is very low. Even Intel’s recently hyped Falcon Shores, with 288GB of memory and 9.8TB/s of bandwidth, faces the same challenge.

Low as the odds are, certain special events could give NVIDIA’s competitors a bigger opening: shifts in the dominant AI model architecture. For instance, if a structurally different model such as I-JEPA were to displace large language models, and AMD bet on it before the switch by building hardware tailored to the I-JEPA algorithm, it could show a dominating advantage on the first wave of new models, perhaps outperforming competitors by three or four times. Such a scenario is not impossible.