AMD's Zen 5 chips pack in 8.315 billion transistors per compute die, a 28% increase in transistor count over Zen 4

AMD Ryzen processor (Image credit: AMD)

When AMD formally introduced its Zen 5-based Ryzen processors for desktops and laptops, it revealed a lot of details about the Zen 5 microarchitecture and the capabilities of these CPUs. However, it didn't disclose the actual die sizes and transistor counts of various Zen 5-based products, at least not to the U.S. press. In Germany, company representatives were kind enough to reveal this information to HardwareLuxx.

| Codename | Cores | Die Size | Transistor Count | Node | Transistor Density |
|---|---|---|---|---|---|
| Ryzen 7000 'Durango' | 8 Zen 4 | 71 mm^2 | 6.5 billion | 5 nm | 92.9 MTr/mm^2 |
| Ryzen 9000 'Eldora' | 8 Zen 5 | 70.6 mm^2 | 8.315 billion | N4P | 117.78 MTr/mm^2 |
| Hawk Point 1 | 8 Zen 4 | 178 mm^2 | ? | N4 (?) | ? |
| Hawk Point 2 | 2 Zen 4 + 4 Zen 4c | 138 mm^2 | ? | N4 (?) | ? |
| Strix Point | 4 Zen 5 + 8 Zen 5c | 232.5 mm^2 | ? | N4P | ? |

As it turns out, the 'Eldora' core complex die (CCD) used in AMD's Ryzen 9000-series processors packs 8.315 billion transistors, significantly more than the 6.5 billion of the 'Durango' Zen 4 CCD. Yet because Eldora is made on TSMC's N4P (a 4nm-class) fabrication process, whereas Durango is produced using TSMC's N5 (a 5nm-class) node, their die sizes are similar, at around 71 mm^2.
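
As a quick sanity check on those figures, transistor density is simply transistor count divided by die area. The minimal Python sketch below plugs in the numbers from the table above; all inputs come straight from the table, and the slight mismatch with the published 92.9 MTr/mm^2 figure for Durango comes down to how the roughly 71 mm^2 die size is rounded.

```python
# Back-of-the-envelope check of the CCD figures quoted above.
# All inputs come from the table; nothing here is measured independently.

def density_mtr_per_mm2(transistors: float, area_mm2: float) -> float:
    """Transistor density in millions of transistors per mm^2."""
    return transistors / area_mm2 / 1e6

durango = {"transistors": 6.5e9, "area_mm2": 71.0}    # Zen 4 CCD
eldora = {"transistors": 8.315e9, "area_mm2": 70.6}   # Zen 5 CCD

d_durango = density_mtr_per_mm2(**durango)  # ~91.5 (the table's 92.9 implies a die closer to 70 mm^2)
d_eldora = density_mtr_per_mm2(**eldora)    # ~117.8, matching the table

count_gain = eldora["transistors"] / durango["transistors"] - 1  # ~0.28 -> ~28% more transistors
density_gain = d_eldora / d_durango - 1                          # ~27-29%, depending on die-size rounding

print(f"Durango: {d_durango:.1f} MTr/mm^2, Eldora: {d_eldora:.1f} MTr/mm^2")
print(f"Transistor count up {count_gain:.0%}, density up roughly {density_gain:.0%}")
```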

AMD's Ryzen AI 300-series 'Strix Point' accelerated processing unit (APU) has a die size of 232.5 mm^2, massively larger than the 178 mm^2 'Hawk Point 1' die of the Ryzen 8040 series. This is understandable, though: both chips are made on 4nm-class manufacturing processes, but the new one packs four Zen 5 cores and eight Zen 5c cores along with 36 MB of L2+L3 cache, whereas the previous-generation APU integrates eight Zen 4 cores with 24 MB of L2+L3 cache.
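
To put that growth in perspective, the small sketch below compares the two APU dies using only the figures quoted above (and ignores everything else on the die, such as graphics and I/O): the die area grows by roughly 31% while core count and L2+L3 cache each grow by 50%.

```python
# Rough comparison of the two APU dies using only the numbers quoted above.
# Both dies also carry graphics and other IP, so this is only a coarse ratio.

hawk_point = {"area_mm2": 178.0, "cores": 8, "l2_l3_mb": 24}    # 8x Zen 4
strix_point = {"area_mm2": 232.5, "cores": 12, "l2_l3_mb": 36}  # 4x Zen 5 + 8x Zen 5c

area_growth = strix_point["area_mm2"] / hawk_point["area_mm2"] - 1    # ~0.31
core_growth = strix_point["cores"] / hawk_point["cores"] - 1          # 0.50
cache_growth = strix_point["l2_l3_mb"] / hawk_point["l2_l3_mb"] - 1   # 0.50

print(f"Die area +{area_growth:.0%}, cores +{core_growth:.0%}, L2+L3 cache +{cache_growth:.0%}")
```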

Since 2022, TSMC has been making chips on the N4P fabrication process for Apple and possibly some other customers, so its yields should be decent and performance variability should be relatively low. Therefore, the cost of a massive 232.5 mm^2 die should be manageable for AMD. Furthermore, since N4P is a performance-enhanced node in TSMC's 5nm/4nm family, Strix Point's power efficiency should be very high. Still, Strix Point will be the company's largest APU in many years, which will affect its cost.

With that in mind, it is not surprising that AMD is reportedly prepping a smaller Strix Point version with a lower core count aimed at mainstream and lower-end notebooks. This APU will combine lower cost with higher power efficiency than its predecessors.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • usertests
    It's interesting that the die area barely budged, despite the increase in transistor density. Effects of its new full AVX-512 support and similar changes?

I think AMD has a few obvious choices for its future mainstream core chiplets. Keep it at 8 cores but shrink it, potentially packing more of them on each socket, increase the CCX to 10-12 cores, or go to a 16-core dual-CCX design like Zen 4c.

    Core chiplet die areas could shrink if they manage to migrate all/most L3 cache off the chiplet and 3D stack it. Or they could use the area savings to include more cores. Maybe 70-75mm^2 is an optimal size.
    Reply
  • bit_user
    usertests said:
    It's interesting that the die area barely budged, despite the increase in transistor density.
    The Zen-era cores have traditionally been smaller than competing Intel cores, in transistor count. So, AMD had a bit more margin for increasing core complexity.

    usertests said:
I think AMD has a few obvious choices for its future mainstream core chiplets. Keep it at 8 cores but shrink it, potentially packing more of them on each socket, increase the CCX to 10-12 cores, or go to a 16-core dual-CCX design like Zen 4c.
    I expect the mainstream will probably go with hybrid composition. My reasoning is that a mix of core types would improve performance per area, while simultaneously reducing thermal density.
    Reply
  • usertests
    bit_user said:
    I expect the mainstream will probably go with hybrid composition. My reasoning is that a mix of core types would improve performance per area, while simultaneously reducing thermal density.
    If you mean a dual-CCX chiplet with (for example) 8x Zen cores and 8x ZenC cores, that might be a tough sell since AMD has been reusing the same core chiplets between desktop and server, and I doubt server customers are as interested in hybrid.

    However, rumors are pointing toward AMD being willing to make more types of chiplets. For example, "Standard", "Dense Classic", and "Client Dense" cores for Zen 6, and there was a rumor somewhere that Strix Halo would not use the exact same chiplets as desktop despite having Zen 5 cores (not 'C').
    Reply
  • bit_user
    usertests said:
    If you mean a dual-CCX chiplet with (for example) 8x Zen cores and 8x ZenC cores, that might be a tough sell since AMD has been reusing the same core chiplets between desktop and server, and I doubt server customers are as interested in hybrid.
    Yes, that's what I meant.

    Whether they do it probably has to do with whether their mobile CPUs stay monolithic. If they can share these chiplets between two markets, they don't necessarily have to be desktop + server. Even if they can't, probably the desktop market is big enough for them to consider it.
    Reply