Reviewer reports RTX 5080 FE instability — PCIe 5.0 signal integrity likely the culprit

GeForce RTX 5090 Founders Edition (Image credit: Nvidia)

Nvidia's GeForce RTX 5090 and RTX 5080 graphics cards are slated to hit shelves today. Initial performance reviews have been disappointing for both GPUs, and now YouTuber der8auer has reported issues with his review sample of the RTX 5080 FE, including boot failures and unexpected crashes when operating in PCIe Gen 5.0 mode. Digging into this particular problem, Igor's Lab suggests the cause might boil down to Nvidia's choice of a multi-PCB design for its Founders Edition models.

Cramming 575W of power into a dual-slot package for the RTX 5090 required some creative engineering. For starters, the RTX 5090 FE features three PCBs rather than one large board: one for the PCIe 5.0 x16 connector, one for the video outputs, and a main PCB hosting the GB202 package, GDDR7 memory, and power delivery circuitry. We suspect these modular boards are connected via ribbon cables so they don't interfere with cooling.

Der8auer's test bench featured the Asus ROG Crosshair X870E Hero and the Ryzen 7 9800X3D. For reference, this same setup had been used to benchmark GPUs like the RX 7900 XTX, RTX 4080, RTX 4090, and even the RTX 5090 with no issues whatsoever. At first, the RTX 5080 reportedly produced no display signal at all. Power-cycling and reseating the GPU multiple times finally got it to work, but even with the drivers installed the problem persisted, and the GPU went undetected after another reboot. After plenty more trial and error the card eventually booted, only to negotiate a remarkably slow PCIe x8 Gen 1.1 link.

After manually setting the PCIe configuration to x16 Gen 5.0 in the BIOS, and after yet more restarts, the GPU successfully ran in PCIe 5.0 mode, only to crash or freeze later in Valorant, PUBG, and Remnant 2. Problems like these have plenty of potential suspects: driver issues, improper BIOS settings, faulty components, you name it. However, switching to PCIe Gen 4.0 eliminated every one of them. Given that other GPUs worked fine in the same setup, the problem most likely lies with the RTX 5080 FE itself, and in particular its design.

A Little More Performance but a Lot More PCIe Issues - RTX 5080 FE Review - YouTube

Igor's Lab noted in his review of the RTX 5090 that signal integrity is crucial for Blackwell GPUs because they use PCIe 5.0, which doubles per-lane transfer rates to 32 GT/s. Common symptoms of PCIe connectivity problems include the system failing to initialize the GPU, unexpected crashes, and freezes: the same anomalies der8auer faced. The issue is especially apparent with riser cables, where you'll likely have to step down to PCIe 4.0 speeds for stability. The tri-PCB architecture of the Founders Edition in a way functions as a built-in riser and is suspected of degrading signal quality.

As it stands, this is just a theory and not a proven fact. However, if you end up facing the same problem, a simple fix is to force the GPU's slot down to PCIe Gen 4.0 in the BIOS, which reportedly incurs only a minor performance loss.
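If you want to check what link your card has actually negotiated, before or after changing the BIOS setting, Nvidia's NVML library exposes the current and maximum PCIe link generation and width. Below is a minimal Python sketch using the pynvml bindings (the nvidia-ml-py package); it's illustrative only and assumes the Nvidia driver and those bindings are installed.

# Minimal sketch (not from the article): report the PCIe link each Nvidia GPU has negotiated.
# Assumes the Nvidia driver and the nvidia-ml-py (pynvml) package are installed.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        # Current link state; note the link can drop to a lower generation at idle to save power.
        cur_gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
        cur_width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)
        # Maximum link the card and slot can negotiate together.
        max_gen = pynvml.nvmlDeviceGetMaxPcieLinkGeneration(handle)
        max_width = pynvml.nvmlDeviceGetMaxPcieLinkWidth(handle)
        print(f"{name}: PCIe Gen {cur_gen} x{cur_width} now, "
              f"up to Gen {max_gen} x{max_width} supported")
finally:
    pynvml.nvmlShutdown()

The same figures appear in nvidia-smi -q under its PCI section, so a card that has fallen back to something like x8 Gen 1.1 after a cold boot, as in der8auer's case, would be immediately obvious either way.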

Hassam Nasir
Contributing Writer

Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.

  • bit_user
    So, PCIe 5.0 ended up being worse than a pointless waste of money - it's downright harmful!
    Reply
  • hotaru251
    bit_user said:
    So, PCIe 5.0 ended up being worse than a pointless waste of money - it's downright harmful!
    more like design flaw same as the connector with 40 series.

    concept was fine just done in a way that failed.
    Reply
  • Gururu
    Yikes.
    Reply
  • alceryes
    bit_user said:
    So, PCIe 5.0 ended up being worse than a pointless waste of money - it's downright harmful!
    No. The PCIe 5.0 standard and design is fine.
    Current speculation is that NVIDIA's multi-PCB design introduces too much noise, messing with signal integrity.
    Reply
  • alceryes
    hotaru251 said:
    more like design flaw same as the connector with 40 series.

    concept was fine just done in a way that failed.
    The 40 series doesn't use a multi-PCB design for the PCIe connector.

    With the 50 series, the slot component (the part that fits into the motherboard) is not on the same PCB that houses the GPU core and VRAM. They are two different pieces with a type of FFC connecting them. Currently, it is theorized that this FFC allows for more noise introduction in the signaling. Too much to handle at times, apparently.

    If this turns out to be true, this is a VERY big deal as it is part of the core design for the Founder's Edition card.
    Reply
  • edzieba
    alceryes said:
    No. The PCIe 5.0 standard and design is fine.
    Current speculation is that NVIDIA's multi-PCB design introduces too much noise, messing with signal integrity.
The question is: is the 5080 within spec, or actually out of spec? If it's within spec but marginal when combined with motherboard traces designed assuming PCIe 4.0 operation, then the finger needs to be pointed elsewhere. Time for someone with the big grunty extreme-bandwidth 'scope that costs more than your house to check the eye patterns in order to find out.

The good news is the performance delta between PCIe 5.0 x16 and PCIe 4.0 x16 is basically nil, so no impact to actual users beyond the inconvenience.
    Reply
  • JarredWaltonGPU
    Note that this is an issue with one particular card and not endemic to all 5080 Founders Edition cards. Also, I'm not saying this will be the only card with an issue, just that it's probably more of a QA and testing thing rather than bad design. I guess we wait and see.

    My cards have been working fine (knock on wood), and there's certainly more potential for problems with three PCBs. Well, really it's just the two PCBs and the ribbon cable between them: the PCIe 5.0 slot connector, ribbon to the main PCB, and the GPU PCB. A crimped or damaged cable would obviously be one potential culprit.

    And naturally, the melting 16-pin 12VHPWR connectors on the 4090 started with just one instance. LOL. Would be very interesting if, over time, there are a bunch of failures or issues with the Founders Edition cards and PCIe 5.0 that don't crop up on the custom AIB designs!
    Reply
  • A Stoner
    Hopefully it is just a one off on that specific card. Is it widespread or just this one? If it is a design flaw, hopefully it can be fixed with some hardened cables replacing the current cables in the design.
    Reply
  • Eximo
    I'm curious if there are going to be waterblocks for the FE cards. Not likely to buy one, just interested how that problem would be solved.

    If I convince myself to get a 5070 Ti or something, likely slap a block on that.

    Been holding out for a big Intel card, just for fun.
    Reply
  • bit_user
    alceryes said:
    No. The PCIe 5.0 standard and design is fine.
    I didn't say the standard was bad. I meant including it in a consumer desktop machine was not only a pointless waste of money, but now it's causing actual problems.

    alceryes said:
    Current speculation is that NVIDIA's multi-PCB design introduces too much noise, messing with signal integrity.
Yeah, which isn't an issue at PCIe 4.0 speeds. So, the fact that Intel decided to reach for PCIe 5.0 (and AMD followed) just created a pitfall, and Nvidia walked right into it.
    Reply