Nvidia GeForce RTX 3090 Ti Officially Launches, Starting at $1,999
Some custom cards may eclipse $3,000
It's been a bit of a winding road to get here, but the GeForce RTX 3090 Ti officially launched today, with full specifications and pricing revealed about two months later than originally expected. If you're after maximum performance — "Damn the torpedoes, full speed ahead!" sort of thinking — the RTX 3090 Ti should now reign as the fastest option in our GPU benchmarks hierarchy, and possibly as the best graphics card for prosumer content creators who don't want to step up to Nvidia's A-series offerings (formerly Quadro).
So, where's the review? We're still awaiting our sample, as Nvidia elected not to seed reviewers with its Founders Edition. We should have an AIC partner card shortly, and we'll post a full review with the usual suite of benchmarks once it arrives — including some extra professional visualization (proviz) testing in content creation workloads. If you're mostly interested in gaming performance, take the GeForce RTX 3090 and add roughly 10% (Nvidia says the 3090 Ti is 9% faster overall), and you'll basically have the 3090 Ti.
While we wait for our card to arrive, here's a quick rundown of the official specs.
Graphics Card | RTX 3090 Ti | RTX 3090 | RTX 3080 Ti | RTX 3080 |
---|---|---|---|---|
Architecture | GA102 | GA102 | GA102 | GA102 |
Process Technology | Samsung 8N | Samsung 8N | Samsung 8N | Samsung 8N |
Transistors (Billion) | 28.3 | 28.3 | 28.3 | 28.3 |
Die size (mm^2) | 628.4 | 628.4 | 628.4 | 628.4 |
SMs | 84 | 82 | 80 | 68 |
GPU Cores | 10752 | 10496 | 10240 | 8704 |
Tensor Cores | 336 | 328 | 320 | 272 |
RT Cores | 84 | 82 | 80 | 68 |
Base Clock (MHz) | 1560 | 1395 | 1370 | 1440 |
Boost Clock (MHz) | 1860 | 1695 | 1665 | 1710 |
VRAM Speed (Gbps) | 21 | 19.5 | 19 | 19 |
VRAM (GB) | 24 | 24 | 12 | 10 |
VRAM Bus Width | 384 | 384 | 384 | 320 |
ROPs | 112 | 112 | 112 | 96 |
TMUs | 336 | 328 | 320 | 272 |
TFLOPS FP32 (Boost) | 40.0 | 35.6 | 34.1 | 29.8 |
TFLOPS FP16 (Tensor) | 160 (320) | 142 (285) | 136 (273) | 119 (238) |
Bandwidth (GBps) | 1008 | 936 | 912 | 760 |
TBP (watts) | 450 | 350 | 350 | 320 |
Launch Date | Mar 2022 | Sep 2020 | Jun 2021 | Sep 2020 |
Starting Price | $1,999 | $1,499 | $1,199 | $699 |
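For reference, the TFLOPS and bandwidth rows above are derived values, and they're easy to reproduce. Here's a quick Python sanity check, assuming the usual two FP32 operations per CUDA core per clock (one fused multiply-add), with bandwidth computed as bus width in bytes times the effective per-pin data rate:

```python
# Derive theoretical specs from the table's raw numbers.
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    # Each CUDA core retires one FMA (2 FP32 ops) per clock.
    return cuda_cores * 2 * boost_mhz / 1_000_000

def bandwidth_gbps(bus_width_bits: int, vram_gbps: float) -> float:
    # Bus width in bytes times effective per-pin data rate.
    return bus_width_bits / 8 * vram_gbps

for name, cores, boost, bus, vram in [
    ("RTX 3090 Ti", 10752, 1860, 384, 21.0),
    ("RTX 3090",    10496, 1695, 384, 19.5),
    ("RTX 3080 Ti", 10240, 1665, 384, 19.0),
    ("RTX 3080",     8704, 1710, 320, 19.0),
]:
    print(f"{name}: {fp32_tflops(cores, boost):.1f} TFLOPS, "
          f"{bandwidth_gbps(bus, vram):.0f} GB/s")
# RTX 3090 Ti: 40.0 TFLOPS, 1008 GB/s -- matching the table.
```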
Considering the past 18 months of extreme GPU shortages and inflated prices, you should definitely take the last line in the above table with a healthy serving of salt. There's a clear downward trend in recent graphics card prices, including an observed 25% drop in EU pricing during March, but we're not out of the woods just yet. Our latest mid-March eBay data for the US puts most of these extreme GPUs at around 30–50% over MSRP, except for the RTX 3080 (10GB), which is still floating closer to double its MSRP. The (*cough*) 'good' news is that with a much higher starting MSRP, actual RTX 3090 Ti prices may land a bit closer to Nvidia's hypothetical starting point — sort of like how the RTX 3080 Ti sits only 30% over MSRP, since its baseline was already priced over 70% higher than the 3080's.
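To make the MSRP-versus-street-price relationship concrete, here's a rough sketch; the premium multipliers are approximations pulled from the ballpark figures above, not exact quotes:

```python
# Ballpark street prices from MSRP and observed over-MSRP premiums
# (approximate mid-March 2022 eBay figures, not exact values).
msrp = {"RTX 3090": 1499, "RTX 3080 Ti": 1199, "RTX 3080": 699}
premium = {"RTX 3090": 0.40, "RTX 3080 Ti": 0.30, "RTX 3080": 1.00}

for card in msrp:
    street = msrp[card] * (1 + premium[card])
    print(f"{card}: ~${street:,.0f} street vs ${msrp[card]:,} MSRP")

# The 3080 Ti's premium looks tame partly because its baseline is high:
print(f"3080 Ti MSRP is {1199 / 699 - 1:.0%} above the 3080's")  # ~72%
```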
Moving past the pricing elephant in the room, there are some other eyebrow-raising items of note. We've long expected the memory to clock in at 21Gbps, and credible rumors indicate that's a major reason for the two-month delay in Nvidia spilling the beans on the 3090 Ti. The GPU also uses the fully armed and operational GA102 chip, sporting 84 streaming multiprocessors (SMs) and 10752 CUDA cores, with boost clocks 165MHz higher than the RTX 3090.
But there's a catch, and it's a pretty big one: The RTX 3090 Ti has a TBP (Total Board Power) rating of 450W, 100W higher than the 3090 and 3080 Ti. That's nearly a 30% increase in power use, which isn't too surprising given the higher boost clock and memory speed. So basically, Nvidia is pushing to the far right of the voltage/frequency curve and maxing out performance at the cost of higher power consumption. Considering the recent Nvidia Hopper H100 reveal, this could be a taste of things to come for the Ada / RTX 40-series graphics cards.
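For a sense of the efficiency cost, compare theoretical FP32 throughput per watt of board power using the table's figures (paper math only; real-world gaming efficiency will differ):

```python
# Theoretical FP32 throughput per watt of board power (table values).
cards = {"RTX 3090 Ti": (40.0, 450), "RTX 3090": (35.6, 350)}
for name, (tflops, tbp) in cards.items():
    print(f"{name}: {tflops / tbp * 1000:.0f} GFLOPS/W")
# RTX 3090 Ti: ~89 GFLOPS/W vs. RTX 3090: ~102 GFLOPS/W, roughly a
# 13% efficiency regression from riding the top of the V/F curve.
```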
What can you expect from the increased power, pricing, core counts, and clock speeds? As noted already, we don't have the card in hand just yet, but we do have benchmarks from all the other GPUs. In gaming performance, the RTX 3090 was only 2.4% faster than the RTX 3080 Ti overall, with a slightly larger 3.0% advantage if we focus purely on 4K. Even if we switch over to ray tracing games with our DXR benchmark suite, the 3090 was still only 2.9% faster than the 3080 Ti. There's a bigger gap between the RTX 3090 and RTX 3080, with the 3090 leading by 16% on average and by 20% at 4K, but there's not a lot of gas left in the GA102 tank, it seems, even when paired with the Core i9-12900K, currently the fastest CPU for gaming.
On paper, the 3090 has 4.4% more compute and 2.6% more memory bandwidth than the RTX 3080 Ti, along with 19.5% more compute and 23.2% more memory bandwidth than the RTX 3080. In other words, real-world performance scales pretty close to the paper specs, with compute mattering more than bandwidth. The RTX 3090 Ti in turn delivers 12.4% more theoretical compute and 7.7% more bandwidth than the 3090. At best, then, the 3090 Ti could be about 12% faster than the 3090, but in general we expect it to land closer to 10%, and the margin of victory will be even smaller in CPU-limited workloads.
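Those deltas come straight from dividing the theoretical figures in the table; here's a minimal sketch of the same arithmetic:

```python
# Relative theoretical uplift between cards (TFLOPS, GB/s from the table).
specs = {"RTX 3090 Ti": (40.0, 1008), "RTX 3090": (35.6, 936),
         "RTX 3080 Ti": (34.1, 912), "RTX 3080": (29.8, 760)}

def uplift(a: str, b: str) -> tuple[float, float]:
    # Percentage advantage of card a over card b in compute and bandwidth.
    (ta, ba), (tb, bb) = specs[a], specs[b]
    return round((ta / tb - 1) * 100, 1), round((ba / bb - 1) * 100, 1)

print(uplift("RTX 3090", "RTX 3080 Ti"))  # (4.4, 2.6)
print(uplift("RTX 3090", "RTX 3080"))     # (19.5, 23.2)
print(uplift("RTX 3090 Ti", "RTX 3090"))  # (12.4, 7.7)
```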
Note that because Nvidia isn't seeding reviewers with reference-clocked RTX 3090 Ti Founders Edition cards, there's a good chance that comparisons will be made using factory-overclocked cards that reach even higher levels of performance. That's likely only a 2–4% bump, but we can't help thinking the lack of reference card sampling was at least partly intended to make the custom 3090 Ti cards look a bit better. They're still a highly questionable value, basically bringing back Titan RTX levels of pricing without a few of the extra Titan features.
Designed for Content Creation
As with the RTX 3090 before it, Nvidia isn't pitching the RTX 3090 Ti primarily as a gaming GPU. Instead, it's a card designed for content creation. The extra VRAM matters most in intense content creation workloads, which often end up being pass/fail: either the card has enough memory for the job, or it fails outright. There's a reason the Nvidia RTX A6000 has 48GB of slower GDDR6 memory, for example. The 3090 Ti has half as much VRAM as the A6000, which limits it to models and workflows that stay under 24GB, but that's still double what the other consumer models offer.
Nvidia went so far as to provide a guide to testing "large memory workflows" on the RTX 3090 Ti. We're not opposed to that, but when the results of testing on GPUs with less than 24GB of VRAM end up with 'failed to run,' it's less about comparative benchmarking and more about portraying the extreme GPUs in the best light possible.
"Oh, you don't have an RTX 3090 Ti, 3090, or Titan RTX? Sorry, you can't do this particular task in this particular fashion." Again, that might be true, and it certainly can be relevant to content creators, but it's weird that these professional applications can't just run in a fallback-to-system-RAM fashion.
Anyway, if you have a need for a GPU that can handle 24GB VRAM workflows, the RTX 3090 Ti now supplants the RTX 3090 with better performance and a rather significant $500 bump in pricing. If you need even more VRAM, you'll have to step up to something like the Nvidia RTX A6000, which will have the added benefit of providing fully ISV-certified drivers for professional applications.
Can the RTX 3090 Ti Handle 8K Gaming?
Besides content creation, the other angle Nvidia is once again pushing for the RTX 3090 Ti is 8K gaming. Honestly, it's a bit ludicrous, as the slight bump in performance relative to the 3090 won't suddenly make 8K viable. Practically speaking, only games that support DLSS Ultra Performance mode (or some other form of upscaling) will reach higher framerates, along with older, lightweight games that happen to support 8K. If you're only after 30 fps, though, the card can probably manage quite a few games at medium detail settings.
Frankly, if you actually have an 8K display and you want to hook it up to a PC, go right ahead and buy the RTX 3090 Ti, because clearly you can afford it. We don't have access to an 8K display for testing purposes, but even 4K still proves a bit much for the RTX 3090 at maximum quality without some form of upscaling. Ten percent faster than "not fast enough" likely isn't going to make or break the card, and we're definitely a long way from 8K becoming anything close to mainstream. That's probably for the best, or at least something your wallet will greatly appreciate.
We'll have a full review of an RTX 3090 Ti card up in the near future, once we have a card we can put through its paces. Stay tuned.
Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.
Comments

-Fran-: Comparing the 3090 and now the TIE sibling to the "proper" Titan line is just hurting Nvidia itself. Talk about a self-inflicted wound... Until they release a driver that unlocks proper CUDA ops for the GA102 die, it will still be a "kiddies" toy. This is something Linus already reviewed back when the comparison was first made, and Nvidia, it seems, took the hint and didn't draw any more parallels.

Anyway, that's beside the point of this card. People who can make use of it will get it regardless. As someone who doesn't need this card, I can only be thankful that I don't. A $2K starting price is just too bonkers for something that isn't a pro card. But in benchmarks it'll do or die. Looking at the numbers, this thing will be at best 10% better than the 3090 while using 100W more and costing over ~30% more ($500 over MSRP, but probably more like $1K on the streets).

I'd also love to see "room" temperature tests. I'm really curious how much this thing raises room temperature, and how that in turn affects the card. We're getting to the point where where you live is going to matter, which is quite frankly bonkers.

Regards.

derekullo (replying to -Fran-): If you live near the Arctic Circle, you could get a gaming PC and a heater all in one.

-Fran- (replying to derekullo): Or if you live near a lake or sea, you have a free sauna room!

Regards.

Friesiansam (replying to -Fran-): Those gamers with a lot more money than sense, and a pathological need to boast that they have the best, will hoover up every 3090 Ti that's made, regardless of price.

JarredWaltonGPU (replying to -Fran-): Nvidia still did mention the Titan RTX in its Reviewer's Guide, rather than the 3090 -- which makes sense, as this is only a modest bump from the 3090.

SkyBill40: This card is, in a word, dumb. It should come as little surprise, though, as Nvidia is going to milk as much money as they can out of those chips. In my opinion, they'd be better off making more 3080 cards, seeing as that's where the bulk of the high-end market will buy. The minuscule differences between the 3090 and this card just don't justify the monetary outlay, even for e-peen braggarts.

-Fran- (replying to JarredWaltonGPU): That's shocking, TBH. I honestly thought they had remained silent because they got the hint... I guess not!

Regards.

TJ Hooker (replying to -Fran-): What exactly are you referring to here, FP64 (double precision) performance? Titans have always used the same drivers available to regular gaming cards.

-Fran- (replying to TJ Hooker): I had to go and refresh my memory, because I do remember the Titan being better at FP compute than the regular GeForce cards. That changed slightly with later generations, but the difference was made via driver locks, mainly. Linus already proved that, so I won't go there again. In certain aspects of compute, the Titan cards are just different, or at least they used to be.

So, in short, from what I can see, while they use the same driver suite, there are still differences in how they behave.

Regards.

TJ Hooker (replying to -Fran-): Yes, the FP64 performance is limited by drivers/firmware, but that isn't new. The majority of Titans have had compute performance (including FP64) equivalent to regular GeForce cards, or only slightly higher if they have a few extra cores.

The only consistent, defining characteristic of Titan cards I've found is that they cost more than all the regular GeForce cards at the time they're released. Every other definition I've seen for what constitutes a 'real' Titan (usually in contrast to the RTX 3090) is broken by at least one previous Titan.