Black Myth Wukong PC benchmarks: A tour de force for Nvidia's ray tracing hardware [Updated]
AMD and Intel GPUs will want to stick to rasterization rather than full ray tracing.
Black Myth Wukong, a feast for the eyes
Black Myth Wukong launched on August 20, 2024 for PC and console gamers, but if you want to see everything the game has to offer — graphically speaking — you'll want to play it on an Nvidia GPU. The game supports full ray tracing, often referred to as "path tracing" in Nvidia parlance, and as expected it's quite brutal in terms of GPU demands. But you don't need an RTX GPU or path tracing to enjoy the game, as even in pure rasterization mode it's quite beautiful. Our cohorts over at PC Gamer scored it an 87, if you're wondering whether the game is any good.
We're more interested in seeing how it runs on some of the best graphics cards, and we've tested most of the latest generation Nvidia, AMD, and Intel GPUs to see how they stack up. There's also a standalone Black Myth benchmarking tool, which uses the same built-in benchmark as the main game, so that's helpful if you want to check performance.
[Update: We've added three RTX 30-series and three RX 6000-series GPUs to the charts. The text has not been updated to reflect the older GPUs' performance, but in general the older cards are slower compared to the current models. That's 23 GPUs in total tested now, which seems like a good stopping point. Sound off in the comments if you'd like to see anything we've skipped.]
Intel Core i9-13900K
MSI MEG Z790 Ace DDR5
G.Skill Trident Z5 2x16GB DDR5-6600 CL34
Sabrent Rocket 4 Plus-G 4TB
be quiet! 1500W Dark Power Pro 12
Cooler Master PL360 Flux
Windows 11 Pro 64-bit
GRAPHICS CARDS
Nvidia RTX 4090
Nvidia RTX 4080 Super
Nvidia RTX 4070 Ti Super
Nvidia RTX 4070 Super
Nvidia RTX 4070
Nvidia RTX 4060 Ti 16GB
Nvidia RTX 4060 Ti
Nvidia RTX 4060
AMD RX 7900 XTX
AMD RX 7900 XT
AMD RX 7900 GRE
AMD RX 7800 XT
AMD RX 7700 XT
AMD RX 7600 XT
AMD RX 7600
Intel Arc A770 16GB
Intel Arc A750
We're glad to see a full complement of supported technologies from the game, with DLSS 3.7.1 upscaling and frame generation alongside FSR 3.1 upscaling and frame generation, plus XeSS 1.3 upscaling for good measure. There still appear to be some rendering issues with FSR, however, as we noticed far more ghosting and artifacts than with DLSS. We've opted to test with 67% scaling in all cases, as we feel most gamers will be better served by the higher performance that offers compared to native resolution, even if there's the occasional loss in image fidelity.
For this initial look at how Black Myth Wukong runs on PC, we've used our standard GPU test PC, which consists of an Intel Core i9-13900K Raptor Lake CPU, 32GB of DDR5-6600 memory, and a 4TB Sabrent Rocket 4 Plus-G SSD for storage. Then we've tested most of the current generation AMD, Nvidia, and Intel graphics cards, using the latest drivers from the respective companies. We have preview Nvidia 560.87 drivers — it's an Nvidia-promoted game, if that wasn't clear — which have the same game optimizations as the public 560.94 drivers. We're also using AMD 24.7.1 and Intel 5971 drivers, though AMD's drivers are not "game ready" for Black Myth Wukong.
We aren't testing every current gen GPU, choosing instead to skip the Nvidia RTX 4080 and RTX 4070 Ti, replacing them with their newer Super variants. Testing is also ongoing, as it takes quite a while to get through all the settings we want to look at for each card, so we'll be adding some of the missing data over the coming day or two, and we may add some previous generation GPUs as a reference point as well.
We're testing with the medium preset at 1080p, again with 67% scaling manually dialed in — the game always seems to drop that one point to 66% scaling after you exit the menu (and will drop from 66% to 65% if you start there, FYI). We also test with the 'cinematic' preset at 1080p, 1440p, and 4K, again with 67% scaling manually dialed in. Nearly all of the testing has been done without frame generation, simply because we find that feature to be more of a marketing item than something that truly improves the overall gaming experience, though we do have one chart where we enabled framegen just to show how the game runs in that mode.
Our baseline testing runs in pure rasterization mode (i.e. using Unreal Engine's Lumen), for what will become obvious reasons. Then we use the same settings as before, except with the "Full Ray Tracing" option turned on: the low RT preset combined with the medium quality preset, and the very high RT setting combined with the cinematic preset at 1080p, 1440p, and 4K. We also test with maxed out 1080p settings with frame generation enabled as one final data point, to see how that affects "performance" — or at least the number of generated frames delivered to your monitor.
We'll have screenshots and a discussion of image fidelity at the various settings later, but it's easy to dismiss the full RT option at first. It's a bit of a wash in terms of what it does to the visuals at the low and medium settings, while the very high option kills performance — particularly on non-Nvidia GPUs. It does improve the visuals of the game, adding a lot of details, but it's very much a feature designed for those with at least an RTX 4070 or faster GPU.
Black Myth Wukong Medium GPU performance
But let's start with the rasterization performance. Each setting gets run at least twice, using the higher result; the first test (i.e. after launching the game) gets run three times and we discard the first result. The built-in benchmark lasts about 145 seconds, if you're wondering, so that's a lot of time required to test up to nine different settings on each GPU.
We start with the medium preset, which offers a good blend of visual fidelity and performance. Unreal Engine's Lumen and Nanite technologies are put to good use, though classifying Lumen as "software ray tracing" is a bit of a stretch — it's more accurate to call it "shader-based rendering with some calculations that approximate ray tracing," which is basically what "rasterization" means in my book. There are elements of the tech that may qualify as RT-lite, but reflections are one example where it uses traditional screen space reflections. We'll have more to say about that in the image quality discussion.
One other item to mention is that the game uses upscaling by default at all settings. It uses a slider with a range of 25 to 100 — that's 16X upscaling to native, if you're wondering. It will set the scale to 66% at 1080p, 50% at 1440p, and 44% at 4K (and 80% at 1600x900 if you're wondering — only 720p defaults to 100% scaling).
If you're used to the standard Quality, Balanced, and Performance upscaling modes, the values used by Black Myth Wukong are more aggressive in general. Quality mode normally means ~2X upscaling, or ~71% of the target resolution; Balanced mode uses ~3X upscaling, or ~58% scaling; and Performance mode uses ~4X upscaling, or 50% of the target resolution. (DLSS, FSR, and XeSS can use slightly different values as well, depending on the game and version, but we're trying not to get too bogged down in the nitty gritty details.)
For our purposes, we don't want to rely on different scaling values, so we set a static 67% scaling for all of our testing. That means we're rendering at 1280x720 for 1080p output, 1707x960 for 1440p output, and 2560x1440 for 4K output. If we used the game's defaults, the render resolutions would be 1280x720, 1280x720, and 1707x960 for those same respective outputs — which would mean the 1080p and 1440p results would be quite similar, other than differences in upscaling overhead. Running the game at maximum settings at native rendering is a good way to further reduce performance, if you like lower fps for whatever reason.
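The scaling math above can be sketched in a few lines. Note the assumption here: the game's "67%" setting effectively behaves as a 2/3 linear scale, which matches the render resolutions we listed.

```python
# Sketch of the render-resolution math described above. Assumption: the
# game's "67%" slider setting effectively applies a 2/3 linear scale per
# axis, which matches the render resolutions given in the text.

def render_resolution(output_w, output_h, scale=2/3):
    """Apply a linear resolution scale to each axis, rounded to the nearest pixel."""
    return round(output_w * scale), round(output_h * scale)

def upscaling_factor(scale):
    """Total pixel upscaling factor: 1 / scale^2 (e.g. 50% scale = 4X upscaling)."""
    return 1 / scale ** 2

for out_w, out_h in [(1920, 1080), (2560, 1440), (3840, 2160)]:
    print(render_resolution(out_w, out_h))
# (1280, 720), (1707, 960), (2560, 1440), matching the values in the text

print(upscaling_factor(0.50))  # Performance mode: 4.0X
print(upscaling_factor(0.25))  # the slider's 25% minimum: 16.0X
```

The same `upscaling_factor` relation explains the Quality/Balanced/Performance percentages: 1/sqrt(2) is about 71%, 1/sqrt(3) is about 58%, and 1/sqrt(4) is exactly 50%.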
Our first look at performance seems quite good overall. Nearly everything we tested easily breaks 60 fps. Sure, older generation GPUs are more likely to struggle, and we'll try to test some of those in the near future, but you only need a budget $200 graphics card to have a good experience in Black Myth Wukong.
The AMD vs Nvidia results are also pretty reasonable. We're used to seeing the 4080 Super just ahead of the 7900 XTX, for example, and the 7900 XT usually ends up pretty close to the 4070 Ti Super. Some of the lower tier AMD cards don't match up as well, though. If you look at our GPU benchmarks hierarchy, focusing on the rasterization performance, the 7700 XT beats the 4060 Ti by 16%; here, it's only 4% faster. The 7800 XT likewise beats the vanilla RTX 4070 by 6%, but here it's 1% slower.
It's not too surprising, perhaps, as Black Myth Wukong has been heavily promoted by Nvidia. Even though Unreal Engine 5 on its own should be somewhat GPU agnostic, developers need to tune it for their particular game, and that can lead to vendor specific optimizations.
What about Intel Arc? Well, despite having game ready drivers, it's really part of an older generation of hardware — it was designed to compete with the RTX 3060, and mostly does so in games where drivers don't hold it back. It's also a lot like Nvidia GPUs in that it often performs worse than AMD 'equivalents' in rasterization games, but comes out ahead with ray tracing.
The A770 16GB and A750 end up as the two slowest GPUs that we've tested so far, with the A770 just barely edging past 60 fps while the A750 only manages 52 fps. The A770 also has 14% more raw compute than the A750, plus more memory, so performing 20% better than the A750 isn't totally out of the ordinary — just a bit wider of a gap than we normally see. Minimums are also lower on the Intel GPUs, and the numbers suggest further driver optimizations could be beneficial. These aren't unexpected results, though, as the A770 and A750 also rank below the RX 7600 in our GPU hierarchy.
Black Myth Wukong Cinematic GPU performance
Before we get to the results of the 'cinematic' preset, let's be clear: We're not trying to show the best mix of settings for the various GPUs. Our purpose is to show how the GPUs stack up, in terms of performance potential, and so we like to punish the GPUs with maxed out settings.
If you're looking to just play Black Myth Wukong, the high preset tends to run about 50~60 percent faster than the cinematic preset and is what we recommend for most users in this particular game. We'll discuss the various presets and image fidelity later, but there are very much diminishing returns when going beyond the high preset.
There's a pretty massive hit to performance when using the cinematic preset, and you can see why the game defaults to turning on upscaling. All of the GPUs we tested see their performance cut in half, or more, relative to the medium preset. Nearly everything is still technically playable, but you'll need at least an RTX 4070 Super to get above 60 fps at these settings.
The same patterns as before hold here as well. The AMD and Nvidia matchups generally look close at the top, but as you go down the performance ladder the RTX GPUs punch slightly above their normal weight class. The 4060 Ti might cost as much as a 7700 XT, but for rasterization performance AMD usually comes out with a clear lead.
Intel's Arc GPUs again take up position at the bottom of the chart. The A770 is only about 10% slower than the RX 7600, which isn't too far off what we normally see for rasterization games, but it's also falling just shy of 30 fps, with minimums dipping into the upper teens. We've also seen other games where the 8GB A750 struggles, even when other 8GB GPUs don't, and at least so far that doesn't seem to be a major issue with Black Myth Wukong. The A770 comes out 17% ahead of the A750, which is pretty much right in line with the difference in compute teraflops.
We didn't run the high preset on every GPU, due to time constraints, but as you'll see in our image quality analysis, that tends to be the sweet spot in terms of balancing image fidelity and performance. As mentioned above, it runs about 50~60 percent faster than the cinematic preset, and in general it looks nearly as good. The minor differences in shadows and foliage aren't enough to warrant the performance hit in our opinion.
Moving up to 1440p with the cinematic preset, performance doesn't drop too much compared to 1080p. That's perhaps partly because we have 67% scaling enabled, but the GPUs we tested are all around 12~17 percent slower than at the lower resolution.
That's enough to drop a few cards below acceptable rates — the RTX 4060 and below are all pretty marginal at these settings — but tweaks to the settings should allow these cards to handle 1440p at lower settings. Obviously, there are plenty of GPUs that are slower than the ones we've tested, and older cards aren't going to like 1440p.
The standings of the individual GPUs haven't shifted at all compared to 1080p medium. That's an interesting result, as usually there are at least a few shifts. Also notice that the RTX 4060 Ti 16GB and 8GB, along with the RX 7600 XT and vanilla 7600, offer basically identical performance. Clearly we're not exceeding 8GB of VRAM for our testing.
4K cinematic once again results in nearly identical standings, with the Arc A770 being the only position change. The performance drop from 1440p to 4K is also smaller than we typically see in other games. That's at least partly because of upscaling, as otherwise native 4K rendering often bumps up VRAM requirements... but we've seen plenty of other games exceed 8GB of VRAM use, so it's a bit of a breath of fresh air to see the RX 7600, RTX 4060, and RTX 4060 Ti all managing to keep pace.
Those GPUs we just named aren't playable at 4K, mind you, but they're slow because of the demands of the game engine, not because they're thrashing on VRAM allocation. If you want at least borderline playable, you'll need the RX 7800 XT or RTX 4070 as a minimum for 4K (yes, with upscaling).
Curiously, and this is something we've seen in other games, the Arc A750 does fall off the pace here, even while other 8GB cards do fine. It's not a big deal, because even though 19 fps is much higher than 11 fps, neither Arc GPU can really deal with 4K in Black Myth Wukong. As we'll see on the next tests with full RT, Intel's drivers do appear to need more tuning for this game.
What about hitting 60 fps at 4K, though? The only GPU to manage that is the mighty RTX 4090. It will probably be joined by at least the RTX 5080 and 5090 when those Nvidia Blackwell GPUs arrive, and maybe by an RDNA 4 GPU or two as well, but those are seemingly months away at best.
As said above, the high preset should improve performance by around 50~60 percent, perhaps more in some cases. That would make 4K viable for 4060 Ti and above, and maybe even the 4060 and RX 7600 in a pinch.
Black Myth Wukong full ray tracing performance
If the results from the cinematic testing seemed at times pretty poor, just wait until you see what happens with full RT enabled. Again, check our image fidelity commentary below, where we have screenshots and a deeper discussion of how the game looks. Here, we're just looking at the performance.
We tested using the medium preset with full RT set to low as our "easy RT" option, and then we used the cinematic preset with full RT on very high for maximum image quality — and maximum punishment of your poor graphics card. If you've seen performance results from other full RT implementations like Cyberpunk 2077's RT Overdrive and Alan Wake 2, you probably already know what to expect.
This is where we get to a tale of two GPU types: Nvidia and everything else. For the RTX 40-series GPUs, our medium RT testing runs okay. Everything from the RTX 4060 and up breaks 60 fps — and again, that's without using DLSS 3 frame generation. The game is definitely playable at these settings for team green's latest GPUs, though we'll have to see about adding some RTX 30-series results if we can find the time.
The other side of the coin is the AMD results, which start just below the 4060 if you have the ~$900 RX 7900 XTX and only get worse from there. Considering the 7900 XTX comes pretty close to the performance of the 4080 Super in the rasterization results, seeing it drop to less than half that level of performance means most AMD users shouldn't even bother with the full RT option, other than to perhaps see what it looks like.
At least the RX 7700 XT and above are technically playable; the RX 7600 and 7600 XT and below, not so much. Interestingly, VRAM capacity still doesn't seem to matter much, even with full RT. The RX 7600 XT and RX 7600 — as well as the RTX 4060 Ti 16GB and RTX 4060 Ti — basically offer the same level of performance. Unreal Engine 5 may have some faults, but Black Myth Wukong manages some amazing visuals without requiring a boatload of VRAM...
...Unless you have an Intel Arc A750 (and presumably the other 8GB Arc cards as well). Here again, the A770 16GB offers quite a bit more performance, more than just the raw compute should provide. It's 33% faster than the A750 on average fps, but more tellingly it's nearly twice as high on 1% low fps. The A750 also consistently dropped in performance after our initial test run with full RT enabled, so the drivers need some work in that regard. (We saw the same thing with 8GB Arc GPUs in the Bright Memory Infinite benchmark that finally got fixed last year, incidentally.)
If our 'medium RT' testing results looked bad, the 'maximum RT' performance truly hurts. The RTX 4070 Super and above manage to average 60 fps or more, and the RTX 4060 and above are still perhaps playable with more than 30 fps. But AMD's fastest GPU right now can't even break 30 fps, and it only gets worse from here.
Also, while the Arc A770 16GB does respectably, nearly catching the RX 7700 XT, the A750's performance falls off a cliff. Again, drivers, and we'll just skip any further testing of the A750 in Black Myth Wukong for now. This is also why we didn't bother testing the A580, as we expect it to have similar issues.
There are image quality reasons to use the full RT very high setting, as it stabilizes the foliage shadows, adds some nice reflections, improves overall scene lighting, and even has caustics on the water. You can mostly get the same image quality by using the high preset with maxed out RT, but that won't really improve performance much as many of the default rendering options get overridden by RT anyway.
1440p with maxed out settings, including RT, only gets above 60 fps on the RTX 4090. You could use higher levels of upscaling as well, but image fidelity will degrade if you go that route. But if you have Nvidia's reigning champion, it still chugs along at a comfortable 76 fps.
AMD's GPUs all fall below 20 fps now, so unless you like trying to play a slideshow you'll want to stick with less demanding settings. But again, we note that VRAM capacity hasn't shown up as a limiting factor for Black Myth Wukong. We can also see this by the GPU power draw, which is basically hitting the specified limit for all the cards we tested. Normally, if you hit VRAM capacity constraints, power use will drop quite a bit due to the GPU being forced to wait for data.
And last but not least (unless you're talking about framerates), we have 4K with fully maxed out settings. The RTX 4090 averages 44 fps, and the 4080 Super just squeaks past 30 fps, with minimums dropping to the mid-20s. Everything else lands in unplayable land. We're a long way from doing full RT with all the bells and whistles on games with the graphical complexity of Black Myth Wukong, in other words.
AMD's single-digit results are clearly less than ideal, and we can only wonder whether it's a lack of RT hardware performance or if the game simply doesn't have any optimizations for doing full RT on AMD hardware. Probably it's both of those things, to varying degrees.
If you want to create a 4K chart that makes it look like a viable option, the solution is simple. First, use higher levels of upscaling — the game normally would use about a 5X upscaling factor at 4K. Then turn on framegen. Then you can show the RTX 4070 Ti Super hitting 66 fps like Nvidia does.
1080p maxed out settings with frame generation
Frame generation is often a highly controversial feature, and rightly so. Some people refer to the generated frames as "fake frames," and they're not really wrong. Because there's no additional sampling of user input, and with the added overhead of framegen, it often feels more like a case of two steps backward to go two steps — maybe 2.5 steps — forward.
Let's give a concrete example, though, before we get to the performance chart. Suppose you have a game that's running at 50 fps, and you're hoping to improve that result via frame generation. In a best-case scenario, framegen would double your frames to monitor rate up to 100 fps, while adding a bit of latency. If that's how it usually worked, we'd be far more forgiving of the tech.
The reality is that if you're running at 50 fps, turning on framegen typically has a decent amount of overhead. Instead of a simple doubling of framerates — half of which are generated — what you usually get is around 50% higher perceived fps. But a result of 75 fps using framegen means the base fps has dropped to 37.5 fps, which can definitely start to feel a bit sluggish if you're attuned to such things.
The TLDR is that for frame generation to offer a decent experience, we typically want the generated framerate to be over 80 fps — meaning the user input sampling rate would still be 40 fps. There are situations where that can happen, but Black Myth Wukong with maxed out full RT settings running at 4K tends to be a bit too demanding on most GPUs, unless you have an RTX 4080 or 4090.
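The framegen arithmetic above can be put into a quick sketch. The ~50% perceived gain is our illustrative assumption based on typical results, not a fixed constant:

```python
# Sketch of the frame generation math described above. The perceived_gain
# value is an illustrative assumption: framegen overhead costs some base
# framerate, so the delivered rate is typically ~50% higher, not doubled.

def framegen_result(native_fps, perceived_gain=0.5):
    """Given a native framerate and a typical perceived gain (~50%), return
    (perceived_fps, base_fps) with framegen enabled. Half the delivered
    frames are generated, so the input-sampling rate is perceived / 2."""
    perceived = native_fps * (1 + perceived_gain)
    base = perceived / 2
    return perceived, base

perceived, base = framegen_result(50)
print(perceived, base)  # 75.0 37.5, the example from the text
```

This is also where the 80 fps rule of thumb comes from: an 80 fps delivered rate still means user input is only being sampled at a 40 fps cadence.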
Disclaimers aside, we did at least want to provide some performance data with framegen. We used our maxed out settings at 1080p — so the cinematic preset with very high RT quality, plus 67% resolution scaling. Then we turned on frame generation: DLSS 3 for RTX cards and FSR 3 for AMD and Intel cards (XeSS doesn't currently have its own flavor of framegen). That gives us the above chart.
There are some interesting results, like the fact that FSR3 framegen boosts the framerate much more than DLSS3 framegen. Part of that may be because the AMD GPUs are tanking so hard with the full RT settings we used for testing. But the RTX 4070 as an example gets 77 fps with framegen, versus 53 fps without, so that's a 45% increase. We'd call that maybe acceptable, at best — the rendered framerate drops from 53 fps to 38.5 fps, with framegen doubling that.
The RX 7900 XTX on the other hand goes from 28 fps to 52 fps, an 85% improvement. If we were looking at that level of scaling with a generated fps result of 100, it would be excellent, but 52 fps generated means the game feels like it's running at 26 fps. And that's AMD's best result. Still, thanks to the better scaling (and we're not discussing framegen image quality, which often involves more compromise), the RX 7900 XTX and XT are finally able to at least surpass the performance of the RTX 4060. Yay?
The 7900 GRE as a second example goes from 20.9 fps to 39.3 fps, for an 88% boost in framerate, but less than 40 fps with framegen enabled just isn't a great experience. We tried it, we didn't like it, and we died a lot more often in Black Myth Wukong as a result. It's not completely unplayable, but it's also not the same as a non-framegen 40 fps. The other AMD GPUs are likewise far from delivering a good experience using these settings — framegen can only do so much.
Of course you can get much higher framerates, via framegen, if you're not using ray tracing — or if you just have a high-end RTX 40-series card. The 4070 Ti Super, 4080 Super, and 4090 all break into the desirable 100+ fps range in our 1080p testing, for example. It's about a 45~50 percent increase in framerate for all three, so the gains are reasonably consistent. The base rendered fps still drops, but the result is certainly acceptable in terms of being playable.
The other Nvidia GPUs aren't quite as good of an experience. The 4070 Super and 4070 run okay, but the 4060 Ti and below fall below 60 fps, which means they may look like they're running okay but they feel like they're running at less than 30 fps. That's been our experience with framegen, at least.
Black Myth Wukong settings and image quality
You've seen the performance disparity, with Nvidia destroying the competition in full ray tracing performance but with reasonably close rasterization performance overall. How much of a difference do these settings make in practice? As usual, it's a more nuanced discussion — which you can read as "you definitely don't need full ray tracing to enjoy the game" if you prefer.
Black Myth Wukong uses Unreal Engine 5, including the Nanite feature that allows for incredible levels of detail. It also uses the software-based Lumen RT for lighting and other effects, however, which definitely isn't perfect. Alternatively, you can enable the full RT option that 'fixes' some of the less desirable rendering aspects of Lumen — shimmering and blobby shadows, broken (occluded) screen space reflections, and some other aspects. Lumen doesn't leverage RT hardware in GPUs like the RTX 40-series and RX 7000-series, but it does run even on older DX11-class GPUs like the GTX 1060 and RX 580.
The pros and cons of the Lumen rendering engine are starting to become reasonably well known. It looks very nice most of the time, and Black Myth Wukong is often a stunning looking game. But occasional stuttering can be a problem, and even high-end PC hardware may not be enough. While the full RT lighting does look better, for most gamers, Lumen looks "good enough" and runs much better on a wider variety of hardware than the full RT mode. Put another way, using software approximations for rendering is often preferable to more accurate RT rendering that runs significantly slower.
Something else to mention again is that this is an Nvidia promoted game. While it uses Unreal Engine 5, it's not clear how much optimization was done specifically for Nvidia hardware, especially for the full ray tracing mode. Another interesting tidbit is that Black Myth Wukong doesn't support Nvidia's DLSS 3.5 Ray Reconstruction technology, which seems like a missed opportunity. As we've discussed with Cyberpunk 2077 Phantom Liberty and Alan Wake 2, if a game is going to support a feature that practically requires an Nvidia GPU, why not include DLSS 3.5 support as well? It offers clearly superior visuals in the games that support it, often with a performance benefit as well. Perhaps it will get added in a post-launch patch to Black Myth Wukong.
To start, we have four different screenshot collections, taken from captured videos of the benchmark sequence so that we can get close to identical frames. That's not ideal, as it introduces video compression artifacts, but since enabling or disabling ray tracing requires a restart of the game, not to mention the need to swap GPUs, we felt this was at least a good starting point.
We've captured the cinematic preset, running at 1080p with 67% scaling (FSR3 for AMD and DLSS3 for Nvidia), on an AMD RX 7900 XTX and an Nvidia RTX 4080 Super. The two cards offer generally similar rasterization performance, with slightly higher prices on the Nvidia card but also slightly higher performance. For each GPU, we captured screenshots with and without full ray tracing (at the maximum very high setting) enabled.
As we said above, the differences can at times seem nuanced. There's clearly some benefit from the RT lighting, shadows, and reflections in some of the comparisons, but there are also clear differences between the AMD and Nvidia results.
First, it's pretty obvious that DLSS provides a generally clearer image right now. FSR image quality is being worked on, according to the developers, so things should improve over time. For now, FSR upscaling causes a loss of fine detail in some areas, and the overall blurriness becomes even more noticeable when looking at the ray traced results.
Comparing the ray tracing images to the rasterization results again shows some very obvious differences in every scene, but the changes aren't always clearly in favor of ray tracing — like in the fourth sequence where there aren't any reflections to discuss, the shadows and lighting look different but not inherently superior with RT. That's partly because these are still shots rather than moving images, however.
One aspect of the RT effects that's worth pointing out is the support for particle-based reflections, which isn't something that the benchmark sequence shows. There are some battles where the addition of reflected lightning as an example looks visually striking and you notice its loss when you turn off RT — but you'll also need to use the very high setting for RT to get those particle reflections, which rules out non-Nvidia cards unless you're running 1080p with higher levels of FSR upscaling.
Overall, the ray tracing can look impressive, and it's cool to see a game like this supporting the feature, even if it's generally impractical on a lot of GPUs. At least it's something people can point to and say, "This is what full ray tracing can bring to the table, and it's also why it's nowhere near going mainstream right now." If you have a high-end Nvidia GPU, you can definitely get decent performance with full ray tracing, particularly if you're willing to tweak a few settings and use higher levels of upscaling.
The other thing we want to look at is how the various presets compare in overall image quality, with and without ray tracing. There are five standard presets: low, medium, high, very high, and cinematic. The last two of those can definitely move into the realm of placebo effect, at least as far as image fidelity goes, but their performance impact is very real. But there are three RT options as well: low, medium, and very high.
You can use any of those three RT options with each of the global presets, or you can even opt to customize the ten individual settings — there's no real customization of the RT options, though, other than the three preset levels. Black Myth Wukong uses ReSTIR global illumination for its lighting effects, and it also supports RT shadows, RT reflections, and RT caustics.
You can see the breakdown of what RT options are used for each of the RT presets in the above image. Below, we have a gallery of screenshots showing the five graphics presets without RT, plus three more images (for each scene) using the high preset with the three different RT options, and then a final three with the cinematic preset and the three full RT options.
Even the low preset looks pretty decent, a testament to how good Unreal Engine 5 looks, though the minimum setting does compromise on things like lighting, texture, and shadow quality — there are no dynamic shadows to speak of, and lots of areas that should show a static shadow simply don't. The cutoff point for shadows is also very noticeable when moving around in the game world on the low setting, and the amount of vegetation gets reduced quite a lot.
Medium represents a more reasonable compromise that a lot of PCs should be able to manage without too much difficulty. The shadows look good, there's more vegetation, and everything looks like what we'd expect from a demanding modern game. Certainly, no one should feel bad about "only" being able to run the medium settings.
From there, stepping up to the high, very high, and cinematic presets only shows relatively minor changes, at least in still images. One thing that's not immediately obvious is how much the shadows shimmer and blob in and out of higher resolution assets, even at maximum quality. You have to experience the game in motion to see how distracting this can be. That alone is enough to make us want to use the full RT option, which basically fixes the shadow issues entirely.
Except, full RT with the low quality setting is a case of giving with one hand and taking away with the other. The shadows look more stable and much better overall, but the lack of proper reflections means all the water surfaces look pretty awful; the water basically looks worse than it does with the low global preset using Lumen. Given the choice between shadow issues and water/reflection issues, and factoring in the performance drop, it's pretty easy to make the case for sticking with traditional rasterization (or at least software lighting and shadows via Lumen).
Using the medium setting for full RT significantly improves the look of water, but even then it's still pretty blurry due to the use of half-resolution reflections. You also don't get the RT caustics or the particle reflections. However, if you look at the drop in performance — the RTX 4080 Super goes from the mid-40s to the low 30s with the very high RT setting — you might be willing to live with the blurriness.
RTX 4080 Super (4K, 67% scaling) | Avg FPS | vs. Cinematic | Speedup |
---|---|---|---|
Cinematic | 56 | — | 1.00X |
Very High | 66 | 18% | 1.18X |
High | 88 | 57% | 1.57X |
Medium | 106 | 89% | 1.89X |
Low | 131 | 134% | 2.34X |
High + RT Low | 55 | -2% | 0.98X |
High + RT Med | 53 | -5% | 0.95X |
High + RT Very High | 39 | -30% | 0.70X |
Cine + RT Low | 45 | -20% | 0.80X |
Cine + RT Med | 44 | -21% | 0.79X |
Cine + RT Very High | 33 | -41% | 0.59X |
High + RT VH + FG | 61 | 9% | 1.09X |
RX 7900 XTX (4K, 67% scaling) | Avg FPS | vs. Cinematic | Speedup |
---|---|---|---|
Cinematic | 49 | — | 1.00X |
Very High | 58 | 18% | 1.18X |
High | 80 | 63% | 1.63X |
Medium | 101 | 106% | 2.06X |
Low | 130 | 165% | 2.65X |
High + RT Low | 20 | -59% | 0.41X |
High + RT Med | 19 | -61% | 0.39X |
High + RT Very High | 10 | -80% | 0.20X |
Cine + RT Low | 17 | -65% | 0.35X |
Cine + RT Med | 16 | -67% | 0.33X |
Cine + RT Very High | 9 | -82% | 0.18X |
High + RT VH + FG | 20 | -59% | 0.41X |
Here's a different look at performance, using 4K with 67% scaling with all the presets, including both high and cinematic combined with the three ray tracing settings. We have results for the RTX 4080 Super and RX 7900 XTX, showing relative performance compared to the cinematic preset without full RT.
On the 4080 Super, baseline cinematic performance lands at 56 fps — definitely playable but not perfectly smooth. The very high preset boosts performance by 18%, breaking the 60 fps threshold, while the high preset yields a 57% improvement and gets the GPU to a solid 88 fps. Medium runs 89% faster than cinematic, and finally the low preset gives a 2.34X speedup and gets the GPU past the 120 fps mark.
Turning on full RT drops performance, but not by much if you stick with the high preset. High with full RT set to low performs about the same as the cinematic preset, while high with medium RT runs only 5% slower. High with maxed out RT drops performance by 30%. Using cinematic with the RT modes shows larger drops of 20%, 21%, and 41%. One final option is the high preset with max RT plus frame generation, where the 4080 Super manages a respectable 61 fps, though in terms of responsiveness it still feels like half that speed.
AMD's RX 7900 XTX shows relatively similar scaling at first when looking at the rasterization results, though it picks up speed as the quality settings decrease. Baseline 4K cinematic performance is 49 fps, and stepping down through the presets shows relative performance improve by 18% with very high — the same improvement we saw on the 4080 Super. But then it gets 63% faster at high, 106% for medium, and a 2.65X speedup at minimum (low) settings. It's interesting that AMD's GPU seems to benefit more from lower settings than the Nvidia GPU, but then look what happens when we turn on the full RT modes.
The high preset with RT low causes a 59% drop in performance compared to the baseline cinematic score. Ouch! RT medium causes a 61% reduction, and the maximum RT setting slashes performance by 80% — as in, the 7900 XTX runs one fifth as fast as the cinematic baseline. The GPU goes from being at least reasonably playable with maxed out non-RT settings to being completely inadequate. Bumping to the cinematic preset with the RT modes shows even larger deltas: Performance drops by 65%, 67%, and 82% with the low, medium, and very high full RT modes. And with frame generation, using the high plus RT very high option, you can get back to 20 fps.
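For reference, the relative-performance figures quoted here (and the "Speedup" columns in the tables above) are simple ratios against the cinematic baseline. A minimal Python sketch using the RX 7900 XTX numbers from the table:

```python
# Reproduce the "Speedup" and "vs. Cinematic" columns from average fps.
# The fps values below are the RX 7900 XTX results from the table above.
BASELINE = 49  # 4K cinematic preset, 67% scaling, no full RT

results = {
    "Very High": 58,
    "High": 80,
    "High + RT Very High": 10,
}

for setting, fps in results.items():
    speedup = fps / BASELINE          # e.g. 58 / 49 = 1.18X
    delta_pct = (speedup - 1) * 100   # e.g. -80% for 10 fps
    print(f"{setting}: {speedup:.2f}X ({delta_pct:+.0f}%)")
```

Running the same arithmetic on the 4080 Super column produces the figures discussed above.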
Compared to Nvidia's at least somewhat similar performing RTX 4080 Super, AMD's fastest GPU struggles badly with all the full RT modes. We've seen before that Nvidia's RT hardware tends to perform much better, and larger numbers of rays (or RT effects) widen the gap, but it's hard to say precisely why full RT, aka path tracing, falls so flat on AMD hardware. Is it truly just a hardware problem, or is a lack of software optimization also playing a role?
Here's perhaps a better look at image quality, or at least a scene with a waterfall (not captured from a video) that shows off the ray tracing potential. Full RT at max settings looks really nice, and the caustics reflecting from the water all animate as you move around. Dropping to the medium full RT setting with half-resolution reflections looks okay but not nearly as impressive, while the low RT setting looks worse than Lumen at rendering the water (in my opinion, at least).
Regardless of how it runs, it really feels like the ray tracing options in Black Myth Wukong are something where you'll want to either go whole hog or else leave RT off. The problem is that the very high RT setting needs an equally high-end GPU — from Nvidia. Basically, you're looking at an RTX 3080 or RTX 4070 or above just to manage 1080p with quality mode upscaling and maxed out RT settings, and that won't even hit a steady 60 fps; AMD's top GPU can't even manage a consistent 30 fps.
Needless to say, frame generation at such low base framerates feels very much like a placebo, and we wholly discount the claimed performance gains that Nvidia might show when using framegen. 60 fps with framegen is effectively running at 30 fps for user input and doubling that value, so when there's a hiccup and the generated framerate drops to 40 fps, the user feels it as a big drop to 20 fps, and anything below 30 fps registers as a major stutter. Also, we've seen frame generation, both DLSS and FSR, start glitching out if performance is too low, as the differences between the rendered frames can become too great. 50–60 fps with frame generation in Black Myth Wukong is generally playable, but then so is 25–30 fps without framegen; it's just not a great experience.
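The arithmetic behind that complaint is simple. This illustrative sketch (the function names are ours, not from any real SDK) models 2x frame generation, where input is still sampled at the rendered base framerate:

```python
# Illustrative model of 2x frame generation: the function names are ours,
# not from any real SDK. Frame generation doubles the *presented* frames,
# but the game still samples input at the rendered base framerate.

def presented_fps(base_fps: float) -> float:
    """Framerate shown on screen with 2x frame generation."""
    return base_fps * 2

def input_fps(displayed_fps: float) -> float:
    """Rate at which the game actually samples user input."""
    return displayed_fps / 2

# A 30 fps base presents roughly 60 fps, but still feels like 30 fps.
print(presented_fps(30))
# A hiccup down to 40 fps displayed means input at only 20 fps.
print(input_fps(40))
```

This is why 60 fps with framegen enabled feels noticeably worse than a native 60 fps.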
So, for current AMD GPUs, we suggest forgetting about the full RT options, until or unless driver and/or game updates improve the situation. Full RT on the very high setting just isn't viable on any RDNA 3 hardware, never mind RDNA 2. Or at least, the way full RT is done in Black Myth Wukong isn't viable on AMD, though there are probably ways of doing full RT that would run better on AMD's hardware. Even 1080p with upscaling drops below 30 fps on the 7900 XTX when using the very high RT setting with the cinematic preset, so even a drop to the high preset with full RT would probably only just manage 30 fps.
Black Myth Wukong closing thoughts
As a game, Black Myth Wukong looks great, but higher settings absolutely require a capable graphics card. 1080p medium with upscaling runs well enough on lower spec GPUs, easily breaking 60 fps on an RX 6650 XT, but the cinematic "ultra" settings cause a massive spike in requirements, dropping performance by more than half. The medium to high presets should be more than sufficient if you're not too worried about missing out on a few visual extras.
Full ray tracing at higher settings, which is what we really want from full RT, basically requires an Nvidia GPU. AMD's RX 7900 XTX managed just over 60 fps with our medium + low RT testing, but then water doesn't look as good as the standard Lumen rendering. Performance dropped to just 28 fps with maxed out RT settings, however, which isn't really playable in our book. The competing RTX 4080 Super more than doubled that and remained fully playable at 1440p, and even 4K was okay.
What's not clear is how much of the poor performance for the full RT mode on non-Nvidia cards stems from those GPUs' lack of ray tracing prowess, and how much of it is due to the game being heavily optimized for Nvidia's brand of RT hardware. Full path tracing, if we want to use Nvidia's term for it, will always be extremely demanding, and all indications are that it's mostly only viable on lighter games like Minecraft RTX or on a very high-end RTX 30- or 40-series GPU. Could the game be better optimized to run on AMD's brand of RT hardware? Almost certainly. However, it's probably a case of double-digit percentage gains rather than a doubling or tripling in performance to close the gap with Nvidia's GPUs.
We didn’t perform CPU testing, but we may add the game to our CPU test suite (sans ray tracing) in the future. The system requirements suggest the game doesn't really need more than a 6-core CPU, maybe 8-core at the top, as nothing above a Core i7-9700 or Ryzen 5 5500 is listed. The GPU recommendations are much higher, as you'd expect from the performance we've shown here.
Reviews of Black Myth Wukong have been very positive, and its blend of quirky and visually interesting bosses and other enemies helps it stand out from the crowd. The Chinese mythology can be interesting as well. It's a Souls-like game, though at least so far I wouldn't rate it as being as difficult as any of the Dark Souls games. (To be fair, I haven't progressed that far in the story, so maybe the difficulty picks up later.)
After all of this initial testing, as usual our best advice is to not get too caught up in chasing the highest graphics settings if you don't have a top-tier GPU. Medium to high, without full ray tracing, should be within reach of most decent gaming PCs, and extra visual pizazz doesn't make for an inherently better gaming experience. Still, if you have a high-end RTX 40-series GPU, the full RT experience with a particle reflection system and caustics can look quite impressive in the many areas of the game where you see it in action.
Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.
oofdragon But.. have you actually PLAYED the game? I did and I can tell you Ray tracing ISN WORTH AT ALL. It's not "realistic" neither "better", it's actually just "different". Now cut the b
JarredWaltonGPU
oofdragon said: But.. have you actually PLAYED the game? I did and I can tell you Ray tracing ISN WORTH AT ALL. It's not "realistic" neither "better", it's actually just "different". Now cut the b

Yes, I have. What GPU are you playing it on? Because if you don't have a GPU that can run the full RT at very high, like an RTX 4070, it's not worth it. If you have a top-tier Nvidia GPU, though, it looks much better in action than the non-RT stuff. Shadows don't flicker around and go blobby, water and reflections look nicer, and the lighting looks more accurate (stuff gets indirectly lit, so dark areas aren't always quite as dark).
Does it make the game better? Not really, as I say multiple times. Don't just read the headline and decide you know what I'm thinking. My point in the headline is that this is a tour de force for Nvidia's ray tracing hardware. It looks better, and it only works well on Nvidia GPUs. Nothing else comes close. My point isn't that you should feel bad if you can't enable the very high RT setting and get good performance, but if you have an RTX 4070 or above? Sure, you can run it at max settings and it looks amazing and plays just fine.
tyns78 said: Why are we calling 720p 1080p now?

Upscaling is here to stay, especially as games become more demanding. So when I say "1080p with 67% scaling via DLSS/FSR/XeSS" that's fundamentally different than "native 720p." If you don't have RTX hardware and can't run DLSS, though, I can understand why people would think upscaling isn't that great. DLSS > XeSS > FSR 2/3. The first two only cause a minor drop in image fidelity for a boost in FPS, while the last causes clear image degradation, even at the "Quality" setting.
brandonjclark OK, I'll add it to my Wishlist. But I'm still not paying full price for it. I'll scoop it up when it hits around $20. I don't care if it takes years.
JRStern I don't even "game" but I might go for the tech just as a nerdy thing, maybe, if there were some local shops that would do some side-by-side comparisons like you describe here.
I'd probably be completely satisfied by HD (1920x) and 30fps, maybe that will work fine on most cards?
Heat_Fan89
oofdragon said: But.. have you actually PLAYED the game? I did and I can tell you Ray tracing ISN WORTH AT ALL. It's not "realistic" neither "better", it's actually just "different". Now cut the b

I still have NO interest in RT. I have seen it on some games and I just turn it off. I prefer FPS and smooth gameplay over graphical tricks. Maybe when the hardware can actually keep up with RT with no impact on performance, then maybe I'll leave it on, but by that time we'll probably be talking about the RTX 8090.
Roland Of Gilead Liked this review a lot! Frame Gen is still hit and miss it seems.
I wonder what the nVIdia GPU's would run like with AMD Frame Gen? Any better or worse than nVidia's implementation. I'm asking this because as a lowly RTX3xxx series owner, I can only use AMD Frame Gen. Interesting for comparison purposes.
valthuer @JarredWaltonGPU I ran the game's benchmark at 4K maximum (Cinematic) settings, at TSR, with Full Ray Tracing On, Vsync Off, Full Ray Tracing Level Very High, without Frame Generation, and with Super Resolution set at 100. I only got an average of 22 FPS, with a minimum of 18 and a maximum of 27.
My rig, consists of an i9-13900K, with 64GB RAM and an RTX-4090.
Are those numbers normal for me?
Thank you in advance for your time.
JarredWaltonGPU
Roland Of Gilead said: Liked this review a lot! Frame Gen is still hit and miss it seems. I wonder what the nVIdia GPU's would run like with AMD Frame Gen? Any better or worse than nVidia's implementation. I'm asking this because as a lowly RTX3xxx series owner, I can only use AMD Frame Gen. Interesting for comparison purposes.

So, even though the game appears to have FSR 3.1, you can't enable AMD framegen with Nvidia DLSS upscaling. That means you need to run FSR upscaling and framegen. I have tried this (testing some 30-series GPUs now...) and visually FSR right now just doesn't look good in this game. Ghosting and other artifacts, plus blurriness. You can use it, but it's fugly.
Um, RTX 3060 12GB saw 'performance' with framegen and FSR (compared to baseline DLSS) improve by 78%, so that's pretty similar to the AMD scaling from framegen. I'd suggest skipping framegen and full RT and just use DLSS with whatever settings your card can handle. Medium preset at 1080p gets ~80 fps on the 3060, while the cinematic preset gets just 28 fps. High should be around 40~45 fps.