- Nvidia GeForce RTX 3090 specs and features
- Nvidia GeForce RTX 3090 Founders Edition design
- Our test system
- GeForce RTX 3090 content creation benchmarks
- GeForce RTX 3090 4K and 1440p gaming benchmarks
- GeForce RTX 3090 8K gaming benchmarks
- Power draw, thermals, and noise
- Should you buy a GeForce RTX 3090?
Let’s continue our look at the benefits Nvidia-specific software can provide with a pair of tools that require GeForce graphics cards. The Radeon VII, obviously, isn’t included in the next couple of tests.
Maxon’s Redshift is a GPU-accelerated biased renderer that requires CUDA-capable graphics cards. It’s been used in the real world by major companies like Jim Henson’s Creature Shop and Blizzard Entertainment. We tested our cards using the hard-to-find Redshift “Age of Vultures” benchmark in the demo version. Redshift has also implemented OptiX and supports additional acceleration with RTX graphics cards. We enabled that for all applicable GPUs, while the GTX 1080 Ti stuck to bare CUDA. The results listed are seconds to render, so again, lower is better.
Yep, it’s a whupping. The GeForce RTX 3090 is over 65 percent faster than the GeForce RTX 2080 Ti, which offered most of the performance of the $2,500 RTX Titan, albeit with lower memory capacity. As you can see, though, if your specific workloads don’t tap into the 3090’s massive 24GB memory buffer, it’s only slightly faster than the much cheaper RTX 3080.
Next up: OctaneBench 2020 v1.5. This is a canned test offered by OTOY to benchmark your system’s performance with the company’s OctaneRender, an unbiased, spectrally correct GPU engine. OctaneBench (and OctaneRender) also integrate Nvidia’s OptiX API for accelerated ray tracing performance on GPUs that support it. The RTX cards do; the GTX 1080 Ti again sticks to CUDA. The benchmark spits out a final score after rendering several scenes, and the higher it is, the better your GPU performs.
The GeForce RTX 3090 scores a massive 86 percent higher than the RTX 2080 Ti, and 19 percent higher than the RTX 3080.
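Note that “percent faster” is computed differently for the two kinds of results above: Redshift reports seconds to render (lower is better), while OctaneBench reports a score (higher is better). The sketch below shows the arithmetic for both; the numbers plugged in are hypothetical placeholders, not the actual figures from this review.

```python
# Converting the two kinds of benchmark results into "percent faster/higher".
# All input numbers below are illustrative, not real review data.

def percent_faster_from_times(old_seconds: float, new_seconds: float) -> float:
    """Time-based tests (lower is better): speedup of the new card."""
    return (old_seconds / new_seconds - 1) * 100

def percent_higher_from_scores(old_score: float, new_score: float) -> float:
    """Score-based tests (higher is better): score increase of the new card."""
    return (new_score / old_score - 1) * 100

# Redshift-style result: seconds to render.
print(percent_faster_from_times(330, 200))

# OctaneBench-style result: points.
print(percent_higher_from_scores(352, 655))
```

The direction matters: halving a render time is a 100 percent speedup, while halving a score is a 50 percent drop, so the two formulas are not interchangeable.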
Okay, let’s get back to non-CUDA-specific tests. When you’re talking about professional tools, viewport performance matters. Unlike with gaming, faster GPUs aren’t always better through brute strength in ProViz. SPECviewperf 13 measures GPU viewport performance using traces in 3ds Max, Maya, Siemens NX, Creo, CATIA, and SolidWorks, as well as energy and medical tests that draw on datasets typical of those industries. Both AMD and Nvidia contribute to the project as part of SPECgpc.
Higher scores are better. Also note that this particular set of specialized benchmarks is likely to score higher with Nvidia Quadro or Radeon Pro cards thanks to their optimized professional drivers, but we don’t have any on hand to test.
The GeForce RTX 3090 generally offers better viewport performance than other options, though it doesn’t smash quite as hard as it does in other tests. These are very application-dependent results. Still, if you use these tools, these benchmarks indicate you’ll notice faster viewport responsiveness with Nvidia’s new GPU.
Our final set of benchmarks examines rendering performance in DaVinci Resolve Studio 16, a production tool that “combines professional 8K editing, color correction, visual effects and audio post production.” (Nvidia provided an activation code for our testing.) It’s very popular among creative professionals, especially for 8K media editing.
We tested these GPUs using Puget Systems’ DaVinci Resolve Studio Benchmark. Puget Systems specializes in creating high-end, custom professional workstations, and the company crafted a series of benchmarks for various applications to quantify performance.
Puget offers several different benchmarks for DaVinci Resolve. We used the 4K benchmark, which requires 16GB of system RAM and at least 8GB of GPU VRAM. It tests a variety of media codecs, and each of the codecs gets put through the following gauntlet, per Puget:
“For the 4K and 8K render tests, we benchmark 5 different levels of grade:
- “Optimized Media” - No effects applied in order to simulate the performance when generating optimized media
- “Basic Grade” - Simple color wheel and similar adjustments with a base grade plus 4 power windows.
- “OpenFX - Lens Flare + Tilt Shift Blur + Sharpen” - Basic Grade plus four OpenFX
- “Temporal Noise - Better 2 Frames” - Basic Grade plus a single TNR node
- “3x Temporal Noise - Better 2 Frames” - Basic Grade plus three TNR nodes using splitter/combiner nodes
The “Optimized Media” timeline is rendered out to MXF OP1A DNxHR LB at 1920x1080, while the others are all rendered out to Quicktime DNxHR HQ at the native resolution of the timeline (UHD or 8K).”
Puget’s tool measures the frames per second each card achieves in each benchmark on each codec, then spits out an average score for each type of test, along with an overall score. Those scores are based on performance relative to their reference workstation with a Core i9-9900K and a 24GB RTX Titan; the higher the score, the better. Nvidia GPUs use CUDA in DaVinci Resolve, while the Radeon VII leans on OpenCL.
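To make the relative scoring concrete, here is a minimal sketch of how such a scheme could work: each test’s frames per second is normalized against the same test on the reference workstation and the results are averaged. The scale factor of 100, the simple unweighted mean, and every FPS value below are assumptions for illustration only; Puget’s actual formula and weighting may differ.

```python
# Sketch of a reference-relative scoring scheme (assumed, for illustration):
# FPS on each test is normalized against the reference workstation
# (Core i9-9900K + 24GB RTX Titan), scaled to 100, then averaged.
# All FPS values are made up.

reference_fps = {"Basic Grade": 40.0, "OpenFX": 20.0, "Temporal Noise": 10.0}
tested_fps    = {"Basic Grade": 52.0, "OpenFX": 31.0, "Temporal Noise": 17.5}

# 100 means "matches the reference machine"; above 100 means faster.
per_test_scores = {
    test: 100 * tested_fps[test] / reference_fps[test]
    for test in reference_fps
}
overall = sum(per_test_scores.values()) / len(per_test_scores)

print(per_test_scores)
print(round(overall, 1))
```

Under a scheme like this, a score of 130 on a test simply means 30 percent more frames per second than the reference machine managed on that same test.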
As the Overall Score and Basic Grade results show, you’ll notice a decent performance bump with the GeForce RTX 3090. But once you start going heavy with the GPU effects, the GeForce RTX 3090 starts to roar, as evidenced in the Open FX and Temporal Noise results.
Puget’s benchmarking tool also proved the worth of one of the GeForce RTX 3090’s most important features: its massive 24GB of GDDR6X VRAM. We hoped to run Puget’s 8K benchmark suite as well, but every Nvidia card except the 3090 ran out of memory and crashed during the attempt. If you’re editing 8K video, you need a graphics card with 24GB of VRAM, and the GeForce RTX 3090 offers it for $1,000 less than the RTX Titan did.
Exceeding your GPU VRAM makes some applications (like Blender and DaVinci Resolve) fail their tasks outright. Other tools may let you fall back to general system memory if the scene you’re working on exceeds the capacity of your VRAM, but doing so imposes a huge penalty on rendering time.
Nvidia’s reviewer’s guide walked through just such a scenario with OctaneRender. The results above show how significantly performance improves if you can keep the entire workload directly on your graphics card’s VRAM rather than going “out-of-core” to system memory.
Nvidia GeForce RTX 3090 Founders Edition
The extravagant GeForce RTX 3090 is a poor value for pure gamers, but a stunning value for creators who can use its massive 24GB of memory. Nvidia's Founders Edition cooler is exceptional.
Pros
- Amazing value for professionals
- Fastest graphics card for gaming
- Massive 24GB memory capacity
- Exceptionally cool, quiet Founders Edition design
- NVLink support
- HDMI 2.1, AV1 decode makes some 8K gaming possible
- Nvidia CUDA and OptiX supercharge content creation
Cons
- Poor value for pure gamers
- Very large, potentially limiting multi-GPU deployment
- 8K gaming is very hit and miss
- Only 1 HDMI 2.1 port
- 12-pin power adapter is ugly