➀ NVIDIA benchmarks STALKER 2 on the GeForce RTX 4090, achieving over 120 FPS at 4K with DLSS 3; ➁ DLSS 3 significantly boosts performance on RTX 40 series GPUs; ➂ NVIDIA tests STALKER 2 at max settings with DLSS 3 enabled, achieving impressive frame rates across the RTX 40 series range.
Recent GeForce RTX 4090 news in the semiconductor industry
➀ Apple's upcoming M4 Ultra GPU is expected to outperform NVIDIA's GeForce RTX 4090 in OpenGL and Vulkan APIs; ➁ The M4 Ultra will feature a 32-core CPU and an 80-core GPU, doubling the GPU cores of the M4 Max; ➂ Benchmarks suggest the M4 Ultra could achieve a Geekbench 6 compute score of 333,000.
➀ NVIDIA is discontinuing the GeForce RTX 4080 and RTX 4090; ➁ GeForce RTX 4090 stock is dwindling, causing prices to rise; ➂ Production halt to make room for the upcoming RTX 50 Series.
1. Two visually impressive PC games, Pax Dei and Still Wakes The Deep, push the GeForce RTX 4090 to its limits; 2. Without DLSS 3, the RTX 4090 averages 51.7 FPS in Pax Dei at 4K max settings; 3. With DLSS 3 enabled, performance increases significantly, with the RTX 4090 reaching 145.6 FPS in Pax Dei and 138.3 FPS in Still Wakes The Deep; 4. This raises the question of whether upscaling and frame generation are a crutch for the RTX 4090, which should be able to run any properly optimized game at 4K 60 FPS natively; 5. The developers of Pax Dei and Still Wakes The Deep are tuning 'Max Settings' image fidelity with technologies like DLSS in mind, so the quality of upscaling technologies now factors into game performance expectations.
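The size of the DLSS 3 uplift claimed above is easy to sanity-check from the figures in this summary (a quick calculation in plain Python; the FPS numbers are the ones quoted, not new measurements):

```python
# FPS figures quoted in the summary above (Pax Dei, 4K max settings, RTX 4090)
native_fps = 51.7   # without DLSS 3
dlss3_fps = 145.6   # with DLSS 3 (upscaling + frame generation)

uplift = dlss3_fps / native_fps
print(f"DLSS 3 uplift: {uplift:.2f}x")  # → DLSS 3 uplift: 2.82x
```

Roughly a 2.8x gain, which is why the "crutch" question arises: the native result sits well below the 4K 60 FPS bar.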
➀ The GeForce RTX 4080 Super, priced at $999, may see its production end in October 2024; ➁ The RTX 4090, despite its high price, remains popular and is expected to be succeeded by the RTX 5090 with 32GB of VRAM; ➂ The RTX 5090 is rumored to have a 512-bit memory bus and is anticipated to be announced at CES 2025.
➀ A modder in China has installed a full desktop RTX 4090 GPU in a heavily customized laptop, offering desktop-level performance in a mobile form factor; ➁ The laptop uses a mini-ITX ASUS ROG Strix B650i motherboard and AMD's Ryzen 9 7950X3D desktop CPU, with 32GB of RAM; ➂ Despite its impressive performance, the laptop lacks a battery and is quite heavy, weighing 6.7 kg with its power supply.
AMD's high-end data center APU, the Instinct MI300A, has been benchmarked in Geekbench 6.3.0, and its CPU performance came in below that of Intel's mainstream Core i5-14600K across a variety of tests. In the single-core test the MI300A scored 1988 points versus 2806 for the Core i5-14600K; in the multi-core test it scored 11908 points versus 19368. As a top-tier APU designed for high-performance computing, the MI300A was expected to outperform a mainstream consumer CPU, so the gap is a worrying sign for AMD, which positions the chip against NVIDIA's data center accelerators. The MI300A pairs 24 Zen 4 CPU cores with 228 CDNA 3 compute units, yet even against high-end workstation parts like the Ryzen Threadripper PRO series its CPU scores fall significantly short. This may point to a design limitation of the MI300A, or it may simply be a limitation of Geekbench 6.3.0 itself, a consumer-oriented benchmark that is not representative of data center workloads.
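To put the gap in concrete terms, the percentage deficits implied by the scores above can be computed directly (a quick calculation from the quoted results, not an additional benchmark):

```python
# Geekbench 6.3.0 scores quoted above
scores = {
    "single-core": {"MI300A": 1988, "Core i5-14600K": 2806},
    "multi-core": {"MI300A": 11908, "Core i5-14600K": 19368},
}

for test, s in scores.items():
    # deficit relative to the Core i5-14600K's score
    gap = (s["Core i5-14600K"] - s["MI300A"]) / s["Core i5-14600K"] * 100
    print(f"{test}: MI300A trails by {gap:.1f}%")
# → single-core: MI300A trails by 29.2%
# → multi-core: MI300A trails by 38.5%
```

A roughly 29% single-core and 39% multi-core deficit against a mid-range desktop chip is what makes the result stand out.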
1. Brazilian modders at TecLab upgraded NVIDIA's GeForce RTX 4090 with faster VRAM and tweaked BIOS settings, resulting in a 13% performance boost in benchmarks. 2. The modifications suggest NVIDIA may be limiting the full potential of its GPUs for cost and usability reasons. 3. NVIDIA's upcoming RTX 50 series is rumored to feature even faster GDDR7 VRAM, promising significant performance improvements.