Does that mean it's also possible for Nvidia to just use defective GB203 dies and make a 12GB 192-bit RTX 5070 too? Or, by the same logic, they could've made a 192-bit 5060 on GB206 but chose not to?
Fewer CUs, less bandwidth, but still the same 220W TDP? Efficiency doesn't look too good to me. Also, how did they manage to cut the bus width from 256-bit to 192-bit while still using the same Navi 48 die?
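On the bus-width question, the usual answer is salvage: presumably a quarter of the memory controllers/PHYs on the Navi 48 die are fused off, which is the standard way a 192-bit SKU comes out of a 256-bit die. The bandwidth hit is easy to work out, since peak bandwidth is just bus width in bytes times the per-pin data rate. A rough sketch, assuming 20 Gbps GDDR6 on both configurations (treat the data rate as an assumption, not an official spec):

```python
# Rough bandwidth arithmetic for a bus-width cut (illustrative, not official specs).
# Assumes 20 Gbps GDDR6 on both configurations.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s = (bus width in bytes) * per-pin data rate."""
    return (bus_width_bits / 8) * data_rate_gbps

full_bus = peak_bandwidth_gbs(256, 20.0)   # 640 GB/s
cut_bus  = peak_bandwidth_gbs(192, 20.0)   # 480 GB/s
print(f"256-bit: {full_bus:.0f} GB/s, 192-bit: {cut_bus:.0f} GB/s "
      f"({cut_bus / full_bus:.0%} of the full bus)")
```

So the cut-down card gives up about a quarter of its bandwidth while keeping the same die, which is why the unchanged 220W TDP looks questionable from an efficiency standpoint.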
So, it doesn’t look as bad as I thought. Why are HUB’s performance figures lower than other outlets’, and why is the power consumption they measure so much higher than average (their 9070 review even shows worse efficiency than the 5070)?
A 60-class card needs a 192-bit bus and 12GB of RAM by today’s standards. It feels like a total ripoff to leave that much of a performance gap, similar to how Nvidia “treats us” gamers.
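For what it's worth, the 12GB figure follows directly from the bus width, since each 32-bit memory channel is typically paired with one 2GB GDDR6 module. A quick sketch of that arithmetic (the per-module capacity is the common configuration today, not a spec for any particular card):

```python
# Why a 192-bit bus naturally lands on 12GB (illustrative arithmetic).
# Assumes one 2GB (16Gb) GDDR6 module per 32-bit channel, the common layout today.

def vram_capacity_gb(bus_width_bits: int, gb_per_module: int = 2) -> int:
    channels = bus_width_bits // 32          # one module per 32-bit channel
    return channels * gb_per_module

print(vram_capacity_gb(128))  # 4 channels ->  8 GB
print(vram_capacity_gb(192))  # 6 channels -> 12 GB
print(vram_capacity_gb(256))  # 8 channels -> 16 GB
```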
This doesn’t make sense. If the 9060 XT is 128-bit and performs similarly to the 7700 XT, then which card replaces the 7800 XT’s performance tier? The gap between the 9070 and the 9060 XT is way too large.
I meant the non-XT should stay where it is, while the XT version should have a much higher CU count but be downclocked instead. They are obviously clocking the XT to the moon, which tanks efficiency completely.
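The wide-and-slow argument comes down to how dynamic power scales: roughly P ∝ f·V², and voltage has to rise with frequency, so backing off the clock saves power much faster than it costs performance. A toy sketch with made-up voltage/frequency numbers (purely illustrative, not measured figures for any RDNA4 part):

```python
# Toy model of "more CUs, lower clock" vs "fewer CUs, higher clock" (illustrative only).
# Dynamic power ~ units * frequency * voltage^2; throughput ~ units * frequency.
# The unit counts and voltage/frequency pairs below are invented, not real silicon data.

def config(units: int, freq_ghz: float, volts: float):
    throughput = units * freq_ghz               # arbitrary perf units
    power = units * freq_ghz * volts ** 2       # arbitrary power units
    return throughput, power

narrow_fast = config(units=32, freq_ghz=3.1, volts=1.15)  # clocked "to the moon"
wide_slow   = config(units=40, freq_ghz=2.5, volts=0.95)  # more CUs, relaxed clock

for name, (perf, power) in [("narrow/fast", narrow_fast), ("wide/slow", wide_slow)]:
    print(f"{name}: perf={perf:.0f}, power={power:.1f}, perf/W={perf / power:.2f}")
```

Under these made-up numbers the two configurations land on almost the same throughput, but the wide, relaxed one comes out well ahead on perf/W, which is the whole point of not chasing clocks.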
It seems that the 9070 is being overlooked. It should've been the real star of the RDNA4 lineup. I assume AMD’s original plan was to position the 9070 to match the performance of the RX 7900 XT, but they somehow managed to push the clocks higher, bringing it closer to the RX 7900 XTX. As a...
https://videocardz.com/newz/amd-ryzen-9-9950x3d-tested-two-weeks-ahead-of-launch-similar-cinebench-performance-to-9950x
V-cache die doesn't seem to be clocked much lower this time?
I meant single-core performance and efficiency, considering that it’s using full-fat Zen 5 cores. (*edit: Dave2D got the multi-core number for the U9 285H wrong, so I was misled)
Can you see the split wattages for the CPU and GPU? Reviews looked good because they were comparing Halo products to older Intel and Ryzen parts, but I think ARL-H might stand a chance. Considering it's using the full-fat Zen 5 cores, I found it a bit underwhelming. Single-core efficiency isn’t...
Review is live:
https://www.notebookcheck.net/AMD-Ryzen-AI-Max-395-Analysis-Strix-Halo-to-rival-Apple-M4-Pro-Max-with-16-Zen-5-cores-and-iGPU-on-par-with-RTX-4070-Laptop.963274.0.html
Imagine AMD dominating Nvidia for 3 generations in a row (that's all it took for AMD to overtake Intel: 3 generations of X3D) and Nvidia having to lower its prices to compete. I'm daydreaming lol.