Compute requirements will go down, perhaps very quickly, but only when controlling for output requirements (amount and robustness). Output requirements, on the other hand, are skyrocketing, and there's probably no ceiling for them.
Same. Right up until AMD rug-pulled TR4 and abandoned any pretense of providing a reliable cadence. Spending that much money and then being left in the dark about the upgrade path really sucks.
I wouldn't bet on it, but I also wouldn't be surprised to see a future socket go triple channel again.
1) Core counts are going up
2) If Zen6 leaks are any indication, even desktop is moving towards non-crap tier integrated graphics.
3) The emerging AI prosumer market means +50% capacity...
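For a rough sense of what an extra channel buys, here's a back-of-the-envelope sketch; the DDR5-6400 speed and 64-bit channel width are assumptions for illustration, not anything confirmed for a future socket:

```python
# Back-of-the-envelope peak memory bandwidth: channels * transfer rate * bytes per transfer.
# DDR5-6400 with 64-bit channels is an assumed example configuration.
def peak_bandwidth_gbs(channels: int, mt_per_s: int = 6400, bits_per_channel: int = 64) -> float:
    """Theoretical peak bandwidth in GB/s for a given channel count."""
    return channels * mt_per_s * (bits_per_channel / 8) / 1000

for ch in (2, 3, 4):
    print(f"{ch} channels: ~{peak_bandwidth_gbs(ch):.0f} GB/s peak")
# ~102 GB/s for dual channel, ~154 GB/s for triple, ~205 GB/s for quad --
# spread that across rising core counts plus a non-trivial iGPU and the per-core share thins out fast.
```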
It is of course possible to have tasks that are massively parallel which are *not* that bandwidth sensitive.
I wonder what the LLC looks like for the ostensible 52-core monster? There's more than one way to skin a cat, and something like v-cache, or Crystal Well, or just something large and...
That sounds extremely dysfunctional. The obviously suboptimal (in terms of area and performance) dual-CCX config is what I'd expect if AMD made the decision to sacrifice quality on the altar of cadence, not something I'd expect given the opposite. Or do you mean the Zen5 core rather than the...
As nice as 8 CUs sounds, AMD isn't just giving us that on desktop out of the goodness of their hearts. Aside from the AI theory, it could make sense if the same IOD is being reused for something else where some GPU grunt makes more sense.
Nah. You always aim for the crown and never stop, no matter what hand you have to play or what kind of architectural deficit you're suffering.
If Nvidia had cancelled GF100, that generation would have resulted in something like a straight duopoly.
If Nvidia had abandoned the high end with...
The right answer.
Yet unless AMD is able to satisfy the market with a sufficiently large quantity of units at (or at the very least near) MSRP over the coming months, the optics win won't go as far as it could have, and if/when AMD finds itself in a similar situation in the future...
It feels like there's tension here because I can see us getting a G7 version or a 32GB version in the future, but I have a hard time seeing both. It would create a confused product stack, while the main thing G7 would do for a theoretical 32GB "ghetto prosumer" SKU is balloon cost and/or shrink...
None right now, but there are obviously hypotheticals. There's a scenario where NPUs make sense to free up GPU resources when games start to incorporate local AI models while *also* wanting to look pretty. Even if the GPU has enough compute to spare, memory capacity is tricky.
Now, whether on...
GB multi-core scores suck because Primate Labs decided to make them suck.
Phoronix scores are awesome because when you take an enormous number of samples, the central limit theorem kicks in and makes them awesome.
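Snark aside, the underlying statistical point is real: averaging many independent sub-scores squeezes the variance of the composite. A toy simulation of that effect (the distribution is made up purely for illustration and has nothing to do with how Phoronix actually aggregates results):

```python
import random
import statistics

random.seed(0)

def composite_score(n_subtests: int) -> float:
    """Mean of n noisy sub-test scores drawn from a skewed, ugly distribution."""
    samples = [random.lognormvariate(0, 0.5) for _ in range(n_subtests)]
    return statistics.mean(samples)

# The CLT argument: as the number of sub-tests grows, the composite's spread shrinks
# (stdev of the mean scales like 1/sqrt(n)) and its distribution looks increasingly normal.
for n in (1, 10, 100, 400):
    runs = [composite_score(n) for _ in range(2000)]
    print(f"{n:4d} sub-tests: composite stdev ~ {statistics.pstdev(runs):.3f}")
```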