adroc_thurston
> A CPU that used a 2 year old process at the time of its launch
That was still bleeding edge.
> Idk how we're still debating the nodes
> Venice (MS server) uses N2P
> Venice-Dense (Cloud server) uses N2P
> Olympic Ridge (desktop) uses N2P
> Gator Range (high-end laptop) uses N2P
> Medusa Point 1 (premium laptop) uses both N2P+N3P for top SKUs and N3P only for lower-end SKUs
> All of this information has been given by AMD to OEMs and AIBs.
Where did this leak come from?
> More than 1/2 of the video is Tom's tariff retardation
The other half is "and now an ad from a sponsor..."
> The other half is "and now an ad from a sponsor..."
His secret sponsor is AMD (cause his AMD leaks are pretty accurate but his Intel leaks are hit or miss).
> That was still bleeding edge.
> Bleeding edge means being the first to use the node
To be specific, it's more like using it when the yields are still somewhat in the toilet, but the profits gained from the end product are so good (a la iPhones) that it doesn't really matter.
> Where did this leak come from?
Surely, some OEM leaked it out.
Personally, I've never heard of it.
> That said, which one is more profitable, the server or the desktop? I think it's a server
Not even a question - especially at the 2S SKU range.
> To be specific, it's more like using it when the yields are still somewhat in the toilet, but the profits gained from the end product are so good (a la iPhones) that it doesn't really matter.
TSMC doesn't usually fail... but there is also the example of N3B. With a new process, it may well be better to wait a while before using it; as time goes on, production increases.
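To put rough numbers on that yield-versus-margin argument, here's a minimal back-of-envelope sketch in Python. The wafer cost, die count, and yields below are made-up placeholders for illustration, not figures from this thread or from TSMC:

```
# Back-of-envelope: why poor early yields can be tolerable when the end
# product sells high. All numbers are hypothetical, for illustration only.

def cost_per_good_die(wafer_cost, dies_per_wafer, yield_rate):
    """Wafer cost spread over the dies that actually work."""
    return wafer_cost / (dies_per_wafer * yield_rate)

wafer_cost = 30_000      # assumed leading-edge wafer price (USD)
dies_per_wafer = 600     # assumed candidate dies on a 300 mm wafer
product_price = 1_000    # assumed selling price of the finished device (USD)

for y in (0.9, 0.7, 0.5):
    die_cost = cost_per_good_die(wafer_cost, dies_per_wafer, y)
    print(f"yield {y:.0%}: ~${die_cost:.0f} per good die, "
          f"silicon is ~{100 * die_cost / product_price:.0f}% of a ${product_price} product")
```

Even at 50% yield the good die costs roughly twice what it does at near-mature yields, but if the device it goes into sells for $1,000, that gap is only a few percentage points of the selling price - which is the iPhone argument above.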
> If this indeed turns out to be true, it'd represent the most aggressive move ever by AMD, a serious attempt at a "kill shot" against Intel.
As for chiplets: when AMD first introduced them, the execution was less polished than it is now. It's an accumulation of experience and incremental improvements, and I think the same goes for Intel and AMD.
If that is the case, Zen 6 is shaping up to be a much more impressive jump performance-wise than Zen 5. 2nm + a 12-core CCD + a new GPU/IOD + a potential clock boost should be far more than the new uarch on the same node at the same frequency that Zen 5 was. Now, I'm not expecting miracles in latency and memory-speed improvements from the new I/O, but I do expect at least SOME improvement. Intel's fancy-looking "tiled" I/O seemed to suffer from the same issues as Ryzen's, leading me to believe that the trade-offs are just inherent to chiplet designs and that, short of a true breakthrough, improvements will be incremental.
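Here is a quick Python sketch of how those factors would compound for an all-core workload. Only the 8-to-12 cores-per-CCD change is from the rumor being discussed; the clock and IPC uplifts are placeholder guesses, not leaked numbers:

```
# Compounding hypothetical Zen 6 gains for an all-core workload.
# Only the 8 -> 12 cores-per-CCD change comes from the rumor; the clock and
# IPC figures are placeholder guesses for illustration.

core_scaling = 12 / 8   # rumored CCD growth, assuming near-linear MT scaling
clock_uplift = 1.05     # assumed +5% frequency from the node/clock boost
ipc_uplift = 1.10       # assumed +10% IPC from the new uarch

mt_gain = core_scaling * clock_uplift * ipc_uplift
st_gain = clock_uplift * ipc_uplift

print(f"hypothetical multithreaded uplift: {mt_gain:.2f}x")   # ~1.73x
print(f"hypothetical single-threaded uplift: {st_gain:.2f}x")  # ~1.16x
```

Swap in your own guesses; the point is just that core count, clocks, and IPC multiply, which is why a node-plus-CCD-plus-IOD generation can look like a much bigger jump than a uarch-only one like Zen 5.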
> If this indeed turns out to be true, it'd represent the most aggressive move ever by AMD, a serious attempt at a "kill shot" against Intel.
> If you think about it, client CPU performance doesn't matter anymore. Only AI (and V-Cache for DIY).
If your server/AI earnings increase...
The pricing will be crazy, I am guessing.
> One possibility is that they sell the 48-thread part for $1000 while the 32-thread part more or less maintains its current price.
And how many people would buy a $1k CPU?
> That said, which one is more profitable, the server or the desktop? I think it's a server
Most certainly!
> And how many people would buy a $1k CPU?
If the server/AI business stays on track, the consumer market may not be so important...
> Most certainly!
I think AMD is all in on a server-first strategy. But you can't ignore their increased margins in high-end desktop, even though the volumes are much lower than corporate laptops.
> And how many people would buy a $1k CPU?
Almost everybody who will buy a 6090, so that's a few million at least.
> And how many people would buy a $1k CPU?
A lot of people buy the highest end because of the prestige of owning a flagship CPU, and also due to FOMO because of scalpers.
No, at least in the area where I live the 9800X3D trades at a fairly high price.
The Ryzen 7 X3D line's original concept was to offer high-end-CPU performance at a mid-range price - that was the 5800X3D...
It's a cash cow for AMD, so I can't blame them, but I think the current 9800X3D has moved away from that concept.
Then again, packaging capacity for stacking V-Cache is limited, so to some extent it can't be helped...
This is just about AMD using N2...
AMD is not the first to order N2; some manufacturers may have ordered it at the same time as AMD, or even before AMD did.
This is just an announcement that the design taped out and was produced (probably as engineering samples).
Chances of us encountering alien life in the next 5 years are higher than that.
> Almost everybody who will buy a 6090, so that's a few million at least.
And given that Unreal Engine 5 will be everywhere soon, a fast CPU will be essential to enjoy gaming - it doesn't need to be the $1k version, however.
> A lot of games are already on first implementations of the Unreal 5 engine, but it is the newer ones and upcoming versions that will, apparently, reshuffle the CPU-GPU interactions and increase parallelism.
They've only improved it just about now, in 5.6 - most big game devs use much older versions, since it takes 5-7 years to make a game these days and upgrading isn't trivial, so they will most certainly ship on older versions. And frankly, 5.6 doesn't solve the problem completely; they only hope to achieve that in UE 6 - so that's for games a decade from now.