Still, it doesn't change anything. If AMD decides to launch a Navi 12 APU, it would be a product on top of, or a replacement for, Picasso, on both desktop and mobile.
Picasso is a Zen 1 APU, not a Zen 2 APU. I'm talking about a Zen 2 APU.
Picasso is just Bristol Ridge all over again; we just need to wait it out.
Hi, I've been lurking here for a while now and finally decided to make an account.
I've been thinking, and I agree that the most logical solution for AMD is to have a smaller IO die, and just use binned chiplets. Whether the smaller IO Die is a quartered EPYC die or not is up in the air, but it makes absolutely no sense for AMD to have a different 7nm die for Ryzen. It'd simply be too expensive, and throw away their effort to make EPYC modular.
With that in mind, and assuming the rumors are "mostly correct", I can see how the lineup would work:
Ryzen 3 with 6C/12T is 1 chiplet + an IO die
Ryzen 3G with 6C/12T is 1 cpu chiplet + 1 gpu chiplet + an IO die
Ryzen 5 and 5G is the above but with a fully functioning chiplet of 8C/16T
Ryzen 7 and 9 use two chiplets with 12C/24T and 16C/32T respectively and an IO die.
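To make the speculated mapping concrete, the rumored lineup above can be written out as a small lookup table; this is a sketch of the rumor only, and none of these SKUs or configurations are confirmed products:

```python
# Hypothetical Ryzen 3000 lineup as speculated in this thread: SKU -> config.
lineup = {
    "Ryzen 3":  {"cores": 6,  "threads": 12, "dies": ["CPU chiplet", "IO die"]},
    "Ryzen 3G": {"cores": 6,  "threads": 12, "dies": ["CPU chiplet", "GPU chiplet", "IO die"]},
    "Ryzen 5":  {"cores": 8,  "threads": 16, "dies": ["CPU chiplet", "IO die"]},
    "Ryzen 5G": {"cores": 8,  "threads": 16, "dies": ["CPU chiplet", "GPU chiplet", "IO die"]},
    "Ryzen 7":  {"cores": 12, "threads": 24, "dies": ["CPU chiplet", "CPU chiplet", "IO die"]},
    "Ryzen 9":  {"cores": 16, "threads": 32, "dies": ["CPU chiplet", "CPU chiplet", "IO die"]},
}

for sku, cfg in lineup.items():
    print(f'{sku}: {cfg["cores"]}C/{cfg["threads"]}T = {" + ".join(cfg["dies"])}')
```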
This also makes the most sense because, from Zen, we know that with Infinity Fabric the cores in each CCX correspond to each other (core 0 in CCX 0 is connected to core 0 in CCX 1, CCX 2, etc.), which is why cores have been disabled in pairs on Ryzen, in quads on Threadripper, and in octets on EPYC.
However, this gives rise to a few questions that could invalidate these rumors:
Are 7nm yields really high enough that AMD can just ignore chiplets where 4 cores are disabled? If not, what are 4 core chiplets being used for (maybe consoles)?
Are chiplets and IO dies cheap enough to allow the pricing structure seen in the rumors?
I'd agree with most of you that the power consumption and frequencies in the rumors are mostly incorrect; however, I think the improvements with TSMC 7nm are good enough to allow 16C on a package as small as Ryzen's.
Does dual-channel DDR4 memory have enough bandwidth to support 16-core CPUs? I'm thinking that even 12 cores will be pushing against the limits of dual-channel DDR4.
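As a rough sanity check on that worry, here is the per-core bandwidth arithmetic; DDR4-3200 is my own assumption here, since the rumors don't specify a memory speed:

```python
# Back-of-envelope bandwidth per core for dual-channel DDR4.
# Assumes DDR4-3200: 3200 MT/s * 8 bytes per 64-bit channel = 25.6 GB/s per channel.
channel_bw = 3200e6 * 8 / 1e9      # GB/s per channel
total_bw = 2 * channel_bw          # dual channel: 51.2 GB/s

for cores in (8, 12, 16):
    print(f"{cores} cores: {total_bw / cores:.1f} GB/s per core")
```

At 16 cores that is about 3.2 GB/s per core, roughly half of what an 8-core Ryzen gets today at the same memory speed.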
Intel has been behind AMD before. Athlon 1GHz ring a bell? Or how about the Athlon 64? Tech goes in cycles. I believe Intel made the right calls in sacking the CEO and recruiting Jim Keller; they will be fine, even if they have to endure some short-term pain.
Also, I believe using nm figures to compare progress is inaccurate and an oversimplification, because Intel 7nm != TSMC 7nm in terms of density. I believe Intel's 10nm node (at least the initial version) was roughly on par with TSMC 7nm density wise. I'll hazard a guess that Intel 7nm will have higher transistor density than TSMC 7nm and may rival TSMC 5nm in this regard. Regardless, I think 10nm is almost in write off territory for Intel and may be one of the shortest lived nodes ever. They should just cut their losses and move to 7nm ASAP.
I don't think so. They may have gotten close, but Intel was always the leader...
This time, Intel had to play catch-up REALLY hard, as they were just cruising with quad cores for a very LONG time. AMD broadsided them with 8C/16T parts; they basically panicked and had to cobble something together from their server line, and it shows that they had no response for Threadripper. Now that AMD has come out with the 32-core TR, they are back to picking up the pieces. Intel hasn't been in this situation before, and they keep missing their targets and deadlines.
When you're in denial about AMD ever being ahead of Intel, I'm not sure how seriously I can take you... but I agree that Intel is in for some short to medium term pain. That is obvious. Saying they will never come back is a massive call though.
Zen2 APUs won't be out until 2020. We won't know what they look like until then. I think it's important to remember that AMD is still producing their baseline APUs (Raven Ridge, Picasso, etc.) to maintain the SVM model supported fully by Carrizo/Bristol Ridge and partially by Kaveri. Moving the GPU to an IF-connected chiplet on interposer may break the SVM model with too-high latency. Hard to say for sure, though. We don't know what the inter-chiplet latencies will be like for any Zen2 product, and if the memory controller is mostly equidistant from the CPU and GPU elements of an IF-connected APU (rather than a mono-die APU), then it may not break the SVM model at all.
I think a lot of enthusiasts discount the value of branding. If AMD released something with, say, twice the performance of Intel's best (no, that won't happen; it's just for this discussion), Intel could continue pumping out 14nm++ Skylake revisions for years before it would start to take a toll. Enthusiasts would move fairly quickly, because FPS... but OEMs take far longer to change direction.
AMD is doing all the right things, and making a good product. I have one, and it's very nice. But it's a fight of many years. Not a quarter or two here and there.
Intel will never be back? That's a new one. I hope they come back and give AMD a fight, as it's great to have some competition. Even though I too now own AMD; actually, I now own two Zen CPUs, which is a first. Oh well, I was going to wait for Picasso, even though I may not get Linux to run on it.
Yup, without competition AMD would eventually stagnate just as Intel did. AMD didn't release 8-core mainstream chips out of the goodness of their hearts, they did it because that's what they needed to do in order to survive as a company because another Bulldozer wasn't going to win them any customers. Now, Intel is going to need to release an even more impressive CPU in order to win back some of the folks who've gone over to AMD. Hopefully it doesn't take them as long as it took AMD though.
As others have mentioned before, EPYC has at most 4 cores per memory channel, and Rome will have 8 cores per channel. AMD must have made allowance for this from the early design stages. The 32-core TR is the unplanned, opportunistic child; definitely a lesson in the power of glue and chiplets/small dies to address new markets quickly and cheaply.
More and better cache in Zen2. Higher allowable memory clocks. They claim better latency, which should improve single thread and gaming performance, but we don't know the details.
I think one of the most important take-aways from the leaks is that APU != just Picasso or Renoir.
The SoC version of Zen 2 with an IGP and IMC will launch in 2020, but the leak points to APU versions for the desktop in 19Q3. I think there will indeed be chiplet-based APUs arriving around that time. For mobile, power is everything, so the chiplet approach doesn't make any sense for Renoir.
The time-frame between 'EPYC derived' CPU only and Renoir also makes sense.
I don't buy this. Maybe back in the days when everyone bought desktops. Now it's not consumers who are making the decision for most of the market; it's OEMs for laptops and datacenters buying most of the CPUs. Those guys are absolutely going to compare specs, especially around heat, price, and power, and they will go with whoever is best there. A datacenter will not show much loyalty to Intel or AMD when your three-year CPU retirement cycle is up and one brand offers better perf/watt, absolute perf, etc.
The key, or most important, part is the price/performance ratio. If Intel doesn't go down a similar chiplet route to AMD's, Intel will end up like the once very powerful Detroit automotive industry.
A Zen 2 3000-series 6C/12T CPU as the cheapest model at, say, $130 is a ridiculously good price. Intel can do absolutely nothing against that price/performance ratio, not on the old 14nm process (with its poor supply) and especially not on a supposedly cheaper 7nm manufacturing process. "Hey Intel, please lower the price of the 6C/6T i5-8500 (or maybe i5-9500) to, say, $150." Yeah, right; it's possible, but only in the Twilight Zone.
A smaller-nm manufacturing process plus monolithic chips means very expensive chips. Intel's old 14nm is already significantly more expensive than the 14nm/12nm used on Ryzen/Ryzen+ CPUs.
And Intel's 7nm process with monolithic chips? The cost of that will be 50% higher, with painful CPU yields.
Do we remember Intel's 28-core CPU? What does that beast cost? The reality is simple: that 28-core CPU is barely worth mentioning; it was born dead.
Even with OEMs making most of the decisions, they're going to care about branding. Maybe not for their data center lineup, but they're going to be using whatever makes their products sell more.
With Intel's brand recognition, I'm sure most OEMs would still rather use Intel chips in the majority of their lineup.
I'm sure that's also why AMD has been showing up in the cheapest laptops and computers. People at that price point don't care about brand, all they want is something cheap that will work.
I don't think Intel's brand is nearly as strong as it once was; I don't think it moves units like it used to. I think it's a mix of things. A big one is that average consumers just don't care much about the markets Intel is in. Or, as in laptops, where almost all of the premium products are Intel, they've practically tuned out the Intel brand in favor of the chassis: MacBook, Surface, etc.
We've seen plenty of Intel products flop. Intel couldn't pay companies to use their chips in phones and non-Windows tablets (they literally tried). RealSense? Didn't really take off. The entire endeavor that brought us RealSense and the like (the wearables push; I forget what else they were doing in that movement) flopped hard. Intel thought they could get people into it based on their brand, but the tech often wasn't that great, or was full-on developmental, and most people didn't care. And remember when Intel thought they could get into fashion? That market is hyper-focused on brand, and Intel got less than nowhere. Intel couldn't stop the PC market woes a few years back either.
For anyone other than enthusiasts (and even for many of them), they don't really care who makes the parts that go into it, they're more focused on the overall product, and the companies making those things have started to notice. They push their own branding more. They might mention the components and who makes them, but they see how Apple focuses on its own brand.
While AMD's brand isn't where Intel's or Nvidia's is, I have noticed a change. People are aware of it, and they seem to have an indifferent or slightly positive view of it. That might not seem like much, but compared to what it was, that's pretty big. Back when AMD was competitive and even better in CPUs (up to the Athlon 64 X2), most people either weren't aware of them or had a negative view of them. ATi fared a bit better, but around the time AMD bought them that started to change, so that after 2010, even when AMD had competitive products, they'd lose out to Nvidia. Polaris did surprisingly well, I thought (Nvidia was competitive in that market with the 1060 and 1050, had the flagship 1070 and 1080, then the 1080 Ti, but Polaris seems to have stayed pretty strong even though it didn't change much in perf/$ and got worse in perf/W over the years). And even though I think Vega was one of the least competitive GPU lineups above the mainstream market in years, it did well. Yes, a lot of that was miners, but it still seems to have helped them (I see people who don't keep up with tech talk about Vega, looking just at its performance being in line with the 1070/1080 and then the price, possibly mentioning FreeSync). I don't know if it's the consoles, FreeSync, or what, along with Ryzen doing well, but average people have started to see AMD differently. AMD should look to capitalize on this, and I hope they can.
I agree with you. But I'd also like to make a quick distinction: the Intel products that have flopped were all attempts to enter a different market. Intel's bread and butter has been high-power CPUs and memory. They made the mistake of entering the mobile market late, and failed as a result. I think they would have had a fair chance with mobile CPUs if they had invested more earlier. I also don't believe their failure in wearables was unique or due to their brand; wearables in general just flopped hard, because many companies couldn't create a product useful enough for people to buy. Intel's brand has only mattered when it comes to CPUs (and maybe memory), because that's what its identity is. If Old Navy suddenly started designing a car and tried to sell it, I'm sure the name "Old Navy" wouldn't help them either.
Intel's brand is definitely weaker now, and I think AMD recognizes that and is trying to take advantage of it.
Adored now says that Ryzen 3000 will not have an IO die, new video coming from him today. Guess that's what the Hardware Unboxed guy meant when he said that it would be nothing like Rome.
As far as I understood, AdoredTV still believes that there will be models from 6C up to 16C, so let's go with that. His source mentions "chiplets" (all on 7nm), so it doesn't seem to be just a monolithic design with 16C. That would only leave a few options, and I have made some illustrations to present the most obvious one that came to mind. Let's first start by showing how the Summit/Pinnacle Ridge and Raven Ridge AM4 packages differ:
I'm using Raven Ridge as a base, but that doesn't really matter, as the new Ryzen 3000 packages will have new SMD layouts anyway. We all know that having the memory controllers on die will lower memory latencies at least slightly. If Ryzen 3000 is going to be all 7nm, then there is very little point in making a pure I/O die on 7nm alongside two chiplets. Therefore an MCM design with just two dies in total would make the most sense.
I know some of you won't like how the 12C and 16C models would look with the following design using a separate "compute" chiplet (somewhat similar to the TR 2990WX, but likely with lower latencies). If any of you have a better suggestion, then please share it with us.
If we go by what AdoredTV said, then that's basically what it could look like. I think he added those dummy dies to the equation because he was speculating on a two-chiplets-plus-one-I/O-die design for the AM4 desktop. Sure, you could have a 4C + 4C model (8 cores in total), but for mainstream, in my opinion, 8C and a missing CCD (or a dummy die) would make much more sense. So only the 12C and 16C models would have that extra chiplet. Sure, that would not be a UMA design (12C & 16C), but for the 6C and 8C models (if they just use that main die) it would be just fine. But I'm sure there are some other possibilities as well.
So what is claimed here is that:
There could be a monolithic 8-core 7nm Ryzen CPU die (a full SoC) that has one IF-link (IFOP 2.0) that can connect to (at least) one other chiplet. If the chiplets have 32MB of L3, then the main die should likely also have the same amount of L3 cache. It's hard to calculate the exact die size, but I've given my best effort for now.
CCDs are the exact same chiplets used in AMD's Rome server CPUs and should have the full 32MB of L3 enabled to alleviate some of the latency penalty from the extra IF-link.
Sure, there's still a strong argument regarding yields and binning for using the same chiplet for all SKUs, but maybe that's not the case this time after all. The die sizes would still be smaller than the 14nm ones, but the highest-end AM4 models may not be exactly what some of you wished for. I'm not claiming this is the only option, but given what AdoredTV said, well... you do the math.
AdoredTV earlier calculated some yields for the Rome chiplet as I already linked in this post (no reason to repeat it here). I will just add some wafer maps for both the Rome chiplet and this new speculative Ryzen CPU die (12mm x 10mm) using the same parameters as previously.
There are 817 dies (#617 good, #200 defective) per wafer for the Rome chiplet and 489 dies (#308 good, #181 defective) for the speculative Ryzen CPU die. AdoredTV also estimated (like itsmydamnation here) that each 7nm wafer would cost $12000.
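Working backwards from those die counts with a simple Poisson yield model, both wafer maps imply roughly the same defect density, which is consistent with them having been generated with the same parameters. This is my own back-of-envelope check, not a figure from the video:

```python
import math

# Implied defect density from the quoted wafer maps, using a simple
# Poisson yield model: yield = exp(-D * A), so D = -ln(yield) / A.
# Die areas: Rome chiplet ~72 mm^2; speculative Ryzen die 12 x 10 = 120 mm^2.
for name, good, total, area in [("Rome chiplet", 617, 817, 72.0),
                                ("Ryzen CPU die", 308, 489, 120.0)]:
    y = good / total                 # fraction of defect-free dies
    d = -math.log(y) / area          # defects per mm^2
    print(f"{name}: yield {y:.1%}, implied D ~{d * 100:.2f}/cm^2")
```

Both come out at roughly 0.39 defects/cm², in the ballpark often quoted for early 7nm.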
(Edited:) Let's assume a scenario where 98% of all possible dies would be usable using die salvaging. Then there would be 0.98 x #817 = #801 usable Rome dies and 0.98 x #489 = #479 usable Ryzen CPU dies.
Then a single Rome chiplet would cost $12000 / 801 = ~$15 and a speculative Ryzen CPU die $12000 / 479 = ~$25 to manufacture. Other costs, including amortization of design costs, packaging, shipping, etc., would have to be added on top of those.
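Those per-die figures can be reproduced directly from the numbers above; the $12000 wafer cost and 98% salvage rate are this thread's estimates, not confirmed values:

```python
# Reproducing the per-die cost estimates above. The $12000/wafer price and
# the 98% salvage fraction are assumptions from this thread, not confirmed.
wafer_cost = 12000

def cost_per_usable_die(dies_per_wafer, salvage=0.98):
    usable = round(salvage * dies_per_wafer)
    return usable, wafer_cost / usable

for name, dies in [("Rome chiplet", 817), ("Ryzen CPU die", 489)]:
    usable, cost = cost_per_usable_die(dies)
    print(f"{name}: {usable} usable dies -> ~${cost:.0f} each")
```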
Based on the previous speculation, do you think that AMD could sell a Ryzen 3 3300 for $100 using a salvaged 6C CPU die (120mm²) that would only be used for the mainstream desktop? Sure, even for a separate Ryzen CPU die, there are higher-priced SKUs that would raise the ASP for each wafer.
Regarding design costs, porting all the IO (DDR4, PCIe 4.0, etc.) to 7nm (or 7nm+) will likely need to happen at some point for Renoir, which might be a monolithic mobile APU on 7nm. Still, nothing requires AMD to do that now, and they could get away with keeping most IO on a separate 14nm IO chip. But that's not what one of AdoredTV's sources is saying.
This message is getting really long, so I will add some Navi GPU-chiplet speculation in another message. Please point out any mistakes or other points of view on the subject. This is all just speculation based on what AdoredTV said.
Addition: I'm in no way trying to imply how good (or bad) this design is; I'm just trying to understand what AdoredTV said. It should also be mentioned that this is in no way confirmed information, and we'll have to wait until CES 2019 to know more. Ryzen 3000 could still have a small IO die if it has 3 dies in total. AdoredTV's sources didn't seem to be sure about some things, and it's all speculation for now. As has been brought up in this thread, AdoredTV has presumably gotten many things right.
Calculating yield shows that it's profitable to manufacture, but what about the upfront costs of developing the masks and everything?
This article puts the cost of developing a die at $300 million. Let's say AMD saves $150 million thanks to commonality between the Ryzen chip and Rome. Would it be worth it for AMD to spend that much on a solely consumer chip?
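A rough break-even check on that $150M figure; the per-chip margin here is a made-up illustrative number, not anything from the article:

```python
# Hypothetical amortization of a $150M incremental design cost.
design_cost = 150e6
margin_per_chip = 30.0          # assumed $ gross margin per consumer chip (made up)

break_even_units = design_cost / margin_per_chip
print(f"Break-even at ${margin_per_chip:.0f}/chip margin: {break_even_units / 1e6:.0f}M units")
```

At a $30 margin, AMD would need to ship around 5 million units just to cover the extra die; given mainstream desktop volumes, that isn't obviously out of reach, but it isn't trivial either.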
The consumer chips having a worse yield than the server chiplets is a huge red flag for me; there's no way they decide to do that (including the cost of having an additional separate die).
If there's indeed no IOC in Ryzen, I'd expect the chiplet to contain enough IO that it can be repurposed for AM4 platform interfacing.
I made some comparisons for Summit/Pinnacle Ridge and Raven Ridge using the Die Per Wafer Calculator. I used otherwise the same parameters as before, but for 14nm I assumed a defect density of 0.08 #/mm² and a low $3000 cost per wafer, as AdoredTV did here. This is nothing new, and if you have the time, I suggest watching some of the older AdoredTV videos where he has more wafer yield calculations. Anyway, here are the wafer maps:
(Edit:) Both dies are very similar in size, and if we assume a 98% yield with 264 dies per wafer, then 0.98 x #264 = ~#259 of them would be usable using die salvaging.
Then, with a $3000 wafer cost, the cost per die would be $3000 / 259 = ~$11.6. If we assume a wafer cost of $6000, the cost per die would be double that, i.e. ~$23.2. It should be pretty clear that a 120mm² die on 7nm would be more expensive to manufacture than a 210mm² die on 14nm. By how much depends on the actual wafer costs for both 7nm and 14nm.
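Putting the two estimates side by side, using the rough wafer prices and die counts from this thread:

```python
# Per-die cost comparison under the assumptions used in this thread
# (wafer prices and dies-per-wafer counts are estimates, not confirmed).
def cost_per_die(wafer_cost, dies_per_wafer, salvage=0.98):
    """Cost of one usable die, assuming a fixed salvage fraction."""
    return wafer_cost / round(salvage * dies_per_wafer)

cost_14nm = cost_per_die(3000, 264)     # ~210 mm^2 Summit Ridge class die
cost_7nm = cost_per_die(12000, 489)     # ~120 mm^2 speculative 7nm die
print(f"14nm die: ~${cost_14nm:.1f}  7nm die: ~${cost_7nm:.1f}")
```

Under these assumptions the smaller 7nm die costs roughly twice as much as the larger 14nm one, which is the crux of the argument for keeping the IO on 14nm.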
It should still be noted that porting the DDR4 memory controllers, PCIe 4.0, and all the other IO and uncore stuff to 7nm is not an easy or cheap task. Therefore the original IO die concept made more sense from a financial and manufacturing standpoint.
Would a small 72mm² chiplet be able to contain 8 cores, 2x DDR4 memory controllers, 24-32 PCIe 4.0 lanes, and all the other IO? It seems like the chiplet is too small to hold all that. Let's hope we get clearer info a little later.