CHH "Captain" (380X?) Power Preview

Page 5 - AnandTech Forums

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
Why is AMD forced to buy from GloFo? I know they contract wafers, but they (and everyone else) contract the same way with TSMC, etc...

It's part of the original deal with Mubadala from when the foundry was separated from AMD. AMD had spent its money on ATI and didn't have the money to pay for the foundry business as well. Mubadala stepped in and bought the foundry - and the obligations - and as part of that deal AMD agreed to pay for a certain amount of capacity. If they don't use and pay for all that capacity they get huge fines.
There is a second element to it besides the fines, and that's Mubadala's willingness to help keep AMD afloat. As it is, Mubadala is the decisive factor in AMD.
To this day AMD has used TSMC as the foundry for their GPUs. If future GPUs are produced at GF, they not only avoid paying fines, they also gain more freedom to move other parts to e.g. TSMC, and Mubadala might get more interested in AMD.
GF is the expensive investment here. So if their processes actually work well and AMD has success porting the new GPU, the entire relationship benefits. Not only economically - after years of a tough situation and collaboration, it would be a morale boost too.

It would be fantastic for competition and for us consumers, but for my part I haven't read a GF PPT for 3 years and will only start (if ever) the day the results start showing.

Still, man - it's like nothing major has really happened since the 7970. I am so desperate I have started to have hopes for GF. Lol.
 

DownTheSky

Senior member
Apr 7, 2013
800
167
116
That sums it up better.
raghu78 is essentially claiming that an AMD single chip solution will game at 4K on a laptop in 2015 at GTX 970 SLI speeds...

I mean... I'm not saying I don't WISH it would happen. But it clearly won't. If it does happen, sign me up for Crossfire of the desktop card, please...

I'm thinking more like 2016. Samsung/GF 14nm FinFET should be more than ready by then. That plus architectural advancements, and I'm hoping for huge GPU performance leaps in the next couple of years. We're still basically running AMD's 3-year-old tech.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Why is AMD forced to buy from GloFo? I know they contract wafers, but they (and everyone else) contract the same way with TSMC, etc...

Actually, it came from a much more reliable source than anyone on these boards. There was a slide from the company (sorry, the name escapes me ATM) that AMD uses to design their chips, which claimed they had made one for a client. 2+2 makes it AMD.

The company is Synapse. AMD is their client.

http://www.kitguru.net/components/g...-gpus-made-using-28nm-hpm-process-technology/

http://www.synapse-da.com/Corporate/Clients

Though I am still not convinced that AMD's next-gen flagship GPU is built on TSMC 28HPM. My guess is GF 28SHP because of AMD's R&D history on 2.5D stacking with GF, Amkor and Hynix from way back in 2011.

http://sites.amd.com/la/Documents/TFE2011_001AMC.pdf
http://sites.amd.com/se/Documents/TFE2011_006HYN.pdf
http://www.setphaserstostun.org/hc2...Bandwidth-Kim-Hynix-Hot Chips HBM 2014 v7.pdf
http://www.amkor.com/index.cfm?objectid=E6A2243B-0017-10F6-B680958B1E902E87
http://www.globalfoundries.com/news...r-next-generation-chip-packaging-technologies
http://semiaccurate.com/2011/10/27/amd-far-future-prototype-gpu-pictured/
http://www.microarch.org/micro46/files/keynote1.pdf
http://electroiq.com/blog/2013/12/amd-and-hynix-announce-joint-development-of-hbm-memory-stacks/

AMD is already building Kaveri, semi-custom game console chips and GPUs on GF 28SHP. This was confirmed in their Q2 2014 earnings conference call.
 

Abwx

Lifer
Apr 2, 2011
11,825
4,766
136
Though I am still not convinced that AMD's next gen flagship GPU is built at TSMC 28HPM. My guess is GF 28SHP because of AMD's R&D history on 2.5D stacking with GF,Amkor and Hynix from way back in 2011.

For the record, the GF-fabbed Kabini's GPU has 38% less leakage than the ones manufactured at TSMC.

Leakage is about 35-40% of TDP; a 250W TSMC-fabbed GPU would come out around 214W at GF, perhaps a little less, since GF's 28nm seems to require a slightly lower voltage at GPU frequencies.
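The 250W -> 214W figure follows from treating leakage as a separable slice of TDP that shrinks on the GF process. A minimal sketch of that arithmetic, using the numbers claimed in this thread (leakage ~38% of TDP, 38% lower leakage at GF; these are forum claims, not measured specs):

```python
# Back-of-envelope TDP estimate: assumes leakage is a fixed fraction of TDP
# and that only the leakage component shrinks on the GF process.
def gf_tdp_estimate(tsmc_tdp, leakage_fraction=0.38, leakage_reduction=0.38):
    leakage = tsmc_tdp * leakage_fraction  # static leakage share of TDP
    saved = leakage * leakage_reduction    # claimed 38% lower leakage at GF
    return tsmc_tdp - saved

print(round(gf_tdp_estimate(250)))  # 214, matching the figure above
```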
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
For the record, the GF-fabbed Kabini's GPU has 38% less leakage than the ones manufactured at TSMC.

Leakage is about 35-40% of TDP; a 250W TSMC-fabbed GPU would come out around 214W at GF, perhaps a little less, since GF's 28nm seems to require a slightly lower voltage at GPU frequencies.

I did not know leakage was such a significant contributor to overall power draw. With a better process, substantial architectural efficiency improvements, and HBM, which provides the same bandwidth as GDDR5 at 1/3rd the power, it's quite possible that AMD is able to leapfrog Nvidia in efficiency. Obviously Nvidia will also gain from the HBM transition, but that happens sometime in 2016 (more likely H2 2016) with their Pascal architecture.

AMD can exploit this 12-18 month lead to improve market share. More importantly, there is a distinct possibility of a 100W flagship notebook GPU running a fully enabled, clock- and voltage-reduced R9 390X. Such a chip would be a behemoth that could easily provide better-than-desktop-GTX 980 performance in a notebook. Two of these in CF would then finally allow enthusiasts to max out games on 4K gaming notebooks.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
It's part of the original deal with Mubadala from when the foundry was separated from AMD. AMD had spent its money on ATI and didn't have the money to pay for the foundry business as well. Mubadala stepped in and bought the foundry - and the obligations - and as part of that deal AMD agreed to pay for a certain amount of capacity. If they don't use and pay for all that capacity they get huge fines.
There is a second element to it besides the fines, and that's Mubadala's willingness to help keep AMD afloat. As it is, Mubadala is the decisive factor in AMD.
To this day AMD has used TSMC as the foundry for their GPUs. If future GPUs are produced at GF, they not only avoid paying fines, they also gain more freedom to move other parts to e.g. TSMC, and Mubadala might get more interested in AMD.
GF is the expensive investment here. So if their processes actually work well and AMD has success porting the new GPU, the entire relationship benefits. Not only economically - after years of a tough situation and collaboration, it would be a morale boost too.

It would be fantastic for competition and for us consumers, but for my part I haven't read a GF PPT for 3 years and will only start (if ever) the day the results start showing.

Still, man - it's like nothing major has really happened since the 7970. I am so desperate I have started to have hopes for GF. Lol.

AMD had to pay GloFo once for falling short of the production they had agreed to. That doesn't mean they need to move GPU production to GloFo, though. I'm not privy to their exact agreement. Maybe you know more?


Thanks. That's the reference I was referring to.
 
Last edited:

tential

Diamond Member
May 13, 2008
7,348
642
121
I did not know leakage was such a significant contributor to overall power draw. With a better process, substantial architectural efficiency improvements, and HBM, which provides the same bandwidth as GDDR5 at 1/3rd the power, it's quite possible that AMD is able to leapfrog Nvidia in efficiency. Obviously Nvidia will also gain from the HBM transition, but that happens sometime in 2016 (more likely H2 2016) with their Pascal architecture.

AMD can exploit this 12-18 month lead to improve market share. More importantly, there is a distinct possibility of a 100W flagship notebook GPU running a fully enabled, clock- and voltage-reduced R9 390X. Such a chip would be a behemoth that could easily provide better-than-desktop-GTX 980 performance in a notebook. Two of these in CF would then finally allow enthusiasts to max out games on 4K gaming notebooks.

A couple of posts back it was one of these chips. Now two...

Now I'll just wait for you to say "One of these in desktop form," then "Two of these in desktop form!"

Moving the goalposts, and it hasn't even been, what, a day since you posted that?
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
A couple of posts back it was one of these chips. Now two...

Now I'll just wait for you to say "One of these in desktop form," then "Two of these in desktop form!"

Moving the goalposts, and it hasn't even been, what, a day since you posted that?

Please read my post properly before mouthing off. I said the R9 390X will be a 4K powerhouse even in notebook form. I did not say that a single R9 390X, run at lower clocks and voltages to fit a 100W TDP, will max out games at 4K. I even mentioned that two of these is an enticing prospect for 4K enthusiasts. I suggest you first improve your comprehension rather than posting sarcastic replies.

With just 37.5% more shaders, the R9 290X was 35% faster than the R9 280X. This was on the same architecture with slight modifications (GCN 1.1 vs GCN 1.0).

http://www.computerbase.de/2013-10/amd-radeon-r9-290x-test/5/#diagramm-rating-2560-1600-4xaa-16xaf

http://www.hardwarecanucks.com/foru...s/63742-amd-radeon-r9-290x-4gb-review-18.html

With 45% more shaders, a much more significantly improved GCN architecture (over and above the GCN 1.2 Tonga enhancements), and a brand new HBM subsystem with massive bandwidth and improved bandwidth efficiency (as seen in Tonga), I am beginning to think the rumours of the R9 390X getting close to R9 290X CF (as an average across many games) are not far-fetched. In the chiphell chart the R9 290X CF is 70% faster than the R9 290X, and we can already see the R9 380X at 30% faster than the R9 290X. Assuming that is a 3072-SP SKU, AMD can gain another 25-30% from the fully enabled 4096-SP flagship SKU (from 33% more shaders). That puts R9 390X performance at 82-85.6 on the chart, right in line with R9 290X CF. Even accounting for lower core and memory clocks, the R9 390X chip in a notebook will be a 4K powerhouse even as a single GPU. Add CF configs and it becomes a mouth-watering prospect.
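The chart arithmetic above can be sanity-checked by normalizing the R9 290X to 100. All the multipliers below are the post's assumptions from the leak, not measurements:

```python
# Normalized performance estimates from the leak discussed above.
r9_290x = 100.0
r9_380x = r9_290x * 1.30        # leak: 380X ~30% faster than 290X
r9_390x_low = r9_380x * 1.25    # +25% from the fully enabled 4096-SP die
r9_390x_high = r9_380x * 1.30   # +30% upper bound
r9_290x_cf = r9_290x * 1.70     # leak: 290X CF ~70% faster than one 290X

# 390X lands at ~162-169 vs ~170 for 290X CF, i.e. "right in line".
print(round(r9_390x_low, 1), round(r9_390x_high, 1), round(r9_290x_cf, 1))
```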

As for screen resolutions, people are buying phones and tablets with 3-4 million pixels. If millions of users have such insane resolutions on their mobile devices, then 4K on a 15-17 inch gaming notebook definitely makes a lot more sense.

We have already seen 4K in a few laptops. 2015 could finally bring the trinity of factors that delivers 4K gaming to the masses: powerful GPUs like the R9 390X, FreeSync, and affordable 4K monitors (both desktop and notebook) with FreeSync.

http://www.engadget.com/2014/04/15/toshibas-first-4k-laptop-arrives-next-week-for-1-500/

http://www.pcworld.com/article/2453340/lenovo-ships-first-4k-laptop-challenging-toshiba.html

http://venturebeat.com/2014/11/03/acer-unveils-its-first-4k-laptop-available-this-month-for-1500/

https://www.cyberpowerpc.com/system/Fangbook_Edge_4K_Gaming_Laptop

http://www.engadget.com/2014/06/03/asus-gx500-gaming-laptop-hands-on/
 
Last edited:

Abwx

Lifer
Apr 2, 2011
11,825
4,766
136
I did not know leakage was such a significant contributor to overall power draw.

With a better process, substantial architectural efficiency improvements, and HBM, which provides the same bandwidth as GDDR5 at 1/3rd the power, it's quite possible that AMD is able to leapfrog Nvidia in efficiency. Obviously Nvidia will also gain from the HBM transition, but that happens sometime in 2016 (more likely H2 2016) with their Pascal architecture.

That was the number published by Altera when referring to the TSMC 28nm process. More importantly, they also specified that they use a custom 28nm process at TSMC to reduce that leakage, which makes me suspicious of TSMC; they have never really played nice with AMD, not to mention that they are too expensive.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Looks like AMD may get back the GPU crown after a very long time.

http://www.chiphell.com/thread-1196441-1-1.html

Bermuda XT seems all set to give GM200 a run for its money. :thumbsup: I don't believe the chips are built on a GF 20nm process; I am quite sure it's GF 28SHP. There is no way AMD can use a process which has essentially been canned. For GF it's 28SHP in 2015, then Samsung 14nm FinFET in late 2015 / early 2016.
 
Last edited:
Feb 19, 2009
10,457
10
76
They are saying the R9 390X (Bermuda XT) is 65% faster than the R9 290X and on 20nm GF.

Power consumption is similar to the R9 290X, with a hybrid AIO cooler as the reference design.

IF true, well done AMD for listening and moving away from the trashy blower reference. The performance is also spot on for a node shrink.

The R9 380X (Fiji XT) is faster than the 980 and uses a little bit more power, for similar efficiency.

GM200 "full die" is 34% faster than the 980. <- This seems very low. Is the 780 Ti ~35% faster than the 770?

Summary (not including Bermuda XT):



Edit: One thing that isn't great is that AMD gets these gains by going to 20nm to combat NV's Maxwell on 28nm, so it isn't a fair fight. What happens when they both duke it out on 16nm FinFET, for example? AMD definitely needs to improve the architecture's efficiency itself, without relying on jumping to the next node earlier to compete. :/
 
Last edited:

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
They are saying the R9 390X (Bermuda XT) is 65% faster than the R9 290X and on 20nm GF.

Power consumption is similar to the R9 290X, with a hybrid AIO cooler as the reference design.

IF true, well done AMD for listening and moving away from the trashy blower reference. The performance is also spot on for a node shrink.

The R9 380X (Fiji XT) is faster than the 980 and uses a little bit more power, for similar efficiency.

GM200 "full die" is 34% faster than the 980. <- This seems very low. Is the 780 Ti ~35% faster than the 770?

Summary (not including Bermuda XT):


GTX 770 - 1536 CUDA cores
GTX 780 Ti - 2880 CUDA cores (87.5% more CUDA cores)

780 Ti perf is roughly 50% more than the GTX 770.

GTX 980 - 2048 CUDA cores
GM200 - 3072 CUDA cores

GM200 is roughly 34% faster, so the scaling is slightly better than GK104 -> GK110.
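The comparison above can be expressed as performance gained per unit of extra cores. A quick sketch, where the core counts and the Kepler delta are the post's figures and GM200's delta is from the leak:

```python
# Performance gain divided by core-count gain: 1.0 would be perfect scaling.
def scaling_efficiency(cores_small, cores_big, perf_gain):
    core_gain = cores_big / cores_small - 1.0
    return perf_gain / core_gain

kepler = scaling_efficiency(1536, 2880, 0.50)   # GTX 770 -> GTX 780 Ti
maxwell = scaling_efficiency(2048, 3072, 0.34)  # GTX 980 -> GM200 (leaked)

# ~0.57 vs ~0.68: GM200's claimed scaling is indeed slightly better.
print(round(kepler, 2), round(maxwell, 2))
```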

But as far as process is concerned, I am still betting on GF 28SHP. :thumbsup: BTW Silverforce, the 225W power figure is for the cut-down GM200, not the full-fat GM200.
 
Last edited:

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Edit: One thing that isn't great, is that AMD gets these gains by going to 20nm to combat NV's Maxwell on 28nm so it isn't a fair fight. What happens when they both duke it out on 16nm finfet for example. AMD definitely needs to improve the architecture efficiency itself, without reliance on jumping to the next node earlier to compete. :/

It sure is fair because:

1) No one forced NV to launch 980 at $550, with after-market cards at $580-600, and market it as a "flagship" card on 28nm nearly 1 year after 290X launched, knowing full well the real competing generation from AMD is 300 series, not 200 series. Fair would have been to call it 960Ti and price it at $399-429, which is what it was from day 1 as far as next gen SKUs/architectures go.

2) Whether it's moving to GDDR5 or HBM or 20nm before your competitor can, it's all fair, since each GPU maker has those options on the table; same with choosing to go 256/384 or 512-bit on the memory bus. No one precludes NV from making GM200 on 20nm if AMD could somehow achieve the milestone of making the 390X on 20nm.

3) As you already mentioned, if 390X smashes 290X/980 into the ground, then a true competitor is GM200/210 anyway as we all know that 980 is just a mid-range card. Therefore, 390X not destroying 980 would be an even bigger failure with it launching 6 months late.

4) No one is stopping NV from doing a GM204B respin on 20nm in 2015 (again if AMD can do 20nm in 2015, so can NV).

Saying "Oh well AMD couldn't compete so that they were forced to use 20nm" is a cop out. Engineers can choose to wait for a more efficient node or create a more efficient architecture or both. All of these are sound solutions depending on market timing and financial resources of the firm.

Power consumption is similar to R290X, with hybrid AIO cooler as a reference design.

NV focus group/PR will have an emergency meeting early next year on how to prepare negative marketing spin on forums and media to suggest that:

1) AMD needed 20nm to compete with NV's 28nm. Boohoo, AMD using cutting edge node to compete, suckers! (and their response when moving to 16nm FinFET will be "we've adopted cutting edge node to push graphics to the next level!")

2) AMD needed WC to cool down the volcano that is the 390X. Without it, the card would be running 100C under load! (Despite articles proving that modern ASICs running at 90-100C do not pose a problem, and countless articles proving that hybrid cooling is an excellent solution that can even take max overclocking without breaking a sweat.)

3) AMD needs 100W more power to beat out 980 by X%, which means they are still far behind us in perf/watt. Perf/watt > absolute performance! Yay! (until we can launch GM200/210 to take the performance crown).

GM200 "Full die" is 34% faster than 980. <- This seems very low. Is the 780ti ~35% faster than 770?

NV can play around with clock speeds and make a 250-275W card instead of a 225W one. 780Ti achieved amazing overclocks on 28nm despite such a large die vs. Tahiti/Hawaii. If NV only wants to bring 34-35% over 980 and limit their flagship to just 225W TDP, that's disappointing to say the least.

780Ti was 45% faster than a 770 at 1080P and 57% faster at 1600P.
http://www.computerbase.de/2014-09/geforce-gtx-980-970-test-sli-nvidia/6/#

Considering 980's mediocre 4K performance, GM200 can open up a wider gap at 4K against GM204.

Bermuda XT is looking to be 70-80% faster than 780Ti/R9 290X at BF4 multiplayer and DAI at 4K.

 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
The same thing has happened before: NVIDIA released the GTX 280 at 65nm in early June 2008 and ATI released the HD 4870 at 55nm less than a month later.
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
It sure is fair because:

1) No one forced NV to launch 980 at $550, with after-market cards at $580-600, and market it as a "flagship" card on 28nm nearly 1 year after 290X launched. Fair would have been to call it 960Ti and price it at $399-429, which is what it was from day 1 as far as next gen architectures go.

2) Whether it's moving to GDDR5 or HBM or 20nm before your competitor can, it's all fair since each partner has those options on the table; same with choosing to go 256/384 or 512-bit memory bus. No one precludes NV from making GM200 on 20nm if AMD could somehow make 390X on 20nm.

3) As you already mentioned, if 390X smashes 290X into the ground, then a true competitor is GM200/210 anyway as we all know that 980 is just a mid-range card.

4) No one is stopping NV from doing a GM204B respin on 20nm in 2015 (again if AMD can do 20nm in 2015, so can NV).

Saying "Oh well AMD couldn't compete so that they were forced to use 20nm" is a cop out. Engineers can choose to wait for a more efficient node or create a more efficient architecture or both. All of these are sound solutions depending on market timing and financial resources of the firm.



NV focus group/PR is already meeting on how to prepare negative marketing spin on forums and media to suggest that:

1) AMD needed 20nm to compete with NV's 28nm. Boohoo, AMD using cutting edge node to compete, suckers!
2) AMD needed WC to cool down the volcano that is 390X. Without it, the card would be running 100C in idle!
3) AMD needs 100W more power to beat out 980 by X%, which means they are still far behind us in perf/watt. Perf/watt > absolute performance! Yay!

Just wow. And no, we are not, we will not, and we never have. Unreal where you've gone, RS.

And you changed this:

"NV focus group/PR is already meeting on how to prepare negative marketing spin on forums and media to suggest that:"

"NV focus group/PR will have an emergency meeting early next year on how to prepare negative marketing spin on forums and media to suggest that:"

Do you think it better to accuse or slander under the "will have" time frame instead of the "is already" time frame?
Enough.
 
Last edited:
Feb 19, 2009
10,457
10
76
Saying "Oh well AMD couldn't compete so that they were forced to use 20nm" is a cop out. Engineers can choose to wait for a more efficient node or create a more efficient architecture or both. All of these are sound solutions depending on market timing and financial resources of the firm.

What I meant was that if AMD's GCN 2 is inherently not much more efficient than GCN 1.1/1.3 and it relies on a node shrink to compete, it will not be good for AMD when NV and AMD are together on the same node. Node shrinks, as you all know, will be infrequent moving forward, and we're likely to be stuck on one for much longer.

AMD may have a head start and look really uber, but when NV catches up and both remain on the same node for a few years..

I am somewhat disappointed the architecture itself isn't massively better in perf/W, that is all.
 
Feb 19, 2009
10,457
10
76
Also, if NV ever manages to sell the idea that AIO water cooling is inferior to a noisy blower or an open-air design that dumps heat in your case, requiring even more fans for case airflow... I'll ROFLMAO.

It would take a very "special" consumer to believe that.

I love the idea of going with an AIO reference cooler for HIGH-END cards. Who the heck buys a beast of a GPU without having a case that has a 120mm slot? Water cooling by default is a damn win for gamers: all that heat goes out of your case, you don't need to install extra fans, and you use the radiator fan as exhaust. Two birds, one fan.

PS. I had my doubts about earlier leaks with Fiji XT performing so well at low power use, because I still thought they were on TSMC 28nm like NV. It did not occur to me that GF was actually ready for quality 20nm production until a post a while ago discussing it. Looks like AMD's future will be on GF 20nm, then a transition to Samsung/GF 14nm FinFETs! TSMC is way too crowded now with mobile SoC demand, from Apple and Qualcomm alone. On 20nm these figures are SPOT on for what one would expect: a ~60-70% performance leap at similar TDP. Chiphell has also been very accurate in recent times, for both AMD and NV.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
What I meant was that if AMD's GCN 2 is inherently not much more efficient than GCN 1.1/1.3 and it relies on a node shrink to compete, it will not be good for AMD when NV and AMD are together on the same node. Node shrinks, as you all know, will be infrequent moving forward, and we're likely to be stuck on one for much longer.

AMD may have a head start and look really uber, but when NV catches up and both remain on the same node for a few years..

I am somewhat disappointed the architecture itself isn't massively better in perf/W, that is all.

By the time NV is on the same node as AMD (16nm??), both will have new architectures.
 
Feb 19, 2009
10,457
10
76
By the time NV is on the same node as AMD (16nm??), both will have new architectures.

I dunno, TSMC has 16nm FinFET scheduled for mass production in about a year, last I saw.

AMD is going with GF/Samsung 14nm FF; who knows when that's ready. Pascal is also due in about a year according to the roadmap. We've heard nothing about post-GCN 2.

Either way, if Bermuda XT is AIO-cooled and 65% faster than the R9 290X at similar TDP, I'm getting two for CF and a nice 4K Samsung FreeSync monitor.
 

raghu78

Diamond Member
Aug 23, 2012
4,093
1,475
136
Also, if NV ever manages to sell the idea that AIO water cooling is inferior to a noisy blower or an open-air design that dumps heat in your case, requiring even more fans for case airflow... I'll ROFLMAO.

It would take a very "special" consumer to believe that.

I love the idea of going with an AIO reference cooler for HIGH-END cards. Who the heck buys a beast of a GPU without having a case that has a 120mm slot? Water cooling by default is a damn win for gamers: all that heat goes out of your case, you don't need to install extra fans, and you use the radiator fan as exhaust. Two birds, one fan.

PS. I had my doubts about earlier leaks with Fiji XT performing so well at low power use, because I still thought they were on TSMC 28nm like NV. It did not occur to me that GF was actually ready for quality 20nm production until a post a while ago discussing it. Looks like AMD's future will be on GF 20nm, then a transition to Samsung/GF 14nm FinFETs! TSMC is way too crowded now with mobile SoC demand, from Apple and Qualcomm alone. On 20nm these figures are SPOT on for what one would expect: a ~60-70% performance leap at similar TDP. Chiphell has also been very accurate in recent times, for both AMD and NV.

Until you can show a die shot there is no guarantee that it's a 20nm GPU. AMD has been manufacturing Kaveri, Beema, Mullins and semi-custom game console chips on GF 28SHP.

http://seekingalpha.com/article/272...-technology-conference-transcript?part=single

John Pitzer-Credit Suisse Securities - Credit Suisse Securities

How do you think - one, are you on track to meet the $1.2 billion for this year; and 2, as we go into 2015 and beyond, how do you think that relationship will evolve between yourselves and GlobalFoundries?

Devinder Kumar - SVP and CFO

You know, if I step back, and I have been involved in the GlobalFoundries transaction way back in 2008. GlobalFoundries [indiscernible] in 2009. I can tell you that with the changes that have occurred on GlobalFoundries with the management team and the focus that Abu Dhabi has and the investment that they have as a partnership with GlobalFoundries, the relationship between AMD and GlobalFoundries is the best in the history of the relationship. The folks that we're working on in fact we just had a meeting in Abu Dhabi just a couple of weeks ago. Really good discussions, very business oriented.

And I think the execution of GlobalFoundries has improved significantly and that helps us from an overall standpoint. In 2014 for the first time, some folks may not know then. For the first time in the history of the relationship we went beyond PC product and actually we are making graphics, PC, and semi-custom products at GlobalFoundries in 2014 and that continue into 2015. When you diversify the product that you make at a foundry like GlobalFoundries, it benefits them from a mix standpoint and benefits us from a mix standpoint. And like I said, the execution is continuing to get better and we are very pleased - very, very pleased with that relationship.

Explain to me how in the world AMD is going to get a 300 sq mm high-performance GPU built on a low-power 20nm process barely suitable for mobile SoCs. Qualcomm seems to be having issues with their 810 even at < 2GHz speeds. BTW, TSMC 20nm is the only 20nm process in high-volume production.

http://www.tomshardware.com/news/qualcomm-snapdragon-810-delays-denial,28179.html

Samsung is shipping very low volumes of 20nm chips, for only a very few of their phone models.

http://www.anandtech.com/show/8382/samsung-announces-exynos-5430-first-20nm-samsung-soc

Add to it the fact that AMD changed their 2015 roadmap.

http://www.anandtech.com/show/8742/amd-announces-carrizo-and-carrizol-next-gen-apus-for-h1-2015

http://www.anandtech.com/show/7989/...bridge-pincompatible-arm-and-x86-socs-in-2015

If AMD cannot manufacture a < 100 sq mm mobile APU with a 15W TDP at 20nm with decent yields, what are the chances of a 300 sq mm high-performance 20nm GPU with a 250W TDP? Nearly zero.

Did you forget what AnandTech said about the Beema GPU?

http://www.anandtech.com/show/7974/...hitecture-a10-micro-6700t-performance-preview

"AMD claims a 19% reduction in core leakage/static current for Puma+ compared to Jaguar at 1.2V, and a 38% reduction for the GPU. The drop in leakage directly contributes to a substantially lower power profile for Beema and Mullins."

GF 28SHP (with 38% lower leakage than TSMC 28HP) + HBM (which cuts power by 2/3rds for the memory controller and GDDR5 memory chips) + architectural efficiency changes + power efficiency improvements

http://images.anandtech.com/doci/8742/Carrizo Efficiency.png
http://images.anandtech.com/doci/8742/Voltage Adaptive.png


should be enough for AMD to deliver Bermuda XT at a 250-260W TDP. The die size should be 500-550 sq mm, which is possible on a mature 28nm process. :thumbsup:
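As a rough illustration of how those factors could add up, here is a back-of-envelope sketch. The 38% leakage cut and the ~2/3 memory-power saving are claims from this thread; the baseline TDP and the GDDR5 subsystem wattage are assumed illustrative numbers, not specs:

```python
# Hypothetical power budget for a big 28nm GPU moving to GF 28SHP + HBM.
tsmc_equiv_tdp = 290.0                  # assumed: TDP if built on TSMC 28nm
leakage_fraction = 0.38                 # thread: leakage ~35-40% of TDP
leakage_saving = tsmc_equiv_tdp * leakage_fraction * 0.38  # 38% less leakage

gddr5_subsystem = 45.0                  # assumed: GDDR5 + memory controller W
hbm_saving = gddr5_subsystem * (2 / 3)  # thread: HBM cuts this by ~2/3

estimate = tsmc_equiv_tdp - leakage_saving - hbm_saving
print(round(estimate))  # ~218W before any architectural gains
```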
 
Last edited:

beginner99

Diamond Member
Jun 2, 2009
5,315
1,760
136
Also if NV ever manages to sell the idea that AIO water cooling is inferior than a noisy blower or open air design that dumps heat in your case requiring even more fans for case airflow.. I'll ROFLMAO.

It would take a very "special" consumer to believe that.

I love the idea of going with AIO for reference for HIGH-END cards. Who the heck buys a beast of a GPU without having a case that has a 120mm slot? Water cooling by default is a damn win for gamers. All that heat, out your case, you don't need to install extra fans, you use the radiator fan as exhaust. Two birds, one fan.

My gripe with this is that my case (Fractal Define R2) doesn't offer much space behind my NH-D14. I'm not sure a radiator + fan fits in there, and if it does, it will get the hot air from the CPU. The Define R2 has a 120mm side vent which could be used, but for cases that don't have a side vent, cooling the GPU with hot air from the CPU doesn't make a lot of sense, does it?
 
Feb 19, 2009
10,457
10
76
My gripe with this is that my case (Fractal Define R2) doesn't offer much space behind my NH-D14. I'm not sure a radiator + fan fits in there, and if it does, it will get the hot air from the CPU. The Define R2 has a 120mm side vent which could be used, but for cases that don't have a side vent, cooling the GPU with hot air from the CPU doesn't make a lot of sense, does it?

You don't have to put the rad at the rear; the front has slots for 120mm too, and then there are the side slots and the top slots. Many cases have multiple 120mm positions, and most these days have 240mm rad support.

@raghu78 Those are good points.

I'll side with the Chiphell leak that it's 20nm GF until other, more concrete info says otherwise, though.
 