Why is AMD forced to buy from GloFo? I know they contract wafers, but they (and everyone else) contract the same way with TSMC, etc...
That sums it up better.
raghu78 is essentially claiming that an AMD single chip solution will game at 4K on a laptop in 2015 at GTX 970 SLI speeds...
I mean... I'm not saying I don't WISH it would happen? But it clearly won't. If it does happen, sign me up for Crossfire of the Desktop Card please....
Actually, it came from a much more reliable source than anyone on these boards. There was a slide from the company (sorry, the name escapes me at the moment) that AMD uses to design their chips, claiming they had made one for a client. Two plus two makes it AMD.
Though I am still not convinced that AMD's next-gen flagship GPU is built on TSMC 28HPM. My guess is GF 28SHP, because of AMD's R&D history on 2.5D stacking with GF, Amkor and Hynix going back to 2011.
For the record, the GF-fabbed Kabini's GPU has 38% less leakage than the ones manufactured at TSMC.
Leakage is about 35-40% of TDP, so a 250W TSMC-fabbed GPU would come out around 214W at GF, perhaps a little less since GF's 28nm seems to need a slightly lower voltage at GPU frequencies.
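If those two figures are taken at face value, the arithmetic works out roughly like this; a minimal sketch assuming leakage is ~38% of board power and is cut by 38% at GF, with both figures taken from the posts above rather than from any measurement:

```python
# Back-of-the-envelope: effect of a 38% leakage reduction on a 250W GPU,
# assuming leakage makes up ~38% of total board power.
# Both inputs are assumptions from the discussion, not measured data.

tdp_tsmc = 250.0          # W, hypothetical TSMC-fabbed GPU
leakage_share = 0.38      # leakage as a fraction of total power (assumed 35-40%)
leakage_cut = 0.38        # claimed reduction for the GF-fabbed part

leakage_tsmc = tdp_tsmc * leakage_share           # ~95 W of leakage
dynamic_power = tdp_tsmc - leakage_tsmc           # ~155 W of switching power
leakage_gf = leakage_tsmc * (1.0 - leakage_cut)   # ~59 W

print(f"Estimated GF-fabbed TDP: {dynamic_power + leakage_gf:.0f} W")  # ~214 W
```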
It's part of the original deal with Mubadala from when the foundry was separated from AMD. AMD had spent its money on ATI and didn't have the money to keep funding the foundry business as well. Mubadala stepped in and bought the foundry, along with the obligations, and as part of that deal AMD agreed to pay for a certain amount of capacity. If they don't use and pay for all that capacity, they face huge fines.
There is a second element to it besides the fines, and that's Mubadala's willingness to help keep AMD afloat. As it is, Mubadala is the decisive factor in AMD.
To this day AMD has used TSMC as the foundry for their GPUs. If future GPUs are produced at GF, they not only avoid the fines, they also gain more freedom to move other parts to, say, TSMC, and Mubadala might take more interest in AMD.
GF is the expensive investment here. So if their processes actually work well and AMD succeeds in porting the new GPU, the entire relationship benefits. Not only economically, but, I guess, after years of a tough situation and collaboration, as a morale boost.
It would be fantastic for competition and for us consumers, but for my part I haven't read a GF presentation in three years and will only start (if at all) the day the results start showing.
Still, man, it's like nothing major has really happened since the 7970. I am so desperate I have started to have hopes for GF. Lol.
The company is Synapse; AMD is their client:
http://www.kitguru.net/components/g...-gpus-made-using-28nm-hpm-process-technology/
http://www.synapse-da.com/Corporate/Clients
Though I am still not convinced that AMD's next-gen flagship GPU is built on TSMC 28HPM. My guess is GF 28SHP, because of AMD's R&D history on 2.5D stacking with GF, Amkor and Hynix going back to 2011.
http://sites.amd.com/la/Documents/TFE2011_001AMC.pdf
http://sites.amd.com/se/Documents/TFE2011_006HYN.pdf
http://www.setphaserstostun.org/hc2...Bandwidth-Kim-Hynix-Hot Chips HBM 2014 v7.pdf
http://www.amkor.com/index.cfm?objectid=E6A2243B-0017-10F6-B680958B1E902E87
http://www.globalfoundries.com/news...r-next-generation-chip-packaging-technologies
http://semiaccurate.com/2011/10/27/amd-far-future-prototype-gpu-pictured/
http://www.microarch.org/micro46/files/keynote1.pdf
http://electroiq.com/blog/2013/12/amd-and-hynix-announce-joint-development-of-hbm-memory-stacks/
AMD is already building Kaveri, semi-custom game console chips and GPUs on GF 28SHP. This was confirmed in their Q2 2014 earnings conference call.
I did not know leakage was such a significant contributor to overall power draw. With a better process and substantial architectural efficiency improvements, combined with HBM, which provides the same bandwidth as GDDR5 at a third of the power, it's quite possible that AMD are able to leapfrog Nvidia in efficiency. Obviously Nvidia will also gain from the HBM transition, but that happens sometime in 2016 (more likely H2 2016) with their Pascal architecture.
AMD can exploit this 12-18 month lead to improve market share. More importantly, there is a distinct possibility of a 100W notebook flagship GPU running a fully enabled, clock- and voltage-reduced R9 390X. Such a chip would be a behemoth which could easily provide better-than-desktop-GTX 980 performance in a notebook. Two of these in CF would then finally allow enthusiasts to max out games on 4K gaming notebooks.
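For what it's worth, here is a minimal back-of-the-envelope sketch of how HBM plus a clock and voltage cut could pull a big die towards notebook power levels. Dynamic power scales roughly with frequency times voltage squared; every number here (board power, memory share, scaling factors) is an assumption for illustration, not a leaked spec, and real silicon would land higher because leakage doesn't scale as favourably:

```python
# First-order sketch: down-clocked, down-volted big GPU with HBM instead of GDDR5.
# Every number is an illustrative assumption, not an actual R9 390X specification.

desktop_power = 250.0             # W, assumed 290X-class board power (core + GDDR5)
gddr5_power = 40.0                # W, assumed GDDR5 memory subsystem share
hbm_power = gddr5_power / 3.0     # "same bandwidth at 1/3rd the power"

freq_scale = 0.70                 # assumed core clock reduction for a notebook SKU
volt_scale = 0.80                 # assumed core voltage reduction

# Treating all core power as dynamic (P ~ f * V^2) gives a best case;
# leakage scales less favourably, so reality would land somewhat higher.
core_power = (desktop_power - gddr5_power) * freq_scale * volt_scale ** 2

print(f"~{core_power + hbm_power:.0f} W")  # ~107 W with these assumptions
```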
A couple of posts back it was one of these chips. Now it's two...
Now I'll just wait for you to say "one of these in desktop form", then "two of these in desktop form!"
Moving the goalposts, and it's not even been, what, a day since you posted that?
With just 37.5% more shaders the R9 290X was 35% faster than the R9 280X. This was on the same architecture with slight modifications (GCN 1.1 vs GCN 1.0).
http://www.computerbase.de/2013-10/amd-radeon-r9-290x-test/5/#diagramm-rating-2560-1600-4xaa-16xaf
http://www.hardwarecanucks.com/foru...s/63742-amd-radeon-r9-290x-4gb-review-18.html
With 45% more shaders, a much more significantly improved GCN architecture (over and above the GCN 1.2 Tonga enhancements), and a brand-new HBM memory system with massive bandwidth and improved bandwidth efficiency (as seen in Tonga), I am beginning to think that the rumours of the R9 390X getting close to R9 290X CF (taken as an average across many games) are not far-fetched. If you look at the Chiphell chart, R9 290X CF is 70% faster than a single R9 290X, and we can already see the R9 380X at 30% faster than the R9 290X. Assuming that is a 3072-sp SKU, AMD can gain another 25-30% from the fully enabled 4096-sp flagship SKU (from 33% more shaders). That puts R9 390X performance at 82-85.6 on the chart, right in line with R9 290X CF. Even accounting for lower core and memory clocks, the R9 390X in a notebook would be a 4K powerhouse even as a single GPU. Add CF configs and it becomes a mouth-watering prospect.
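To make that extrapolation explicit, here is the arithmetic written out; the chart index assumed for a single R9 290X is simply picked so the quoted figures line up, everything else is the percentages from the paragraph above multiplied together:

```python
# Working through the chart extrapolation. The 290X index is an assumption
# chosen so the quoted numbers line up; the rest is straight multiplication.

r290x = 50.5                 # assumed chart index for a single R9 290X
r290x_cf = r290x * 1.70      # "290X CF is 70% faster than 290X"   -> ~85.9
r380x = r290x * 1.30         # "380X at 30% faster than 290X"      -> ~65.7

# Fully enabled 4096-sp SKU: +33% shaders, assumed to yield +25-30% performance
r390x_low = r380x * 1.25     # -> ~82.1
r390x_high = r380x * 1.30    # -> ~85.4

print(r390x_low, r390x_high, r290x_cf)
```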
As for screen resolutions, people are buying phones and tablets with 3-4 million pixels, so 4K on a 15-17 inch gaming notebook definitely makes at least as much sense as the insane resolutions millions of users already have on their mobile devices.
We have already seen 4K in a few laptops. 2015 could finally bring the trinity of factors that delivers 4K gaming to the masses: powerful GPUs like the R9 390X, FreeSync, and affordable 4K displays (both desktop and notebook) with FreeSync.
http://www.engadget.com/2014/04/15/toshibas-first-4k-laptop-arrives-next-week-for-1-500/
http://www.pcworld.com/article/2453340/lenovo-ships-first-4k-laptop-challenging-toshiba.html
http://venturebeat.com/2014/11/03/acer-unveils-its-first-4k-laptop-available-this-month-for-1500/
https://www.cyberpowerpc.com/system/Fangbook_Edge_4K_Gaming_Laptop
http://www.engadget.com/2014/06/03/asus-gx500-gaming-laptop-hands-on/
The Chiphell leak says the R9 390X (Bermuda XT) is 65% faster than the R9 290X and on 20nm GF.
Power consumption is similar to R290X, with hybrid AIO cooler as a reference design.
IF true, well done AMD for listening and moving away from the trash blower reference. Also the performance is spot on for a node shrink.
The R380X (Fiji XT) is faster than 980 and uses a little bit more power, for similar efficiency.
GM200 "Full die" is 34% faster than 980. <- This seems very low. Is the 780ti ~35% faster than 770?
Summary (not including Bermuda XT):
Edit: One thing that isn't great is that AMD gets these gains by going to 20nm to combat NV's Maxwell on 28nm, so it isn't a fair fight. What happens when they both duke it out on 16nm FinFET, for example? AMD definitely needs to improve the architectural efficiency itself, without relying on jumping to the next node earlier to compete. :/
It sure is fair because:
1) No one forced NV to launch 980 at $550, with after-market cards at $580-600, and market it as a "flagship" card on 28nm nearly 1 year after 290X launched. Fair would have been to call it 960Ti and price it at $399-429, which is what it was from day 1 as far as next gen architectures go.
2) Whether it's moving to GDDR5 or HBM or 20nm before your competitor can, it's all fair since each party has those options on the table; same with choosing to go with a 256-, 384- or 512-bit memory bus. No one precludes NV from making GM200 on 20nm if AMD could somehow make 390X on 20nm.
3) As you already mentioned, if 390X smashes 290X into the ground, then a true competitor is GM200/210 anyway as we all know that 980 is just a mid-range card.
4) No one is stopping NV from doing a GM204B respin on 20nm in 2015 (again if AMD can do 20nm in 2015, so can NV).
Saying "Oh well AMD couldn't compete so that they were forced to use 20nm" is a cop out. Engineers can choose to wait for a more efficient node or create a more efficient architecture or both. All of these are sound solutions depending on market timing and financial resources of the firm.
NV focus group/PR is already meeting on how to prepare negative marketing spin on forums and media to suggest that:
1) AMD needed 20nm to compete with NV's 28nm. Boohoo, AMD using cutting edge node to compete, suckers!
2) AMD needed WC to cool down the volcano that is 390X. Without it, the card would be running 100C in idle!
3) AMD needs 100W more power to beat out 980 by X%, which means they are still far behind us in perf/watt. Perf/watt > absolute performance! Yay!
Saying "Oh well AMD couldn't compete so that they were forced to use 20nm" is a cop out. Engineers can choose to wait for a more efficient node or create a more efficient architecture or both. All of these are sound solutions depending on market timing and financial resources of the firm.
What I meant was that if AMD's GCN 2 is inherently not much more efficient than GCN 1.1/1.3 and it relies on a node shrink to compete, it will not be good for AMD when NV and AMD are on the same node. Node shrinks, as you all know, will be infrequent moving forward, and we're likely to be stuck on each one for much longer.
AMD may have a head start and look really uber, but when NV catches up and remains on the same node for a few years...
I am somewhat disappointed the architecture itself isn't massively better in perf/W, that is all.
By the time NV is on the same node as AMD (16nm?), both will have new architectures.
Also, if NV ever manages to sell the idea that AIO water cooling is inferior to a noisy blower or open-air design that dumps heat into your case, requiring even more fans for case airflow... I'll ROFLMAO.
It would take a very "special" consumer to believe that.
I love the idea of going with an AIO reference cooler for HIGH-END cards. Who the heck buys a beast of a GPU without having a case that has a 120mm fan slot? Water cooling by default is a damn win for gamers: all that heat goes out of your case, you don't need to install extra fans, and you use the radiator fan as exhaust. Two birds, one fan.
PS: I had my doubts about the earlier leaks of Fiji XT performing so well at low power, because I still assumed they were on TSMC 28nm like NV. It did not occur to me that GF was actually ready for quality 20nm production until a post a while ago discussing it. It looks like AMD's future will be on GF 20nm, then a transition to Samsung/GF 14nm FinFETs! TSMC is way too crowded now with mobile SoC demand, from Apple and Qualcomm alone. On 20nm, these figures are spot on for what one would expect: a ~60-70% performance leap at similar TDP. Chiphell has also been very accurate in recent times, for both AMD and NV.
My gripe with the AIO reference idea is that my case (Fractal Define R2) doesn't offer much space behind my NH-D14. I'm not sure a radiator plus fan would fit there, and if it did, it would be pulling the hot air from the CPU cooler. The Define R2 has a 120mm side vent that could be used, but for cases without a side vent, cooling the GPU radiator with hot air from the CPU doesn't make a lot of sense, does it?