New Zen microarchitecture details


itsmydamnation

Diamond Member
Feb 6, 2011
1U pizza boxes are pretty much dead, at least as far as volume is concerned. 2x4s (and 1x3s for wide racks) have pretty much replaced 1Us for anything except disk boxes.

As far as more memory channels, it is already well known that next-gen Xeons will have 6 channels.
Can I ask what areas you work in / have visibility of?

In terms of VM farms I still see far more 1RU boxes than the HPC-style cluster-in-a-box. The cluster-in-a-box solutions tend not to have enough DIMMs per node for an enterprise VM farm with a high level of CPU oversubscription. But most 1RU boxes I see aren't using local storage, so there's really no reason they couldn't be something else. The one thing I am seeing far more of is 2RU boxes chock full of disk running Nutanix or vSAN.

Either way it misses my point, which is that there is plenty of space in a standard 600mm x ~1000mm rack for 2 sockets with 16 DIMM slots each. I don't work in HPC, so maybe those 2RU cluster chassis with 4 nodes of 1x 32-core + 8 DIMM slots are attractive thanks to the additional memory throughput / compute density, but I don't see it in the markets I work in. Thinking about it, I guess they could also work for cloud, as the client is always paying when something is on and CPU oversubscription is far lower compared to enterprise, where every app has 4+ versions of itself active at any one time.
 

imported_ats

Senior member
Mar 21, 2008
Can I ask what areas you work in / have visibility of?

In terms of VM farms I still see far more 1RU boxes than the HPC-style cluster-in-a-box. The cluster-in-a-box solutions tend not to have enough DIMMs per node for an enterprise VM farm with a high level of CPU oversubscription. But most 1RU boxes I see aren't using local storage, so there's really no reason they couldn't be something else. The one thing I am seeing far more of is 2RU boxes chock full of disk running Nutanix or vSAN.

You can do 512GB per node in 2x4s these days for basically the lowest per-GB memory cost (aka the sweet spot).

Either way it misses my point, which is that there is plenty of space in a standard 600mm x ~1000mm rack for 2 sockets with 16 DIMM slots each. I don't work in HPC, so maybe those 2RU cluster chassis with 4 nodes of 1x 32-core + 8 DIMM slots are attractive thanks to the additional memory throughput / compute density, but I don't see it in the markets I work in. Thinking about it, I guess they could also work for cloud, as the client is always paying when something is on and CPU oversubscription is far lower compared to enterprise, where every app has 4+ versions of itself active at any one time.

You know who cares about things like 2x4s, 1x2s, and 1x3s? Google, Facebook, Microsoft, Amazon, et al. Basically the vast majority of the volume. That's not HPC, that's basically the mainstream and the market that AMD, Intel, and hell, EVERYONE ELSE is targeting. You are basically looking at roughly 4 sockets and 32-48 DIMMs per U in that market.

For storage, people are generally moving or have moved to either 1U flat packs (12-16x 3.5" + Xeon-D/Avoton), 4U vertical-slotted 45-90x 3.5" drive bulk storage, or 24-48x 2.5" NVMe in 2U.
 

itsmydamnation

Diamond Member
Feb 6, 2011
You can do 512GB per node in 2x4s these days for basically the lowest per-GB memory cost (aka the sweet spot).
32GB DIMMs? 16GB can still be a fair bit cheaper (10-20% per GB) in my part of the world.

You know who cares about things like 2x4s, 1x2s, and 1x3s? Google, Facebook, Microsoft, Amazon, et al. Basically the vast majority of the volume.
Does anyone actually have any data on this? No doubt they are a large percentage, but I would be very surprised at "majority". I've already seen clients shift workloads back from the cloud as it turns out not to be as cheap as it sounds (it all comes down to use cases).

That's not HPC, that's basically the mainstream and the market that AMD, Intel, and hell, EVERYONE ELSE is targeting. You are basically looking at roughly 4 sockets and 32-48 DIMMs per U in that market.
With what product? I haven't seen anything remotely like this with 48 DIMMs a U.
You have Supermicro-style MicroBlades with the most expensive models getting 4 DIMMs a socket; that's way too much compute to memory for your average enterprise VM farm. Or 2x4s with 8 DIMMs a proc, netting 32 DIMMs a RU.

For storage, people are generally moving or have moved to either 1U flat packs (12-16x 3.5" + Xeon-D/Avoton), 4U vertical-slotted 45-90x 3.5" drive bulk storage, or 24-48x 2.5" NVMe in 2U.
I guess this is where the difference is: I see either traditional SANs or hyperconverged. None of the things you say everyone uses makes any sense for hyperconverged (not enough disk per node), and most traditional SAN users are still following a pattern that uses either 1RU pizza boxes or blade servers.

But now, to actually bring this back on topic, kind of...

Given the 2x4 footprint, would you go 2 procs with 8 DIMMs each or 1 proc with 16 DIMMs? If you're just talking about compute to memory, I'd probably take 1 proc with 16 DIMMs as it will be cheaper for your average VM farm. Obviously compute density will be lower while memory density remains the same, but I rarely see a VM farm that's compute bound.

now im like 30mins late for work.......lol
 

imported_ats

Senior member
Mar 21, 2008
32GB DIMMs? 16GB can still be a fair bit cheaper (10-20% per GB) in my part of the world.

The pricing I have puts 32GB dual-rank RDIMMs at per-GB parity with 16GB RDIMMs.


Does anyone actually have any data on this? No doubt they are a large percentage, but I would be very surprised at "majority". I've already seen clients shift workloads back from the cloud as it turns out not to be as cheap as it sounds (it all comes down to use cases).

Only anecdotal.


With what product? I haven't seen anything remotely like this with 48 DIMMs a U.
You have Supermicro-style MicroBlades with the most expensive models getting 4 DIMMs a socket; that's way too much compute to memory for your average enterprise VM farm. Or 2x4s with 8 DIMMs a proc, netting 32 DIMMs a RU.

As examples: the Dell FC630 (4x 2S in 2U, each 2S node with 24 DIMMs) and the Dell FC830 (2x 4S in 2U, with 48 DIMMs per 4S node).
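
Spelling out the per-U math on those (a quick sketch; the FX2 rows follow the slot counts quoted above, and the last two rows are just the configurations mentioned earlier in the thread):

```python
# Rough socket and DIMM-slot density per rack unit (U) for the chassis
# discussed above. Slot counts come from the posts; the rest is arithmetic.
chassis = {
    # name: (rack units, sockets, total DIMM slots)
    "FX2 + 4x FC630 (2S, 24 DIMMs each)": (2, 8, 96),
    "FX2 + 2x FC830 (4S, 48 DIMMs each)": (2, 8, 96),
    "Generic 2U4N, 8 DIMMs per socket":   (2, 8, 64),
    "1U pizza box, 2S with 24 DIMMs":     (1, 2, 24),
}

for name, (ru, sockets, dimms) in chassis.items():
    print(f"{name}: {sockets / ru:.0f} sockets/U, {dimms / ru:.0f} DIMMs/U")
```

Which lands right on the "roughly 4 sockets and 32-48 DIMMs per U" figure for the dense chassis versus 2 sockets and 24 DIMMs per U for a pizza box.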

I guess this is where the difference is: I see either traditional SANs or hyperconverged. None of the things you say everyone uses makes any sense for hyperconverged (not enough disk per node), and most traditional SAN users are still following a pattern that uses either 1RU pizza boxes or blade servers.

Not enough disks per node??? Those are literally the highest disk-per-node configurations available anywhere.

Given the 2x4 footprint, would you go 2 procs with 8 DIMMs each or 1 proc with 16 DIMMs? If you're just talking about compute to memory, I'd probably take 1 proc with 16 DIMMs as it will be cheaper for your average VM farm. Obviously compute density will be lower while memory density remains the same, but I rarely see a VM farm that's compute bound.

2x4 gives you a lot more flexibility in which CPUs you end up getting, allowing you to better optimize costs.
 

itsmydamnation

Diamond Member
Feb 6, 2011
3,028
3,800
136
Not enough disks per node??? Those are literally the highest disk-per-node configurations available anywhere.
Not compared to something like this they're not:
http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-c240-m3-rack-server/index.html
Yeah, you get around the same number of total drives, but you have 1/4 the disks per node. That makes tiered storage "interesting", or you have to go all SSD and buy very expensive large SSDs. If you're going "hyperconverged" (I hate that term) and you make yourself I/O limited, you've just wasted your time.
 

imported_ats

Senior member
Mar 21, 2008
Not compared to something like this they're not:
http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-c240-m3-rack-server/index.html
Yeah, you get around the same number of total drives, but you have 1/4 the disks per node. That makes tiered storage "interesting", or you have to go all SSD and buy very expensive large SSDs. If you're going "hyperconverged" (I hate that term) and you make yourself I/O limited, you've just wasted your time.

Eh? Your example is 24x 2.5" drives in 2U. My examples were 12-16x 3.5" drives in 1U, 48x 2.5" NVMe drives in 2U, and 60-90x 3.5" drives in 4U. Those are sitting at 12-16/U, 24/U, and 15-22/U. For storage, there is nothing denser. And anyone buying rust for anything but Tier 2/3 storage is just ripping themselves off these days, especially when you can get Intel DC P3520s for under 50c/GB. You can quite honestly get enterprise NVMe SSDs these days for marginally more than the cost of 10k disks, and they run rings around them. The 60-90/4U and 12-16/1U are primarily Tier 2/3. The 48x NVMe in 2U is basically what a lot of people are moving to for Tier 1 storage.
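
For reference, the drives-per-U numbers in that comparison are just drive count divided by rack units (a quick sketch; the counts are the ones quoted in this exchange, including the C240 M3 linked above):

```python
# Drives-per-U for the storage form factors being compared.
# Drive counts are the figures quoted in the posts above.
form_factors = {
    # name: (rack units, min drives, max drives)
    '1U flat pack, 3.5" + Xeon-D':    (1, 12, 16),
    '2U 2.5" NVMe':                   (2, 24, 48),
    '4U vertical-slot 3.5" bulk':     (4, 60, 90),
    '2U 24x 2.5" (e.g. UCS C240 M3)': (2, 24, 24),
}

for name, (ru, low, high) in form_factors.items():
    print(f"{name}: {low / ru:.0f}-{high / ru:.0f} drives/U")
```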
 

Phynaz

Lifer
Mar 13, 2006
Why would you say it like that? The clustered DAS model is the fastest-growing storage market this year. http://www.gartner.com/newsroom/id/3308017

High growth is easy when the numbers are small. Using your link to put things in perspective, the entire 2016 market for HCIS is equivalent to about a month's worth of EMC's storage sales.

Anyway, they aren't discussing HCIS; they are talking about discrete servers.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
High growth is easy when the numbers are small. Using your link to put things in perspective, the entire 2016 market for HCIS is equivalent to about a month's worth of EMC's storage sales.

Anyway, they aren't discussing HCIS; they are talking about discrete servers.

High growth "in comparison" means nothing in the context of your statement:

You guys are still using DASD???

There is no other way to interpret "still using DASD" other than framing it as a technology of decreasing use, which couldn't be further from the truth. Hyper-converged designs are seeing large YoY growth, while enterprise external storage has been in steady decline.

Their discussion has nothing to do with the fact that you specifically singled out DASD as an aging-out technology, which isn't true at all. HCIS is absolutely DASD, whether or not you'd like to separate it, even if the term "DASD" itself is rather aged and ambiguous in today's virtualized world (IBM has always been the holdout for that acronym, and I haven't seen it in IBM documentation in 5 years).
 

imported_ats

Senior member
Mar 21, 2008
High growth is easy when the numbers are small. Using your link to put things in perspective, the entire 2016 market for HCIS is equivalent to about a month's worth of EMC's storage sales.

Anyway, they aren't discussing HCIS; they are talking about discrete servers.

It is probably worth pointing out that HCIS is the entire basis of pretty much all non-legacy EMC storage sales. The same is pretty much true for any other major storage vendor. They are all moving towards effectively commodity storage nodes with a high-speed backend, all controlled by software.
 

imported_ats

Senior member
Mar 21, 2008
You guys are still using DASD???

Everyone has always been using DASD. Everyone, even EMC. What's primarily changed is how the DASD is structured and what interfaces it uses. These days the intelligent part of storage systems is so cheap that it no longer makes any real sense to have dumb shelves/JBODs. Need more storage? You plug in another server with DASD instead of connecting the original server to more DASD. Those servers may of course be directly using the storage or merely managing it for other servers. With the advent of SSDs, the number of storage elements needed for high-performance storage has rapidly decreased as well, making it possible to get by with a fraction of the drives that were previously required (it wasn't unusual to find storage using a tenth of its 15k HDD capacity because it needed the spindle count to keep up the IOPS). This means that what required shelves and shelves of drives for performance can easily be handled by 2-4 high-end SSDs.

Likewise, a large number of SANs never really serviced more than one server bank; the SAN was basically just a really high-end RAID controller at the end of the day. And the move away from dedicated hardware RAID controllers to completely software-managed RAID is just another evolution predicated on CPUs being cheap: why spend $4k on a high-end RAID controller when a CPU is faster, cheaper, and more reliable?
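
To put rough numbers behind the IOPS point above, a back-of-the-envelope sketch (every figure here is a ballpark assumption, not a measured spec):

```python
# Back-of-the-envelope: how many 15k spindles does it take to match one
# enterprise NVMe SSD, and why arrays ended up using only a fraction of
# their raw capacity. All figures are ballpark assumptions.
hdd_15k_iops = 200         # typical random IOPS for a 15k SAS drive
nvme_iops = 300_000        # ballpark 4K random-read IOPS for an enterprise NVMe SSD

print(f"~{nvme_iops / hdd_15k_iops:.0f} x 15k HDDs to match one NVMe SSD on IOPS")

# Why 15k arrays were often capacity-overprovisioned just to hit an IOPS target:
workload_iops = 60_000
capacity_needed_tb = 20
hdd_capacity_tb = 0.6      # a 600GB 15k drive

drives_for_iops = workload_iops / hdd_15k_iops           # 300 drives
capacity_bought_tb = drives_for_iops * hdd_capacity_tb   # 180 TB
print(f"{drives_for_iops:.0f} drives -> {capacity_bought_tb:.0f} TB bought "
      f"for {capacity_needed_tb} TB needed "
      f"(~{capacity_needed_tb / capacity_bought_tb:.0%} of capacity actually used)")
```

With those (assumed) numbers you end up buying roughly nine times the capacity you need just to get the spindle count, which is the "using a tenth of the capacity" situation described above.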
 

blublub

Member
Jul 19, 2016
I suppose it is possible that they found no issues with A0; that would be pretty rare, but not unheard of.

Normally, it was said that you need 45-60 days between steppings, so if A0 is final silicon they could make that Feb deadline that was mentioned. If not, then it looks like this could get pushed into May/June.
For all we know, they might even have a Samsung-made chip up and running as well that is a later revision.
Who knows how old this A0 already is!? Maybe 2-3 weeks while it was in AMD's internal validation.
And I am sure the next rev./spin was started while A0 was in production, so if they start production of final silicon now or at the end of November, a start in Q1 is possible... but it would be at the end of Q1, which is bad... we will see.
 

CHADBOGA

Platinum Member
Mar 31, 2009
Yeah, like Nvidia's TDP numbers with respect to Maxwell. ;-)

Anyways guys, be careful about lolfail9001, he's a notorious anti-AMD poster from HardOCP and Reddit. I wouldn't waste my time engaging with him.

He seems pretty good on this forum.
 

Erenhardt

Diamond Member
Dec 1, 2012
He seems pretty good on this forum.

I would say he is almost at your level. Do you feel threatened by him?

Also, your golden quote in sig:
Fjodor2001;37368111 said:
You're just bitter because in 2016 you'll be sitting on an expensive and slow 4 core Intel CPU, while others will be using a cheaper and faster 8 core AMD CPU.

Have you seen this already?
 

KTE

Senior member
May 26, 2016
This is interesting: the neural net inside Zen that has been talked about is also called a perceptron.
It is used for branch prediction.

https://en.wikipedia.org/wiki/Perceptron

https://www.reddit.com/r/Amd/comments/5in72i/funny_stuff_the_neural_network_of_zen/

A PDF with more explanation of how neural-net branch predictors work in general:

https://www.cs.utexas.edu/~lin/papers/tocs02.pdf
Brad has used the 'HP' since Bobcat/Jaguar, but this is more advanced. He also used it for the Samsung M1: http://www.androidauthority.com/closer-look-samsung-mongoose-cpu-712587/
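
For anyone curious what a perceptron predictor actually computes, here is a minimal sketch along the lines of the Jiménez & Lin paper linked above. It is a toy illustration of the general technique, not AMD's implementation: the table size, history length, and PC hashing are arbitrary choices.

```python
# Minimal perceptron branch predictor in the spirit of Jimenez & Lin (TOCS 2002).
# One table of small integer weight vectors indexed by a hash of the branch PC,
# plus a global history register of +1/-1 outcomes.
HISTORY_LEN = 16
TABLE_SIZE = 1024
THRESHOLD = int(1.93 * HISTORY_LEN + 14)    # training threshold from the paper
WEIGHT_MAX = 127                            # saturate weights to 8 bits

weights = [[0] * (HISTORY_LEN + 1) for _ in range(TABLE_SIZE)]  # +1 for bias weight
history = [1] * HISTORY_LEN                 # +1 = taken, -1 = not taken

def predict(pc):
    w = weights[pc % TABLE_SIZE]
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    return y, y >= 0                        # raw output and predicted direction

def update(pc, taken):
    y, pred = predict(pc)
    t = 1 if taken else -1
    w = weights[pc % TABLE_SIZE]
    # Train on a misprediction, or when the output is not confident enough.
    if pred != taken or abs(y) <= THRESHOLD:
        w[0] = max(-WEIGHT_MAX, min(WEIGHT_MAX, w[0] + t))
        for i, hi in enumerate(history, start=1):
            w[i] = max(-WEIGHT_MAX, min(WEIGHT_MAX, w[i] + t * hi))
    history.pop(0)
    history.append(t)

# Example: an alternating taken/not-taken branch becomes predictable
for i in range(200):
    update(0x400123, taken=(i % 2 == 0))
print(predict(0x400123))
```

The appeal of the scheme is that training cost grows linearly with history length, which is what makes very long histories practical compared to table-based two-bit counter schemes.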


Sent from HTC 10
(Opinions are own)
 