What was the purpose of that? Honestly? Smileys mean nothing. :thumbsdown:
At least my post was relevant to the claims made in another poster's content. What's your excuse?
That's not what he said. The fact is AMD can match Intel at performance per watt in servers while being a whole process node (plus HKMG) behind.
Sure, Intel could MCM those Xeons, but where is the extra 100W+ of power going?
You mean like attempting to make a career out of selling server gear for a company that steadily loses server market share?
But wasn't that Sun's strategy with their Niagara processors, lots of weak cores to maximize throughput?

A lot of threads over a much smaller number of cores (8C/64T for the T2). As an SoC with server features and RAS to boot, it's been pretty efficient, given what it can do. If Oracle would offer cheaper servers and improve Linux support, it could get pretty wide use (not that Oracle would ever even dream of doing that, of course!).
Why wouldn't Intel do the same thing as AMD and use LV Xeons clocked significantly lower than the top achievable clock speeds? A pair of 2.26GHz Westmere Xeons would fit into a 130W TDP.
John, if you perceived this as anything other than friendly joshing then I sincerely apologize; it was not meant to be an insult or anything of that nature. Given that so many others here found it offensive on your behalf, I apologize in advance if you took it that way as well. That was so totally not the intent. I had a smile on my face when I wrote it, thinking you'd get a chuckle out of the irony and nothing more; if I failed in that endeavour, then that was my bad :roses:
They'd cost a lot more than the normal Xeons while performing a lot worse in low threads, as you mentioned in your last post. They might perform slightly better than MC overall, that's difficult to say for sure, but at such a prohibitive cost I very much doubt there would be a market for them.

They cost that much because of current market segmentation. When 6-core Opteron EEs were released, they were significantly more expensive than regular 6-core Opterons.
My point is that this AMD strategy can be duplicated by Intel relatively easily, since it's basically the same strategy both have followed for the last several years: throw more cores into a CPU and run them below the maximum achievable clock speed to get higher throughput at the same power level.
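To make that trade-off concrete, here's a rough back-of-the-envelope sketch. All core counts, clocks, and the voltage-scaling factor below are illustrative assumptions, not measured figures for any real part:

```python
# Toy model of the "more cores, lower clocks" strategy.
# Assumes dynamic power ~ cores * f * V^2, with voltage scaling roughly
# linearly with frequency inside the DVFS range, so power ~ cores * f^3.
# Throughput is modeled as cores * f, i.e. a perfectly parallel workload.

def power(cores: int, freq_ghz: float, v_per_ghz: float = 0.4) -> float:
    """Relative dynamic power: cores * f * V^2, with V ~ v_per_ghz * f."""
    v = v_per_ghz * freq_ghz
    return cores * freq_ghz * v * v

def throughput(cores: int, freq_ghz: float) -> float:
    """Relative throughput for an embarrassingly parallel workload."""
    return cores * freq_ghz

configs = [("6 cores @ 3.33GHz", 6, 3.33), ("12 cores @ 2.20GHz", 12, 2.20)]
for name, cores, f in configs:
    p, t = power(cores, f), throughput(cores, f)
    print(f"{name}: power={p:6.2f}  throughput={t:5.2f}  perf/watt={t / p:.2f}")

# In this toy model the 12-core part delivers ~30% more throughput at
# well under two-thirds of the modeled power, which is the whole appeal
# of trading clock speed for cores -- for either vendor.
```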
But Intel's current process and core design advantage means that AMD must use a lot more lower-clocked cores to achieve comparable throughput at the same power levels.

But that's not the whole story, because MC is superior when the software actually makes use of the extra cores.
When the software is in place the Xeon's 12 faster cores + HT can't compete with 24 "real" cores. There are obviously other factors to consider, but on a pure hardware level MC is every bit as good as Xeon on perf/watt.

...but only if you need those extra cores, and it's not like the Xeons lost by much. While server performance tends to scale out better, there is a growing portion of servers that are powerful enough, and for which an upgrade need not have tons of cores. Or they are bound by storage/network resources rather than CPU threads, RAM, etc. Often, for these, Intel offers better CPUs for the money right now, because the extra cores would just go to waste. Props to AMD for getting server chips that don't cost an arm and a leg, but when even 8 real cores is planning for the distant future and could let you skip an upgrade cycle, Intel's got compelling offerings.
But that's not the whole story, because MC is superior when the software actually makes use of the extra cores.

And there are other highly threaded and scalable benchmarks like SAP-SD and Cinebench 11.5 where the Xeon's per-core advantage is enough to overcome the core count deficit.
Bulldozer strong point - It gives people hope for the future.
Bulldozer weak point - It isn't available yet.
It's also sad that hardly anyone knows anything more about BD than when this thread was started. :thumbsdown:
I would like to see the NDA that AMD gave out to their partners...
But you don't have the same performance profile. You only have roughly comparable throughput if the customer uses enough threads to push every core. Otherwise, the Xeon is going to be faster. The fewer the cores needed to hit a certain theoretical throughput, the better that CPU is going to be in actual use.
I wouldn't bet on a strategy that requires twice the cores and three times the die size to match the theoretical throughput of my competitor's products, and a strategy that my competitor could easily adopt, leaving me with no other response... than maybe to slow down my cores further and double up their count again?
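That thread-count caveat is easy to quantify with Amdahl's law. A minimal sketch, assuming a ~30% per-core speed edge for the 12-core part; the core counts and parallel fractions are hypothetical stand-ins for the Xeon-vs-MC comparison in this thread, not measurements:

```python
# Amdahl's-law sketch of "12 faster cores vs 24 slower cores".

def throughput(per_core_speed: float, cores: int, parallel: float) -> float:
    """Relative throughput: the serial fraction runs on one core,
    the parallel fraction is spread across all cores."""
    serial = 1.0 - parallel
    return per_core_speed / (serial + parallel / cores)

xeon = (1.3, 12)   # ~30% faster per core, 12 cores (assumed)
mc   = (1.0, 24)   # baseline per-core speed, 24 cores (assumed)

for parallel in (0.50, 0.90, 0.99, 1.00):
    tx = throughput(*xeon, parallel)
    tm = throughput(*mc, parallel)
    winner = "Xeon" if tx > tm else "MC"
    print(f"parallel={parallel:4.0%}  Xeon={tx:6.2f}  MC={tm:6.2f}  -> {winner}")

# With a 50% parallel workload the faster cores win easily; only as the
# parallel fraction approaches 100% do the extra cores pull ahead.
```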
But wasn't that Sun's strategy with their Niagara processors, lots of weak cores to maximize throughput?
There are different philosophies around processors: big cores with HT, or a larger number of smaller cores. We each have chosen our strategy. I have never said that our strategy addresses every single server workload.
More cores address virtualization, cloud, database, web and HPC. I am not necessarily going to be the best choice for a standalone Exchange server. But, guess what: all of those applications that don't scale well are getting virtualized. So, essentially, as virtualization and cloud continue to grow, so does my opportunity.
There will always be a set of applications that you are not going to choose my processor for, but when you look at the estimates of where growth in the server market is, you see more cores being a better option.
If more cores were a bad choice, then why is Intel continuing to ratchet up their core counts? Are they stupid? No.
The argument of "the applications need to be able to take advantage of the cores" is great if you are a gamer running a single threaded first person shooter, but that is not the server market.
Analysts are expecting ~40-50% of the workloads to be virtualized by next year (sitting on ~20% of the physical servers). HPC is 8-10% of the market. Cloud will be ~20%. Database is ~10-12%. Web is another ~15-20%.
So, I guess there is only ~75% of the market that I have access to. Given that I am the #2 player in the market, I would rather go after the 75% of the market than the 25% that wants fewer, higher-clock-speed cores. Especially because that 25% keeps getting smaller.
There are so few quads sold in the server market today that you'll be hard pressed to see server parts with core counts like that in the future.
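For what it's worth, the segment percentages quoted above don't sum cleanly, which is easy to check. The sketch below just adds up the ranges from the post itself; nothing here is independently sourced:

```python
# Sanity check on the segment percentages quoted above.
segments = {
    "virtualization": (40, 50),
    "HPC":            (8, 10),
    "cloud":          (20, 20),
    "database":       (10, 12),
    "web":            (15, 20),
}

low = sum(lo for lo, hi in segments.values())
high = sum(hi for lo, hi in segments.values())
print(f"naive sum of segments: {low}%-{high}%")

# -> 93%-112%: the categories clearly overlap (virtualized database, web,
#    and cloud workloads get counted twice), which is presumably why the
#    post consolidates them to roughly 75% of the market rather than
#    taking the raw sum.
```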