gorydetails
Member
- Oct 28, 2011
Yeah, FP is a big deal; I'm guessing that's why SPARC took it a step further and shared one FPU among 8 cores, to improve FPU utilization. AMD's modules are nothing new under the sun.
Don't put words in my mouth. I NEVER said the FPU doesn't matter for a server. You have trouble understanding the word 'us'. It means my company, not all companies, just our company (one of the terms the English language uses for this is 'us'). For us, the FPU doesn't matter. 'Us' denotes a group that doesn't include every possible server usage scenario. Understand 'us' in the context of my posts now?
I guess I have to draw a picture, but I am not good at it so going to have to try simple words:
For
us
FPU
doesn't
matter.
For
us
IPC
doesn't
matter.
Understand now?
PS: But I am curious: what is your server development/administration/architecture experience, since you seem to know that for us the FPU is a must? Can you come over and work some black-magic voodoo so we can cut our server count from ten down to maybe 2-3 and save a lot of money in the process? You seem to know so well how to optimize server tasks that don't demand FPU/IPC. Please, come on down, we'll pay you very, very, very well.
I am dead serious by the way.
Thus my zombie comment. AMD is kept artificially alive. They just don't have the resources to compete, and never did. Athlon 64 was an Intel error: catching Intel performance-wise was only possible during the short window when Intel was run by marketing (the "gigahertz über alles" folks!).
Apples to oranges. NVIDIA was in a market that was fairly open; there wasn't a big competitor that utterly dominated it with a 2+ year advantage in manufacturing and ten times the monetary resources. So AMD had no choice. Instead of trying something new like Bulldozer, what could they have done? Die-shrunk Thuban/Deneb and tried to catch up with Intel? Good luck. There is virtually no reason for anyone to pick an AMD processor on the desktop, apart from some rare cases where budget is the limit, and a die shrink of Thuban would not have changed that. They had to try something new and get traction with more cores. Did/will it work? Who knows.
Why isn't any Linux distro taking desktop market share from Windows? Why are Macs hovering around, what, 7% market share despite the insane cash Apple has? It's very hard, nearly impossible, for a competitor to dethrone someone like Microsoft or Intel; they are too entrenched and have years of advantages.
Really? Thank you for adding value to this conversation.
I suppose some people are just happy that they can hate something and don't care about anything beyond that.
Bulldozer is a very promising CPU architecture that can only become more relevant with time; software will get more threaded, not less. I will probably not buy an FX now and will wait for a stepping or generation update, but I'm very much looking forward to what AMD can make of it. You, meanwhile, hate on!
You clearly have no clue what you're talking about. You're so self-centered that you think YOUR particular situation stands for everyone in the server world, even while you try to make it sound like it doesn't. If IPC doesn't matter, then I'm sure you'd be just fine with a 32-core NetBurst Pentium, right?
There's no point in discussing with a Bulldozer apologist like you. Goodbye.
If you don't care about IPC, then what do you care about? Does the server just sit there and look pretty to impress the folks who know nothing about technology?
You: 'Look at the server'
Masses: 'Oooooooooh'
You're missing the whole point. Companies are in the business of making money. Gaining market share is great, but profit is better. Why do you think Apple is worth more than Microsoft despite a lower market share? Also, Apple, with its 'puny' 7% market share, has led the industry in many ways through innovation and cooperation with hardware and software companies.
...in its current form it sucks balls!
If it "sucks balls", sign me up, in fact I'll take two!
Obviously I was a little disappointed at Bulldozer's architecture debut. But there were cases where the benchmarks really did not make sense: in one instance it scales, in another it doesn't. It's great, it's not, and all that mixed stuff. Like a roller-skating CPU. And it runs hot as well as inefficient (although one wonders what most of us are doing that's keeping our CPUs away from an idle state most of the time).
Anyway. Then there was the small Windows 8 performance increase over at Tom's.
And other things that keep popping out of the woodwork. It all led me to keep looking for reasons. (WHY, AMD, WHYYYYYY... I cried.)
I stumbled upon the Phoronix.com website, which tests Bulldozer under Linux... A LOT!!!!
Even in the initial review you get the idea that Bulldozer does better under Linux than Windows.
http://www.phoronix.com/scan.php?page=article&item=amd_fx8150_bulldozer&num=1
I was thinking it's not all bad... And then I stumbled upon the second article, about performance with different compilers. Compiling being a bit of an unknown to me, I was surprised by the sheer amount of performance to be gained this way.
http://www.phoronix.com/scan.php?page=article&item=amd_bulldozer_compilers&num=2
The article's conclusion is that Linux users will get these benefits before Windows users, because on Windows the kernel is always the same (whatever a kernel is ).
My thoughts are the following: it looks like it's actually a great architecture once applications take advantage of AVX and the other available extensions, which simply aren't coded for at the moment. And when they are compiled right... bingo.
Take Cinebench 11.5. Most people seem to treat it as the de facto "how fast is the CPU" benchmark for heavily threaded apps, simply because it's the one that's conveniently available. If it were optimized for AMD and Intel alike, would the Intel CPUs still show better? C-Ray, for example, takes a massive performance lead on Bulldozer once it's compiled with the right compiler.
I am not trying to say that Bulldozer is a magic chip, but there is more to it than just "it sux". Windows 8 will bring a new kernel, other patches, and chip optimizations that simply aren't available on Win7. New programs will certainly take advantage in time, and I'm guessing that by the end of 2012 people will be praising the chip, once power usage goes down and clock speeds increase even more. I say this because I own a Phenom I processor, and it's really no different from a Phenom II other than a patch I disable and being unable to go beyond 2.6GHz. (I got it cheap.)
What do you guys think: does the long-term performance outlook look better than what's available now? (I acknowledge that Intel has a considerable lead in ordinary programs today.)
People are actually defending AMD and that monstrosity BD?
Last I looked, Phenom never beat Intel in performance, only in price. Right?
I can't say a lot right now, but people saying BD is gonna suck in servers don't know shit.
There are companies that need more cores, companies that need more FPU, and companies that need more IPC from their servers.
Interlagos and Valencia will be aimed at the companies that need more core density: companies that can code their own apps and take advantage of BD's strengths.
There is more to the server market than IPC and performance per core.
Well, again... if a company looks and sees that more cores don't gain them much and come at the cost of higher power draw per server, as well as beefier cooling, they might back off. I don't expect this architecture to do well at all in the server market unless it is dirt cheap. Or they rev it.
For core density, Bulldozer offers more cores per unit of rack space. A 4-socket 1U or 2U server with four 16-core Interlagos chips gives you 64 cores, which translates to a lower cost per unit of rack space.
Bulldozer's configurable TDP can make those 64 cores run at lower power consumption with less cooling, which translates to lower operational cost.
Since AMD's Interlagos chips haven't been released yet and we haven't seen any benchmarks, saying they will suck in servers makes people look stupid.
The problem is that the people who make purchasing decisions about servers are at least as knowledgeable as we are. They won't be swayed by lots of cores; they will look at performance per core, capital cost, and heat/power requirements. For instance, if I asked you to choose between a dual-core Pentium 4 clocked at 3.6GHz and a single-core Sandy Bridge Celeron clocked at 3GHz, which would you pick? Even though the Pentium has more cores, it will be slower. It's the same with BD: having more cores doesn't magically make it better.
The problem is that BD's long pipeline and high cache latencies mean it performs best at high frequencies. Drop BD's frequency and its performance gets even worse. So you have to choose between average performance with high power consumption, or poor performance with low power consumption.
Why? Is Interlagos going to be very different from BD somehow? Yes, it will perform better in heavily multithreaded scenarios than in single-threaded ones. I just don't think its performance in heavily multithreaded scenarios, once power requirements are taken into account, will be good enough to warrant buying it.
I would choose a dual-core 32nm 10W 2.13GHz Atom over a single-core 1.6GHz 35W Sandy Bridge Celeron G440 for those applications, even though the Celeron has higher IPC.
I've never understood the motivation behind this form of colloquialism, where we take something that is actually quite desirable in the literal sense and use it as a means of denigration in writing or speech.
I know I'm late to the party, but here goes:
1) I can see it. Linux is the Goth half-breed arty half-brother of the OS world, and lots of fanatical coders use it, so, this makes sense.
2) BUT the thermal issues are a whole other ball of wax. It's hard to believe a 32nm CPU would have this problem; it speaks to fatal flaws in the architecture. Perhaps a second spin will fix some of them. Hope so.