According to JFAMD's blog, Bulldozer will have better perf/watt than Bobcat would in a server environment. I agree that ARM isn't going to have much to offer, but if Microsoft and Facebook are really considering Atom clusters, I'd say they are potential BD customers. Or at least should be.
In all likelihood, it will have better perf/watt on desktops, too. It's the total wattage and cost that make Atoms and Bobcats attractive, not the performance per watt under load. If you underclock the desktop CPUs sufficiently, they don't cost AMD or Intel any less to make, and so won't cost you any less, either (ULV and EE CPUs, for instance).
(ed: the above podspi quote was quoted by Dark Shroud, here)
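To put podspi's point in concrete terms (all numbers below are made-up placeholders, not benchmark results): a desktop-class part can win on perf/watt under load and still lose on the things that make Atom/Bobcat attractive, namely total draw and sticker price.

```python
# Hypothetical comparison: a low-power part vs. a desktop-class part.
# Throughput, wattage, and price figures are illustrative placeholders only.

chips = {
    # name: (relative throughput under load, power draw under load in W, price in USD)
    "low-power (Atom/Bobcat-class)": (1.0,  9.0,  80),
    "desktop-class":                 (12.0, 95.0, 200),
}

for name, (throughput, watts, price) in chips.items():
    print(f"{name}: perf/W = {throughput / watts:.3f}, "
          f"total draw = {watts:.0f} W, price = ${price}")

# With these placeholder numbers the desktop part has the better perf/W
# (12/95 ~= 0.126 vs. 1/9 ~= 0.111), but it still pulls ten times the power
# and costs more than twice as much per socket -- which is the distinction
# podspi is drawing.
```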
I haven't followed that blog. The big push for the use of Atom cores is the reduced cooling requirement, so it comes down to how much heat Bulldozer chips will produce and how much cooling they'll need. Either way, AMD has all its bases covered in the server market.
I wonder how they will be implemented, and received, in terms of RAS as well. RAS at the level of current Xeons is overkill for most (Intel likely plans to begin phasing out Itanium, or mixing x86 and IA64 in high-reliability environments, IMO), but error checking and logging, if not correction, on caches, RAM, and buses would be a must if I were specifying servers, especially now that Google's study found most errors to occur during transmission. If that isn't thoroughly implemented, I could see enthusiasm turning into a big fat "no" for Atom, Bobcat, and/or ARM servers.
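For what it's worth, here is the sort of minimal error logging I mean, as it already exists on Linux hosts with ECC RAM and the EDAC driver loaded; whether the low-power platforms would report anything into it at all is exactly the open question. A rough sketch (assumes the standard EDAC sysfs layout):

```python
# Minimal sketch: read corrected/uncorrected memory error counts from the
# Linux EDAC sysfs interface. Assumes the edac driver is loaded and the
# platform's memory controller reports errors at all -- the capability that
# is in question for Atom/Bobcat/ARM-class server parts.
from pathlib import Path

EDAC_MC = Path("/sys/devices/system/edac/mc")

def edac_error_counts():
    counts = {}
    for mc in sorted(EDAC_MC.glob("mc[0-9]*")):
        ce = int((mc / "ce_count").read_text())  # corrected (recoverable) errors
        ue = int((mc / "ue_count").read_text())  # uncorrected errors
        counts[mc.name] = (ce, ue)
    return counts

if __name__ == "__main__":
    errors = edac_error_counts()
    if not errors:
        print("No EDAC memory controllers found (no driver, or no error reporting).")
    for mc, (ce, ue) in errors.items():
        print(f"{mc}: {ce} corrected, {ue} uncorrected")
```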
They shouldn't be treating them like servers... VMware will do automatic load balancing and use live migration for it. Live migration is great for maintenance of a physical server during work hours. The inability to live-migrate between Intel and AMD is a knock against AMD: Intel has the market, and AMD is trying to get more of it. You need to work with the various hypervisors to make a mixed environment play nice.
While it would be great if they played nice, it is very common for migration between physical servers to be used only as an availability/restoration mechanism. When not hosting VMs for others, load balancing is easy to ignore: building a server with greater specs than you need right now can be cheaper than upgrading it in the future, and is almost always cheaper than adding a server later. Since software licenses make up so much of the cost, spending hundreds, or even a couple thousand, dollars more on extra RAM, faster storage, beefier CPUs, etc. lets consolidated servers be assigned all the resources they will need for their lifetime, sharing meaningful I/O only for nonvolatile storage, with minimal or no overallocation of CPU and RAM. If you need more than one Windows Server license, this can hold true even for sub-$2000 servers. In addition, the virtual machines tend to get software upgrades around the same time the host they live on gets replaced with new hardware, so it's more like having 10 dedicated servers in a single 1-4U box (or two boxes, if a SAN is used for primary storage) than having an arbitrary pool of computing and storage resources.
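A rough back-of-the-envelope version of that trade-off, with invented prices (hardware, hypervisor, and Windows licensing costs vary wildly, so treat every number as a placeholder):

```python
# Hypothetical consolidation math: once per-host software licensing dominates,
# over-speccing one box tends to beat adding a second one. All prices invented.

license_per_host = 3000   # e.g. OS + hypervisor licensing tied to the physical box
base_server      = 2000   # a modest 1U server
beefier_parts    = 1500   # extra RAM, faster storage, bigger CPUs in the same box

one_overspecced_box = base_server + beefier_parts + license_per_host
two_modest_boxes    = 2 * (base_server + license_per_host)

print(f"One over-specced box: ${one_overspecced_box}")  # $6500
print(f"Two modest boxes:     ${two_modest_boxes}")     # $10000
# With these placeholder numbers the single beefier box comes out $3500 ahead,
# before counting the second box's power, rack space, and admin time.
```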
The few times I've used internal servers in an "oversold" virtualized environment, where load balancing would be useful, peak times for other servers always ended up getting me support calls about the apps I supported running slowly, as if there were something I could do about it (mind you, I did redo the slowest and worst-written parts, but that only goes so far when everyone else's big, bloated apps are hogging resources at around the same time). I wouldn't be surprised if the people supporting those other apps had the same experience. If the users and developers have any say, that doesn't happen on the next upgrade cycle, and most easy-to-virtualize internal servers don't each need massive amounts of CPU and RAM. I haven't worked in an environment clustered by type and/or I/O needs, but that does seem like a decent compromise, if there are enough servers of a similar type that can share a network.