Originally posted by: Assimilator1
Sid
Yes, they were saying that, prior to the release of the GF 8800s, the GF 7950s were indeed less powerful than the X19xx cards, though you'd think they'd be powerful enough *shrug*. The 8800s are of course a different story.
waffleironhead
Interesting, I wonder if Nvidia will ever reply?
Originally posted by: GLeeM
I might be mistaken, but I thought ATI helped a lot to get the GPU client working.
Originally posted by: waffleironhead
Originally posted by: GLeeM
I might be mistaken, but I thought ATI helped a lot to get the GPU client working.
You are correct, ATI put a lot of work into this; it was in their best interest to get a working F@H driver. Kinda makes you wonder why the F@H PPD are so low if we are indeed generating so many FLOPS of computation. If I were ATI, I would lean on F@H to increase the PPD (might increase sales).
Originally posted by: Insidious
The F@H project leader has always demonstrated an apparent belief that there is really only one CPU manufacturer (Intel), and only one worthy GPU vendor as well (ATI).
They get rather indignant when asked to consider writing optimized code for 'lesser' architectures.
I've had several good laughs on their forum reading the rationale (pronounced: 'dogma') for these short-sighted project limitations.
-Sid
For those that don't know, he really thinks/means this, even though people here might say otherwise of posts like this:
Originally posted by: 7im
I'm not here to pick a fight,
I think this was about F@H performance, not necessarily other types of performance.
Originally posted by: 7im
the ATI hardware had leapfrogged the NV hardware in performance,
This kind of vitriol is not appreciated here.
Originally posted by: 7im
Optimized code for lesser hardware? What lesser hardware? You calling AMD lesser hardware? You still crying about the QMD work units that ended 2 years ago? Oh, wait, you are... I just hadn't gotten to your next post yet. Cripes, man, get with the present. You're just upset the SMP client performance on Barcelona doesn't smoke the C2Qs. :evil: And that's simply because of the smaller L2 cache compared to the Intel architecture. You can't blame Stanford for that one.
You can call F@h shortsighted if you want, but F@h is the first DC project to run on a GPU, and the first to run on a PS3. If you call that shortsighted, you must be blind. :roll:
And yes, some people, like me, become rather indignant when people, like you, spread the kind of false manure you are slinging here.
Originally posted by: 7im
@ GLeeM, I encourage people to run the GPU client all the time. I posted about it earlier today...
Originally posted by: 7im
GLeeM, I apologize for the vitriol, but what you call fury, I call fervor. So then are you saying it's okay to spread BS and dogma in this forum, as long as it's done calmly and politely? Or only if I agree with this forum's point of view for bashing the project/project lead? Or do I get a pass regardless of my views if I happen to fold for team # 198? Please explain the difference to me.
==============
Yes, I am only speaking to the performance of GPUs for F@H. The GPU code was developed on NV. The GPU client was released on ATI because the X1xxx series of cards folded significantly better than the previous NV generation. Then the NV 86xx and 88xx came along, and ATI and NV were on about equal ground for F@h performance. However, NV wasn't as helpful as ATI in getting a working driver for the F@h GPU client, and that's where we are today. Game performance of a GPU has no bearing on the development or usage of a GPU for a F@h client.
And I find it ironic that some would claim the project gives preference to only Intel and ATI, which would also indicate that AMD and NV are getting shorted (They aren't!). Well, it's going to be harder to spread that BS since ATI is now AMD. With that line of thought, the project would have to end the GPU client altogether, and the opposite is going to happen in the next few months.
And yes, the current encouragement appears to be for SMP, as it is one of the most productive (scientifically) clients at this point in the project. However, as I hinted, crow will be served shortly for those who think GPUs aren't being encouraged as well. Don't dump your X1xxx cards just yet boys and girls. Neither should the NV crowd.
Originally posted by: natethegreat
Just curious, how can you compare F@H performance on ATI and NVIDIA GPUs when there is no client for NVIDIA GPUs? How is Stanford running F@H on NVIDIA hardware?
Originally posted by: 7im
"You should check your history Sid. The original GPU code was developed on NV hardware. However, by the time the GPU code was ready to be put in a F@h client, the ATI hardware had leap frogged the NV hardware in performance, so ATI is where the F@H client started. With the release of the latest NV cards, the F@h project would have two GPU clients IF NV could provide working code/drivers, as you well know from the GPU threads in the F@h forum. That's an NV issue, not a Stanford issue. So why would you say Vijay is single minded about hardware when he has clearly demonstrated they support multiple hardware and platforms... CPU clients, GPU clients, SMP clients, PS3 clients, Mac OSX clients...
Optimized code for lesser hardware? What lesser hardware? You calling AMD lesser hardware? You still crying about the QMD work units that ended 2 years ago? Oh, wait, you are... I just hadn't gotten to your next post yet. Cripes, man, get with the present. You're just upset the SMP client performance on Barcelona doesn't smoke the C2Qs. :evil: And that's simply because of the smaller L2 cache compared to the Intel architecture. You can't blame Stanford for that one.
You can call F@h shortsighted if you want, but F@h is the first DC project to run on a GPU, and the first to run on a PS3. If you call that shortsighted, you must be blind."
----------------------------------------------------------------------------------------------------
IMHO you have to consider where the $$ to support the Stanford lab are coming from too:
Apple (good OSX client)
ATI (GPU client)
Sony (PS3 client)
Intel (QMD bonus, C2Q performance/bonus)
NSF & NIH (our tax $)
Crunchers (hardware & electricity $$)
NV & AMD need to provide funding & internal help to Stanford to get the same attention.
I'm not critical of the Stanford lab for pandering to these companies that offer them financial support, and I know they have to bring in $$ to survive in a highly competitive academic environment, but there's no reason to put them on a pedestal either.
The crunchers with AMD & NV hardware are feeling a bit left out, so the risk to the Stanford lab is that they may take their cycles to another project.
In the end, it's the science that's important, right?
:beer:
Originally posted by: biodoc...
In the end, it's the science that's important, right?
:beer: