DX9 performance on CPU

devers

Senior member
Jul 6, 2003
202
0
0
I'm just curious what people think about the following subject:

If Intel, AMD, or a third party put as much effort into DX9 driver/software optimization as ATI or nVidia do for their GPUs, do you think CPU DX9 performance would be comparable to today's graphics cards?

I know that DX9 has a software mode, but no doubt it is not the most highly optimized code possible. Just as a thought experiment, what do you think performance would look like if you rigged up a system with two of today's fastest CPUs: one to be used as the main processor, and one exclusively as a graphics processor. Somehow give the 2nd processor RAM and a memory subsystem as fast as those on today's graphics cards.

In other words, given equivalent setups (memory systems and software support), do you think that today's high-clocked, powerful CPUs could compete with the lower-clocked but much more specialized GPUs out now?

DX9 enables programmable/flexible shaders. As this trend continues, is it possible that more generalized but higher-clocked processors will render graphics shader programs as quickly as specialized processors?

Again, I'm not looking to actually do this or anything, nor am I asking if it's physically feasible to put together a system that would do this today... just asking out of theoretical curiosity.
 

McArra

Diamond Member
May 21, 2003
3,295
0
0
Originally posted by: nemesismk2
Originally posted by: McArra
Originally posted by: nemesismk2
Originally posted by: VIAN
Hardware support is always better than software.

Unless you have a Geforce FX, sorry couldn't resist!

LOL.

well some people are taking things far too seriously, ok so nvidia's geforce fx line isn't so great, it's hardly the end of the world is it?

Must agree
 

McArra

Diamond Member
May 21, 2003
3,295
0
0
Originally posted by: nemesismk2
anyway when S3 release the DeltaChrome we will all forget about ATI and Nvidia and buy a real video card instead!

hehehe, sure!
 

Johnbear007

Diamond Member
Jul 1, 2002
4,570
0
0
Originally posted by: McArra
Originally posted by: nemesismk2
Originally posted by: McArra
Originally posted by: nemesismk2
Originally posted by: VIAN
Hardware support is always better than software.

Unless you have a Geforce FX, sorry couldn't resist!

LOL.

well some people are taking things far too seriously, ok so nvidia's geforce fx line isn't so great, it's hardly the end of the world is it?

Must agree

No, but it is a huge disappointment. Really, unless you're a fanboy there is no reason to grab one of these cards, and that sux. When there is no competition, prices stay high. If there were some real competition between products right now we might not have this huge dead zone between high and low end. High-end cards like the 9700 might have dropped in price if the 5800 or 5600 could compete.
 

PricklyPete

Lifer
Sep 17, 2002
14,582
162
106
To answer the question of the thread...no. While modern-day graphics cards do have the ability to run "generic" programs instead of only applying standard graphics functions hard-wired into the hardware, they do a whole lot more than just that. Pixel shaders and vertex shaders are only part of the overall chip.

GPU hardware is specifically designed for graphics applications and has many specialized functions designed specifically to speed up image processing. While you could emulate all of this special hardware in software using a modern CPU...it is not going to be anywhere near as fast.

I know that was a run-on sentence...but hopefully it made some sense.
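
For example, here's a toy alpha-blend loop in C++ (just a sketch; the resolution, pixel layout, and values are made up) showing the kind of per-pixel work that fixed-function blending hardware chews through across several pipelines at once, but that a CPU emulating it has to grind out one pixel at a time:

#include <cstdint>
#include <cstdio>
#include <vector>

// Blend src over dst for every pixel of an RGBA8 framebuffer, one pixel at a time.
void alpha_blend(std::vector<uint32_t>& dst, const std::vector<uint32_t>& src)
{
    for (size_t i = 0; i < dst.size(); ++i) {
        uint32_t s = src[i], d = dst[i];
        uint32_t a = (s >> 24) & 0xFF;  // source alpha
        uint32_t r = (((s      ) & 0xFF) * a + ((d      ) & 0xFF) * (255 - a)) / 255;
        uint32_t g = (((s >>  8) & 0xFF) * a + ((d >>  8) & 0xFF) * (255 - a)) / 255;
        uint32_t b = (((s >> 16) & 0xFF) * a + ((d >> 16) & 0xFF) * (255 - a)) / 255;
        dst[i] = (a << 24) | (b << 16) | (g << 8) | r;
    }
}

int main()
{
    std::vector<uint32_t> dst(1024 * 768, 0xFF000000u);  // opaque black
    std::vector<uint32_t> src(1024 * 768, 0x80FFFFFFu);  // half-transparent white
    alpha_blend(dst, src);
    printf("first pixel after blend: 0x%08X\n", (unsigned)dst[0]);
    return 0;
}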
 

devers

Senior member
Jul 6, 2003
202
0
0
Thanks Pete

Have any specifics? I agree with you, just curious for specifics.

Do you think that, at some point, with shader capabilities becoming more and more generalized and programmable, CPUs will be able to render graphics as quickly as GPUs? I mean, CPUs obviously do floating-point calcs very quickly... and more and more, that's pretty much what extensive shader programs come down to. The x86 instruction set can handle the matrix algebra necessary to compute graphics transforms very quickly...
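
For instance, the core of a vertex transform is just a 4x4 matrix times a 4-component vector, which is straightforward to write on the CPU. A rough C++ sketch (not tied to any actual driver; the identity matrix is only a trivial example input):

#include <cstdio>

struct Vec4 { float x, y, z, w; };

// Row-major 4x4 matrix times a 4-component vector: the basic transform step.
Vec4 transform(const float m[4][4], const Vec4& v)
{
    Vec4 r;
    r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
    r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
    r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
    r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
    return r;
}

int main()
{
    // Identity matrix used as a trivial example transform.
    float identity[4][4] = {
        {1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}
    };
    Vec4 v = {1.0f, 2.0f, 3.0f, 1.0f};
    Vec4 r = transform(identity, v);
    printf("%.1f %.1f %.1f %.1f\n", r.x, r.y, r.z, r.w);
    return 0;
}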

Just wondering if/when CPUs might compete with GPUs, given equivalent memory subsystems, and equally optimized and specialized DX9/OGL driver interfaces.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
> Just wondering if/when CPUs might compete with GPUs, given equivalent memory subsystems, and equally optimized and specialized DX9/OGL driver interfaces.

I'd guess not any time soon, since transistor count / overall "power" is likely to follow the same exponential curve for both (Moore's law). We're still a few generations away in both the CPU and the GPU from reaching Pixar / Final Fantasy movie-level rendering in real time even with the current division of labor. The CPU's power might need to double a few times after that before it makes sense to stop increasing GPU power.
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
devers, from my limited understanding the reason is quite simple: the two processors are geared toward different tasks. CPUs are specialized for processing lots of different instructions at once, with a lot of work put into optimizing out-of-order execution. GPUs (or VPUs) are specialized for brute-force, repetitive tasks done simultaneously on similar objects. You can tell by transistor count alone that it would be very tough to not only emulate but surpass a GPU on a CPU: the R350 and NV35 are 107-135M transistors each, while the P4 is only around 55M. GPUs have a lot more hardware geared exclusively for 3D work, like making AF and AA cheaper. Heck, they're just starting to allow conditionals and branching in their pipelines with DX9-level "shaders," something essential to most programming languages (and thus the CPUs that run them).

I suppose GPUs may become more similar to CPUs over time, as they become more programmable, but I think each processor will have lots of hardware dedicated to functions unique to its field.

That's my understanding, from the sidelines. Hopefully it's not too wrong.
 

devers

Senior member
Jul 6, 2003
202
0
0
Cool, thanks for the responses. Yeah, it was just something I was thinking about and got curious.

Your closing statement there is interesting, Pete; I think you kind of state my question in a more understandable form.

Given that shader programs are becoming more flexible, needing to do more generalized floating-point calcs, matrix algebra, and eventually conditionals and jumps... when will GPUs start looking very similar to CPUs? These are pretty much the same things that CPUs have to do, and GPUs will likely have to start incorporating caches and branch-prediction algorithms.

I'm just curious at what point, if ever, a "GPU" will become extraneous. The major CPU manufacturers have a long tradition of optimizing for graphics-like functionality (MMX, et al). Meanwhile, GPU engineers will soon have to start dealing with complex, general shader programs that will probably look a lot like any numerically intensive program written for general processing.

Some time ago, graphics processing became very specialized/limited and diverged from central processing... with CPU engineers constantly integrating graphics-like optimizations into their units, and GPU engineers constantly having to design units that can process more and more generalized and complex shader code, do you think there will be some convergence between GPUs and CPUs in the relatively near future?
 

Pete

Diamond Member
Oct 10, 1999
4,953
0
0
I'm so out of my depth here it's not even funny, but I doubt CPUs and GPUs will converge for a good 10-20 years. Either way, remember that we probably won't move to an all-in-one model anytime soon, as each still needs as much bandwidth as possible. So even if a CPU became like a GPU, you'd need two in your system, each with its own dedicated memory bus. And "regular" PC memory won't approach the speed of the memory integrated on a video card, as it needs to be much more compatible. The memory on a video card is connected directly to the GPU and is in a controlled, unique environment; the memory in a PC has to be compatible with tons of systems, and is one slot removed from the CPU.

I think this is as far as I can contribute to this conversation before I start misinforming people.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: devers
Cool, thanks for the responses. Yeah, it was just something I was thinking about and got curious.

Your closing statement there is interesting, Pete; I think you kind of state my question in a more understandable form.

Given that shader programs are becoming more flexible, needing to do more generalized floating-point calcs, matrix algebra, and eventually conditionals and jumps... when will GPUs start looking very similar to CPUs? These are pretty much the same things that CPUs have to do, and GPUs will likely have to start incorporating caches and branch-prediction algorithms.

I'm just curious at what point, if ever, a "GPU" will become extraneous. The major CPU manufacturers have a long tradition of optimizing for graphics-like functionality (MMX, et al). Meanwhile, GPU engineers will soon have to start dealing with complex, general shader programs that will probably look a lot like any numerically intensive program written for general processing.

Some time ago, graphics processing became very specialized/limited and diverged from central processing... with CPU engineers constantly integrating graphics-like optimizations into their units, and GPU engineers constantly having to design units that can process more and more generalized and complex shader code, do you think there will be some convergence between GPUs and CPUs in the relatively near future?

I seriously doubt there will be convergence between CPUs and GPUs anytime soon. If you think about the architecture of both, even from a layman's point of view, you can see the differences are night and day. CPUs, although they have some decent parallelism with SSE/SSE2, cannot substitute for GPUs in graphics calculations. Take the GeForce 2-4, which have 4 parallel pipelines that can each render 2 textures per clock, or the Radeon 9700-9800, which have 8 pipes with 1 texture per clock. These are highly specialized processes which, if emulated by a CPU, would need to be done sequentially (i.e., not in parallel) and would perform significantly slower.

Not to mention the so-called "GPUs" themselves: the pixel and vertex shader engines, which are designed solely for pixel/vertex calculations (i.e., DirectX 8 and 9 features).

Another huge factor in keeping video cards permanently separate is video card memory. GPUs have very fast and wide paths to huge amounts of memory, i.e., 128-bit and 256-bit paths at 300+ MHz DDR. A good example is the Radeon 9700, which has a ~20GB/s pathway to 128MB of DDR memory. A high-speed memory subsystem is necessary for all that texture data (games take anywhere from a few MB of video memory to 50+ MB for textures). Without some sort of built-in memory like video cards have, with an ultra-high-speed pathway to the processor, there is no way a CPU could even access all that video data fast enough to render it. The fastest platform right now would be a dual-channel P4/Athlon with dual-channel PC3200. This yields 6.4GB/s of bandwidth (peak). That isn't anywhere near the 20+ GB/s of modern GPUs (NV4x is rumoured to have 50GB/s of bandwidth!).
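
As a back-of-the-envelope check on those numbers (a C++ sketch; the function name and the exact clock speeds are just for illustration), peak DDR bandwidth is roughly bus width times clock times two transfers per clock:

#include <cstdio>

// Peak bandwidth in GB/s: (bus width in bits / 8) * clock in Hz * 2 transfers per clock (DDR)
double ddr_peak_gb_per_s(double bus_bits, double clock_mhz)
{
    return (bus_bits / 8.0) * (clock_mhz * 1e6) * 2.0 / 1e9;
}

int main()
{
    // Radeon 9700 Pro: 256-bit bus at ~310 MHz DDR -> roughly 20 GB/s
    printf("Radeon 9700 Pro : %.1f GB/s\n", ddr_peak_gb_per_s(256, 310));

    // Dual-channel PC3200: two 64-bit channels at 200 MHz DDR -> 6.4 GB/s
    printf("Dual-ch. PC3200 : %.1f GB/s\n", ddr_peak_gb_per_s(2 * 64, 200));
    return 0;
}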
 

VIAN

Diamond Member
Aug 22, 2003
6,575
1
0
CPUs are very simple. They only have one pipeline, with about 20 stages for the Pentium 4, which tops AMD. GPUs, on the other hand, have 8 pipelines with hundreds of stages. Think of each pipeline as one GPU. GPUs are far too complex for CPUs. While it would be possible for CPUs to handle graphics and special shaders via software, it really is better to have the shader power embedded in the chip for quick access and use. Although they are slow, GPUs have a lot of power. CPUs, on the other hand, only do 1 of 3 things at any one time, really quickly. I'm guessing GPUs also have caches and registers, but they probably aren't as performance-related as for CPUs. Oh, and my answer is no.
 

devers

Senior member
Jul 6, 2003
202
0
0
So, thus far people seem to be making two main points:

1) GPUs have very specialized functionality, and can process certain kinds of directives very quickly. There are certain important and repetitive graphics algorithms which, while slow to perform in software, can be executed quite rapidly when they are specifically encoded in hardware, as in today's GPUs.

2) GPUs incorporate an impressive amount of parallelism. Accordingly, GPUs are able to execute more instructions per clock than most CPUs. Thus, even though GPUs are clocked significantly below today's CPUs, they are still very powerful.

People also mentioned specialized graphics memory controllers and sub-systems. As stated previously, for this question to even make sense, we'd have to assume that both processor types utilize the same memory subsystems.

The first two points above are very astute.

To the first point, though, as graphics programs become more and more flexible (incorporating more variables, jumps, and comparisons), hard-wired graphics algorithms will be less and less useful. Can't we then conclude that one of the GPU's main advantages will provide diminishing returns in the future? Especially if we consider that CPUs seem to regularly incorporate instruction sets which have useful graphics applications. Moreover, as GPUs are forced to incorporate branching program structures, the chips will no doubt have to incorporate instruction caches and branch prediction... areas where CPU engineers are highly experienced. Perhaps this will confer an additional advantage for future CPUs vs GPUs.

To the second point above, there is no doubt that GPUs are effectively able to execute many more instructions per clock than any modern CPU. Still, CPUs are constantly incorporating more and more parallelism: witness SSE, SSE2, and Hyper-Threading, as well as other technologies. Thus, CPUs seem to be heading toward increased parallelism in the future as well. GPUs still have higher effective IPC than CPUs, but how long will this be the case? It is well known that, as far as we know today, there is a (fuzzy) limit to how fast silicon processors can get in terms of raw megahertz. If this is the case and CPUs, at least for a while, aren't able to match Moore's Law in terms of raw hertz increase, CPU engineers will have to start increasing CPU IPC in order to derive significant performance improvements. As such, it would appear that the second advantage GPUs currently enjoy over CPUs could also be diminished in the not-too-distant future.
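
To illustrate the SSE angle (a minimal C++ sketch using the standard xmmintrin.h intrinsics; the array contents are arbitrary), a single SSE instruction already operates on four floats at once:

#include <xmmintrin.h>  // SSE intrinsics: one 128-bit register holds 4 floats
#include <cstdio>

int main()
{
    float a[8]   = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8]   = {10, 20, 30, 40, 50, 60, 70, 80};
    float out[8] = {0};

    // Multiply four pairs of floats per iteration instead of one.
    for (int i = 0; i < 8; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);   // load 4 floats (unaligned load)
        __m128 vb = _mm_loadu_ps(&b[i]);
        __m128 vr = _mm_mul_ps(va, vb);    // 4 multiplies in one instruction
        _mm_storeu_ps(&out[i], vr);
    }

    for (int i = 0; i < 8; ++i)
        printf("%.0f ", out[i]);
    printf("\n");
    return 0;
}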

I appreciate all the input in this thread so far. I'm not really trying to argue with anybody... just being curious. Seems like there are some very smart, tech-savvy folks here, so I think it's fun to have mini-debates about such things.

To me... it seems that, given equivalent memory subsystems, somebody could take, for example, a Pentium 4, and write a pretty kickass DX9 driver for it. I don't think it would stack up to other DX9-class processors (well, maybe the FX5200). But... what about DX10-class GPUs? Supposing that DX10 supports complex, branching shader programs, GPUs might start looking more like typical CPUs. And CPU engineers seem to consistently encode useful graphics functionality in their products. What might DX11 (or whatever standards besides DirectX, if any, are around at the time) specify in terms of shader programmability? Will there be some point at which GPUs will have to be so generalized as to lose most of the specialized-hardware advantages they currently enjoy?

Heh... maybe in 2010 I'll have a system with dual ATI CPUs, one of which has a controller for a separate auto-antialiasing graphics memory sub-system.
 

Jeff7

Lifer
Jan 4, 2001
41,596
19
81
One more thing to add - the GPU has direct access to the video card's fast RAM. The CPU must go through the chipset to access the system RAM, and the data travels over the system bus, which is much slower than the CPU itself. The RAM is also slower. RAM speeds on video cards can actually exceed the speed of the GPU nowadays, which means the GPU has a constant supply of data to process and isn't kept waiting for it the way the CPU is.
 

xSauronx

Lifer
Jul 14, 2000
19,582
4
81
Originally posted by: nemesismk2
anyway when S3 release the DeltaChrome we will all forget about ATI and Nvidia and buy a real video card instead!

sorry

S3 cannot make me buy a voodoo 5000...
 

Insomniak

Banned
Sep 11, 2003
4,836
0
0
Originally posted by: nemesismk2
Originally posted by: McArra
Originally posted by: nemesismk2
Originally posted by: VIAN
Hardware support is always better than software.

Unless you have a Geforce FX, sorry couldn't resist!

LOL.

well some people are taking things far too seriously, ok so nvidia's geforce fx line isn't so great, it's hardly the end of the world is it?

agreed. Get over it. so the NV3x architecture sucks....whoopdy doo.

 

VIAN

Diamond Member
Aug 22, 2003
6,575
1
0
I guess you could think of a GPU as a multicore CPU with memory controllers and specialized graphics hardware functionality.
Check this out for more info on graphics stuff. It might help - I've never read it - too long.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,000
126
If Intel, AMD, or a third party put as much effort into DX9 driver/software optimization as ATI or nVidia do for their GPUs, do you think CPU DX9 performance would be comparable to today's graphics cards?
Not even if hell froze over, allowing full ice skating access.

Have any specifics? I agree with you, just curious for specifics
CPUs are generic and are designed to handle anything. In contrast, GPUs only support a very limited number of functions, all of which pertain to 2D/3D rendering.
 