In A Perfect World

DanInPhilly

Member
Jan 18, 2008
33
2
0
I'm wondering whether the legacy aspect of microprocessors, as well as OS's, is seriously harming performance. IOW, if a CPU (and related hardware) could be designed from scratch, without the need to support older hardware, would it be much faster? This assumes that the OS (and, for that matter, apps) were also designed anew.

If the boost was only a few percent, then no big deal. But if performance rose, say, 50% or more, I'd support a movement to adopt that as the new standard.
 

suszterpatt

Senior member
Jun 17, 2005
927
1
81
While I don't know how much of a performance increase you could achieve with an entirely new architecture/platform, it would probably create more problems, both in the design process and in incompatibilities, than it would solve: as technology advances, that system would also be slowly adapted to new achievements, and would slowly "degrade" with legacy features of its own. That is, unless you want to completely redesign all hardware and software every time a new piece of technology puts a dent in performance, but that's obviously not feasible.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
It would improve performance, there is no doubt of that.
Actually, this is done quite a bit, just not in the desktop market.
Look at things like console design.
Yes, they are designed for gaming, but at the heart of it are a CPU, GPU, memory, storage, and operating system, all designed new and not using legacy parts unless they add to the performance of the current model.

Take the performance of something like the PlayStation 3.
It's very good at processing for little cost.
The thing that would hold it back from becoming a desktop platform is software: most apps are written for Windows, with only a small minority of users on Linux.

But the hardware idea is there.
 

firewolfsm

Golden Member
Oct 16, 2005
1,848
29
91
Modelworks hit it: consoles are basically designed from the ground up for that one system. They don't use DirectX either; that's why even something as old as the PS2 can use HDR lighting effects.
 

Foxery

Golden Member
Jan 24, 2008
1,709
0
0
Newer versions of Windows have finally broken away from DOS, and some motherboards are finally being produced without serial and parallel ports on them, but the need for backwards compatibility in consumer products makes it tough to get away from the old tech. Your shiny new Core2Quad 3GHz machine with 4GB RAM *still* knows how to act like a 16-bit 286, with a memory hole at 640KB juuuuust in case.

As far as speedups go, I would guess that the impact of keeping "extra" circuits in hardware should be just about nil if they're sitting around inactive. You save on memory usage and load times when the OS no longer needs to load all the extra legacy drivers into memory, e.g. Windows 95/98 ran on top of DOS, whereas XP and Vista boot as native 32-bit (or 64-bit) and emulate DOS if you need it.

Once the system has booted and detected your hardware's actual capabilities, I can't think of a reason why there would be a performance difference for meaningful work. If your favorite application is written & compiled to expect modern hardware, and does not include provisions for legacy support, vestigial junk elsewhere in the system shouldn't affect it.
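For instance, here's a minimal sketch of that "compiled to expect modern hardware" idea in C, assuming a GCC-style compiler (__builtin_cpu_init and __builtin_cpu_supports are real GCC builtins; the transform functions are made up for illustration):

    #include <stdio.h>

    /* Minimal sketch: an app built for modern hardware picks its fastest
       code path once at startup instead of carrying legacy baggage. */

    static void transform_sse2(void)   { puts("using the SSE2 code path"); }
    static void transform_scalar(void) { puts("using the plain scalar path"); }

    int main(void)
    {
        __builtin_cpu_init();                /* populate the feature flags */
        if (__builtin_cpu_supports("sse2"))  /* checked once, at startup   */
            transform_sse2();
        else
            transform_scalar();              /* drop this branch entirely if
                                                legacy CPUs aren't supported */
        return 0;
    }

Once the fallback branch is gone, the vestigial hardware elsewhere in the system never even gets exercised.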
 

ChAoTiCpInOy

Diamond Member
Jun 24, 2006
6,442
1
81
Well, your scenario already happened when Apple switched from Mac OS 9 to Mac OS X, and yes, performance did increase. When companies don't have to worry about supporting legacy stuff, they don't have to build it in and make sure it doesn't conflict with anything from before.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: Foxery

Once the system has booted and detected your hardware's actual capabilities, I can't think of a reason why there would be a performance difference for meaningful work. If your favorite application is written & compiled to expect modern hardware, and does not include provisions for legacy support, vestigial junk elsewhere in the system shouldn't affect it.


If you're only looking at it from a software perspective, that's true.
But from a hardware viewpoint, legacy does slow things down.
There are much faster interfaces than PCI now, yet PCI will remain in use because it's needed for older hardware.

There is space on the x86 processor die that is there just to support legacy applications.
That space could be used to enhance the processor for more modern software.

Then you have things like storage.
They could make hard drives store more data, and more cheaply, if they could change the physical size of the drive, but they can't, because it has to fit legacy cases.

Blu-ray discs are the size they are not because that gave the best storage, but because the same drives had to be DVD-compatible. Adding another 1/2" would have increased the storage, but then you couldn't use DVD cases for the discs.

Memory slots on the motherboard limit the amount of memory that can be installed because the slots have to fit the current standard. You can only fit so many chips on a PCB and still fit it in the slot. If you were designing from scratch, you could add more pins, change placement, etc.

A lot of the time, standards or legacy requirements limit current designs.

 

firewolfsm

Golden Member
Oct 16, 2005
1,848
29
91
Originally posted by: Modelworks

Then you have things like storage.
They could make hard drives store more data, and more cheaply, if they could change the physical size of the drive, but they can't, because it has to fit legacy cases.

Blu-ray discs are the size they are not because that gave the best storage, but because the same drives had to be DVD-compatible. Adding another 1/2" would have increased the storage, but then you couldn't use DVD cases for the discs.

Memory slots on the motherboard limit the amount of memory that can be installed because the slots have to fit the current standard. You can only fit so many chips on a PCB and still fit it in the slot. If you were designing from scratch, you could add more pins, change placement, etc.

A lot of the time, standards or legacy requirements limit current designs.

You're talking about making bigger hard drives, memory slots, and discs. That WOULD increase capacity, but at the cost of manufacturing, convenience, and heat. Those sizes were chosen for a reason.

There ARE some benefits, like removing legacy support on chips, but they're small. In fact, 90% of the benefit would come on the software side: development costs would probably halve if developers only had to support one OS or one set of hardware.

 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Those sizes were chosen for a reason at the time of their invention, which was many years ago.
Those reasons don't apply now.

DVD size: because of needing CD compatibility.
Hard drive size: because that's the size it's been for 10+ years.
At the time, platters in hard drives were very expensive to produce.

It's not a discussion about cost or convenience, but about what you could do if you started from scratch.



 

Foxery

Golden Member
Jan 24, 2008
1,709
0
0
Originally posted by: Modelworks
If you're only looking at it from a software perspective, that's true.
But from a hardware viewpoint, legacy does slow things down.
There are much faster interfaces than PCI now, yet PCI will remain in use because it's needed for older hardware.

I don't have anything plugged into my PCI slots any more. They're idle circuits. In AMD systems with HyperTransport, they're truly useless, and next year's Intel Nehalem systems will be able to say the same.

There is space used in the x86 processor die that is there just to support legacy applications.
That space could be used to enhance the processor for more modern software.

With today's 45nm manufacturing process, you can fit all of the circuitry required to build an original Pentium CPU on the head of a pin. If mainstream PCs ever break away from the x86 architecture, they could easily do exactly that.

Memory slots on the motherboard limit the amount of memory that can be installed because the slots have to fit the current standard. You can only fit so many chips on a pcb and still fit it in the slot. If you were designing from scratch you could add more pins, change placement, etc.

Each new generation of memory (FPM -> EDO -> SDRAM -> DDR1/2/3) does change the pin count and slot size, because they are not compatible. The # of slots on a motherboard is not a design limitation; it's just that they've decided 4 is practical for the average home user. Xeon server boards sometimes have 8 or more RAM slots. Some cheapo OEM machines only come with 2.
 

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
If one could start from scratch, I'd see the entire computer architecture, except mass storage, on the CPU. All the processing, RAM, video, IO, everything on the CPU with an embedded OS. The CPU architecture itself would be mostly programmable, and the individual pieces would be driven by software that is also embedded. These CPUs would be "plugged" into an adapter board that has connections for video, input devices, USB, power, mass storage, etc.

Then, scaling for performance would simply be adding another all-in-one CPU to the adapter (if it supported it).

Software would either simply utilize the existing architecture programming or completely rewrite and redirect the CPU resources to a specialized task.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: Foxery


I don't have anything plugged into my PCI slots any more. They're idle circuits. In AMD systems with HyperTransport, they're truly useless, and next year's Intel Nehalem systems will be able to say the same.

So having them on the motherboard is using space that could be used for the faster interfaces, but they will continue to be included because of legacy support.


With today's 45nm manufacturing process, you can fit all of the circuitry required to build an original Pentium CPU on the head of a pin. If mainstream PCs ever break away from the x86 architecture, they could easily do exactly that.
Space is at a premium even at 45nm.
There is a huge list of things that are considered for each revision but left out because there isn't enough space. If you did not have to support previous software, a newly designed CPU would be faster than what is currently available.
It's done all the time in the embedded market.



Each new generation of memory (FPM -> EDO -> SDRAM -> DDR1/2/3) does change the pin count and slot size, because they are not compatible. The # of slots on a motherboard is not a design limitation; it's just that they've decided 4 is practical for the average home user. Xeon server boards sometimes have 8 or more RAM slots. Some cheapo OEM machines only come with 2.
I'm not talking about the number of slots on the board, but about the way they interface with the board.

Ever been to an IEEE meeting when a new standard is being decided?
The first thing that happens is us engineers talk about what would give the best performance.
Then they bring in the bean counters, and you get to see your performance figures start to slide because of 1) the need for legacy support and 2) the cost of retooling the manufacturing process.

I've got a friend at Samsung who proposed a vertical, cube-like memory module, almost like a CPU socket, that would allow more memory, heatsink attachments, and higher speeds.
The design had great benefits, but because it would cost too much to change out the current manufacturing process, which is tooled for the current DIMM format, it's not being used.

If you designed a new pc platform right now from scratch it would be faster than what we currently have.


If you throw cost and legacy support out, technology becomes much more interesting.
 

PolymerTim

Senior member
Apr 29, 2002
383
0
0
Originally posted by: KIAman
If one could start from scratch, I'd see the entire computer architecture, except mass storage, on the CPU. All the processing, RAM, video, IO, everything on the CPU with an embedded OS. The CPU architecture itself would be mostly programmable, and the individual pieces would be driven by software that is also embedded. These CPUs would be "plugged" into an adapter board that has connections for video, input devices, USB, power, mass storage, etc.

Then, scaling for performance would simply be adding another all-in-one CPU to the adapter (if it supported it).

Software would either simply utilize the existing architecture programming or completely rewrite and redirect the CPU resources to a specialized task.

No offense, but that sounds like one big mess. I know the "perfect world" the OP mentioned could possibly allow for the manufacturing capability to do what you suggest with reasonable performance, but I cannot imagine that this kind of all-in-one setup would ever be more efficient than a modular design given the same technology to work with. A GPU on every CPU (how many do you really want in one computer)? RAM upgrade? Sorry, you have to replace the entire all-in-one chip.

I'm not familiar with how programmable CPUs would work, but I've got to think there would be a much better way than this in a "perfect world".
 

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
Originally posted by: PolymerTim
Originally posted by: KIAman
If one could start from scratch, I'd see the entire computer architecture, except mass storage, on the CPU. All the processing, RAM, video, IO, everything on the CPU with an embedded OS. The CPU architecture itself would be mostly programmable, and the individual pieces would be driven by software that is also embedded. These CPUs would be "plugged" into an adapter board that has connections for video, input devices, USB, power, mass storage, etc.

Then, scaling for performance would simply be adding another all-in-one CPU to the adapter (if it supported it).

Software would either simply utilize the existing architecture programming or completely rewrite and redirect the CPU resources to a specialized task.

No offense, but that sounds like one big mess. I know the "perfect world" the OP mentioned could possibly allow for the manufacturing capability to do what you suggest with reasonable performance, but I cannot imagine that this kind of all-in-one setup would ever be more efficient than a modular design given the same technology to work with. A GPU on every CPU (how many do you really want in one computer)? RAM upgrade? Sorry, you have to replace the entire all-in-one chip.

I'm not familiar with how programmable CPUs would work, but I've got to think there would be a much better way than this in a "perfect world".

Ahh, but realize the individual functions of the chip's parts would be entirely programmable, so adding in another CPU wouldn't mean 2 of every piece, just that the processing power increases depending on what the software intends to do.

For example, playing a video game would mean that a second chip would be entirely programmed to act as a powerful GPU. Or for a single CPU setup, any unused portion of the CPU can be redirected to GPU functions.

If you think about it, this would be a very modular design already.
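Purely as illustration, an API for that kind of repurposing might look something like the sketch below; every name in it is invented for the example, nothing like a real OS interface, and the "OS calls" are stubbed out so it compiles and runs:

    #include <stdio.h>

    /* Hypothetical sketch only: an embedded OS exposes generic cores that
       software can claim and reprogram for a specialized role at runtime. */

    typedef enum { ROLE_GENERAL, ROLE_GPU, ROLE_DSP } core_role;
    typedef struct { int id; core_role role; } core;

    static core claim_idle_core(void)        /* stub: pretend core 1 is idle */
    {
        core c = { 1, ROLE_GENERAL };
        return c;
    }

    static void program_core(core *c, core_role r)  /* stub: record the role */
    {
        c->role = r;
        printf("core %d reprogrammed as %s\n", c->id,
               r == ROLE_GPU ? "GPU" : r == ROLE_DSP ? "DSP" : "general");
    }

    int main(void)
    {
        /* Launching a game: repurpose a spare general core as a GPU. */
        core spare = claim_idle_core();
        program_core(&spare, ROLE_GPU);
        return 0;
    }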

Also, what is really driving our upgrades? Why is it that people want to upgrade their RAM or GPU? Even now, a generation-old GPU offers the same FPS as the newest and fastest GPU on a decently sized screen (19"). Software used to drive the hardware market, but now hardware has moved at such a pace that software can't keep up. My old P2 can still perform the same tasks my blazingly fast gaming machine can in terms of word processing, internet, and other basic general computing tasks.

Software, at its core, is very inefficient. It is allowed to be bloated because of the hardware performance gap. Assuming an embedded OS that offers an API to control the preprogrammed architecture, or a direct mode that allows reprogramming the architecture, the size and memory footprint can be very small.

For hardware to make any drastic changes, so must software and software design.

The real drawback (the current drawback as well) is mass storage and thus, IO. Until we can get mass solid state storage, this will always be a performance problem.
 

firewolfsm

Golden Member
Oct 16, 2005
1,848
29
91
Originally posted by: Modelworks
Those sizes were chosen for a reason at the time of their invention, which was many years ago.
Those reasons don't apply now.

DVD size: because of needing CD compatibility.
Hard drive size: because that's the size it's been for 10+ years.
At the time, platters in hard drives were very expensive to produce.

Actually, we used to have 5.25" hard drives at one point, but they were too inefficient and the storage gains weren't much. Also, who the fuck needs more than 1TB? Or just buy two, and you have more space than most servers will ever need. Would you rather carry record-sized discs with you? Even if they held twice as much, I doubt it's worth it. Dual-layer discs can do that in smaller dimensions.

We really aren't losing much by supporting legacy, and if you think about it, the amount of money we save by being able to reuse parts and software is well worth the decrease in performance, if there is any significant one. You could use that extra money to buy a faster computer, and there we go, you just negated the problem.
 

Foxery

Golden Member
Jan 24, 2008
1,709
0
0
Originally posted by: KIAman
Ahh, but realize the individual functions of the chip's parts would be entirely programmable, so adding in another CPU wouldn't mean 2 of every piece, just that the processing power increases depending on what the software intends to do.

For example, playing a video game would mean that a second chip would be entirely programmed to act as a powerful GPU. Or for a single CPU setup, any unused portion of the CPU can be redirected to GPU functions.

I think this is what AMD is shooting for with their codename "Fusion" project, and it was part of their reason for buying ATI. Multiple-core CPUs are the current development path for PCs anyway, so putting both CPUs and GPUs on one die is an option they're exploring. I get the impression Intel's Larrabee follows a similar concept.

I think we'll continue to see discrete video cards for a long time because the top performers are too power hungry and too hot to put in the same package, but there's no reason why low-end graphics, which are currently integrated onto motherboards, couldn't be plopped onto the CPU die instead.

The real drawback (the current drawback as well) is mass storage and thus, IO. Until we can get mass solid state storage, this will always be a performance problem.

Hard drives are rarely a bottleneck these days. 4 GB of system RAM is dirt cheap and can hold most of today's demanding apps and games entirely in memory.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: DanInPhilly
I'm wondering whether the legacy aspect of microprocessors, as well as OS's, is seriously harming performance. IOW, if a CPU (and related hardware) could be designed from scratch, without the need to support older hardware, would it be much faster? This assumes that the OS (and, for that matter, apps) were also designed anew.

If the boost was only a few percent, then no big deal. But if performance rose, say, 50% or more, I'd support a movement to adopt that as the new standard.

Performance would not improve that much.
Boot up time would improve significantly.
Programming would get a little easier.
Power usage and system costs would decrease noticeably, since you wouldn't need all the supporting chips for everything else. There's a reason "from the ground up" systems can get away with a CPU, a memory chip, and a simple IO chip, whereas even the simplest x86 systems often have multiple complex IO chips and buffers.

However, getting rid of legacy stuff wouldn't lighten the OS and driver load very much, so you wouldn't be able to have a console-style PC while maintaining all the functionality. The legacy stuff mainly affects things prior to the actual operation of the system.

The # of slots on a motherboard is not a design limitation; it's just that they've decided 4 is practical for the average home user. Xeon server boards sometimes have 8 or more RAM slots. Some cheapo OEM machines only come with 2.

Pretty sure it's also an electrical limitation. Additional memory slots would require additional buffers and control logic to operate properly. You can only put so many things in series before the voltage drops too much for them to operate properly, and you can only hang so many loads on a shared bus before the available drive current per device drops too much.
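A rough back-of-the-envelope sketch of that bus-loading effect (all the component values are assumptions picked for illustration, not real DIMM specs):

    #include <stdio.h>

    /* Why more slots on one shared bus cost speed: every DIMM adds load
       capacitance, the RC rise time grows, and the usable clock drops. */

    int main(void)
    {
        const double r_drive = 25.0;    /* driver impedance, ohms (assumed)     */
        const double c_board = 10e-12;  /* fixed trace capacitance (assumed)    */
        const double c_dimm  = 5e-12;   /* extra capacitance per DIMM (assumed) */

        for (int dimms = 1; dimms <= 8; dimms++) {
            double c_total = c_board + dimms * c_dimm;
            double t_rise  = 2.2 * r_drive * c_total;    /* 10-90% RC rise time */
            double max_mhz = 1.0 / (4.0 * t_rise) / 1e6; /* ~4 rise times/bit   */
            printf("%d DIMMs: rise %.2f ns, max bus clock ~%.0f MHz\n",
                   dimms, t_rise * 1e9, max_mhz);
        }
        return 0;
    }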

Actually, we used to have 5.25" hard drives at one point, but they were too inefficient and the storage gains weren't much. Also, who the fuck needs more than 1TB? Or just buy two, and you have more space than most servers will ever need. Would you rather carry record-sized discs with you? Even if they held twice as much, I doubt it's worth it. Dual-layer discs can do that in smaller dimensions.

Those sizes are also chosen to balance rotational speed, noise, power usage, and heat.
Larger discs would rotate too slowly (poor load times and data transfer speeds), be noisier, use more power, and generate more heat. 3.5" drives appear to be the sweet spot, as below that size rotational speeds tend to decrease for whatever reason (though it could be the environments those drives have to operate in).
 

NeoPTLD

Platinum Member
Nov 23, 2001
2,544
2
81
The processor in a DVD player is quite weak compared to a common general-purpose computer, yet look at how much CPU usage a PC needs to process DVD playback.

There's a great deal of inefficiency in running specific tasks on a general-purpose computer.
 

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
Originally posted by: Foxery
The real drawback (the current drawback as well) is mass storage and thus, IO. Until we can get mass solid state storage, this will always be a performance problem.

Hard drives are rarely a bottleneck these days. 4 GB of system RAM is dirt cheap and can hold most of today's demanding apps and games entirely in memory.

I'd agree for general applications, but what about databases, games with high-quality textures, video editors that process HD-quality video, or music editors working with raw WAVs?

 

DerekWilson

Platinum Member
Feb 10, 2003
2,920
34
81
Originally posted by: DanInPhilly
I'm wondering whether the legacy aspect of microprocessors, as well as OS's, is seriously harming performance. IOW, if a CPU (and related hardware) could be designed from scratch, without the need to support older hardware, would it be much faster? This assumes that the OS (and, for that matter, apps) were also designed anew.

If the boost was only a few percent, then no big deal. But if performance rose, say, 50% or more, I'd support a movement to adopt that as the new standard.

I know I'm a loser for only having skimmed this thread, so forgive me if anyone else already brought this up ...

but this is kind of what Transmeta originally wanted to do: build a new CPU with no legacy ties and then transcode x86 to its native ISA. If you are able to get more speed by not tying yourself to legacy, then you should be able to run translated code at a relatively passable speed ... this would act as a vehicle for adoption in the short term and offer the capability to move away from legacy designs in the long term, since people wouldn't be tied to x86 hardware to run their code.
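The gist, as a toy sketch in C (nothing like Transmeta's actual Code Morphing Software, just the shape of the idea: translate once, then run native; both ISAs here are invented):

    #include <stdio.h>

    /* Translate a tiny "legacy" ISA into a native ISA once, then execute
       the translated code natively on later runs. */

    typedef enum { LEG_LOAD, LEG_ADD, LEG_STORE } legacy_op;
    typedef enum { NAT_LD, NAT_ADD3, NAT_ST } native_op;
    typedef struct { legacy_op op; int a, b; } legacy_insn;
    typedef struct { native_op op; int a, b, c; } native_insn;

    static native_insn translate(legacy_insn in)  /* one-time translation */
    {
        native_insn out = { NAT_LD, 0, 0, 0 };
        switch (in.op) {
        case LEG_LOAD:  out.op = NAT_LD;   out.a = in.a; out.b = in.b; break;
        case LEG_ADD:   out.op = NAT_ADD3; out.a = in.a; out.b = in.a;
                        out.c = in.b; break;  /* 2-operand -> 3-operand form */
        case LEG_STORE: out.op = NAT_ST;   out.a = in.a; out.b = in.b; break;
        }
        return out;
    }

    int main(void)
    {
        legacy_insn prog[] = { { LEG_LOAD, 0, 100 }, { LEG_ADD, 0, 1 },
                               { LEG_STORE, 0, 100 } };
        for (int i = 0; i < 3; i++) {
            native_insn n = translate(prog[i]);  /* translate once, cache, */
            printf("native op %d (%d,%d,%d)\n",  /* then re-run natively   */
                   n.op, n.a, n.b, n.c);
        }
        return 0;
    }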

Didn't work out too well for them, though ...
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
The processor in a DVD player is quite weak compared to a common general-purpose computer, yet look at how much CPU usage a PC needs to process DVD playback.

There's a great deal of inefficiency in running specific tasks on a general-purpose computer.

A DVD player is a bit extreme, as it's not general purpose. The original question proposed removing legacy restrictions, not building an appliance for a single task. As long as you're doing "general purpose processing", you're not going to gain that much over current x86 processors, especially if you're still going with the x86 ISA and just dumping the legacy stuff.


Originally posted by: DerekWilson
Originally posted by: DanInPhilly
I'm wondering whether the legacy aspect of microprocessors, as well as OS's, is seriously harming performance. IOW, if a CPU (and related hardware) could be designed from scratch, without the need to support older hardware, would it be much faster? This assumes that the OS (and, for that matter, apps) were also designed anew.

If the boost was only a few percent, then no big deal. But if performance rose, say, 50% or more, I'd support a movement to adopt that as the new standard.

I know I'm a loser for only having skimmed this thread, so forgive me if anyone else already brought this up ...

but this is kind of what Transmeta originally wanted to do: build a new CPU with no legacy ties and then transcode x86 to its native ISA. If you are able to get more speed by not tying yourself to legacy, then you should be able to run translated code at a relatively passable speed ... this would act as a vehicle for adoption in the short term and offer the capability to move away from legacy designs in the long term, since people wouldn't be tied to x86 hardware to run their code.

Didn't work out too well for them, though ...

AFAIK, current AMD and Intel CPUs have a non-x86 micro-op format that they convert the macro-ops into. Transmeta went a step further, but to the extent that it actually hurt performance; I've heard it compared to running a Java Virtual Machine for anything you'd want to do.
 

degibson

Golden Member
Mar 21, 2008
1,389
0
0
Originally posted by: DerekWilson

I know I'm a loser for only having skimmed this thread, so forgive me if anyone else already brought this up ...

but this is kind of what Transmeta originally wanted to do: build a new CPU with no legacy ties and then transcode x86 to its native ISA. If you are able to get more speed by not tying yourself to legacy, then you should be able to run translated code at a relatively passable speed ... this would act as a vehicle for adoption in the short term and offer the capability to move away from legacy designs in the long term, since people wouldn't be tied to x86 hardware to run their code.

Didn't work out too well for them, though ...

Transcoding doesn't get around the inherent dataflow limits in the program itself. That is, there isn't that much performance locked away in how instructions are represented -- it is locked away in inter-instruction data dependencies, which remain as long as you correctly transcode.

In other words:
" C = A+B; D *= C; " requires the add to execute before the mult, regardless of whether you're talking about x86, something RISC-esque, or an internal representation.
 

themisfit610

Golden Member
Apr 16, 2006
1,352
2
81
Hard drives are rarely a bottleneck these days. 4 GB of system RAM is dirt cheap and can hold most of today's demanding apps and games entirely in memory.

Maybe the case for general usage and (some) games. Most definitely not the case for high-end video/photo editing. Photoshop absolutely gobbles memory, getting around the fact that it's a 32-bit executable on a 64-bit OS by grabbing a bunch of RAM past the usual 2 GB limit and creating a RAM drive for itself to store scratch on. There's a reason Apple has 8 RAM slots on the Mac Pro for you to put 32 GB of RAM in. It really does make a difference when you're working on 20MP images with 10 layers or so. Loading these massive images (upwards of 2 GB in some cases) from the hard drive is not fun.
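Quick back-of-the-envelope in C to show the scale (my assumptions: 16-bit RGBA and 10 layers; Photoshop's real memory use varies):

    #include <stdio.h>

    int main(void)
    {
        const double pixels          = 20e6;   /* 20 MP image             */
        const double bytes_per_pixel = 4 * 2;  /* RGBA at 16 bits/channel */
        const int    layers          = 10;

        double per_layer = pixels * bytes_per_pixel;
        double total     = per_layer * layers;
        printf("per layer: %.0f MB, %d layers: %.1f GB\n",
               per_layer / 1e6, layers, total / 1e9);
        /* ~160 MB per layer, ~1.6 GB total, before undo history and
           scratch. No wonder a 2 GB address space runs out fast. */
        return 0;
    }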

Video editing.. let's not even get started. Hard drives are a massive bottleneck when you start editing more than 1-2 streams of HD video. Here's where massive RAID arrays are able to solve the problem... but still.

~MiSfit
 