I previously posted this in GH, but it didn't get much of a response...it might fit better here. Here's the CPU that my partner and I had to design from the ground up for my computer architecture class.
Forget those offerings by Intel and AMD...the WISC-SP01 CPU features:
- 16-bit load/store MIPS-RISC architecture
- 100 MHz clock rate :Q
- 8 general purpose 16-bit registers
- 16-bit ALU with two-level carry-lookahead
- Barrel shifter for shifts and rotates (not that crappy serial shifter that the 8086 has)
- hard-wired PLA control unit
- 64KB DRAM memory
- 4KB direct-mapped cache
- ~15,000 logic gates
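In case anyone's curious what "two-level carry-lookahead" means in practice, here's a quick C sketch of the idea: per-bit generate/propagate signals roll up into 4-bit group G/P terms, and a second lookahead level computes the carry into each group so no carry has to ripple across all 16 bits. The 4-groups-of-4 partitioning is my assumption for illustration; the post doesn't say exactly how the WISC-SP01 slices it.

```c
#include <stdint.h>

/* Two-level carry-lookahead add, sketched in software.
   Grouping (four 4-bit groups) is an assumption for illustration. */
uint16_t cla16_add(uint16_t a, uint16_t b, int cin, int *cout)
{
    int g[16], p[16];                 /* per-bit generate/propagate */
    for (int i = 0; i < 16; i++) {
        g[i] = (a >> i) & (b >> i) & 1;
        p[i] = ((a >> i) ^ (b >> i)) & 1;
    }

    int G[4], P[4];                   /* level 1: group generate/propagate */
    for (int k = 0; k < 4; k++) {
        int s = 4 * k;
        G[k] = g[s+3]
             | (p[s+3] & g[s+2])
             | (p[s+3] & p[s+2] & g[s+1])
             | (p[s+3] & p[s+2] & p[s+1] & g[s]);
        P[k] = p[s+3] & p[s+2] & p[s+1] & p[s];
    }

    int C[5];                         /* level 2: carry into each group */
    C[0] = cin;
    for (int k = 0; k < 4; k++)
        C[k + 1] = G[k] | (P[k] & C[k]);

    uint16_t sum = 0;
    for (int k = 0; k < 4; k++) {     /* bit carries within each group */
        int c = C[k];
        for (int i = 4 * k; i < 4 * k + 4; i++) {
            sum |= (uint16_t)((p[i] ^ c) << i);
            c = g[i] | (p[i] & c);
        }
    }
    if (cout) *cout = C[4];
    return sum;
}
```

The win is depth: the group carries C[1]..C[4] are all available after two lookahead levels instead of 16 ripple stages, which is what makes a 100 MHz cycle feasible for a gate-level design like this.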
The only problem is that it lacks virtual addressing and input/output support. You have to load programs written in its assembly language directly into memory using the simulation program.
It's currently multicycle, taking 2-5 cycles to complete an instruction. I might get around to pipelining it (the datapath is already divided into pipeline stages; I just need to modify the control unit and add branch and data hazard detection units).
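For the data hazard detection unit mentioned above, the core check is small. Here's a sketch of classic load-use stall logic; the struct/field names are hypothetical (the real WISC-SP01 control signals aren't in the post), and it assumes a conventional fetch/decode/execute/memory/writeback split with forwarding covering the other hazards.

```c
#include <stdbool.h>

/* Hypothetical slices of the pipeline latches -- field names are
   my invention, not the actual WISC-SP01 signal names. */
struct id_ex { bool mem_read; int rd; };   /* instruction in execute */
struct if_id { int rs, rt; };              /* instruction in decode  */

/* Stall (bubble) for one cycle when the instruction in EX is a load
   whose destination register is a source of the instruction in ID;
   the loaded value isn't available to forward until after MEM. */
bool load_use_stall(struct id_ex ex, struct if_id id)
{
    return ex.mem_read && (ex.rd == id.rs || ex.rd == id.rt);
}
```

When this fires, the control unit would hold PC and IF/ID for a cycle and inject a NOP into EX; everything else (ALU-to-ALU dependences) can be handled by forwarding paths instead of stalls.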
Here are some screenshots of a couple of layouts that I took in Mentor, the CAD program we use to design and simulate the circuits:
Datapath & Control
CPU, cache & memory
Simulation of 4 instructions (we had to stare at these traces for dozens of hours to debug everything)
On a side note, is anyone here a student or professional in electrical/computer engineering? I'm actually majoring in comp sci/physics, but I might be going into computer architecture for grad school...