Perhaps that's the reason they have a wider microarchitecture pipeline design to compensate for the lower drive currents?
So they will have a gate-first (higher silicon density, more chips per wafer), HKMG (less leakage), SOI (less leakage) 32nm process, but with lower drive current (lower frequency).
Gate-first will give them lower manufacturing cost but lower drive current (lower CPU frequency).
HKMG will give them less leakage (higher frequency and better power consumption).
SOI will give them less leakage (higher frequency and better power consumption).
A wider pipeline design will give them higher frequency to compensate for the lower drive current that comes with gate-first manufacturing.
What do you think?
Yeah, to be sure there are a lot of high-level trade-offs going on here, and the best we can do is craft hypothetical if/then logic trees to bound the discussion. It can be enjoyable and profitable nevertheless, provided no one gets pedantic and takes us to task over the volumes of unstated caveats that stand behind each of our posts.
The crucial difference I see here is the SOI. SOI reduces leakage, leaving more of their "TDP budget" available for cranking up the voltage (Idrive is Vcc-dependent), so they can hit higher clocks regardless of the normalized Idrive disadvantage that comes with gate-first.
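To make that TDP-budget argument concrete, here's a toy model using the standard dynamic-power relation P ≈ C·V²·f plus a static leakage term. All the constants (TDP, capacitance, leakage figures) are made up for illustration, not measured values for any real process:

```python
# Toy TDP-budget sketch: lower leakage (e.g. from SOI) frees headroom that
# can be spent on higher Vcc, and drive current rises with Vcc.
# Every constant here is hypothetical.

def total_power(c_eff, vcc, freq, leakage):
    """Dynamic power C*V^2*f plus a static leakage term, in watts."""
    return c_eff * vcc**2 * freq + leakage

TDP = 125.0      # watts, target envelope (hypothetical)
C_EFF = 1.1e-8   # effective switched capacitance in farads (hypothetical)
FREQ = 3.0e9     # 3 GHz operating point (hypothetical)

for name, leak in [("bulk (higher leakage)", 40.0),
                   ("SOI  (lower leakage)", 20.0)]:
    # Solve C*V^2*f + leak = TDP for the highest Vcc fitting the budget.
    vmax = ((TDP - leak) / (C_EFF * FREQ)) ** 0.5
    print(f"{name}: max Vcc within TDP ~ {vmax:.2f} V")
```

With these made-up numbers the lower-leakage case affords roughly 0.18 V of extra Vcc headroom inside the same envelope, which is the mechanism being argued above.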
Likewise, just as Idrive is a voltage-normalized metric, it is also a transistor-width-normalized metric. So while gate-first reduces Idrive per micron of xtor width, it also gives you higher-density xtor layouts, meaning you have the option of making your xtors "wider", growing the die size, and boosting your net drive current in the process.
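The width trade-off is simple arithmetic: net drive is per-micron drive times drawn width, so a fixed per-micron penalty can be bought back with extra width. The per-micron figures below are invented purely to show the shape of the trade, not real gate-first/gate-last data:

```python
# Illustrative sketch: Idrive is quoted per micron of transistor width,
# so a denser gate-first process can recover net drive by drawing wider
# transistors. The drive figures are hypothetical.

def net_drive_ua(idrive_ua_per_um: float, width_um: float) -> float:
    """Net drive current = per-micron drive * drawn width (microamps)."""
    return idrive_ua_per_um * width_um

GATE_LAST_IDRIVE = 1200.0   # uA/um, hypothetical gate-last figure
GATE_FIRST_IDRIVE = 1080.0  # uA/um, hypothetical ~10% gate-first penalty

width = 1.0  # um, baseline drawn width
# Width a gate-first xtor needs to match the gate-last net drive:
match_width = GATE_LAST_IDRIVE * width / GATE_FIRST_IDRIVE

print(f"gate-last net drive at {width} um: "
      f"{net_drive_ua(GATE_LAST_IDRIVE, width):.0f} uA")
print(f"gate-first net drive at {width} um: "
      f"{net_drive_ua(GATE_FIRST_IDRIVE, width):.0f} uA")
print(f"gate-first width to match: {match_width:.3f} um")
```

So a 10% per-micron deficit costs about 11% extra width wherever you choose to buy the drive back, which is affordable precisely because gate-first gives you the denser layout to begin with.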
You see this all the time in sram layouts and cell size. Your L1$ sram cell size will be huge compared to the cell size (density) of L3$, where clockspeed and latency requirements are relaxed.
Just look at AMD's 45nm non-HKMG processors: they are bigger, less dense chips with higher operating voltages, and yet they fit inside the desired TDP envelopes and clock extremely well compared to Intel's 45nm chips.
This is why I say it is merely a concern, not an outright expectation of disaster, that GloFo went gate-first. Had they gone gate-last for 32nm, as they are expected to do for 22nm, then we know their Idrives would have been all the hotter and the possibility for higher clocks would have been all the higher.
But I suspect that thanks to SOI and the flexibility of making layout-density tradeoffs, they will have no problems getting their clockspeeds where they want them to be. It just might not happen with the first release of 32nm chips; it took them a year to get 45nm tweaked well enough to enable release of those 1075T's and 1090T's, after all.