You keep making up numbers.
I agree. IMHO they should have tested both in the best conditions like many websites did, fast DDR3-2000+ for Haswell and DDR4-3000+ for Skylake (after all these are 'K' chips).
If you want to compare total system performance, yes.
If you want to compare IPC of the CPUs, no.
As crazy as this sounds, this test might have still skewed results a bit. With DDR3 2400 CL11 kits readily available from several manufacturers, AFAIK Haswell would still have a latency advantage.
IMHO you shouldn't pick results from 2 different reviews.
Overall performance using DDR4-1600 was slower than Devil's Canyon; with DDR4-3000 it's 9-10% faster.
I have never seen such scaling behaviour from Haswell using faster RAM, especially at 1080p+ resolutions. Here are some Haswell DDR3 scaling gaming benchmarks by AnandTech:
What I'm really interested in is the i7-6700T. I'm going to be building an extremely compact little dev box (think "Habey 600B" here...) and the idea of a 4c/8t CPU with a 35 watt TDP is very, very appealing.
Hopefully the benches show good news
....Haswell-E will still power my enthusiast PC 'A', but as I've said before, I'd love to try Skylake in a mini-ITX configuration, perhaps a lower-TDP version (35W Core i7 6700T) + the best graphics card I can find up to ~150W for a small, silent, cool yet still fast summer gaming PC 'B'.
To further expand on our discussion, here are more Skylake gaming benchmarks at 1920x1080 and 2560x1440 using DDR4-1600 vs DDR4-3000:
www.gamestar.de/hardware/prozessoren/intel-core-i7-6700k/test/intel_core_i7_6700k,924,3234508,3.html#spielebenchmarks
Limited to 1600 MHz it was slower than a Core i7 4770K in 3 of 4 games tested. Using 3000 MHz memory, the Core i7 6700K was 11% faster than the Core i7 4790K in GTA V and 15-18% faster in League of Legends (despite 200-400 MHz lower Turbo clocks).
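(Just to make the percentage math explicit, here's a minimal Python sketch; the FPS values below are placeholders, not the review's actual numbers.)

def percent_faster(fps_a, fps_b):
    # How much faster A is than B, in percent: (A/B - 1) * 100
    return (fps_a / fps_b - 1.0) * 100.0

# Placeholder example: 100 FPS vs 90 FPS -> ~11.1% faster
print(f"{percent_faster(100.0, 90.0):.1f}%")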
We carried out all tests under Windows 8.1, installed on a current SATA3 SSD; the amount of memory is 8.0 GB. All processors not compatible with DDR4 are tested with DDR3-1600 RAM; for the Core i7 6700K we perform the measurements with DDR4 memory clocked at 1600 MHz and, in a second run, at almost twice that speed, 3,000 MHz.
From the review above,
This suggests that the Skylake architecture benefits noticeably more from the greater data throughput of higher memory clock rates than was the case in previous generations. But keep in mind that 1600 MHz is far too low a clock speed for DDR4 memory; even the slowest modules clock at 2133 MHz. We will probably look at the relevance of memory clock speed in more detail in a separate article; for now, we recommend choosing the highest possible clock rate when purchasing DDR4 memory, especially since the price differences compared to lower-clocked modules are not very large.
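For context, here's a back-of-the-envelope sketch of theoretical peak bandwidth at those clocks, assuming a standard dual-channel, 64-bit (8 bytes per transfer) DDR4 interface; real-world throughput and latency will of course differ:

def peak_bandwidth_gbs(mts, channels=2, bus_bytes=8):
    # Theoretical peak: transfers/s * bytes per transfer * channels
    return mts * bus_bytes * channels / 1000.0

for mts in (1600, 2133, 3000):
    print(f"DDR4-{mts}: {peak_bandwidth_gbs(mts):.1f} GB/s")
# -> DDR4-1600: 25.6 GB/s, DDR4-2133: 34.1 GB/s, DDR4-3000: 48.0 GB/s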
Test used:
wPrime 1024M, 5.1 GHz @ 1.48 V
Before: hit 96°C and failed within seconds (hard to tell what the load temperature after 2 minutes would have been; probably 105°C+)
After: 78°C on the warmest core, and passed
To heat up the comparison, here are GFXBench scores for Skylake-S GT2:
And here's Apple A8X:
Now obviously Core M will provide slower scores thanks to lower sustained clocks, and we have yet to see how much Apple A9X improves on Apple A8X. There's a rumour out there that only the iPad Pro gets a new SoC this year, while iPad Air 3 will be a minor refresh (with Apple A8X).
Skylake GT3e would do that just fine looking at Broadwell-K performance, and considering the architectural improvements from Gen 9, Skylake GT4e is in a league of its own.
I don't understand why Skylake is better at Manhattan than T-Rex while A8X is better at T-Rex than Manhattan.
Edit: For the A8X chip the graph says HD for both benchmarks (upper right), so is it comparable?
I guess I meant GT2 trickle-down to the lower end.
Uhm, it's in the review...
Okay, much will depend on how much they could bump performance per watt at 4.0W, and how much of a loss FIVR is.
I think they mixed up the T-Rex and Manhattan results for Skylake. It should be:
Intel Skylake-S GT2
T-Rex Offscreen: 139.7 FPS
Manhattan Offscreen: 68.1 FPS
Apple A8X
T-Rex Offscreen: 70.4 FPS
Manhattan Offscreen: 32.7 FPS
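A quick way to sanity-check the swap theory: Manhattan is the heavier GFXBench test, so a chip's Manhattan offscreen FPS should always come in below its T-Rex offscreen FPS. A minimal Python sketch using the scores quoted above:

results = {
    "Intel Skylake-S GT2": {"T-Rex": 139.7, "Manhattan": 68.1},
    "Apple A8X": {"T-Rex": 70.4, "Manhattan": 32.7},
}

for chip, s in results.items():
    # Manhattan should be the slower (lower-FPS) test of the two
    ok = s["Manhattan"] < s["T-Rex"]
    print(f"{chip}: {'plausible' if ok else 'likely swapped'}")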
I think so. Both are running the offscreen tests, different OS though.
I'm curious to see how much (%) of Skylake-S GT2 graphics performance they can deliver at 4.5W TDP. Short benchmarks like GFXBench shouldn't be a problem.
I measured 120mm²
Of course, but for Haswell they could have used DDR3 at equal speed to the DDR4 used for Skylake.
Gaming benchmarks include results with Skylake using DDR3 as well.
There is no Skylake vs Haswell comparison with both using DDR3.
Basically, Intel is getting ridiculous price premiums for rather small dies.
So Kentsfield and Lynnfield had about 2.5x larger die area than Skylake, despite not having any iGPU.
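Rough numbers behind the ~2.5x claim (the Skylake figure is the 120 mm² measured above; the Kentsfield and Lynnfield sizes are the commonly cited figures, so treat them as approximate):

skylake = 120.0         # mm^2, as measured above
kentsfield = 2 * 143.0  # mm^2, two Conroe dies on one package (commonly cited)
lynnfield = 296.0       # mm^2 (commonly cited)

for name, area in (("Kentsfield", kentsfield), ("Lynnfield", lynnfield)):
    print(f"{name}: {area / skylake:.1f}x Skylake's die area")
# -> Kentsfield: 2.4x, Lynnfield: 2.5x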
Broadwell SoC is a rather good example.
Just think what kind of CPU we could have if Intel made Skylake as big as those two... :hmm: I guess it would be mostly more CPU and GPU cores, because making the core itself bigger does not seem to gain as much IPC any longer.