psolord
Platinum Member
- Sep 16, 2009
Let's hope they take out this stupid Resizable BAR requirement this time. For older systems, the performance uplift from Arc1 to Arc2 would be +130%.
We need to wait for benchmarks, but it's a seriously impressive uplift. It also looks like they vastly improved perf/W and ray-tracing performance.
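For scale, a "+130% uplift" means 2.3x the baseline, not 1.3x. A quick sanity check (the FPS figures are made up purely for illustration):

```python
# What a "+130% uplift" means numerically: 2.3x the baseline, not 1.3x.
# The FPS numbers below are invented for illustration only.
arc1_fps = 40.0
uplift = 1.30                     # +130%
arc2_fps = arc1_fps * (1 + uplift)
print(arc2_fps)                   # ~92 FPS
print(arc2_fps / arc1_fps)        # ~2.3x
```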
- "Is discrete BM N4 or N3?"

N4P.
- "fat XVE"

SIMD16 isn't fat, that's like, GCN called.
- "increased performance of the fixed function units"

They weren't really struggling with FF before; the problems were terrible SM area/efficiency/utilization and a terribad LLC.
The Xe-cores in Xe2 have been improved for higher performance, better utilisation, and greater compatibility with games. That last point is particularly important, going off Intel's previous form.
These changes take various forms, though I'm told it's not only improvements to the software stack, but changes to the silicon itself to make it gel more easily with modern games.
There's now hardware support for commonly used commands. Execute indirect, a command used widely in game engines including Unreal Engine 5, is baked into the hardware via the Command Front End; on Alchemist it had to be emulated in software, which caused headaches and slowed performance. Another command, Fast Clear, is likewise supported in Xe2 hardware rather than emulated in software as it was on Alchemist.
- Ray Tracing Unit width increases from 2 traversal pipelines to 3.
- The Xe2 architecture's Render Slice includes improvements to deliver 3x mesh shading performance, 3x vertex fetch throughput, and 2x throughput for sampling without filtering. Bandwidth requirements should be lower, and commands are more in line with what games often use.
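For anyone unfamiliar with execute indirect: it lets the GPU consume draw/dispatch arguments from a buffer (possibly one the GPU itself wrote), so the CPU records one command instead of one per draw. A toy model in Python of why software emulation hurts (this is not the D3D12 API; all names and numbers here are made up for illustration):

```python
# Toy model of "execute indirect": draw arguments live in a GPU-visible
# buffer, and one command tells the hardware to consume all of them.
# Without hardware support (as described for Alchemist), the driver must
# read the buffer itself and replay each draw as its own submission.

# Pretend argument buffer: (vertex_count, instance_count) per draw.
arg_buffer = [(36, 1), (24, 8), (720, 1), (12, 64)]

def hw_execute_indirect(args):
    """Hardware path: one recorded command; the GPU walks the buffer."""
    submissions = 1
    draws = list(args)              # hardware expands the arguments
    return submissions, draws

def sw_emulated_indirect(args):
    """Emulated path: the driver replays one submission per draw."""
    submissions = 0
    draws = []
    for a in args:                  # CPU-side loop -> driver overhead
        submissions += 1
        draws.append(a)
    return submissions, draws

hw_subs, hw_draws = hw_execute_indirect(arg_buffer)
sw_subs, sw_draws = sw_emulated_indirect(arg_buffer)
assert hw_draws == sw_draws         # same rendering either way
print(hw_subs, sw_subs)             # 1 4 -> emulation cost scales with draw count
```

The point of the toy: the rendered output is identical, but the emulated path's CPU work grows with the number of draws, which is exactly where driver overhead shows up.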
Ha, looks like Intel is doing the underhype, overdeliver. Xe2 now sounds good.
Nice to see suspicions confirmed that the compatibility and performance issues may have been down to design flaws. SIMD16 being called out in the architecture explicitly as improving compatibility is awesome. It means driver development will get faster, because they won't need to write code to explicitly support every game.
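On the SIMD width point: wave width is a trade-off. Wider waves amortize control logic and match what shaders tuned for the dominant architectures expect, but they lose more lanes to branch divergence. A toy estimate of active-lane utilization under a rare divergent branch (an illustrative model with arbitrary parameters, not a claim about real Xe numbers):

```python
import random

def avg_lane_utilization(wave_width, n_threads=4096, p_taken=0.05, seed=0):
    """Toy model: each thread independently takes a rare branch with
    probability p_taken. A wave containing both paths must execute both,
    masking off the inactive lanes on each pass."""
    rng = random.Random(seed)
    busy = active = 0
    for _ in range(n_threads // wave_width):
        taken = sum(rng.random() < p_taken for _ in range(wave_width))
        if taken in (0, wave_width):
            busy += wave_width          # coherent wave: one pass
            active += wave_width
        else:
            busy += 2 * wave_width      # divergent wave: two passes
            active += wave_width        # each lane active in exactly one pass
    return active / busy

# Wider waves are more likely to contain both branch paths,
# so average utilization drops as width grows.
for width in (8, 16, 32):
    print(width, round(avg_lane_utilization(width), 3))
```

With these made-up parameters, utilization falls monotonically from width 8 to 32, which is the cost side of "fat" SIMD; the compatibility benefit the slides are getting at is on the other side of that trade.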
Yet at release time, weren't the driver team thrown under the bus by the hardware team!?
That Intel was going to have an enormous task sorting out their drivers after decades of neglect was a given. But handicapping the driver team with hardware faults, and potentially wasting the driver team's time as the hardware side kept promising that X or Y would work at release? That must have been tough.
The part-time Intel watchers here and elsewhere have long suspected that Intel is large enough for plenty of internal politics, and when the "driver team was to blame" stories came out, I suspected the hardware team of playing deflection politics!
AFAIK Intel had a perfect storm on their hands: a piece of hardware that required special software attention... and a software team paralyzed by the recent war.
I hope that's what's happening across their entire product portfolio: under-promise, over-deliver, consistently. They have to do this to regain confidence.
- "Back in the day we were DX compliant, which turns out to not quite be enough. You need to be similar to the dominant architecture. And that's the direction that we're heading with Xe."
Sounds like we will see SIMD32 for Xe3.
SIMD32, alongside the new Gen13 uArch, would certainly explain why PTL U is only 32 XVE.
- "I've heard Nvidia is OK leaving more to software in their drivers"

Sounds about right, since their GPUs are more CPU-limited due to greater driver overhead. It also says something about their expectation that someone buying their GPUs wouldn't pair them with slow CPUs.
AMD has been coasting for far too long with APUs. They won't be able to get away with 6+ years of Vega anymore. Real competition is finally here, from the looks of it.
The decision to forgo an LLC completely in Strix Point's iGPU seems to have been a terrible one. Strix Halo can still be very interesting, but Point seems meh for everyone except those making proper use of its 12 cores.