Can probably base things off this; not much will have changed except for mild increases across the board:
https://www.anandtech.com/show/17024/apple-m1-max-performance-review
(Then all scaled up for the Ultra, of course.)
There do seem to be some workloads where it really is simply sarcastically...
Wasn't it massively up over the last year or two, though? Looks more like a reversion towards the long-term mean than anything actually very worrying.
(Some combination of crypto & Covid, you have to presume.)
Unless/until the AI people stop demanding the biggest dies they can get, there's no way that NV will stop making the biggest dies they can.
Whether those will then still be priced at levels that gamers might want to afford, I have no idea.
Well, how well did the 12400/12600 sell vs the K chips last time? Those were 'only' 4+0/6+0 but also had hugely more manageable power draws than the K chips, which must surely rate as a nice win for mainstream customers.
(Definitely why I got a 12600 - no Zen, as I wanted an iGPU as backup.)...
To be fair, it should work much better than the more basic extant algorithms for doing this.
They're doing a lot of computational work of a sort that NN's are brilliant at. With the images on both sides to work with, the results should presumably be very good in terms of image quality. They...
Zen4 will be more efficient I presume, even quite a lot more so. Just need to wait for the versions limited to lower power draws. Like with the recent GPU's, some people need/want all the power they can get. Technology isn't keeping up, so the top parts end up with higher power draws.
99% of consumers...
It's not insane at all. There's a huge market for these cards in AI these days, and that market wants the biggest, most powerful things they can get.
The power draw reductions from new processes aren't keeping up remotely well with the number of transistors on offer, so you get this. Happy with the...
Machine learning stuff, broadly, I think. Hence them building some of these chips up so big/expensive. Perhaps AR/VR at some future point?
You can definitely see why they started focusing on RT. Without it, even at 4K, there just wouldn't be a remotely sane reason to have the top end cards...
Well, if they'd really pushed perf/W but not mentioned it, then I'll choose to be disappointed on those grounds :) I also just don't like the way the top few SKU's are having their TDP raised for really quite marginal gains.
(Obviously Intel have been much worse at this recently!)
The...
Well, actually yes it is, isn't it? Clockwork. I mean, they could in theory have run some series on an 18-month cycle, the odd random one on 12, some on 30 months, etc.
As it is, it's two years, with perhaps slight shifts in when the cards actually appear, between July and September.
iirc the...
This sort of thing always amuses me when I think of people trying to train models to predict the stock market based on the news :)
(Some reasons of course, I guess, but still!)
As per the initial question, it's just not that comparable - Intel's big cores are very, very big and power hungry.
Somewhat undesirably so, to be honest. Apple (and AMD) have got theirs rather more under control, so it makes sense to load up on them instead.
Still rather amazing to be beating such a total monster of a dGPU :)
It seems really, really hard though. AMD have been making the console chips for a while now so there's obvious theoretical capability but still nothing 'real' coming out of it. Never mind getting the software to play along.
All mildly amazing, yes.
One thing very gently intriguing me - shouldn't there be some GPU-accelerated compute tasks where the massive memory pool and the shared memory between CPU & GPU (+ other bits) mean it gets even sillier? cf Linus Torvalds talking about APU's having some distinct...
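(Purely as an illustration of the sort of thing I mean - a rough Python sketch, assuming PyTorch's MPS backend on an Apple Silicon Mac, with the sizes entirely made up: the GPU can address the whole unified pool, so a working set that would overflow a typical dGPU's VRAM can still run on-GPU, and handing results back to CPU code never touches PCIe.)

```python
# Rough sketch only - assumes PyTorch >= 1.12 with the MPS backend on an Apple
# Silicon Mac; the sizes below are made up for illustration.
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# float32 elements for roughly `gib` GiB; push this towards 32-64+ GiB on a big
# M1 Max/Ultra - far past the VRAM of most dGPUs, but fine in the unified pool.
gib = 8
elems = gib * (1024 ** 3) // 4
big = torch.empty(elems, dtype=torch.float32, device=device).view(-1, 1024)

big.normal_()                # fill and reduce entirely on the GPU
col_means = big.mean(dim=0)

# Handing results back to CPU-side code doesn't cross a PCIe bus - it's the
# same physical memory either way.
print(device, big.shape, col_means[:4].to("cpu"))
```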
Or nothing - as people have noted. It is worth repeating just how hard what they're trying to do is. If they could simply 'clone' DLSS then of course they'd manage but without the tensor units that isn't really possible/effective.
So it really isn't obvious what they might be doing. It'd have...
All likely makes sense when mediated by the overall die shortage.
Current policy definitely seems to be to release one die at a time, leaving a long gap between each. That means they need to try to stretch things slightly to cover the market.
The thing with silicon wafers, especially on the smallest processes, is that the supply is quite fundamentally inelastic. The fabs are so incredibly expensive and take so long to build that there's only quite limited scope to adapt; they basically simply refuse to try to match short-term demand...
Except in cases where it's blatant, you're only really going to get sane judgements on this if people do properly double-blinded trials. Which no one will.
Otherwise there's all sorts of confirmation bias etc. that make a rational judgement incredibly hard.
Keep, surely? It might be very hard to get those replacements at the moment.
I can't think why the 3080 Ti (or others) would reduce your sale price notably if you wanted to move them. Although I'd honestly wait until you hit actual problems, if you ever do :) There'll be another generation...
I'm at least as intrigued as to what - for instance - Samsung do going forward. They've got fabs, SoCs and so on.
Even if they can't push a totally top-end chip, there must be some incentive for people like them to push out some cheap but quite effective laptop chips. It has never seemed to...
Not for quite some time at least - their next step up is still going to be going into a lot of laptops and the smaller cores are very useful then.
In fact Apple's smaller cores are so powerful that they're probably worth keeping about regardless.
Especially when they've already cut down the 6800 so much that it's more or less a 3070 competitor as well. A bit weird all round.
Better to let the AIB's do any mad overclocking in premium SKU's.
Or, even more simply, what percentage of PC's running Steam are notably more powerful than the iGPU in an M1? Not that large, I suspect.
I'm not remotely as convinced about AAA games but there's every reason to expect fairly widespread support over time.
Definitely not disagreeing with your basic conclusion but the M1 is giving these devices *much more* than extra CPU performance.
Passive, or near-silent (MBP), running.
Considerable extra battery life.
Instant restart from sleep.
Enough iGPU to play games at a reasonable level.
Mildly cheaper...
Yes? Not the 13" pro which has the M1 in now, but they've got a load of 13" models with Intel in at the moment. Those are obviously getting replaced :)
Massacre is rather emotive for the 'bigger' M1 variant of course, but so is the idea that you couldn't use the M1 for serious computing :) The 16GB is the latest built-in limitation.
The bigger chip is still going into laptops so will have some limits, especially as they'll want to keep it...
Maybe it will for a bit longer :) The engineering is certainly hugely impressive.
I guess it's also getting potentially quite important for them to at least be able to offer a reasonable capability to do this for quite a wide range of developers.
Thanks - useful.
I guess I was wondering a bit how it does vs, say, a GPU/CPU combination at running - or even training - a neural net.
It obviously won't compete with an A100 :) Shouldn't the tight integration between the elements of the SoC logically help out a good bit?
An M1-based laptop might...
Better place to ask this here, I suppose - has anyone managed to benchmark the NPU portion of the M1?
Apple put a lot of area and resources into upping its performance for A14 & M1, so they presumably have a considerable end goal in mind.
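(For what it's worth, one crude way to poke at this yourself is via coremltools, which lets you pin a converted model to CPU-only vs CPU+Neural Engine and compare timings. A rough sketch, assuming coremltools 6+ and torch/torchvision on an Apple Silicon Mac - the MobileNet model, the input name and the timing loop are all just illustrative choices on my part, not any kind of official benchmark.)

```python
# Rough sketch - assumes coremltools >= 6, torch/torchvision installed, and an
# Apple Silicon Mac on a recent macOS (needed for CPU_AND_NE). Everything here
# is an illustrative guess at how you'd compare compute units, not a real benchmark.
import time
import numpy as np
import torch, torchvision
import coremltools as ct

# Convert a small, arbitrary model to Core ML.
torch_model = torchvision.models.mobilenet_v2(weights=None).eval()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(torch_model, example)
ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=example.shape)],
    convert_to="mlprogram",
).save("mobilenet_v2.mlpackage")

def bench(compute_units, runs=100):
    """Average ms per inference with the model pinned to the given compute units."""
    model = ct.models.MLModel("mobilenet_v2.mlpackage", compute_units=compute_units)
    x = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
    model.predict(x)  # warm-up: first call includes load/compile costs
    t0 = time.perf_counter()
    for _ in range(runs):
        model.predict(x)
    return (time.perf_counter() - t0) / runs * 1000

print("CPU only : %.2f ms" % bench(ct.ComputeUnit.CPU_ONLY))
print("CPU + NE : %.2f ms" % bench(ct.ComputeUnit.CPU_AND_NE))
print("All units: %.2f ms" % bench(ct.ComputeUnit.ALL))
```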
I guess it must be a little more complex than this, otherwise the specialised companies would do just that and wipe NV's performance out :)
Quite possible of course. On a related, if formally rather off-topic note, has anyone seen any neural net execution benchmarks for the NPU in the M1 (or...