> Once they realise that people are not willing to pay $20/month for co-pilot plus and similar AI services I wonder how important they will be.
The whole point of the NPU is on-device AI.

> Once they realise that people are not willing to pay $20/month for co-pilot plus and similar AI services I wonder how important they will be.
2 years or so after the "AI PC"s until the financial side starts expecting serious returns, I'd say.
The whole point of the NPU is on-device AI.
On-device AI doesn't usually need subscriptions.
> The whole point of the NPU is on-device AI.
> On-device AI doesn't usually need subscriptions.
And the integration with Office365.
> On-device AI doesn't usually need subscriptions.
This man never paid for hypervisors.
> Or that MS isn't aching for CoPilot subs and wants to obsolete CoPilot sub revenues via NPUs? LOL
Doesn't mean they get em.
> Rofl. I still have a bunch of your tweets saved about this whole thing.
They're still correct, yes.
> Doesn't mean they get em.
Well duh. I'm not making any comment on that. But the idea that MS doesn't want them and is going b*lls deep on NPUs to obsolete their sub, or that NPUs can even replace the DC, is hilarious. I'd expect you of all people to know local AI has meaningful limits (even in a beefy setup). In fact you were once a voice of reason about this.
Copilot attachment rates are awful so far.
> They're still correct, yes.
There's a bit of expectation inflation. But the fundamentals are there. Lisa realizes this. Pretty much all that needs to be said.
Just more hopium than ever.
> In fact you were once a voice of reason about this.
?
> But the fundamentals are there
They aren't.
> Lisa realizes this.
She has to say The Words or the street gonna crucify her.
> Wrong but ok
They are.
See, every other bar on that chart is an actual business with an advertising-based business model.
> Well. It's not niche, not even close. Same reason Apple just signed a deal with OpenAI,
It hasn't happened till Apple announces it, and something that big will demand a press release by Apple.
> I must commend AMD on keeping the ship leak-tight; look how far this thread has devolved.
And there's been motherboard firmware from many board partners claiming to support it, and still nothing has leaked for desktop Zen 5.
> I must commend AMD on keeping the ship leak-tight; look how far this thread has devolved.
There must be some kind of contract hanging over the head of whoever would venture to spill the shadow of a bean; I can't explain it otherwise. Or they delivered limited firmware that gimps everything. It won't be long before we'll know whether the All the Watts numbers have any relevance.
> How did they do it?
Very, very tight opsec.
Who made these curves?
ChatGPT is more than 100 days old, so why is it displayed as if it were one week old?
I was curious, so I did some digging: it was made by a Twitter user here, but he never describes how he got the numbers despite multiple replies asking how he sourced them. It seems to be based on web-traffic estimates, which aren't really reliable for what the graph is showing, but it could also be totally made up, since he never sources his data.
> Sure, make someone leak something so we can argue about the implications of the leak, or needle the source for poor methodology/poor application choices.
Flood gates open in 8-9 days
> Can we get back to Zen 5?
A bit over a week left.
> Flood gates open in 8-9 days
At that point it'll be too late for leaks. Instead we get to insult Anandtech for not having a competent review available on release day.
Xtor cost scaling is super-dead.
> At that point it'll be too late for leaks. Instead we get to insult Anandtech for not having a competent review available on release day.
Personally looking forward to TechPowerUp and ComputerBase for desktop Zen 5 reviews with real-world tests.
> Scaling for logic is not entirely dead. Many of the quoted figures of scaling imply a mix of analog, SRAM and logic.
Modern chips are half SRAM.
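As a rough illustration of the mixed-scaling point above (not a claim from the thread): if a die is roughly half SRAM, the whole-die shrink comes out far below the headline logic figure. The area fractions and per-block shrink factors in this sketch are illustrative assumptions only.

```python
# Blended die-area scaling for a chip that is part logic, part SRAM, part analog.
# All area fractions and per-block shrink factors below are illustrative assumptions.

def blended_shrink(area_fractions, shrink_factors):
    """Effective whole-die shrink, given per-block area fractions on the old node
    and per-block area-shrink factors (new_area = old_area / factor)."""
    new_area = sum(frac / shrink_factors[block] for block, frac in area_fractions.items())
    return 1.0 / new_area

# Example: a die that is ~50% SRAM, 40% logic, 10% analog (assumed mix).
fractions = {"logic": 0.40, "sram": 0.50, "analog": 0.10}
# Assumed shrinks for one node step: logic ~1.6x, SRAM ~1.05x, analog ~1.0x.
shrinks = {"logic": 1.6, "sram": 1.05, "analog": 1.0}

print(f"Effective die shrink: {blended_shrink(fractions, shrinks):.2f}x")
# ~1.21x overall -- well short of the headline 1.6x logic figure,
# which is the point: a chip that is half SRAM barely benefits.
```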
> So in theory, with a setup like MI300 (maybe further optimized), the compute chiplet should be offering scaling that substantially exceeds the stated scaling of the process node, since the compute chiplets are (as much as possible) stripped of analog and SRAM.
Banff XCD still has a giant pile of SRAM: each CU has half a meg worth of vGPRs, plus 96K of L1 and LDS combined.
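For scale, a quick tally of the per-CU SRAM figures quoted in the post above. The 512 KB vGPR and 96 KB L1+LDS numbers come from the post; the 38-CUs-per-XCD count for MI300 is my assumption, and XCD-level L2 is left out.

```python
# Back-of-the-envelope SRAM tally per CU and per XCD (assumed CU count).
KIB = 1024

vgpr_per_cu = 512 * KIB          # "half a meg worth of vGPRs"
l1_lds_per_cu = 96 * KIB         # "96K of L1 and LDS combined"
sram_per_cu = vgpr_per_cu + l1_lds_per_cu

cus_per_xcd = 38                 # assumed active CU count per MI300 XCD

xcd_cu_sram = sram_per_cu * cus_per_xcd
print(f"SRAM per CU:  {sram_per_cu / KIB:.0f} KiB")          # 608 KiB
print(f"CU SRAM/XCD:  {xcd_cu_sram / (KIB * KIB):.1f} MiB")  # ~22.6 MiB, before L2
```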
> Right now, it seems to be held back for AI workloads by overemphasis on FP64
DPFP MULs are dense logic.
> On Intel's side, Clearwater Forest is trying to replicate this theory
I admire your optimism.
> It may put more pressure on AMD Turin than any other server CPU in a long time...
By the time CWF ramps, you're more than halfway to Venice-Dense.