Is Qualcomm really making it cheap to build Snapdragon X Elite? Or are they mugging OEMs by forcing them to integrate stuff like Qualcomm-approved VRMs and other weirdness?
That's unfortunate.
It shouldn't be that bad, especially since we're talking about out-of-the-box behavior of 13900K/KS and 14900K/KS CPUs on Z790 motherboards. So of course they'd be using default configurations, or better yet, defaults with only two changes: 1.1 mOhm AC/DC LL (as a precaution...
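For context on that 1.1 mOhm figure: AC loadline is, roughly speaking, droop compensation, so the CPU raises its voltage request by I × R per Ohm's law. A quick back-of-the-envelope sketch (the VID and current values are made-up illustrative numbers, not measurements):

```python
# AC loadline (AC_LL) as droop compensation: the CPU bumps its voltage
# request by I * R_ac_ll to offset the drop it expects across the power
# delivery path. Illustrative numbers only -- not measured values.
R_AC_LL = 1.1e-3    # 1.1 mOhm, the figure from the post above
I_LOAD = 250.0      # hypothetical package current under heavy load (A)
VID_BASE = 1.25     # hypothetical no-load VID (V)

v_comp = I_LOAD * R_AC_LL
print(f"compensation at {I_LOAD:.0f} A: {v_comp*1000:.0f} mV "
      f"(request ~{VID_BASE + v_comp:.3f} V instead of {VID_BASE:.3f} V)")
# -> compensation at 250 A: 275 mV -- per-milliohm differences in AC_LL
#    translate to large voltage swings under load, which is why board
#    defaults matter so much here.
```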
Correct. Unlike Curve Optimizer, constraining PPT with a lower value never causes instability. Pretty sure EDC/TDC constraints also won't cause instability.
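To make the PPT/EDC/TDC relationship concrete, here's a trivial sketch using the commonly cited stock 7950X limits (230 W PPT, 160 A TDC, 225 A EDC); the load points below are made up:

```python
# Which limit binds? PPT caps package power (W), EDC caps peak current (A),
# TDC caps sustained current (A). Stock 7950X limits as commonly reported;
# the load points are hypothetical.
PPT, TDC, EDC = 230.0, 160.0, 225.0

def binding_limit(v_core: float, i_core: float, sustained: bool = True) -> str:
    power = v_core * i_core
    if power >= PPT:
        return f"PPT ({power:.0f} W >= {PPT:.0f} W)"
    if i_core >= EDC:
        return f"EDC ({i_core:.0f} A >= {EDC:.0f} A)"
    if sustained and i_core >= TDC:
        return f"TDC ({i_core:.0f} A >= {TDC:.0f} A)"
    return "none -- thermals or Fmax decide"

print(binding_limit(1.20, 150))  # 180 W, 150 A -> none
print(binding_limit(1.25, 190))  # ~238 W -> PPT binds first
```

Lowering any of these just moves the clamp earlier, which is why it constrains boost rather than destabilizing it.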
On the topic of AC/DC LL on Z790:
If the mobo OEMs are pushing up their LLC settings to improve stability, couldn't end users maybe help stabilize things by increasing the switching frequency of the VRMs? I know I have that option on my AM4 board. Most high-end boards should have it. Basically you...
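For the curious, the standard buck-converter ripple relation shows why a higher switching frequency can help: inductor ripple current scales as 1/f_sw. A sketch with made-up component values:

```python
# Buck converter inductor ripple: dI = Vout * (1 - Vout/Vin) / (f_sw * L).
# Component values are invented for illustration; real VRM phases differ.
V_IN, V_OUT = 12.0, 1.2       # input rail and core voltage (V)
L = 150e-9                    # per-phase inductor (H), hypothetical

for f_sw in (500e3, 1000e3):  # switching frequency (Hz)
    ripple = V_OUT * (1 - V_OUT / V_IN) / (f_sw * L)
    print(f"{f_sw/1e3:.0f} kHz -> {ripple:.1f} A peak-to-peak ripple")
# 500 kHz -> 14.4 A; 1000 kHz -> 7.2 A. Doubling f_sw halves the ripple,
# at the cost of higher switching losses (more VRM heat).
```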
Given that the delid only got them ~18C off the recorded max temps, I'd say using an aggressive pump config with a large rad (560mm or larger) would do the trick, even without a chiller. AiOs don't have the best blocks and definitely don't have the best pumps. Granted a 360mm rad should be...
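One way to sanity-check the pump point: coolant temperature rise across a loop is dT = P / (mdot * c_p), so flow rate sets how flat the loop runs. Hypothetical numbers:

```python
# Coolant temperature rise across the loop: dT = P / (mdot * c_p).
# Flow rates and heat load are hypothetical; water properties are standard.
C_P = 4186.0   # specific heat of water, J/(kg*K)
P = 350.0      # heat dumped into the loop (W), hypothetical

for flow_lpm in (1.5, 4.0):        # weak AiO-class pump vs strong D5-class pump
    mdot = flow_lpm / 60.0         # L/min -> kg/s (water is ~1 kg/L)
    print(f"{flow_lpm} L/min -> dT = {P / (mdot * C_P):.1f} K across the loop")
# 1.5 L/min -> ~3.3 K; 4.0 L/min -> ~1.3 K. Loop dT is small either way;
# the bigger wins from a real pump and block are in the block's convection
# and the extra radiator area's coolant-to-air delta.
```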
That's an AiO. Pretty sure it isn't just the lid that's the problem there. Full custom water could probably handle one of those things without a delid.
Watch Zen5 be significantly faster in SPECint and Electron crap while showing much less improvement in stuff like Cinebench. Everyone will declare their own predictions to have been accurate.
Clearly that stands for Command & Conquer.
I know what they are. They're just frustrating to see when all you want is an SBC with a modern Qualcomm mobile chipset on it, and then you see one of these and you're like hmm that might be interesting NOPE nevermind.
Qualcomm charges $1k+ for dev kits. You get an SBC with poor features, an integrated touchscreen (sometimes!) that you can use as a bootable display, aaaaand that's about it. I remember looking at their Snapdragon 845 dev kit (or was it 855? I forget) that cost $1k.
People shouldn't be buying 4090s used or new given NV's callous disregard for buyers. Sadly, if the market did turn on NV, they might just pull back or pull out since so much of their revenue is coming from enterprise/cloud AI.
Good luck! Nobody can really tell if Intel's nodes are actually competitive with TSMC's since Intel seems to be struggling with either volume or design (or both?). Arrow Lake doesn't look like it's going to be a great CPU despite Intel having a newer/mostly better TSMC node available for it...
Yeah I read all that, and it's not the same as what AMD is doing.
If you take a Zen2 or Zen3 (and presumably a Zen4) chip running an ST workload, it's going to hit a particular maximum boost clock (which is only approximated by advertised ST boost clocks) without hitting max local temp (95C) and...
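A toy model of that behavior (emphatically not AMD's actual algorithm, just its shape): clocks ratchet up until the first of hotspot temp, package power, or the fused max boost clock says stop.

```python
# Toy sketch of the boost behavior described above -- NOT AMD's real
# algorithm. All constants and curves are invented.
T_MAX, PPT, F_MAX = 95.0, 230.0, 5.85   # degC, W, GHz

def settle(cores: int) -> None:
    f = 3.0
    while f < F_MAX:
        nxt = min(f + 0.025, F_MAX)
        per_core_w = 0.11 * nxt**3               # fake per-core power curve
        if 40.0 + 2.2 * per_core_w >= T_MAX:     # fake hotspot model
            break
        if cores * per_core_w >= PPT:
            break
        f = nxt
    per_core_w = 0.11 * f**3
    print(f"{cores:>2} core(s): {f:.3f} GHz, {cores * per_core_w:.0f} W, "
          f"{40.0 + 2.2 * per_core_w:.0f} C hotspot")

settle(1)    # ST: reaches F_MAX with the hotspot well under 95 C
settle(16)   # MT: PPT binds first, clocks settle lower
```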
That one I admit I didn't know about, but... it looks like all it does is affect CPU duty cycle, which isn't the AMD approach.
All they seem to have done is up turbo limits a bit and give the CPU a very short window in which it can hit those boost clocks provided thermal...
The goal here is not to necessarily replace you (the operator) performing those functions, but to instead make it easier for you to do the things you already know how to do. Ideally so that someone trained as you are can do the work of 2-3 people, meaning that your boss then gets to lay off...
It's also where Intel hands out their biggest discounts. Operating at near-zero margins is killing them in DCG, and that can only go on for so long. Eventually the box seat tickets won't be enough. Genoa has already made things very uncomfortable, and Turin is only going to make it worse.
How does Intel's boost algorithm calculate appropriate clockspeed and voltage while using temperature as a factor? From what I can see, Intel's algorithm doesn't reference temperature at all, and instead slams straight into whichever limit it hits first - temperature, clockspeed, current draw...
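If that reading is right, the Intel side is closer to a hard clamp than a feedback loop, something like this caricature (invented numbers throughout):

```python
# Caricature of a "slam into the first limit" policy, per the reading
# above -- not Intel's actual implementation. All numbers invented.
F_TURBO = 6.0                    # GHz, max turbo bin
LIMITS = {                       # name: (measured value, trip point)
    "Tjmax":  (101.0, 100.0),    # degC
    "PL2":    (240.0, 253.0),    # W
    "ICCmax": (390.0, 400.0),    # A
}

tripped = [name for name, (now, cap) in LIMITS.items() if now >= cap]
f = F_TURBO - (0.5 if tripped else 0.0)   # crude throttle step on any trip
print(f"tripped: {tripped or 'none'} -> {f:.1f} GHz")
# -> tripped: ['Tjmax'] -> 5.5 GHz
```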
AMD also has an elaborate temp sensor network on each CCD, as well as a sophisticated boost algo that uses the worst hotspot on the CCD to limit clocks and volts. When installed and run with default settings on a Z790 motherboard, a 14900KS (for example) is not the same critter.
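On Linux you can actually watch those readings: lm-sensors (3.5+) can emit JSON, and the k10temp driver exposes Tctl (the control temperature) plus per-CCD temps. Chip and label names vary by board and kernel, so treat this as a sketch:

```python
# Sketch: dump AMD k10temp readings via `sensors -j` (lm-sensors >= 3.5).
# Chip/label names (k10temp, Tctl, Tccd*) vary with kernel and platform.
import json
import subprocess

raw = subprocess.run(["sensors", "-j"], capture_output=True, text=True, check=True)
data = json.loads(raw.stdout)

for chip, readings in data.items():
    if not chip.startswith("k10temp"):
        continue
    for label, fields in readings.items():
        if not isinstance(fields, dict):
            continue  # skip the "Adapter" string entry
        for key, val in fields.items():
            if key.endswith("_input"):   # e.g. temp1_input
                print(f"{chip} {label}: {val:.1f} C")
# Typical output includes Tctl plus Tccd1/Tccd2 per-die temps on dual-CCD parts.
```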
It shouldn't, but at the same time both AMD and Intel have sold CPUs recently that can overwhelm a good HSF's ability to cool the CPU. The difference is in what happens afterwards - maybe.
A 7950X will keep boosting until it saturates the cooler or until it hits 230W PPT, whichever comes...
3GAP (or whatever they call it now) still could be an okay node, and Samsung should be able to offer it in higher volume than anything Intel will have through 18A.
Were I Intel and JHH issued such a threat, I would release exactly 1 Battlemage dGPU through an OEM (at least) just to provoke the opportunity for legal action. Easy money.
One question that comes to my mind: is cooling a variable when it comes to these CPUs being unstable? How many of these 13900K/KS and 14900K/KS users have full custom water or similar and can keep temps low even at full load? These CPUs are insanely difficult to cool.
If they could nail the thing down properly, it would make consumer NPUs (Meteor Lake, Arrow Lake for example) more useful (or at least make it easier to sell products featuring NPUs to gamers). Methinks game devs would be better off farming out a lot of that work to other developers who...