There are a number of forum users here who think the MT performance of the hybrid solution will come at a price in performance consistency. As long as 6+0 or 8+0 SKUs are available, they would rather avoid paying for the E-cores, since their workloads fit P-cores better. Some would obviously like their chips unlocked to push the P-cores further.
I am one of those users. Actually, I don't mind the small cores, as long as they bring 2.5MB of L3 per slice with them. 5MB of extra L3 is a good deal for me versus 25MB of L3 for a hypothetical 8+0 CPU.
I am now running a 10900K with HT disabled and a static OC to 5.1GHz, and I'm plenty happy with it as my main desktop and gaming machine. As long as Golden Cove has a 25% IPC advantage over Skylake and I can get something like DDR5-6400 CL36 or so, I will be happy to disable the small cores and HT, set the clock to 5-5.1GHz, and enjoy a really smooth and responsive system.
My dream is simple: 2000 in GB5 ST without VAES-style shenanigans would be a 33% advance for me. And I have very little faith in schedulers even in the easier setting of HT; a heterogeneous setup is a disaster waiting to bite.
1) Lots of software cannot transition to more MT, because many tasks simply cannot be multithreaded. Any task that is user-facing must come down to one thread at some point. Having dozens of threads sitting around doing nothing, waiting for the mouse to move, does not improve performance; only one thread is needed for that. Also, only one thread can draw to the screen (the UI thread); issuing drawing calls from multiple threads is a guaranteed way to crash the software. As for math problems, many calculations rely on the result of the previous calculation and thus must be ST. Sure, some tasks can be MT, but many cannot.
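The "must be ST" case is easy to sketch: in a recurrence, every step needs the previous step's result, so there is nothing to hand to a second thread. A minimal illustration (the recurrence itself is made up, not from any real workload):

```python
# Each iteration reads the previous x, so the chain is inherently
# serial: no thread can start step n+1 before step n finishes.
def iterate(x0: float, steps: int) -> float:
    x = x0
    for _ in range(steps):
        x = x * x + 1.0  # depends on the previous value of x
    return x

print(iterate(2.0, 3))  # 2 -> 5 -> 26 -> 677
```

No matter how many cores you throw at it, the dependency chain sets the floor on runtime; only single-thread speed helps here.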
I think ~10 years ago I had a hilarious MT problem with one of our servers: after an upgrade to 2S Sandy Bridge it started having nasty periodic slowdowns running exactly the same workload that the old Core2 FSB-based server handled fine. After a looooooong investigation involving a lot of digging, it turned out that the culprit was a periodically scheduled invocation of the ImageMagick command line utility doing some misc stuff. The command used multithreading across all CPUs to "speed up" things, except once the number of CPUs rose and the locking became contended, 99.99% of the time started being spent on lock contention and cache line ping-pong between CPUs and over the inter-socket QPI links. And that created a NASTY slowdown on the whole system, destroying the QoL of our service big time.
It took quite some time to investigate, notice the pattern, and pin the command to a single CPU to fix it.
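The fix amounts to shrinking the process's CPU affinity mask before the tool starts, which the kernel then inherits for every thread it spawns. A minimal sketch of the same idea on Linux (the shell command in the comment is just the generic `taskset` form, not my actual cron line; if I recall correctly, modern ImageMagick can also be capped via the MAGICK_THREAD_LIMIT environment variable):

```python
# Restrict the current process (and anything it spawns) to one
# logical CPU, so a badly multithreaded tool cannot scatter its
# contended threads across sockets. Linux-only API.
# Shell equivalent: taskset -c 0 <command>
import os

os.sched_setaffinity(0, {0})      # pid 0 = current process, pin to CPU 0
print(os.sched_getaffinity(0))    # the allowed-CPU set is now just {0}
```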
So not everything can be multithreaded, and sometimes you don't even control the quality of the multithreading in the software involved.
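The pathology itself is easy to reproduce in miniature: give several threads one shared lock and the work serializes completely, so extra cores contribute nothing but handover cost (in the real case above, cache lines bouncing between sockets over QPI). A sketch of the pattern, not the actual ImageMagick code path:

```python
# Eight threads all fighting over a single lock: the increments are
# correct, but they execute one at a time, so the "parallel" version
# degenerates into a serial run plus lock-handover overhead.
import threading

counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:          # every iteration contends for the same lock
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 80000: correct result, fully serialized execution
```

More CPUs make this *worse*, not better: each lock handover drags the cache line holding the lock (and the counter) to another core, which is exactly the ping-pong that ate 99.99% of the time in my case.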
I think that with heterogeneous cores separated by such a wide performance gap, scheduling will steal part of the IPC increase from the big cores and create problems with various legacy apps. Not complaining much as long as the small cores can be disabled tho.