You are comparing apples to oranges. Scaling depends on the drivers and the hardware; microstutter comes from the uneven gaps between each card pushing its frame to the screen.
I think multi-GPU is good tech and totally viable, but microstutter is inherent to the technology and shows up on both SLI and CrossFire. No amount of driver work is going to eliminate it.
This might sound totally dumb to some experts here, BUT:
AFR alternates frames between the GPUs and outputs them to the framebuffer. As a very rough, extreme example: say even frames take 10 ms to render and odd frames take 50 ms. We get render times of 10, 50, 10, 50, ...
If I have a single card, aren't the frames output in the same pattern, except each one now takes longer? Why don't you get microstutter with a single card?
Is it that the 10 ms frame in AFR is displayed immediately (refresh rate permitting), since it has already been rendered by the time the 50 ms one finishes?
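To convince myself, I sketched the hypothetical above in a few lines of Python. All the numbers are made up, and this only models when each finished frame *could* hit the screen, ignoring refresh rate, buffering, and driver pacing:

```python
FAST, SLOW = 10, 50  # ms: even frames on GPU A, odd frames on GPU B

def afr_present_times(pairs):
    """AFR: both GPUs render in parallel; a new pair starts every SLOW ms
    (the slow GPU is the bottleneck), and the fast frame is ready early."""
    times = []
    for k in range(pairs):
        start = k * SLOW
        times.append(start + FAST)  # even frame, done quickly
        times.append(start + SLOW)  # odd frame, finishes the pair
    return times

def single_present_times(pairs):
    """Single card: renders the same frames one after another."""
    times, t = [], 0
    for _ in range(pairs):
        t += FAST; times.append(t)
        t += SLOW; times.append(t)
    return times

def intervals(times):
    """Gaps between consecutive frames reaching the screen."""
    return [b - a for a, b in zip(times, times[1:])]

print(intervals(afr_present_times(3)))     # [40, 10, 40, 10, 40]
print(intervals(single_present_times(3)))  # [50, 10, 50, 10, 50]
```

Under this toy model both setups show alternating short/long gaps, but the AFR gaps average 25 ms per frame against 30 ms for the single card, since the two GPUs overlap their work.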
One way to mitigate the effect is to increase the buffering level, though even triple buffering might not be enough, and the extra buffering adds latency.
Actually, writing this makes me realize that in a hypothetical case like the above, SLI/CrossFire would only offer roughly 120% of a single card's throughput.
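Checking my own arithmetic on that 120% figure, using the same made-up 10/50 numbers:

```python
FAST, SLOW = 10, 50  # ms, same hypothetical frame times as above

single_avg = (FAST + SLOW) / 2  # one card: 30 ms average per frame
afr_avg = SLOW / 2              # AFR: a pair of frames every SLOW ms -> 25 ms average
speedup = single_avg / afr_avg  # throughput relative to a single card

print(speedup)  # 1.2, i.e. roughly 120%
```

The slow GPU gates the whole pipeline, so the fast GPU's 10 ms frames mostly just sit and wait, which is why the scaling is so poor in this extreme case.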
Does any of this sound reasonable?