As someone who has spent a SIGNIFICANT amount of time researching microstutter with CrossFire (2, 3, and 4 GPUs), there are a few key points I would like to add to this discussion that may clear up a few things.
1) What most people call microstutter is really just a difference in render times from one frame to the next, e.g. 10ms, 25ms, 10ms, 25ms.
2) The result of this is that a game sometimes appears to be running at a much lower frame rate than what is being reported. For me, this starts to become noticeable once the frame time maxima rise above 30ms. Everybody has different eyes, and this WILL vary from person to person (the first snippet after this list puts numbers on it).
3) In every case I have seen, microstutter shows up when the GPUs are nearing 100% load. As the load on the GPUs decreases, the difference in frame times also decreases until it is effectively gone (it looks just like running a single GPU). This is why using a frame rate limiter fixes microstutter for most people: it artificially reduces the GPU load, and in turn the microstutter (see the second snippet after this list).
4) The notion that adding a third GPU will eliminate microstutter is wrong. By adding a third (or fourth) GPU to the equation, you start moving toward becoming CPU bound. Once that starts to happen, GPU usage begins to decrease, and so does the microstutter. If you look at the data from the Tom's Hardware microstutter article (where they state that adding a third GPU eliminates microstutter), you will notice that all of their testing was done at 1920x1080 in Call of Juarez. At that resolution, 3 GPUs are almost certainly CPU limited, and thus there is no microstutter to see.
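To put rough numbers on points 1 and 2, here is a minimal sketch using the made-up 10ms/25ms trace from point 1 (illustrative values, not measured data). It shows why an FPS counter can report a healthy average while the game feels much slower: the average frame time hides the slow frames, and your eyes track the maxima.

```python
# Hypothetical frame-time trace (ms) with the alternating pattern
# from point 1. Numbers are illustrative, not measured data.
frame_times_ms = [10, 25, 10, 25, 10, 25, 10, 25]

avg_ms = sum(frame_times_ms) / len(frame_times_ms)
worst_ms = max(frame_times_ms)

reported_fps = 1000 / avg_ms    # what a typical FPS counter shows
perceived_fps = 1000 / worst_ms # roughly what the slow frames feel like

print(f"average frame time: {avg_ms:.1f} ms -> ~{reported_fps:.0f} FPS reported")
print(f"worst frame time:   {worst_ms:.1f} ms -> feels closer to ~{perceived_fps:.0f} FPS")
```

This prints ~57 FPS reported but a feel closer to 40 FPS, which is exactly the "reported frame rate looks fine, game still stutters" symptom.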
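And for point 3, a toy sketch of the principle behind a frame rate limiter. The render_one_frame callback is a hypothetical stand-in for actual GPU work; real limiters live in the driver or an overlay tool rather than in game code, but the underlying idea is the same: force idle time into every frame, which lowers GPU load and with it the frame-time spikes.

```python
import time

TARGET_FRAME_MS = 25.0  # ~40 FPS cap; the value is illustrative

def run_limited(render_one_frame, num_frames):
    """Render num_frames, sleeping out the remainder of each frame budget.

    The sleep is forced idle time; on a real system it shows up as
    lower GPU usage, which is what shrinks the frame-time spikes.
    """
    for _ in range(num_frames):
        start = time.perf_counter()
        render_one_frame()
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        remaining = TARGET_FRAME_MS - elapsed_ms
        if remaining > 0:
            time.sleep(remaining / 1000.0)

# Stand-in for real GPU work (a fast 10ms frame), just to make this runnable.
run_limited(lambda: time.sleep(0.010), num_frames=5)
```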
If I get a chance this weekend, I will post some of my data and graphs to help illustrate these points better.