I was going to say a bunch of things about this being an artificial test, but most of it can be summed up below. In the end they are stair-stepping the test through the cores rather than running or analyzing anything as it would actually be scheduled by the Windows scheduler, or by anything AMD has control over.
But the basic physics is that as silicon gets hotter, it leaks more power, so more power is needed to maintain the same performance.
Look back at Gamers Nexus' review of the Vega 64/Frontier Edition. They found the chip drew roughly 50 W more with the stock cooling than when the card was cooled down by other means. The point of that test was to show how far beyond the efficiency range, and the available cooling, AMD was pushing those cards to hit clock speeds competitive with Nvidia. The point being: we don't have all the information on these tests either. An under-used CCD could be maintaining equal or better clocks at the same per-core power simply because that CCD is cooler to begin with. This is also why, when all cores are maxed out, we tend to see all-core clocks drop little by little over time. And it's why I think I see, with all-core usage, an increase in temperature and power about 5-6 minutes into a test: at that point I think AMD sends more power to keep the clocks up a little while longer before it has to start dropping them.
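The cooler-CCD argument can be sketched with a toy model. This is not AMD's actual power management, just an illustration of why the same clocks cost less total power on a cooler die. It assumes leakage roughly doubles for every ~20 C rise, a commonly cited rule of thumb for CMOS silicon, and all the wattage and temperature numbers are made up for illustration:

```python
def total_power(dynamic_w, leak_ref_w, temp_c, ref_temp_c=60.0, doubling_c=20.0):
    """Dynamic power is fixed for a given clock/voltage; leakage is
    modeled as growing exponentially with die temperature.
    All parameters are illustrative assumptions, not measured values."""
    leakage_w = leak_ref_w * 2 ** ((temp_c - ref_temp_c) / doubling_c)
    return dynamic_w + leakage_w

# Same clocks (same 40 W dynamic power) on a cool vs. a hot CCD:
cool_ccd = total_power(dynamic_w=40.0, leak_ref_w=10.0, temp_c=60.0)
hot_ccd = total_power(dynamic_w=40.0, leak_ref_w=10.0, temp_c=90.0)
print(f"cool CCD: {cool_ccd:.1f} W, hot CCD: {hot_ccd:.1f} W")
# With these assumed numbers the hot CCD needs ~18 W more for the
# exact same work, which is power that can't go toward holding boost.
```

Under this (assumed) doubling rate, a 30 C hotter die pays almost three times the leakage for identical clocks, which is the same shape of effect GN measured on Vega.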
Beyond the testing methodology, we don't actually have core clock information. Trying to analyze this beyond saying the CPU hits peak power usage around 10 cores is a fool's errand.