So why are vendors bothering with increasing memory amounts? Must be some anti-AMD conspiracy; 256KB + PCIe must be enough.
Does it occur to you guys that, for example, Sony with its 8GB of GDDR has an advantage over MS, who are using DDR3 + an "HBCC" of their own? Caching schemes are always like that: difficult to manage, prone to fail.
The reason vendors keep increasing memory amounts is simple: it's much easier to build a memory management system that is wasteful with VRAM than one that uses VRAM efficiently.
As previously mentioned by Zlatan, current memory management systems load tons of data into VRAM that isn't actually used, data that would be perfectly happy sitting in system RAM and being streamed in as needed instead. The problem is that streaming data from system memory on demand requires a significantly more advanced memory management system, something most developers simply haven't gotten around to building yet (partly because they haven't had to, thanks to the large amounts of VRAM available).
So basically, vendors are increasing memory amounts as a hardware solution to a software problem: developers are being wasteful in their memory management systems, so vendors have to be wasteful with their VRAM amounts to compensate.
AMD's HBCC is essentially a more direct solution to the wasteful memory management issue: it tackles the problem head on (by reducing the waste) instead of circumventing it (by adding more VRAM) as has traditionally been done.
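To make the idea concrete, here's a toy Python sketch of demand-paged VRAM with a fixed pool and LRU eviction. To be clear, this is not how HBCC is actually implemented (that lives in driver/hardware), and the page size and pool numbers are made up; it just shows why per-frame PCIe traffic tracks the working set rather than the total amount of loaded data.

```python
from collections import OrderedDict

# Toy model of demand-paged VRAM, HBCC-like in spirit only (not AMD's
# actual implementation): a fixed page pool, pages faulted in from
# system RAM on first touch, least-recently-used pages evicted.

PAGE_SIZE_KB = 64  # hypothetical page granularity, purely illustrative

class PagePool:
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.resident = OrderedDict()  # page id -> True, kept in LRU order
        self.streamed_kb = 0           # traffic that actually crossed PCIe

    def touch(self, page_id):
        if page_id in self.resident:
            self.resident.move_to_end(page_id)  # hit: just refresh LRU position
            return
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)   # evict the coldest page
        self.resident[page_id] = True           # fault the page in over PCIe
        self.streamed_kb += PAGE_SIZE_KB

# A frame touches a working set, not every loaded asset, so steady-state
# PCIe traffic is only the pages that changed since the previous frame.
pool = PagePool(capacity_pages=4096)            # 256MB pool of 64KB pages
for frame in range(3):
    for page in range(2000):                    # this frame's working set
        pool.touch(page)
print(pool.streamed_kb // 1024, "MB streamed across 3 identical frames")
```

After the first frame pulls the working set in, the next frames generate almost no PCIe traffic at all, which is the whole point of caching only what's actually touched.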
Look, all that info is nice and dandy, but at the end of the day PCIe 3.0 x16 has a TOTAL of ~16GB/s of bandwidth. Let's say you want 100 FPS; that is 163.84MB per frame MAX, and that is very, very generous. By the time all is said and done you probably have maybe a third of that, due to overhead, other traffic, and not streaming 100% of the time. Is 60MB per frame a lot? Well, no.
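Spelled out (and yes, the 16GB/s is the theoretical PCIe 3.0 x16 figure; real usable throughput is a bit lower):

```python
# Rough per-frame PCIe transfer budget at a given frame rate.

def per_frame_budget_mb(link_gb_per_s, fps, usable_fraction=1.0):
    """MB of data that can cross the link during one frame."""
    bytes_per_frame = link_gb_per_s * 1024**3 / fps * usable_fraction
    return bytes_per_frame / 1024**2

print(per_frame_budget_mb(16, 100))         # 163.84 MB/frame, whole link
print(per_frame_budget_mb(16, 100, 1 / 3))  # ~54.6 MB/frame at a third of it
```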
You didn't actually answer my question about how much data you think needs to be sent over PCIe for a new frame, but apparently you think it's more than 60MB.
Well, since we're quoting Sebbbi anyway, it's worth noting that he had an example where his game only required 5MB per frame. Granted, that was with a texture pool size of 256MB, whereas something like UE4 has a default size of 1024MB I believe, but that would then still only be about 20MB per frame.
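If you take that 5MB @ 256MB data point and assume, very roughly, that steady-state streaming scales linearly with pool size (a big assumption on my part, not something Sebbbi measured), the numbers stay small next to the PCIe budget worked out above:

```python
# Scale Sebbbi's quoted figure (about 5MB streamed per frame with a 256MB
# texture pool) up to a 1024MB pool, assuming streaming scales roughly
# linearly with pool size. A crude extrapolation, not a measurement.

def scaled_streaming_mb(measured_mb, measured_pool_mb, target_pool_mb):
    return measured_mb * (target_pool_mb / measured_pool_mb)

estimate = scaled_streaming_mb(5, 256, 1024)
print(estimate)                  # 20.0 MB per frame
print(estimate / 163.84 * 100)   # ~12% of a full 100 FPS PCIe 3.0 x16 budget
```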
Also, why do you think only a third of the PCIe bandwidth is available for asset streaming? What other data could possibly be using up the remaining bandwidth? Do you really think sending command lists and the like takes up that much?