Where are the higher resolution monitors? Nearly 3 years ago I bought a 2048×1152 Dell monitor on some deal for less than $300. Yet today I still don't see anything larger than 1920x1200 at Newegg for less than $800. What is with the lack of progress, or worse, negative progress? Most screens are actually getting smaller, with 1920x1080 becoming the norm. WTF?
Why? I sit about 18in away from my monitor when I game. My 23in fills up my entire field of vision.
I don't want to sit 18in away from my monitor to play games sitting at my desk like a student anymore. I want to sit 8 feet away on my living room couch and play my PC games, watch TV, and browse the internet on the same screen, in comfort.
1080p on a 40" from 8 feet away is fine for gaming and watching shows, but I can't use it also for my regular computer needs as the low ppi strains my eyes too much. So right now I have to have 2 screens, and I have to get up and go to the desk and vice versa if I want to switch tasks.
[quote]
Yeah, that, I think, is a big problem (the lack of compression). We could save a lot of bandwidth if we required monitors to support some sort of decompression algorithm. Even a lossless algorithm would save a significant amount of bandwidth.
[/quote]
The problem with a lossless algorithm is that it doesn't have a fixed compression ratio, which means the worst-case scenario is an image that you can't compress at all. And since you need to be able to handle that scenario, compression gets you nothing.
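To make that worst case concrete, here's a quick Python check of my own (nothing from any display spec, just zlib as a stand-in for a generic lossless codec): a flat, desktop-like 1080p frame shrinks to almost nothing, while a noise-filled frame actually comes out slightly bigger than it went in, so the cable still has to be sized for the full uncompressed rate.
[code]
import os
import zlib

WIDTH, HEIGHT, BYTES_PER_PIXEL = 1920, 1080, 3
raw_size = WIDTH * HEIGHT * BYTES_PER_PIXEL

flat_frame = bytes([40, 40, 40]) * (WIDTH * HEIGHT)  # mostly-static desktop, one colour
noisy_frame = os.urandom(raw_size)                   # worst case: incompressible noise

for name, frame in [("flat", flat_frame), ("noise", noisy_frame)]:
    out = zlib.compress(frame)
    print(f"{name:5s}: {len(frame):,d} -> {len(out):,d} bytes "
          f"({len(out) / len(frame):.1%} of original)")
[/code]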
Pretty sure you can do that now, pony up for a home PC theater. Sounds like you need an Xbox 360 and a 50" screen :sneaky:
It's a sad, sad market filled with fail.
I cannot believe the lack of advancement in the resolutions available to us consumers.
We've gone from 1920x1200 down to 1920x1080, and 2560x1600 to 2560x1440.
And we've had those resolutions forever... it appears nothing higher is ever going to be an option for us at the consumer level.
Sickening.
Things have gotten really bad in laptop screen world. There are literally no 13"-14" Windows laptops on the market with a good screen short of the $2000 Vaio. It's like the panel manufacturers don't even make them anymore. Every screen has a terrible 100-200:1 contrast ratio and vertical viewing angles worse than anything I've ever seen. Not to mention they're all shiny like a mirror and don't have the brightness to back it up.
[quote]
Buffering and frame drops perhaps? If 3 frames in a row are completely uncompressable then drop the next frame. That should still reduce the bandwidth while providing a pretty acceptable display (especially in work environments where the screen is more or less static).
[/quote]
Another problem is that frames might be coming from multiple sources (apps/windows) that are handled by the OS compositor (e.g., DWM in Windows and Compiz in Linux), so you'd need to encode all of this in real time. And you need to be sending out at 60Hz for regular LCD monitors - try switching your monitor to 24Hz, it's noticeable right away. At high resolutions, that'll need some serious dedicated hardware. You'd need to decode at the monitor as well, again in real time. All of that will also introduce additional lag.
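For what it's worth, the frame-drop fallback quoted above is simple enough to sketch in Python (a toy model: the per-frame byte budget is made up, and zlib stands in for whatever fast codec the link would actually use).
[code]
import zlib

LINK_BUDGET_BYTES = 4_000_000   # hypothetical per-frame budget on the cable
MAX_OVERSIZED_RUN = 3           # after this many frames that won't compress, drop one

class FrameSender:
    def __init__(self):
        self.oversized_run = 0  # consecutive frames that didn't fit the budget

    def submit(self, frame_bytes: bytes) -> str:
        """Decide per frame: send compressed, send raw, or drop to relieve the link."""
        compressed = zlib.compress(frame_bytes, level=1)  # fast lossless pass
        if len(compressed) <= LINK_BUDGET_BYTES:
            self.oversized_run = 0
            return "sent compressed"
        self.oversized_run += 1
        if self.oversized_run > MAX_OVERSIZED_RUN:
            self.oversized_run = 0
            return "dropped (link saturated)"
        return "sent raw (over budget)"
[/code]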
Yes, this is a solution, but my preferred solution would be to have one machine (my PC) hooked up to one screen that covers all my needs and doesn't require me to get up whenever I switch tasks (and like many people today, I switch tasks quite frequently).
[quote]
Another problem is that frames might be coming from multiple sources (apps/windows) that are handled by the OS compositor (e.g., DWM in Windows and Compiz in Linux), so you'd need to encode all of this in real time. And you need to be sending out at 60Hz for regular LCD monitors - try switching your monitor to 24Hz, it's noticeable right away. At high resolutions, that'll need some serious dedicated hardware. You'd need to decode at the monitor as well, again in real time. All of that will also introduce additional lag.
[/quote]
I think this is less of an issue than you are making it. The last stage of a graphics card already takes the finished frame and translates it into bits on the wire. This would simply be adding an encoding stage before placing the bits on the wire.
[quote]
I mentioned once that there's something called Panel Self Refresh, where the driver can detect that the image is static and instruct the monitor to repeat the image from its own cache (the monitor has to support PSR too). It's not really used for bandwidth reasons, but for laptops to save battery.
[/quote]
I think this pretty much proves that an encoding stage can be done in a reasonable time frame. It already takes some pretty beefy circuitry to make sure that each pixel is the same.
http://www.hardwaresecrets.com/article/Introducing-the-Panel-Self-Refresh-Technology/1384
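As a rough illustration of the 'make sure each pixel is the same' part, here is my own Python sketch, not how PSR is actually implemented (the real thing is negotiated over eDP and done in hardware): hash each scanned-out frame, and if the hash matches the previous one, let a PSR-capable panel keep refreshing from its own buffer.
[code]
import hashlib

class PsrLikeSource:
    """Toy model of a display source deciding whether a frame needs to be resent."""

    def __init__(self):
        self.last_digest = None

    def scanout(self, frame_bytes: bytes) -> str:
        digest = hashlib.blake2b(frame_bytes, digest_size=16).digest()
        if digest == self.last_digest:
            return "link idle: panel keeps refreshing from its own cache"
        self.last_digest = digest
        return "frame sent to panel"
[/code]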
[quote]
I think this is less of an issue than you are making it. The last stage of a graphics card already takes the finished frame and translates it into bits on the wire. This would simply be adding an encoding stage before placing the bits on the wire.
[/quote]
There already are encoding stages for stuff like color transformations, hardware overlay, and formatting (for DVI/HDMI, DP, CRT, etc.) from the frame buffer to the actual bits going out. But this is a much heavier encoding stage - think about what it takes to encode 1920x1080 @ 120Hz in real time, and that's something already supported over HDMI/DP today. You'd basically need to add something like Quick Sync/VCE, and those are not small features. I'm not sure how fast QS/VCE are either, so maybe I'm wrong, but I'd think 4Kx2K at 60Hz would be a challenge... It is probably cheaper, faster, and lower-lag to introduce a beefier cable, which is basically what they have been doing with the different versions of the HDMI and DP specs.
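Some back-of-the-envelope numbers behind that argument, in Python (active pixel data only, ignoring blanking and protocol overhead, so treat them as rough figures rather than spec-exact values).
[code]
# Raw active-video data rates vs. approximate cable payload capacities (Gbit/s).
def data_rate_gbps(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

modes = {
    "1920x1080 @ 60Hz":  data_rate_gbps(1920, 1080, 60),    # ~2.99
    "1920x1080 @ 120Hz": data_rate_gbps(1920, 1080, 120),   # ~5.97
    "3840x2160 @ 60Hz":  data_rate_gbps(3840, 2160, 60),    # ~11.94
}
links = {
    "dual-link DVI":      7.92,   # 2 x 165 MHz x 24 bit
    "HDMI 1.4 (approx.)": 8.16,   # 340 MHz TMDS clock x 24 bit
    "DP 1.2 (approx.)":   17.28,  # 4 lanes x 5.4 Gbit/s, minus 8b/10b coding
}

for name, gbps in {**modes, **links}.items():
    print(f"{name}: {gbps:.2f} Gbit/s")
[/code]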
[quote]
I think this pretty much proves that an encoding stage can be done in a reasonable time frame. It already takes some pretty beefy circuitry to make sure that each pixel is the same.
[/quote]
Not really. Windows knows when it draws something, or when the mouse moves, etc. Actually, the driver itself already knows this, because all graphics routines eventually land there. So you don't compare each pixel between two buffers; all you need are some sort of interrupts/flags to the PSR part of the driver to notify it that the frame buffer might have changed and it'll have to send it.
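A Python sketch of that dirty-flag idea with made-up names (real drivers do this with interrupts and damage tracking in hardware, so this is only the shape of the logic, not an actual driver interface).
[code]
class DirtyFlagSource:
    """Skip pixel comparison entirely: anything that touches the frame buffer
    sets a flag, and scan-out only resends a frame when the flag is set."""

    def __init__(self):
        self.dirty = True  # the first frame always has to go out

    def on_draw_call(self):
        # Hooked into every drawing path (blits, window updates, cursor moves, ...).
        self.dirty = True

    def on_vblank(self) -> str:
        if not self.dirty:
            return "nothing changed: let the panel self-refresh"
        self.dirty = False
        return "frame buffer may have changed: send the frame"
[/code]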
[quote]
Buffering and frame drops perhaps? If 3 frames in a row are completely uncompressable then drop the next frame. That should still reduce the bandwidth while providing a pretty acceptable display (especially in work environments where the screen is more or less static).
[/quote]
That could work. But why would you use it? These are last-foot links; there's no reason they need to be lossy. HDMI has plenty of bandwidth if manufacturers actually used it (it's enough to match dual-link DVI), and it wouldn't be particularly hard to improve it further.
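Quick arithmetic behind the dual-link DVI comparison (my numbers, active pixels only, ignoring blanking): the classic dual-link DVI mode, 2560x1600 @ 60Hz, fits inside what HDMI 1.4's maximum TMDS clock can carry.
[code]
needed = 2560 * 1600 * 60 * 24 / 1e9   # ~5.90 Gbit/s of active pixel data
hdmi_1_4 = 340e6 * 24 / 1e9            # ~8.16 Gbit/s at the maximum 340 MHz TMDS clock
print(f"2560x1600@60 needs {needed:.2f} Gbit/s, HDMI 1.4 carries about {hdmi_1_4:.2f} Gbit/s")
[/code]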
Lossy would be the way to go, for sure, but there are licensing issues that really hinder things. You could cut the bandwidth requirements in half, easily, and end up with images that are almost always an exact representation of the screen. High motion scenes would be where the pixels would start to have a higher probability of not being exact replicas. (though, in those cases most people don't care).
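One way to see where a 'half the bandwidth' figure could come from (my example; the post doesn't say which lossy scheme it has in mind) is chroma subsampling: going from 4:4:4 to 4:2:0 keeps every luma sample but shares one pair of colour samples across four pixels, which halves the bits per pixel.
[code]
# Bits per pixel for 8-bit YCbCr at different chroma subsampling ratios.
def bits_per_pixel(y_bits=8, c_bits=8, chroma_samples_per_4px=8):
    # 4:4:4 -> 8 chroma samples (Cb + Cr) per 4 pixels; 4:2:0 -> 2 per 4 pixels
    return y_bits + c_bits * chroma_samples_per_4px / 4

full = bits_per_pixel(chroma_samples_per_4px=8)  # 4:4:4 -> 24.0 bpp
sub = bits_per_pixel(chroma_samples_per_4px=2)   # 4:2:0 -> 12.0 bpp
print(full, sub, sub / full)                     # 24.0 12.0 0.5
[/code]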