For Colors:
We see all the colors in the visible spectrum without tools; this corresponds to wavelengths of roughly 400 - 750 nm. Reference:
http://hyperphysics.phy-astr.gsu.edu/hbase/ems3.html
But this is probably an overestimate of typical human vision.
For other regions of the spectrum we simulate a color in software, interpreting the energy of photons incident on a photon-collecting device, e.g. a photodetector such as a CCD. This is what produces those awe-inspiring false-color images: red for infrared and blue/violet for ultraviolet radiation.
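Just to make the "simulate a color in software" step concrete, here is a minimal sketch that maps a wavelength in nanometers to an approximate RGB triple. The piecewise ranges loosely follow a common wavelength-to-RGB approximation; the exact breakpoints and the false colors chosen for UV/IR are illustrative assumptions, not a colorimetric standard.

```python
# Rough mapping: wavelength (nm) -> approximate RGB (floats in 0..1).
# Out-of-band wavelengths are clamped to a chosen false color, mimicking
# how IR/UV data gets rendered as red or blue/violet.

def wavelength_to_rgb(nm: float) -> tuple[float, float, float]:
    if nm < 400:                      # ultraviolet: render as violet/blue
        return (0.5, 0.0, 1.0)
    if nm > 750:                      # infrared: render as red
        return (1.0, 0.0, 0.0)
    if nm < 440:
        return ((440 - nm) / 40, 0.0, 1.0)   # violet -> blue
    if nm < 490:
        return (0.0, (nm - 440) / 50, 1.0)   # blue -> cyan
    if nm < 510:
        return (0.0, 1.0, (510 - nm) / 20)   # cyan -> green
    if nm < 580:
        return ((nm - 510) / 70, 1.0, 0.0)   # green -> yellow
    if nm < 645:
        return (1.0, (645 - nm) / 65, 0.0)   # yellow -> red
    return (1.0, 0.0, 0.0)                   # deep red

print(wavelength_to_rgb(550))  # yellow-green
print(wavelength_to_rgb(900))  # infrared, rendered as red
```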
For Resolution:
One perspective is that we see objects at a resolution relative to other objects, for example a tree's dimensions compared to Earth's.
Another perspective would be to consider your peripheral, or angular, field of vision.
Yet another would be your near-vision acuity (on the order of microns, 10^-6 meters; roughly 30-50 microns, with fifty microns being about the width of a fine hair). So depending on the point of view, the answer can be 1x1 (objects as they are) or close to infinity x infinity (objects in relation to others).
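A quick back-of-envelope for where a "tens of microns" figure can come from: the smallest linear detail you resolve is the viewing distance times the tangent of your angular acuity. The ~1 arcminute acuity and the 10-25 cm near distances below are assumed typical values for illustration.

```python
import math

ACUITY_ARCMIN = 1.0                               # assumed visual acuity
acuity_rad = math.radians(ACUITY_ARCMIN / 60.0)   # arcminutes -> radians

for distance_cm in (10, 17, 25):                  # assumed near viewing distances
    detail_m = (distance_cm / 100.0) * math.tan(acuity_rad)
    print(f"at {distance_cm} cm: ~{detail_m * 1e6:.0f} microns")
# at 10 cm: ~29 microns, at 25 cm: ~73 microns -- roughly fine-hair width.
```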
The difference between a camera and the human eye (camera lens vs. eye lens) is that a camera can collect more photons, which lets it capture more detail than the eye can grab. So how many photons are there in the universe? Take that number and let each photon represent a pixel. Answering the question properly would mean doing some math to convert angular/peripheral vision into pixels, thus creating yet another standard. Definitely not a hot deal. Oh darn, wrong thread!
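For what it's worth, that "convert angular vision into pixels" math is short: effective pixel count is roughly (field of view / smallest resolvable angle) squared. The 120-degree field and the two acuity values below are assumptions picked for illustration; the 0.3 arcminute case reproduces the ~576 megapixel figure often quoted online, not a measured fact about any particular eye.

```python
# Effective "megapixels" of the eye under assumed field-of-view and acuity.
FIELD_OF_VIEW_DEG = 120.0                     # assumed usable angular field

for acuity_arcmin in (1.0, 0.3):              # assumed resolvable angles
    pixels_per_side = FIELD_OF_VIEW_DEG * 60.0 / acuity_arcmin
    megapixels = pixels_per_side ** 2 / 1e6
    print(f"{acuity_arcmin} arcmin acuity -> ~{megapixels:.0f} megapixels")
# 1 arcmin -> ~52 MP; 0.3 arcmin -> ~576 MP (the commonly cited estimate).
```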