haha... years ago, when I used to do Win32/MFC development, I was in the habit of writing code that painted random pixel colors on my Windows desktop... you get the psychedelic effect if you don't do bit-block transfers but paint directly to the screen... it was always a fun time-wasting activity.
God dammit! We kill every site we stumble upon; maybe there should be a delay to show links/comments based on time-zones, that way we don't DDOS any site by mistake.
The idea is to use the free CoralCDN (http://www.coralcdn.org/). HN could solve site-killing by prefetching every link with CoralCDN, so that the CDN would cache a version of the page. Then, if HN detects that a site (e.g. http://allrgb.com/) is down, it would link to the Coral version (e.g. http://allrgb.com.nyud.net/) until it’s up again. Or alternatively, instead of detecting when sites are down, it could show a control to replace any given link with the CoralCDN version and ask the user to toggle it if the link doesn’t work at first.
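The rewrite itself is mechanical: append `.nyud.net` to the hostname. A sketch for the default-port case (CoralCDN uses a different convention for non-standard ports, which this ignores):

```python
from urllib.parse import urlparse, urlunparse

def coralize(url):
    """Rewrite a URL to point at its CoralCDN mirror by appending
    .nyud.net to the hostname (default-port URLs only)."""
    p = urlparse(url)
    return urlunparse(p._replace(netloc=p.netloc + ".nyud.net"))
```

So `coralize("http://allrgb.com/")` gives `http://allrgb.com.nyud.net/`, matching the example above.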
Would a really simple way to do this be to make a 4096x4096 square image, with 0,0,0 at the top left and 255,255,255 at the bottom right, then just go along from left to right incrementing by one (in base-256, essentially)?
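Yes, that's exactly counting in base-256. A sketch of it in Python, parametrized by bits per channel so it's cheap to try at small sizes (8 bits gives the pixels of the full 4096x4096 image):

```python
def allrgb_sweep(bits=8):
    """Count from 0 to levels**3 - 1 in base `levels`, blue channel
    fastest; each count is one pixel, so every colour appears exactly
    once.  For bits=8 the result reshapes to a 4096x4096 image."""
    levels = 1 << bits
    pixels = []
    for i in range(levels ** 3):
        b = i & (levels - 1)
        g = (i >> bits) & (levels - 1)
        r = i >> (2 * bits)
        pixels.append((r, g, b))
    return pixels
```

With `bits=4` you get a 4096-pixel miniature running from (0,0,0) to (15,15,15), which is handy for checking the no-duplicates property quickly.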
(Note, HSL/HSV are not suitable representations for almost any purpose, especially not for human user interfaces; it’s a tragedy that they are in such wide use.)
A color space is good for transforming colors along lines orthogonal to its axes. In RGB, you can't easily find another shade of a given hue. In HSV, this common task is trivial.
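For instance, with Python's stdlib `colorsys`, holding H and S fixed and sweeping V gives a ramp of shades of one hue, a task with no direct analogue along the RGB axes (a sketch; channel values assumed to be 0-255):

```python
import colorsys

def shades(rgb, n=5):
    """Return n shades of rgb's hue: same H and S, V swept 0..1."""
    h, s, _ = colorsys.rgb_to_hsv(*[c / 255 for c in rgb])
    return [tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))
            for v in (i / (n - 1) for i in range(n))]
```

`shades((255, 0, 0))` runs from black up to full red with the green and blue channels pinned at zero throughout.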
I was going to comment that HSL/HSV can be useful for generating colors that work well, but... the real problem is not luminance as much as the HSL values we commonly use. See http://en.wikipedia.org/wiki/HSL_and_HSV#Disadvantages where CIELAB L* is shown as a better representation. Though it's not perfectly matched to the human eye even so.
I've gotten pretty good at doing the reverse operation (given an RGB value, guess the color) because I'm red-green colorblind and that's how I figure out what color something is :)
jacobulus might have been referring to certain color spaces that are optimized for human use instead of for computer manipulation, such as HUSL (http://www.boronine.com/husl/). HUSL gives you a slider that actually changes the apparent brightness of a color to your eye: whereas HSL thinks that bright green is just as “light” as dark true blue, and HSV/HSB thinks that true blue is just as “bright” as white, HUSL colors with the same “lightness” actually do look equally light. Another human-optimized color space is CIELUV (https://en.wikipedia.org/wiki/CIELUV).
The mapping is not one-to-one though: for example, when S is 0, all values of H give the same resulting colour, and worse, when V is zero, both H and S are useless. So an image exploring the full HSV space should look different, probably with a lot more grayscale.
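Both degenerate cases are easy to see with the stdlib `colorsys`:

```python
import colorsys

# with S = 0, hue is irrelevant: every H collapses to the same grey
greys = {colorsys.hsv_to_rgb(h / 10, 0.0, 0.5) for h in range(10)}

# with V = 0, both H and S are irrelevant: everything is black
blacks = {colorsys.hsv_to_rgb(h / 10, s / 10, 0.0)
          for h in range(10) for s in range(10)}
```

Each set collapses to a single RGB value, so a naive uniform walk over (H, S, V) would revisit those colours many times.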
I too have misgivings about HSV, the discontinuities created when generating hue ramps being particularly cruel. Using a rotation about the luma axis in YCbCr space gives much smoother rainbows, but sadly not ones which hit all six primaries and secondaries. It's a frustrating problem: perceptually, H, S, and V or L are what we think of when specifying a colour, but mapping them smoothly to RGB displays is so tough.
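A sketch of that rotation, using the BT.601 Cb/Cr-to-RGB coefficients; the luma level and chroma radius here are arbitrary choices, and out-of-gamut results are simply clamped:

```python
import math

def ycbcr_rainbow(n, luma=0.5, chroma=0.3):
    """Sample n colours by rotating a fixed-chroma point about the
    luma axis; convert to RGB (BT.601) and clamp into [0, 1]."""
    out = []
    for k in range(n):
        theta = 2 * math.pi * k / n
        cb, cr = chroma * math.cos(theta), chroma * math.sin(theta)
        r = luma + 1.402 * cr
        g = luma - 0.344136 * cb - 0.714136 * cr
        b = luma + 1.772 * cb
        out.append(tuple(min(1.0, max(0.0, c)) for c in (r, g, b)))
    return out
```

Because the chroma radius that stays in gamut depends on luma, a circle small enough to avoid clamping never reaches the fully saturated primaries, which is the "doesn't hit all six" problem above.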
Some of these images in their most basic form are strikingly beautiful. It reminds me of some algorithmic art Joshua Davis produces. ( http://www.joshuadavis.com )
The OP might consider a more art-based context, with potential for high-priced prints and gallery showings.
Reminds me of some of the art in demoscene productions too. Since each image containing all RGB colours is just a different permutation of the same pixels, it might make for an interesting effect to animate pixel-swapping to transform one into another.
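The move table for such an animation is simple to compute, since each colour appears exactly once in both images (a sketch, assuming the images are flattened into lists of pixels):

```python
def pixel_permutation(src, dst):
    """perm[i] = index in dst holding the same colour as src[i];
    animate by walking each pixel from position i toward perm[i]."""
    where = {c: i for i, c in enumerate(dst)}
    return [where[c] for c in src]
```

Moving every pixel along its `i -> perm[i]` path (or decomposing the permutation into cycles and swapping along each cycle) morphs one allRGB image into the other without ever duplicating a colour.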
Unfortunately at present there's no single monitor that can show all the pixels of a 4096x4096 image individually...
There was a challenge a friend of mine and I had a while back: take an image, and represent the entire RGB colorspace within that image, without reusing any colors.
It was a REALLY fun challenge, and I encourage everybody to try it as well.
The way that I ended up winning (ha) was to represent the RGB colorspace as a 3D array, and then unravel that 3D array into a 1D skip-list.
The script would read the pixel value at (n,n) of the image, decide where that color would sit if the RGB cube were unraveled into 1D, go to that spot, and then either claim it for the "new" image (if that color was unused) or follow the "skip" pointer to wherever the closest unused color was located.
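If I'm reading that right, the skip structure behaves like a "next free slot" table with path compression. A hypothetical small-scale reconstruction (2 bits per channel, so a 64-colour cube), not the author's actual code:

```python
def make_allrgb(pixels, bits=2):
    """Recolour `pixels` (channel values in [0, 2**bits)) so every
    colour in the cube is used exactly once.  skip[i] points at the
    next possibly-free slot at or after i."""
    levels = 1 << bits
    total = levels ** 3
    skip = list(range(total + 1))   # slot `total` is a sentinel

    def find_free(i):
        path = []
        while skip[i] != i:         # slot i is taken: follow pointers
            path.append(i)
            i = skip[i]
        for p in path:              # path compression for next time
            skip[p] = i
        return i

    out = []
    for r, g, b in pixels:
        want = (r * levels + g) * levels + b
        slot = find_free(want)
        if slot == total:           # fell off the end: wrap around
            slot = find_free(0)
        out.append((slot // (levels * levels),
                    (slot // levels) % levels,
                    slot % levels))
        skip[slot] = slot + 1       # mark slot as used
    return out
```

Feeding it 64 identical black pixels fills the cube in index order, which is a handy degenerate check that every colour really gets used once.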
Yep, I posted this link downstream a little bit ago. The processes different people use are interesting.
What's fun is that we are so tuned to color differences that a slight change in the algorithm (what color it "resorts to" if the desired one is taken) might produce a small absolute difference (distance-wise in the table) but a massive perceptual difference, or a much greater effect far down the line when greens are exhausted early or unused bright reds end up speckling a neutral zone. Certainly an interesting exercise!
I remember being in a computer vision class some years ago where the professor was explaining to us how RGB fails to capture a wide range of colors. I was annoyed that his presentation didn't show any examples of these other colors, and then I realized how I was an idiot.
Given that the RGB image is likely not tagged with colorspace information, the colors represented are a direct byproduct of the primaries of whatever display you view them on.
That is, if your display happened to have the original CIE RGB primaries, the 3D volume of representable colors would be greater.
Of course, this too ignores the "stepping" between colors (bit depth) and also ignores the wider luminance gamut that a standard observer is capable of seeing.
So the slightly more accurate answer to the original question is "Depends on what the primaries are of your display. Typically this is roughly sRGB, but if you are on a wide gamut display or projector, it will be larger."
There's a limit to the ability of the human eye to distinguish similar colors. That limit means that each "point" color in the image can be considered to span some range of the visible spectrum.
Pretty well if it's a regular layout, depending on the algorithm, since almost every successive pixel differs from the last by exactly one level in one channel.
Sounds like another fun bit of code golf, if some of the more aggressive compression algorithms aren't already picking up on the bit-level changes (I believe they do).
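Worth testing: my guess is that raw deflate struggles here (its minimum match length is 3 bytes, and with 3-byte pixels one channel always differs between neighbours), but a PNG-style Sub filter first makes the regularity explicit. A quick sketch at 64 levels per channel to keep it fast:

```python
import zlib

def sweep_bytes(bits):
    """Raw RGB bytes of the incrementing sweep, blue fastest."""
    levels = 1 << bits
    buf = bytearray()
    for i in range(levels ** 3):
        buf += bytes((i >> (2 * bits), (i >> bits) & (levels - 1),
                      i & (levels - 1)))
    return bytes(buf)

def sub_filter(raw, bpp=3):
    """PNG-style 'Sub' filter: each byte minus the byte bpp back."""
    return bytes((raw[i] - (raw[i - bpp] if i >= bpp else 0)) & 0xFF
                 for i in range(len(raw)))

raw = sweep_bytes(6)                      # 64 levels/channel, ~786 KB
plain = zlib.compress(raw, 9)
filtered = zlib.compress(sub_filter(raw), 9)
```

On my reading, plain deflate mostly gets the Huffman win (the bytes only carry 6 bits of entropy each at this size), while the filtered stream collapses into a short repeating delta pattern and compresses far harder, which is essentially what PNG's per-row filtering does for real images.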
It's very interesting how our eyes and brains process colors differently -- for example, I see a lot of green and blue, but less red (though I suppose that's due to a mild case of protanomaly).