Discussion:
Best pre-defined 256 color palette for gif
JohnF
2014-11-01 11:23:56 UTC
What's the best set { (r_i,g_i,b_i), i=1,...,256 }
of 24-bit rgb values to choose for an arbitrary
(i.e., unknown beforehand) gif palette?

I'm programmatically constructing animated gifs,
frame-by-frame and pixel-by-pixel within frames.
Too many pixels to store them all and go back
afterwards to statistically analyze their color
distributions. Instead, I want to pre-define
a palette, and then choose the index of the
best-fit color for each pixel as it's calculated.

So what's the overall best set of colors for
such a pre-defined palette, and then, given
arbitrary r,g,b values calculated for a pixel,
what's the best way to determine the best-fit
(r_i,g_i,b_i) in that palette?

Aside:
If it makes any difference, and if it's
possible, "best-fit" means "best to the human eye",
whether that's least-sum-of-squares, (r-r_i)^2+etc,
or whatever else. And maybe relevant to that, I'm
actually using the hls color model, and then
converting those values to rgb (using the Foley
method). Thanks,
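
P.S. In case it helps anyone answer: that hls-to-rgb
conversion is essentially the Foley/van Dam textbook
routine, roughly the sketch below (function names
are just my own; h in degrees 0-360, l and s in
0.0-1.0, r,g,b out as 0-255).

/* --- Foley/van Dam style hls-to-rgb (sketch) --- */
static double hlsvalue ( double n1, double n2, double hue )
  {
  if ( hue >= 360.0 ) hue -= 360.0;
  if ( hue <    0.0 ) hue += 360.0;
  if ( hue <  60.0 )  return ( n1 + (n2-n1)*hue/60.0 );
  if ( hue < 180.0 )  return ( n2 );
  if ( hue < 240.0 )  return ( n1 + (n2-n1)*(240.0-hue)/60.0 );
  return ( n1 );
  }
static void hls2rgb ( double h, double l, double s,
                      int *r, int *g, int *b )
  {
  double m2 = ( l<=0.5? l*(1.0+s) : l+s-l*s ),
         m1 = 2.0*l - m2,  rr,gg,bb;
  if ( s == 0.0 ) rr = gg = bb = l;          /* achromatic gray */
  else {
    rr = hlsvalue ( m1, m2, h+120.0 );
    gg = hlsvalue ( m1, m2, h );
    bb = hlsvalue ( m1, m2, h-120.0 ); }
  *r = (int)(255.0*rr+0.5);
  *g = (int)(255.0*gg+0.5);
  *b = (int)(255.0*bb+0.5);
  }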
--
John Forkosh ( mailto: ***@f.com where j=john and f=forkosh )
Limited_Atonement
2014-11-03 15:15:18 UTC
Post by JohnF
What's the best set { (r_i,g_i,b_i), i=1,...,256 }
of 24-bit rgb values to choose for an arbitrary
(i.e., unknown beforehand) gif palette?
I'm programmatically constructing animated gifs,
frame-by-frame and pixel-by-pixel within frames.
Too many pixels to store them all and go back
afterwards to statistically analyze their color
distributions. Instead, I want to pre-define
a palette, and then choose the index of the
best-fit color for each pixel as it's calculated.
So what's the overall best set of colors for
such a pre-defined palette, and then, given
arbitrary r,g,b values calculated for a pixel,
what's the best way to determine the best-fit
(r_i,g_i,b_i) in that palette?
If it makes any difference, and if it's
possible, "best-fit" means "best to the human eye",
whether that's least-sum-of-squares, (r-r_i)^2+etc,
or whatever else. And maybe relevant to that, I'm
actually using the hls color model, and then
converting those values to rgb (using the Foley
method). Thanks,
--
The correct solution would be two passes: store all the pixels (don't skip that unless you really can't, and make sure that you can't first ;-) ), then read them twice, once to create the palette and a second time to write out the animated gif. Only choose a different, more optimized, less correct solution if you have to.

I'm trying to say: don't optimize until you have to. Based on measurements. With numbers! :-) (Please pardon my grammar!)

If I had to create a palette beforehand for an unknown picture, I guess I would take equally-spaced samples from the colorspace, but that's a naive solution, and there may be better ones. For instance, to which changes in colors are human eyes more sensitive? If the human eye can notice more shades of, say, red than gray, make sure you have more reds on your palette than grays. After that, when you are ready to write down a pixel, find its nearest neighbor on the palette of course.
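
In code, nearest-neighbor against a 256-entry palette is just a linear scan, something like the sketch below, if plain squared RGB distance turns out to be good enough (pal_r/pal_g/pal_b being whatever entries you settled on):

/* nearest palette entry by squared RGB distance -- naive sketch */
static int nearest ( int r, int g, int b,
                     const unsigned char pal_r[256],
                     const unsigned char pal_g[256],
                     const unsigned char pal_b[256] )
{
    int  i, best = 0;
    long bestdist = 3L*255L*255L + 1L;   /* worse than any real distance */
    for ( i = 0; i < 256; i++ ) {
        long dr = r - pal_r[i], dg = g - pal_g[i], db = b - pal_b[i];
        long dist = dr*dr + dg*dg + db*db;
        if ( dist < bestdist ) { bestdist = dist; best = i; }
    }
    return best;
}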
Jongware
2014-11-03 15:51:18 UTC
Post by Limited_Atonement
Post by JohnF
What's the best set { (r_i,g_i,b_i), i=1,...,256 }
of 24-bit rgb values to choose for an arbitrary
(i.e., unknown beforehand) gif palette?
I'm programmatically constructing animated gifs,
frame-by-frame and pixel-by-pixel within frames.
Too many pixels to store them all and go back
afterwards to statistically analyze their color
distributions. Instead, I want to pre-define
a palette, and then choose the index of the
best-fit color for each pixel as it's calculated.
So what's the overall best set of colors for
such a pre-defined palette, and then, given
arbitrary r,g,b values calculated for a pixel,
what's the best way to determine the best-fit
(r_i,g_i,b_i) in that palette?
If it makes any difference, and if it's
possible, "best-fit" means "best to the human eye",
whether that's least-sum-of-squares, (r-r_i)^2+etc,
or whatever else. And maybe relevant to that, I'm
actually using the hls color model, and then
converting those values to rgb (using the Foley
method). Thanks,
--
The correct solution would be two passes: store all the pixels (don't skip that unless you really can't, and make sure that you can't first ;-) ), then read them twice, once to create the palette and a second time to write out the animated gif. Only choose a different, more optimized, less correct solution if you have to.
I'm trying to say: don't optimize until you have to. Based on measurements. With numbers! :-) (Please pardon my grammar!)
If I had to create a palette beforehand for an unknown picture, I guess I would take equally-spaced samples from the colorspace, but that's a naive solution, and there may be better ones. For instance, to which changes in colors are human eyes more sensitive? If the human eye can notice more shades of, say, red than gray, make sure you have more reds on your palette than grays. After that, when you are ready to write down a pixel, find its nearest neighbor on the palette of course.
Wikipedia (http://en.wikipedia.org/wiki/List_of_software_palettes) lists
a number of standard palettes in the section "RGB arrangements".

Easiest is an even spread for each RGB component: 6-6-6. Other options
are 6-7-6, which has more shades of green (based on the human eye being
more sensitive to green), and 6-8-5 (because the eye is *less* sensitive
to blue) and 8-8-4.
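
A nice property of those even spreads is that you never have to
search the palette: the index follows directly from r,g,b. A sketch
for 6-7-6, assuming the palette itself is built from the same evenly
spaced levels:

/* 6-7-6 even spread: 252 entries, index computed directly (sketch) */
#define NR 6
#define NG 7
#define NB 6                                  /* NR*NG*NB = 252 */

static void build_palette ( unsigned char pal_r[], unsigned char pal_g[],
                            unsigned char pal_b[] )
{
    int ir, ig, ib, i = 0;
    for ( ir = 0; ir < NR; ir++ )
      for ( ig = 0; ig < NG; ig++ )
        for ( ib = 0; ib < NB; ib++, i++ ) {
            pal_r[i] = (unsigned char)( ir*255/(NR-1) );  /* 0,51,...,255 */
            pal_g[i] = (unsigned char)( ig*255/(NG-1) );
            pal_b[i] = (unsigned char)( ib*255/(NB-1) );
        }
}

static int palette_index ( int r, int g, int b )   /* r,g,b in 0..255 */
{
    int ir = ( r*(NR-1) + 127 ) / 255,   /* round to nearest level */
        ig = ( g*(NG-1) + 127 ) / 255,
        ib = ( b*(NB-1) + 127 ) / 255;
    return ( ir*NG + ig )*NB + ib;
}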

Usually the full range for the input of each channel (0..255) gets
spread evenly over the output range, but there might be something to
gain if you adjust for intensity sensitivity as well. Wikipedia does not
mention it, so it could be worth investigating, even if only empirical
(i.e., try some curves and see what looks best).
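
E.g., one crude way to experiment: push each channel value through a
power curve before picking its level, and through the inverse curve
when generating the palette entries. The exponent below is just a
knob to play with, not a recommendation:

#include <math.h>

/* non-uniform level spacing via a power curve -- purely experimental */
#define EXPONENT 0.8                     /* a guess; try other values */

static double curve   ( double x ) { return pow ( x, EXPONENT ); }
static double uncurve ( double x ) { return pow ( x, 1.0/EXPONENT ); }

/* palette value (0..255) of level i out of n */
static int level_value ( int i, int n )
  { return (int)( 255.0*uncurve( (double)i/(n-1) ) + 0.5 ); }

/* nearest of n levels for channel value v (0..255) */
static int level_index ( int v, int n )
  { return (int)( curve( v/255.0 )*(n-1) + 0.5 ); }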

[Jw]
JohnF
2014-11-04 04:46:39 UTC
Post by Jongware
Post by JohnF
What's the best set { (r_i,g_i,b_i), i=1,...,256 }
of 24-bit rgb values to choose for an arbitrary
(i.e., unknown beforehand) gif palette?
I'm programmatically constructing animated gifs,
frame-by-frame and pixel-by-pixel within frames.
Too many pixels to store them all and go back
afterwards to statistically analyze their color
distributions. Instead, I want to pre-define
a palette, and then choose the index of the
best-fit color for each pixel as it's calculated.
So what's the overall best set of colors for
such a pre-defined palette, and then, given
arbitrary r,g,b values calculated for a pixel,
what's the best way to determine the best-fit
(r_i,g_i,b_i) in that palette?
If it makes any difference, and if it's
possible, "best-fit" means "best to the human eye",
whether that's least-sum-of-squares, (r-r_i)^2+etc,
or whatever else. And maybe relevant to that, I'm
actually using the hls color model, and then
converting those values to rgb (using the Foley
method). Thanks,
<<snipped L.A.'s followup, replied to separately>>
Post by Jongware
Wikipedia
http://en.wikipedia.org/wiki/List_of_software_palettes
lists a number of standard palettes in the section
"RGB arrangements".
Thanks, JW, I'd been looking for something like that.
I'd been using 6*6*7=252, and the extra indexes 0,1,2,3
for 0=bg,1=fg,2=white,3=black. There's usually no
particular fg, so that index is usually wasted.
But ~half the pixels are bg, so bg=0's a no-brainer.
And then black,white let the other 252 colors skip
the "endpoints". Moreover, I'd figured when rgb are
all very low values, or are all high, the human eye
probably sees pretty much black,white, respectively,
so I could spread the indexes over the "middle values".
But haven't tested that (see bottom paragraph).
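
In code, that setup is roughly the sketch
below (untested, as I said; the particular
level spacings, and passing bg/fg in as
arguments, are just one way to do it):

#include <string.h>
/* --- 4 reserved entries + 6*6*7=252 colors avoiding 0,255 --- */
static void init_palette ( unsigned char pal[256][3],
                           unsigned char bg[3], unsigned char fg[3] )
  {
  int ir, ig, ib, i = 4;
  memcpy ( pal[0], bg, 3 );                 /* 0 = background */
  memcpy ( pal[1], fg, 3 );                 /* 1 = foreground */
  pal[2][0] = pal[2][1] = pal[2][2] = 255;  /* 2 = white */
  pal[3][0] = pal[3][1] = pal[3][2] = 0;    /* 3 = black */
  for ( ir = 0; ir < 6; ir++ )              /* 6 reds   */
    for ( ig = 0; ig < 6; ig++ )            /* 6 greens */
      for ( ib = 0; ib < 7; ib++, i++ ) {   /* 7 blues  */
        pal[i][0] = (unsigned char)( (ir+1)*255/7 );  /* skips 0,255 */
        pal[i][1] = (unsigned char)( (ig+1)*255/7 );
        pal[i][2] = (unsigned char)( (ib+1)*255/8 ); }
  }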

And I'd managed to google stuff like
http://en.wikipedia.org/wiki/Color_quantization
http://www.imagemagick.org/script/quantize.php
and other stuff from googling "color quantization" and
"Heckbert". But I hadn't found your List_..._palettes.
Post by Jongware
Easiest is an even spread for each RGB component: 6-6-6.
Other options are 6-7-6, which has more shades of green
(based on the human eye being more sensitive to green),
and 6-8-5 (because the eye is *less* sensitive to blue)
and 8-8-4.
What I'd also been thinking of fooling around with was
a 5-5-10 distribution based on the hls model: 5 each
for lightness and saturation, 0.0-1.0, and 10 for hue,
0-360. And then convert those to rgb values, regardless
of how that looks numerically. Seems to me that hls is
more human-eye-oriented. But I couldn't google anything
like that kind of palette generation. And I imagine if
it's a good idea, it's already been done (thousands of
times).
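
Roughly what I have in mind, using the
hls2rgb() sketched in my first post (the
lightness/saturation/hue spacings below
are just a first guess):

/* --- 5 lightness x 5 saturation x 10 hue = 250 entries (sketch) --- */
static void init_hls_palette ( unsigned char pal[250][3] )
  {
  int il, is, ih, i = 0;
  for ( il = 0; il < 5; il++ )
    for ( is = 0; is < 5; is++ )
      for ( ih = 0; ih < 10; ih++, i++ ) {
        double l = (il+1)/6.0,     /* 1/6..5/6, skipping 0.0 and 1.0 */
               s = is/4.0,         /* 0.0..1.0 */
               h = ih*36.0;        /* 0,36,...,324 degrees */
        int r, g, b;
        hls2rgb ( h, l, s, &r, &g, &b );
        pal[i][0] = (unsigned char)r;
        pal[i][1] = (unsigned char)g;
        pal[i][2] = (unsigned char)b; }
  }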
Post by Jongware
Usually the full range for the input of each channel (0..255)
gets spread evenly over the output range, but there might be
something to gain if you adjust for intensity sensitivity as well.
Wikipedia does not mention it, so it could be worth investigating,
even if only empirical (i.e., try some curves and see what
looks best). [Jw]
Yeah, "try[ing] some curves" is somewhat of a pain.
The code I've written can definitely do that,
but it's a lot of messing around editing it
for each such test.
What I need is a good, easy-to-use tool for these
kinds of experiments. But, again, it's got to have
already been done (thousands of times). Like that
old X-Files tagline, "The answer's out there."
I'd just like to figure out how to google it.
Thanks again,
--
John Forkosh ( mailto: ***@f.com where j=john and f=forkosh )
JohnF
2014-11-04 04:07:14 UTC
Post by Limited_Atonement
Post by JohnF
What's the best set { (r_i,g_i,b_i), i=1,...,256 }
of 24-bit rgb values to choose for an arbitrary
(i.e., unknown beforehand) gif palette?
I'm programmatically constructing animated gifs,
frame-by-frame and pixel-by-pixel within frames.
Too many pixels to store them all and go back
afterwards to statistically analyze their color
distributions. Instead, I want to pre-define
a palette, and then choose the index of the
best-fit color for each pixel as it's calculated.
So what's the overall best set of colors for
such a pre-defined palette, and then, given
arbitrary r,g,b values calculated for a pixel,
what's the best way to determine the best-fit
(r_i,g_i,b_i) in that palette?
If it makes any difference, and if it's
possible, "best-fit" means "best to the human eye",
whether that's least-sum-of-squares, (r-r_i)^2+etc,
or whatever else. And maybe relevant to that, I'm
actually using the hls color model, and then
converting those values to rgb (using the Foley
method). Thanks,
The correct solution would be two passes:
store all the pixels (don't skip that
unless you really can't, and make sure
that you can't first ;-) ), then read them
twice, once to create the palette and a
second time to write out the animated gif.
Only choose a different, more optimized,
less correct solution if you have to.
Thanks, L.A., I was beginning to think
this ng was dead, based on low activity.
Can't store and re-read pixels, either.
Has to run in "realtime" (before users
looking at screen get bored, maybe ~1sec),
and anything along the above lines is
too i/o bound, as well as too computationally
intensive.
And I perhaps should have mentioned
that photo-quality accuracy is unnecessary.
Program just generates stuff like
[two example images]
and other stuff in that decorative/ directory.
All those jpg's and gif's are currently produced
by ImageMagick's convert image.ps image.format,
i.e., the .ps versions are what the program
generates. The only gif's it can generate are
the thin-line two-color ones illustrated at
the bottom of
http://www.forkosh.com/lineart.html
I'm trying to enhance its native gif functionality
to match the ps functionality.
Post by Limited_Atonement
I'm trying to say, don't optimize until
you have to. based on measurements. with
numbers! :-) (Please pardon my grammar!)
If I had to create a palette beforehand
for an unknown picture, I guess I would
take equally-spaced samples from the
colorspace, but that's a naive solution,
Yeah, that's pretty much what I'm doing,
using a 6-7-6 spread as Jongware suggested
(hadn't realized green should get the 7).
Post by Limited_Atonement
and there may be better ones. For instance,
to which changes in colors are human eyes
more sensitive?
Yeah, that's pretty much what I'm asking.
Post by Limited_Atonement
If the human eye can notice
more shades of, say, red than gray, make
sure you have more reds on your palette
than grays. After that, when you are ready
to write down a pixel, find its nearest
neighbor on the palette of course.
And also asking how to define "nearest neighbor",
e.g.,
http://en.wikipedia.org/wiki/Color_quantization
seems to suggest "least-sum-of-squares" Euclidean
distance. But I find it unlikely that human eye
sensitivity ends up that simple.
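
One cheap tweak I've seen mentioned is
weighting the channel differences by rough
luminance weights (0.30,0.59,0.11-ish),
along the lines of the sketch below, though
I've no idea how principled that really is:

/* weighted squared distance -- crude stand-in for "perceptual" */
static long colordist ( int r1, int g1, int b1,
                        int r2, int g2, int b2 )
  {
  long dr = r1-r2,  dg = g1-g2,  db = b1-b2;
  return ( 3L*dr*dr + 6L*dg*dg + 1L*db*db );  /* ~.30,.59,.11 scaled */
  }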
--
John Forkosh ( mailto: ***@f.com where j=john and f=forkosh )