LDraw.org Discussion Forums

Full Version: color matching for pattern
Sometimes I also look for new possibilities for creating patterns.

One of the challenges I am faced with is reducing the number of colours used in a pattern. This is mainly necessary because most (if not all) pictures are antialiased, so we do not have the sharp edges we need to work with if we try to automate the generation of patterns in the LDraw file format.

I tried it so far with an algorithm from the bitsticker project:

# Identify the most similar color in the palette to a specified color based
# on Cartesian distance between RGB colorspace points. Derived from libgd's
# gdImageColorClosestAlpha function (in gd.c).
# Code taken from project "bitsticker" at http://anoved.net/files/bitsticker/bitsticker.txt

But the result is not what I was looking for, as you may get strange matches if you work only with this algorithm.

I remembered that Tim argued some time ago that the eye does not perceive all colours in the same way (blue differently than red, etc.). So I searched the internet for ways to calculate colour differences.

I found a source for a lot of colour-related material at:

Sadly, I have not yet drawn a key idea from that information. But I could read that the algorithm I have used so far is known to be poor.

Maybe someone can read and understand it, and tell me how to calculate the colour differences better. This should lead to better colour matching in automatically created patterns.

I think this is a good task for 2014 :)
Hi Mike,

What you're after is a colour quantisation algorithm, ideally based on the Lab colour space model but more practically based on the YUV colour space. In the pseudo-code I use YUV, as it is much, much easier and we are not seeking perfection.

Or in pseudo-code, we find the square of the distance via:

# Here T converts RGB -> YUV (and inverse(T) converts back), with
# T = matrix([0.299, 0.587, 0.114], [-0.147, -0.289, 0.436], [0.615, -0.515, -0.1])
#
# We then take the distance in YUV space - the Mij below are the matrix
# elements of transpose(T)*T = matrix([0.489235, -0.098729, -0.091506],
#   [-0.098729, 0.693315, -0.007586], [-0.091506, -0.007586, 0.213092])

Distance2 = (R1-R2)*M11*(R1-R2) + (G1-G2)*M22*(G1-G2) + (B1-B2)*M33*(B1-B2)
          + 2*(R1-R2)*M12*(G1-G2) + 2*(R1-R2)*M13*(B1-B2) + 2*(G1-G2)*M23*(B1-B2)

Note that the red and green components are more important (especially green) than blue. This is because our eyes are much better adapted to seeing green - I guess because grass and trees are more common than blueberries.

Hope that helps :)
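[Editor's note: the metric above can be sketched as follows. This is an illustrative Python version (the thread's actual code is Perl); the palette is a made-up example, and the weights are the matrix elements given above.]

```python
# Matrix elements of transpose(T)*T for the RGB -> YUV transform T,
# as given in the post above.
M11, M22, M33 = 0.489235, 0.693315, 0.213092
M12, M13, M23 = -0.098729, -0.091506, -0.007586

def yuv_distance2(c1, c2):
    """Squared distance between two RGB colours, weighted for YUV space."""
    dr, dg, db = (a - b for a, b in zip(c1, c2))
    return (M11 * dr * dr + M22 * dg * dg + M33 * db * db
            + 2 * M12 * dr * dg + 2 * M13 * dr * db + 2 * M23 * dg * db)

def closest_colour(palette, rgb):
    """Return the key of the palette colour nearest to rgb (a hypothetical
    dict mapping colour names/codes to RGB triples)."""
    return min(palette, key=lambda k: yuv_distance2(palette[k], rgb))
```

As with the squared RGB distance in bitsticker, there is no need to take a square root when only comparing candidates against each other.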

Yes, I think that is what I am looking for.

Given that the current code looks like this (taken from the bitsticker project):

#
# bs_matchRGB
# Identify the most similar color in the palette to a specified color based
# on Cartesian distance between RGB colorspace points. Derived from libgd's
# gdImageColorClosestAlpha function (in gd.c).
# Parameters:
#   p, reference to color palette
#   r, red value
#   g, green value
#   b, blue value
# Return: key of the closest palette color
sub bs_matchRGB {
    my $p = shift;
    my $r = shift;
    my $g = shift;
    my $b = shift;

    my $key;
    my ($dr, $dg, $db);
    my $dist;
    my $code = DEFAULT_COLOR;

    # max potential difference from 0,0,0 black to 255,255,255 white is
    # sqrt(195075), or ~442, so start with a mindist greater than the max.
    # This simplifies the comparison logic so we don't have to test for a
    # first case that occurs only once. Maximum RGB color values of 255
    # are assumed.
    my $mindist = 195076;

    foreach $key (keys %{$p}) {

        # difference between palette and pixel color values
        $dr = @{$$p{$key}}[0] - $r;
        $dg = @{$$p{$key}}[1] - $g;
        $db = @{$$p{$key}}[2] - $b;

        # no need to sqrt since comparing the squares yields the same results
        $dist = ($dr * $dr) + ($dg * $dg) + ($db * $db);

        # is this the closest color yet?
        if ($dist < $mindist) {
            $mindist = $dist;
            $code = $key;
        }
    }

    return $code;
}

I only need to incorporate your pseudo-code into this one :) (hopefully I am not too stupid).

Thanks so far for your answer. I hope this will solve some (or all) of my current issues :)
Just need to change one line :)

$dist = 0.489235*($dr * $dr) + 0.693315*($dg * $dg) + 0.213092*($db * $db)
- 0.197458*($dr * $dg) - 0.183012*($dr * $db) - 0.015172*($dg * $db);
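[Editor's note: the two metrics really can choose different nearest colours, which is the point of the change. A small illustrative check in Python (the thread's code is Perl; the palette and test colour are hypothetical):]

```python
# Weights from transpose(T)*T, as in the earlier post.
M11, M22, M33 = 0.489235, 0.693315, 0.213092
M12, M13, M23 = -0.098729, -0.091506, -0.007586

def rgb_dist2(a, b):
    """Plain squared Cartesian RGB distance (the old bitsticker metric)."""
    return sum((p - q) ** 2 for p, q in zip(a, b))

def yuv_dist2(a, b):
    """Squared distance weighted for YUV space (the new metric)."""
    dr, dg, db = (p - q for p, q in zip(a, b))
    return (M11*dr*dr + M22*dg*dg + M33*db*db
            + 2*M12*dr*dg + 2*M13*dr*db + 2*M23*dg*db)
```

For example, for the target colour (0, 100, 0) and the palette [(0, 0, 0), (0, 100, 140)], the RGB metric prefers black (a green error of 100 beats a blue error of 140), while the YUV metric prefers the blue-green entry, because blue errors are weighted far less than green ones.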
Thank you very much. I was trying to understand what happens with the matrix and did not know how to do it. But this is easy :).

So I changed the code and now I have two results:
test-rgb.dat with the old calculation
test-yuv.dat with the new calculation

Comparing the results, I don't think the new calculation did any better than the old one.

Does anybody have an idea how to improve the automatic conversion process?

I am looking forward to your ideas :)
Hey Michael,

Would you share the original image with me? I would like to test something...

Thanks Rolf
It's LDD 88174 pattern...
Hi Mike,

Those colours might simply be the closest in both spaces. All these algorithms can do is choose the _best_ neighbour from a list according to their metric.

What exactly is the process you follow in choosing your palette?

Are you trying to determine the best set of colours? Because that is a different problem again.

The more info I get, the more I can try to help :)

It might be possible to make your quantizer antialiasing-resistant, but I'm not sure how well it would work. The basic idea would be like so:
  1. First pass: find all the pixels whose colour is "close enough" to one of the output colours, flag them as "good", and assign them to the appropriate output colour. This should handle all the filled-in areas, as long as your output colours are acceptable.
  2. Second pass: for all pixels that weren't set above, look at all their adjoining pixels that were given a value in step one. Then set their colour to the colour from that set that most closely matches.
  3. Repeat until done. Note: if you use a pass number for your "good" flag, you can avoid having pixels that are set during pass 2 being treated as already good within the same pass.

Like I say, I'm not sure how well it will work, but it seems like it should get rid of the problem where black lines pick up blue and yellow pixels due to the original antialiasing.
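[Editor's note: the multi-pass idea above could be sketched roughly as follows, in Python. The threshold, the 4-neighbourhood, and the `dist2` parameter (any squared colour distance, e.g. the YUV-weighted one discussed earlier) are all illustrative choices, not part of the original proposal.]

```python
def quantize_antialias(pixels, w, h, palette, dist2, threshold=1000):
    """pixels[y][x] is an RGB triple; palette is a list of RGB triples.
    Returns a dict mapping (x, y) to an assigned palette colour."""
    assigned = {}
    # Pass 1: fix every pixel whose colour is "close enough" to a palette
    # entry. This should catch the solidly filled areas.
    for y in range(h):
        for x in range(w):
            best = min(palette, key=lambda c: dist2(c, pixels[y][x]))
            if dist2(best, pixels[y][x]) <= threshold:
                assigned[(x, y)] = best
    # Pass 2, repeated: each still-unresolved (likely antialiased) pixel
    # takes the closest colour among its already-assigned neighbours.
    # Pixels assigned in this round are collected separately ("newly"),
    # so they don't count as already good within the same pass.
    while len(assigned) < w * h:
        newly = {}
        for y in range(h):
            for x in range(w):
                if (x, y) in assigned:
                    continue
                neigh = [assigned[n] for n in ((x-1, y), (x+1, y), (x, y-1), (x, y+1))
                         if n in assigned]
                if neigh:
                    newly[(x, y)] = min(neigh, key=lambda c: dist2(c, pixels[y][x]))
        if not newly:
            break  # remaining pixels have no assigned neighbours at all
        assigned.update(newly)
    return assigned
```

On a row like black / mid-grey / white with a black-and-white palette and a tight threshold, the grey antialiasing pixel is skipped in pass 1 and then snaps to its closer neighbour colour in pass 2, instead of being matched directly against the palette.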
I would like to follow this procedure:

1) select picture
2) mark in the picture the colors that should appear in the output
3) detect the regions with different colors based on step 2 (this is what we are talking about)
4) create LDraw file

There are more steps behind the scenes, but these should be the only steps the user has to take.
Rolf has already achieved good region detection on the picture by choosing a grey instead of black; that is, he adjusted the threshold value. But this can only be done manually by the user.

I fear that Travis's approach will also not give better results if no further user action is taken. But I would like to keep the user action to a minimum, so that everybody is able to make an LDraw file from a picture :)

If you have more questions please ask.