I'm trying to find documentation that explains the encoding of custom colors.

http://www.ldraw.org/article/218.html#colours
The above link explains that custom ("direct") colors are defined as 0x2RRGGBB.

But if in MLCad I choose R=64 G=128 B=192 (which should be 0x24080C0) it shows in MLCad/LDView as 0x0455548C and is stored in the .ldr in decimal as 72701068 (which is 0x0455548C).

It seems to be encoded, but where's the specification of the encoding?

Clearly this needs to be detailed somewhere on LDraw.org... Some explanations here:
In addition to the link Philo already provided, below is info from comments in my LDView source code. Note that I didn't create any of these color schemes; I simply coded LDView to recognize them.

Code:

```cpp
// 0x2RRGGBB = opaque RGB
// 0x3RRGGBB = transparent RGB
// 0x4RGBRGB = opaque dither
// 0x5RGBxxx = transparent dither (xxx is ignored)
// 0x6xxxRGB = transparent dither (xxx is ignored)
// 0x7xxxxxx = invisible
```

Notes:

- In all of the above, "x" indicates a hex digit that is completely ignored.

- 0x3RRGGBB are ostensibly 50% opaque, but they are intended to represent transparent (translucent, really) bricks, so 50% isn't necessarily an appropriate degree of opacity.

- The dither pattern used when the above "dither" colors were created was a 50/50 checkerboard.

- Combining two 12-bit colors together in a 50/50 checkerboard does not give you 24 bits of color space.

- LDView does not use a checkerboard dither pattern. It instead averages the two input colors together (after converting them from 12 bits each to 24 bits each by multiplying each 4-bit component of the 12-bit number by 17 and then placing the result in the appropriate spot in the 24-bit number).

- For 0x4xxxxxx-0x7xxxxxx colors, the bottom two bits of the 4, 5, 6, and 7 can be interpreted as "ignore this part of the dither and treat it as transparent".

I suspect that the reason the 0x4xxxxxx-0x6xxxxxx range isn't documented on ldraw.org is that those colors don't make any sense on modern computers, and we basically don't want them to be used. I'm not sure if 0x7xxxxxx is documented or not. On the surface it seems useless, but I know somebody at some point asked me legitimately if there was any way to have an invisible color, and I told them about that color.

Note that I believe that if you manually enter the integer value from one of the 0x2RRGGBB or 0x3RRGGBB colors into MLCad, it will display it in the right color. (In other words, calculate the big integer value, and then enter that into the color number box, instead of entering R, G, and B.) I could be wrong about this, though.

Thanks for that clear and concise list of color ranges.

Travis Cobbs Wrote:Note that I believe that if you manually enter the integer value from one of the 0x2RRGGBB or 0x3RRGGBB colors into MLCad, it will display it in the right color. (In other words, calculate the big integer value, and then enter that into the color number box, instead of entering R, G, and B.) I could be wrong about this, though.

Yes, it accepts decimal or hex in the color box. It also produces a nice guaranteed access violation if you attempt to edit the definition of an existing custom color (using the "Custom..." button) without first resetting it to a standard color.

Ok, the nice message is there

You can try using DATHeader or LDConfig Manager for composing the color you like.

Both apps let you compose the color you like and give a value that can be used in LDraw files.

Travis Cobbs Wrote:Combining two 12-bit colors together in a 50/50 checkerboard does not give you a 24 bits of color space.

Totally unrelated question. Do any math types out there know how to calculate the actual size of the color space for a 50/50 average of two 12-bit colors, where the results are 24-bit colors? I was curious and wrote a real quick program to come up with the number empirically (by calculating all 16M+ combinations of 12-bit values and counting all the unique results), but the answer I got seemed very strange: 29,791 (which, for what it's worth, is 0x745F in hexadecimal). I can't decide if I messed up my (very simple) program or not. I would have expected some simple equation to give the count, but I also would have expected the result to have some kind of visible pattern in hexadecimal notation, and I don't see one.

For reference (for the programmers out there), here is the program. Store the code below in a .cpp file and compile it, and it should work in just about any semi-modern C++ environment. (Having said that, I've only run it as a Win32 C++ Console Application, built with Visual C++ 2010.)

Code:

```cpp
#include <stdio.h>
#include <set>

int main(int argc, char* argv[])
{
    std::set<int> colors;

    for (int left = 0; left < 4096; ++left)
    {
        for (int right = 0; right < 4096; ++right)
        {
            int r = (((left & 0xF00) >> 8) * 17 + ((right & 0xF00) >> 8) * 17) / 2;
            int g = (((left & 0xF0) >> 4) * 17 + ((right & 0xF0) >> 4) * 17) / 2;
            int b = ((left & 0xF) * 17 + (right & 0xF) * 17) / 2;
            colors.insert((r << 16) | (g << 8) | b);
        }
    }

    int count = (int)colors.size();
    printf("Total: %d (0x%X)\n", count, count);
    return 0;
}
```

Just a quick thought:

4096 colors are possible with 12 bits, and these should be mixed with another 4096 colors.

As both sets of colors are the same, and a mixture 'ab' is visually the same as 'ba', I would calculate 4096 * (4096 / 2).

I understand your calculation, but it results in a value of over 8 million, while my empirically calculated value is only around 29 thousand. That's a pretty big difference.

The thing is, in addition to ab being the same as ba, the components for the average color are calculated as (a + b) / 2, so you get things like this:

(2 + 5) / 2 = (5 + 2) / 2 = (1 + 6) / 2 = (0 + 7) / 2 = 3.5

Note that there are of course quite a few other input values that result in an output value of 3.5. For my calculations, the 3.5 is then multiplied by 17 and the result is rounded down, giving an 8-bit color component of 59.

For reference, in case it's not obvious, the reason for the multiplication by 17 is because 15 is the largest 4-bit number, and 15 * 17 = 255, which is the largest 8-bit number. This works to double the number of bits in any number. For example, to extend a 3-bit number to 6 bits, you multiply it by (2^3) + 1, or 9. (7 * 9 = 63.) To go from 5 bits to 10 bits, multiply by (2^5) + 1, or 33. (31 * 33 = 1023.)

Naive information theory says it should be the same as 24 bit, since you need 12 bits x 2 to represent it. But many combinations of bits produce the same result, which reduces the number of _unique_ pieces of 24-bit space you can sample. So it's actually much smaller.

You can easily show that, for each colour element, each input can take every value from 0 to 2^4 - 1, and thus the sum of the two (ignoring the halving) can take every value from 0 to (2^4 - 1) * 2, i.e. 31 possible values.

Thus, you have a colour space of 31^3 = 29791.

And my maths and your program agree.

Or in other words, by dithering you are disposing of 99.82% of the possible colour values.

If instead you dither across N elements, then you get (1 + N * (2^4 - 1))^3 effective colour values.