LDraw.org Discussion Forums

Full Version: Adding normals to the library
So, as I previously said about normals: no other single file format I've seen leaves normals out to be computed by the user. There's absolutely no reason not to include them; they're an integral part of a geometry definition. Why force every single developer to reinvent the wheel, to spend countless hours on a custom smoothing algorithm? These algorithms are already complicated in general, but more so for LDraw, where definitions are spread across multiple files and one has to take hard and conditional lines into consideration. I'm not a newbie at 3D programming (not a genius either, by all means), and yet I've spent days trying to implement this algorithm and have yet to succeed. Of course I could spend a few more weeks and do it, but why? And even if I did, we'd end up with many different programs, each one with its own algorithm, all slightly different, all buggy in a different way, and with no consistency at all.

Updating the whole library is not an easy task, but I think it can be done in a way that doesn't break compatibility and permits automatic data entry.

First off, I propose adding a new comment/command, with a standard syntax as follows:

0 NORMALS v1x v1y v1z v2x v2y v2z v3x v3y v3z [v4x v4y v4z]

It would be prepended to every quad and triangle, and would carry three or four vertex definitions; each one is the normal at the corresponding vertex of the face, following the usual per-vertex, per-face paradigm.
Being a new comment, it will not cause any problems with existing software. Editors that work with whole parts will have no trouble. Editors that write tris and quads may break the file, but they should not crash or misbehave themselves, and the user should be able to act on the problem. Hard and conditional lines can (and must) stay there, of course.
New software can take advantage of the normals to achieve a uniform look across different renderers, saving developers weeks of head-banging.
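To make this concrete, here is a minimal reading sketch in Python. Only the `0 NORMALS` syntax comes from the proposal above; the function name and error handling are hypothetical:

```python
# Sketch of a reader for the proposed "0 NORMALS" meta-command.
# Hypothetical helper: only the "0 NORMALS ..." syntax comes from the
# proposal; everything else (names, error handling) is an assumption.

def parse_normals(line):
    """Return a list of 3 or 4 (x, y, z) normal tuples, or None if the
    line is not a NORMALS meta-command (so other lines pass through)."""
    fields = line.split()
    if len(fields) < 2 or fields[0] != "0" or fields[1] != "NORMALS":
        return None  # not our meta-command: ignored like any comment
    coords = [float(f) for f in fields[2:]]
    if len(coords) not in (9, 12):  # triangle -> 3 normals, quad -> 4
        raise ValueError("0 NORMALS needs 3 or 4 vectors")
    return [tuple(coords[i:i + 3]) for i in range(0, len(coords), 3)]
```

A renderer would read this comment just before a line-type 3 or 4 line and hand the vectors to the GPU alongside the vertices; software that doesn't know the command still sees a plain comment.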

Now the delicate part: this solution assumes that a face has the same normals regardless of where it appears, or in other words that each subpart/primitive always assumes the same normals. But it may be that the same subpart needs different normals depending on the rest of the part it is attached to. If this is indeed the case, then it follows that it's impossible to add normals to such a subpart, since they can vary.
I'm not sure about this; intuitively, each 3D shape should have its normals regardless of what surrounds it. Much of this may depend on the granularity of the subparts, on how far down they break a solid. I remember that the primitives break down even to 2D figures, which will need to be addressed somehow. In the worst case, the primitives would need to be rethought with normals in mind.

How to populate the library with normals

OK, the problem of course is to inject all the normals into the already existing library. As in the previous post, my solution is to take an existing smoothing algorithm, elevate it to a standard, and apply it to the whole library.

Basically, it would work on a per-part basis. It would load the part and all its subparts into a canonical 3D model data structure, while maintaining a correlation between each face and the file/line that generated it. It would then apply the smoothing algorithm to the part, traverse all the faces, and use the correlation to write the normals back into the source text. Of course, some subparts will already have normals calculated (since they're shared); in this case the subpart can be skipped (or we could take the occasion to check whether they would be calculated the same way).
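As a rough illustration of that write-back step (the names and data shapes are my own, and the smoothing algorithm itself is passed in as a black box), the correlation could be as simple as tagging every face with its source file and line:

```python
# Sketch of the per-part injection pass described above.
# Hypothetical: `smooth` is any smoothing function that returns one list
# of per-vertex normals per face; nothing here is an existing tool.

def inject_normals(faces, smooth):
    """faces: list of (filename, line_no, vertices) tuples, one per
    triangle/quad. Returns {filename: [(line_no, '0 NORMALS ...')]}
    edits to splice in just above each tagged face."""
    normals = smooth([verts for _, _, verts in faces])
    edits = {}
    for (fname, line_no, _), face_normals in zip(faces, normals):
        coords = " ".join("%g %g %g" % n for n in face_normals)
        edits.setdefault(fname, []).append((line_no, "0 NORMALS " + coords))
    return edits
```

The tool would then apply the edits file by file, inserting each `0 NORMALS` comment next to the triangle/quad that produced it.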

Of course, normals are useful only if they're available across the whole part library (if a developer needs to implement a smoothing algorithm for a single part, he may as well do it for the whole thing :) ). This means the library will grow in size, but I hope that is not an issue. Only subparts with actual faces will grow, and most parts should be reusing already-defined chunks, so they will not grow. Eventually some optimizations could be implemented (such as storing a single vector instead of 3/4 when a face has a uniform normal).
I'd love to hear some feedback on this humble proposal.
It certainly makes sense, and it's backwards compatible for sure. One problem: how do we make sure the normals comment remains attached to its triangle/quad? Maybe this could be solved by dumping and regenerating the normals after each edit...

But I don't clearly see how you make it work with joined primitives. An example: http://www.ldraw.org/library/official/p/bump5000.dat, made of 3 cone primitives and currently smoothed by condlines.
Philippe Hurbain Wrote:One problem, how to make sure the normals comment remains attached to its triangle/quad? Maybe this could be solved by dumping and regenerating normals after each edit...

You mean during part design? Well, either the author is aware of normals and works with them, or they could be wiped and recalculated for the whole part each time.

Philippe Hurbain Wrote:But I don't clearly see how you make it work with joined primitives. An example: http://www.ldraw.org/library/official/p/bump5000.dat, made of 3 cone primitives and currently smoothed by condlines.

Here's an example of the case I speculated about above. The problem is not smoothing, as the part as a whole will be smoothed correctly; the problem is that the same cone primitives are used for both cones and spherical surfaces, and need a different set of normals in each case. Sadly, this breaks the assumption I made. The only solution here is to change the primitives with normals in mind, i.e. clarifying when a surface is intended to be flat and when it is an approximation of a smooth surface.
Yeah, we could inline primitives in this case... Could be done automatically when there is a condline next to a surface belonging to a primitive. One step further towards ditching primitives...
Yes, inlining primitives is one way to go, even if I'm not sure that checking for the condline is enough to cover all cases.

About ditching primitives, I'm all for it. I understand their usefulness for some things, like avoiding replicated definitions of studs, pins and other well-defined functional elements (hinges, clips, etc.), but it should end there. I don't see the gain of having all possible combinations of cube faces or portions of cylinders, when you can put the required quads where you need them. Also, I know nothing about part design, but I think it would be easier to define a part as a whole instead of having to look up different cutouts of boxes and hemispheres to adapt (I may be wrong here).

The idea of covering all possible shapes with primitives may have been good at the beginning, but now LEGO bricks come in all kinds of complex shapes (just think of hair pieces or wings), so it's impossible. Much better to have a single file with its faces inside.

The only thing about (curved) primitives is that they let you replace them with high-resolution versions with ease. I don't know whether that is still useful nowadays, or whether we could stick with a single definition.
I have also always wondered whether there is really a use for a lot of the primitives. Mind you, I only very recently got interested in part authoring and the workings behind LDraw; in the previous years I was a user and nothing more than that.

I do understand the logic behind primitives like studs, common joints, etc., because, first off, they're common, and secondly (as far as I know) they are also used for automatic part snapping/connection in software like SR3D Builder or LDCad.

But primitives like boxes and things like that are (in my eyes) not that useful. They may save the author ten minutes or so of work, and if you still want the time saving, you can just inline the primitive.
I find the primitives based on other primitives especially strange... For example, there are primitives for a bunch of stud configurations, like 1x4, 1x6 and 1x8. They literally just contain the stud primitive 4, 6 or 8 times; nothing more than that.

Again, I'm still quite new to all of this. But that might also be useful: I'm maybe looking at it from a different perspective than the veterans here ;)
Quote:Again, I'm still quite new to all of this. But that might also be useful: I'm maybe looking at it from a different perspective than the veterans here ;)
New ideas are always welcome ;)
Let me put this in the terms of a non-3D-graphics professional/programmer like myself.
I still haven't learned about normals beyond knowing that they exist. How hard is this concept? How much extra math overhead would be needed in order to author a part? Is this something easily automated? How hard would it be to back-convert the entire library? My concern is that if part authoring becomes even more cumbersome than it is now, part submissions will plummet even more than they already have.
The things I like about the LDraw format: it's very clean (in its initial state), and its recursive nature makes it very efficient.

So I can't help having mixed feelings about adding normals to the library files, for a couple of reasons:

  1. Not all end-user software needs them (e.g. POV-Ray).
  2. Normals are trivial to calculate when doing flat shading, so adding one normal per triangle/quad is very wasteful.
  3. Adding normals for smooth meshes will break the recursive/reusable nature of LDraw, as you noted yourself.
  4. Smoothing isn't that hard when using modern BFC-ed parts (I agree with your notes on fixing the library itself). And if it's slow or something, software is free to pre-calculate/cache the results.
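Point 2 above is worth spelling out: a flat-shading normal falls straight out of the cross product of two face edges. A minimal sketch (the function name is mine; the vertex order determines the facing direction, as with BFC winding):

```python
# Flat-shading normal of a triangle from its three vertices.
# Illustrative helper, not part of any LDraw tool.

def flat_normal(a, b, c):
    """Unnormalized face normal of triangle (a, b, c); its direction
    follows the winding order of the vertices."""
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])  # edge a -> b
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])  # edge a -> c
    return (u[1] * v[2] - u[2] * v[1],           # cross product u x v
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])
```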

Given the above, I feel it's unfair to burden the part authors (often working with little more than Notepad etc.) with the normal issue, while it's only a couple of days' work to implement a decent smoothing algorithm in software (done once).

Maybe we could improve things for the view/edit software programmers by adding a reference smoothing algorithm to the LDraw documentation in pseudo code.
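As a starting point for such a reference, here is a rough angle-threshold sketch in Python (my own stand-in, not an existing LDraw algorithm: a real reference would also have to honor hard and conditional lines rather than a fixed angle, and would need spatial indexing to be fast):

```python
import math

def face_normal(verts):
    """Unit normal of a triangle/quad from its first three vertices."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = verts[0], verts[1], verts[2]
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    l = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
    return (nx / l, ny / l, nz / l)

def smooth_normals(faces, max_angle_deg=30.0):
    """Per-face, per-vertex normals: average the normals of all faces
    sharing a vertex, but only across faces whose flat normals are
    within max_angle_deg of each other (a crude stand-in for the
    information carried by hard/conditional lines)."""
    cos_limit = math.cos(math.radians(max_angle_deg))
    fnormals = [face_normal(f) for f in faces]
    result = []
    for i, face in enumerate(faces):
        ni = fnormals[i]
        vert_normals = []
        for v in face:
            sx = sy = sz = 0.0
            for j, other in enumerate(faces):
                nj = fnormals[j]
                dot = ni[0] * nj[0] + ni[1] * nj[1] + ni[2] * nj[2]
                if v in other and dot >= cos_limit:
                    sx += nj[0]; sy += nj[1]; sz += nj[2]
            l = math.sqrt(sx * sx + sy * sy + sz * sz) or 1.0
            vert_normals.append((sx / l, sy / l, sz / l))
        result.append(vert_normals)
    return result
```

This is quadratic in the face count and ignores condlines entirely, but it shows the shape of the algorithm the documentation could pin down.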
If you want, I can try to explain briefly what normals are and how they work.

I don't know the part-authoring workflow well, but in a normal 3D application such as Blender you define your mesh first, and then you can edit the normals in many ways. The easiest is to let the program compute them, for example by using a smooth-shading command on the whole object. You can specify an arbitrary angle for "hard" edges, and you end up with something like this:

[Image: Edge_Split_to_improve_Smooth_Shading.png]

This is a one-click solution that works for most cases.
Or you can manually select a number of faces that you want smoothed and fine-tune the result, or specify the edges to smooth around.

So there's no math to do; the normals are usually calculated by the 3D program and then tuned where needed.

For part authoring, I think it can be completely automated, as it effectively is right now. Only, instead of calculating the normals when the file is displayed in LDView, you would calculate them right after authoring the part and store the data in the file.