LDraw.org Discussion Forums

Full Version: How to get started developing a new LDraw Editor?
Hi, I’m new to the forum! I’m mainly a digital Lego builder, with a soft spot for real space and sci-fi models. I’ve been a long-time dabbler with Lego Digital Designer, using LDraw to format models for instruction manuals, and recently exploring Mecabricks and Blender for rendering. However, since Lego has confirmed that LDD will not be updated to run on macOS Catalina (which is dropping support for 32-bit apps), and I haven’t found any alternative Mac editors that I feel comfortable using, I’m seriously looking at the possibility of creating my own Lego CAD program: a native macOS app, coded in Swift, using the Metal 3D graphics framework.

I’ve never actually coded for the Mac before, so I am not sure how far I will get with this. But I used to play with the Dark Basic programming language and some low-level DirectX 9 commands, so I know the basics of high-level 3D object manipulation, and my day job is partly spent writing C++ programs. Would there be room for another LDraw editor in the community? Is there any interest in having something for the Mac?


I also had a few questions after reading up on the LDraw file format. Please, please don’t take these as criticisms of the library, or think that I am asking or suggesting that anything should be changed; I am only asking to try and get a deeper understanding of how the format works.

Why do you use a right-handed coordinate system, with negative y straight up? The two main conventions are +x to the right, +y upwards, and +z either into or out of the screen (for left or right handed). So why was this one chosen? 

Each brick in the library is constructed from a series of recursive primitives, so what’s the best way to manage a brick in a program? Do you consider the brick as an object with multiple child limbs (with the limb structure preserving the LDraw primitive file hierarchy)? Or is it better to combine the geometry data of all of the primitives into a single 3D mesh? And has anyone ever pursued the LDD approach of hiding studs and anti-studs once they are connected, to reduce the scene poly-count?

None of the bricks contain normal data for the vertices, so how do you go about computing this? I mean, I know it’s a vector cross-product for each triangle, but how do you work out if an edge should be a hard edge (like for sides of a 2x4 brick) or a soft edge (like the rounded surface of a minifig head)? How do you go about searching the vertex data for shared edges?  (And out of historical curiosity, why wasn’t the LDraw library designed to include the normal data? Was the original intention to use flat shading only?)
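
(Just to show what I mean by the cross-product part, here is a minimal Swift/simd sketch of the per-triangle normal I have in mind; purely illustrative, assuming counter-clockwise winding.)

Code:
import simd

// Face normal of a triangle (v0, v1, v2), assuming counter-clockwise winding.
// Illustrative sketch only, not taken from any existing LDraw tool.
func faceNormal(_ v0: SIMD3<Float>, _ v1: SIMD3<Float>, _ v2: SIMD3<Float>) -> SIMD3<Float> {
    simd_normalize(simd_cross(v1 - v0, v2 - v0))
}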

What is the importance of the back-face culling commands in the file format?  Polygons can be rendered as single or double sided, but single sided is the default for performance reasons, and polygons facing away from the camera can be automatically culled by the 3D render engine.  So why does this need to be such a major part of the file format specification? Is this used to help determine which way a normal vector should point when creating a triangle?
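
(Again, just to illustrate what I mean by the render engine doing the culling automatically, this is roughly what it looks like on the Metal side; "encoder" would be the MTLRenderCommandEncoder of the current frame.)

Code:
import Metal

// How the render-engine side of back-face culling typically looks in Metal.
// Illustrative only; the encoder is obtained elsewhere in the frame loop.
func configureCulling(_ encoder: MTLRenderCommandEncoder) {
    encoder.setFrontFacing(.counterClockwise)  // winding that counts as "front"
    encoder.setCullMode(.back)                 // skip polygons facing away from the camera
}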

For the ROTSTEP command, why does it specify three Euler angles for the rotation instead of a quaternion? And what’s the rotation order (x,y,z, z,y,x, …)?  Also, am I correct in assuming that REL and ABS denote use of perspective or orthographic rendering modes?

When defining parts in a model, the file format specifies x,y,z positions then a 3x3 rotation matrix.  Why isn’t a full 4x4 matrix used, so that all position, scale, rotation information could be encoded in one structure?

From looking at the Bricksmith editor, steps seem to have a default rotation of x,y,z = 31,41,21 degrees. Why was this angle chosen? Does the default ever vary from program to program?

Again, please don’t take any of these questions as criticisms of the LDraw format; I really am just trying to understand how the file format has evolved and how it works currently.


Oh, and sorry for such a huge first post, it’s turned out way longer than I expected! :)
Hej, welcome to LDraw!

Your post is a little difficult to answer, because it is a bit like asking:
"Hey, I want to build a car, so why are there 4 wheels, and what about the windshield?"
The main answer is that many things in the file format still are today as they were when they were created by James Jessiman.
We preserved them because changing them fundamentally would immediately break everything that exists,
from software to parts to models to websites (like the Parts Tracker).
For example, he chose to have 3 translation numbers x y z plus a separate rotation matrix in lines of syntax type 1.
It is trivial for a program to read in these numbers and put them into a standard 4x4 matrix, which is widely used
in 3D software. So this difference is just an oddity, not a real problem.
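
For illustration, here is a minimal Swift/simd sketch of that packing, using the x y z and a..i fields of a line type 1 record (the function and parameter names are only placeholders, not from any existing tool):

Code:
import simd

// Pack the 12 numbers of a line type 1 record (x y z a b c d e f g h i)
// into a standard 4x4 transform. simd_float4x4 is column-major, so each
// SIMD4 below is one column of the matrix.
func ldrawPlacement(x: Float, y: Float, z: Float,
                    a: Float, b: Float, c: Float,
                    d: Float, e: Float, f: Float,
                    g: Float, h: Float, i: Float) -> simd_float4x4 {
    simd_float4x4(columns: (
        SIMD4<Float>(a, d, g, 0),
        SIMD4<Float>(b, e, h, 0),
        SIMD4<Float>(c, f, i, 0),
        SIMD4<Float>(x, y, z, 1)
    ))
}

// A point p from the referenced sub-file is then placed with:
//     ldrawPlacement(...) * SIMD4<Float>(p.x, p.y, p.z, 1)
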
The same goes for the definition of the coordinate system: yes, it might be a little
awkward and different from what you find elsewhere, but on the other hand it is mathematically equivalent to any other possible choice.
You just need to apply it consistently, and the same would be true of any other coordinate system choice.
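
For example (just a sketch of one possible convention, not a recommendation): if your renderer wants +y up while staying right-handed, a single fixed conversion matrix applied once at the root of the scene already does the job.

Code:
import simd

// Map LDraw coordinates (right-handed, -y up) into a renderer that expects
// +y up and right-handed: a 180-degree rotation about the x axis.
// Illustrative only; apply it once at the root rather than per part.
let ldrawToYUp = simd_float4x4(columns: (
    SIMD4<Float>(1,  0,  0, 0),
    SIMD4<Float>(0, -1,  0, 0),
    SIMD4<Float>(0,  0, -1, 0),
    SIMD4<Float>(0,  0,  0, 1)
))
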
The BFC elements exist exactly for the purpose you already mentioned, i.e., orienting surfaces consistently so that rendering
software can run faster: surfaces not facing the viewer normally cannot be seen, so the effort of rendering them can be saved.
Of course this does not apply to transparent parts. And so on. A real reply to all your questions would require writing a long article about
each and every LDraw syntax element and how and why it evolved that way.
Also, my reply should not be understood as criticism of your questions; it just explains why answering them is a little difficult.
As a suggestion, I would say that a good starting point could be to accept the existing standards for now and write
software that operates on them. Future extensions and modifications are always possible.
If you want to build a house but start redesigning all nuts and bolts and tools for that, you will probably never finish the house itself.
You need some firm ground to stand on and from which to take off.
I suggest beginning by writing a parser for the LDraw file format. The format is so simple that doing that,
and rendering the loaded data into a 3D scene, should give you good momentum and confidence for anything more that you want to build.
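
Just as an illustration of how small that start can be, here is a rough, untested Swift sketch that only handles the sub-file reference lines (all type and function names are my own invention):

Code:
import simd

// One sub-file reference: "1 <colour> x y z a b c d e f g h i <file>"
struct SubfileReference {
    let colourCode: Int
    let transform: simd_float4x4
    let fileName: String
}

// Parse a single line type 1 record, or return nil for anything else.
func parseType1Line(_ line: String) -> SubfileReference? {
    let tokens = line.split(whereSeparator: { $0.isWhitespace }).map(String.init)
    guard tokens.count >= 15, tokens[0] == "1", let colour = Int(tokens[1]) else { return nil }
    let n = tokens[2...13].compactMap { Float($0) }
    guard n.count == 12 else { return nil }
    // Same column packing as the 4x4 matrix described above.
    let transform = simd_float4x4(columns: (
        SIMD4<Float>(n[3], n[6], n[9],  0),
        SIMD4<Float>(n[4], n[7], n[10], 0),
        SIMD4<Float>(n[5], n[8], n[11], 0),
        SIMD4<Float>(n[0], n[1], n[2],  1)
    ))
    // File names may contain spaces, so join everything after the 14th token.
    let fileName = tokens[14...].joined(separator: " ")
    return SubfileReference(colourCode: colour, transform: transform, fileName: fileName)
}
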
As said, good to see you around here!
(2019-06-15, 12:11) Nathan Readioff Wrote: (...)

1) Each brick in the library is constructed from a series of recursive primitives, so what’s the best way to manage a brick in a program? Do you consider the brick as an object with multiple child limbs (with the limb structure preserving the LDraw primitive file hierarchy)? Or is it better to combine the geometry data of all of the primitives into a single 3D mesh? And has anyone ever pursued the LDD approach of hiding studs and anti-studs once they are connected, to reduce the scene poly-count?

2) None of the bricks contain normal data for the vertices, so how do you go about computing this? I mean, I know it’s a vector cross-product for each triangle, but how do you work out if an edge should be a hard edge (like for sides of a 2x4 brick) or a soft edge (like the rounded surface of a minifig head)? How do you go about searching the vertex data for shared edges?  (And out of historical curiosity, why wasn’t the LDraw library designed to include the normal data? Was the original intention to use flat shading only?)

(...)

3) For the ROTSTEP command, why does it specify three Euler angles for the rotation instead of a quaternion? And what’s the rotation order (x,y,z, z,y,x, …)?  Also, am I correct in assuming that REL and ABS denote use of perspective or orthographic rendering modes?

4) When defining parts in a model, the file format specifies x,y,z positions then a 3x3 rotation matrix.  Why isn’t a full 4x4 matrix used, so that all position, scale, rotation information could be encoded in one structure?

(...)

I have answers and comments for the questions highlighted.

1) In the renderer I have been working on for the past year, I have seen a huge performance gain in rendering time by going from primitives within primitives to fully 'burned' meshes: one for each part in a model. However, primitives like "stud.dat" are used so often that I believe there is an optimal solution to be found somewhere in the middle.
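
To illustrate what I mean by 'burning' (the types below are made up for this example, not taken from my renderer): walk the sub-file references recursively, accumulate the placement transforms, and append every leaf triangle to one flat buffer per part.

Code:
import simd

// Illustrative only: a node holds its own triangles plus placed sub-files.
struct PartNode {
    var triangles: [(SIMD3<Float>, SIMD3<Float>, SIMD3<Float>)]
    var children: [(transform: simd_float4x4, node: PartNode)]
}

// Flatten ("burn") the whole tree into a single triangle list in part space.
// Usage: var tris = [(SIMD3<Float>, SIMD3<Float>, SIMD3<Float>)](); flatten(root, into: &tris)
func flatten(_ node: PartNode,
             _ accumulated: simd_float4x4 = matrix_identity_float4x4,
             into out: inout [(SIMD3<Float>, SIMD3<Float>, SIMD3<Float>)]) {
    func place(_ p: SIMD3<Float>) -> SIMD3<Float> {
        let q = accumulated * SIMD4<Float>(p.x, p.y, p.z, 1)
        return SIMD3<Float>(q.x, q.y, q.z)
    }
    for t in node.triangles {
        out.append((place(t.0), place(t.1), place(t.2)))
    }
    for child in node.children {
        flatten(child.node, accumulated * child.transform, into: &out)
    }
}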

2) Brigl uses the conditional lines as indicators for soft edges. This strategy seems to work quite well: http://www.lugato.net/brigl/


3) You are correct regarding REL and ABS. From the abstract of the MPD file format specification you can get the formulas for going from the x/y/z values in the file line to a matrix. See an implementation of this in my loader, in the function getRotationMatrix(), here: LDRLoader.js

4) A word of caution: I spent 5 days debugging because of an assumption you are making here. If you build a 4x4 matrix from the 3x3 matrix and position vector in LDraw, you will not always get a matrix that is decomposable into a quaternion, scale and position! See 11477.dat for an example. Why someone decided to use such a matrix is beyond me, and this is currently the reason why almost no renderer computes bounding boxes correctly for parts like this.
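
A cheap way to detect such parts up front (just a sketch, assuming the placement has been packed into a column-major simd_float4x4): a negative determinant of the 3x3 part means mirroring, and non-orthogonal columns mean shear; either one breaks a quaternion/scale/position decomposition.

Code:
import simd

// Returns true if the upper-left 3x3 of a placement can be decomposed into
// rotation * positive (possibly non-uniform) scale. Illustrative check only.
func isCleanlyDecomposable(_ m: simd_float4x4, tolerance: Float = 1e-4) -> Bool {
    let r = simd_float3x3(columns: (
        SIMD3<Float>(m.columns.0.x, m.columns.0.y, m.columns.0.z),
        SIMD3<Float>(m.columns.1.x, m.columns.1.y, m.columns.1.z),
        SIMD3<Float>(m.columns.2.x, m.columns.2.y, m.columns.2.z)
    ))
    // Mirrored parts have a negative determinant and no rotation+positive-scale form.
    if r.determinant <= 0 { return false }
    // Shear shows up as non-orthogonal axes.
    let c0 = simd_normalize(r.columns.0)
    let c1 = simd_normalize(r.columns.1)
    let c2 = simd_normalize(r.columns.2)
    return abs(simd_dot(c0, c1)) < tolerance &&
           abs(simd_dot(c0, c2)) < tolerance &&
           abs(simd_dot(c1, c2)) < tolerance
}
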
(2019-06-15, 14:51) Lasse Deleuran Wrote: 2) Brigl uses the conditional lines as indicators for soft edges. This strategy seems to work quite well: http://www.lugato.net/brigl/
I tend to believe that the right behaviour should be: if there is an edge line, treat it as a hard edge; otherwise make it a soft edge.
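
An over-simplified Swift sketch of that rule (all names are made up for the example): collect the endpoints of every line type 2 record into a set of edge keys, and only smooth normals across a shared triangle edge if it is not in that set.

Code:
import simd

// Undirected edge key built from two (quantised) endpoints. Illustrative only;
// quantising makes nearly-equal vertices from different primitives compare equal.
struct EdgeKey: Hashable {
    let a: SIMD3<Int32>
    let b: SIMD3<Int32>

    init(_ p: SIMD3<Float>, _ q: SIMD3<Float>) {
        func quantise(_ v: SIMD3<Float>) -> SIMD3<Int32> {
            SIMD3<Int32>(Int32((v.x * 1000).rounded()),
                         Int32((v.y * 1000).rounded()),
                         Int32((v.z * 1000).rounded()))
        }
        let (qp, qq) = (quantise(p), quantise(q))
        // Order the endpoints so (p, q) and (q, p) produce the same key.
        if (qp.x, qp.y, qp.z) <= (qq.x, qq.y, qq.z) {
            a = qp; b = qq
        } else {
            a = qq; b = qp
        }
    }
}

// hardEdges would be filled with one EdgeKey per line type 2 record in the part.
func shouldSmooth(from p: SIMD3<Float>, to q: SIMD3<Float>, hardEdges: Set<EdgeKey>) -> Bool {
    !hardEdges.contains(EdgeKey(p, q))
}
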
(2019-08-08, 12:01) bokholef fouad Wrote: Why do you use a right-handed coordinate system, with negative y straight up? The two main conventions are +x to the right, +y upwards, and +z either into or out of the screen (for left or right handed). So why was this one chosen?

The man who could answer this question passed away on 25 July 1997.

w.

PS. You're not the first to ask this question, nor will you be the last: https://news.lugnet.com/cad/dev/?n=10499