Rethinking LDraw for modern hardware and software
#1
The LDraw file format has held up surprisingly well in spite of its age. I’ve outlined a number of limitations encountered when trying to achieve the best performance and results with modern hardware and applications. Any application that wants to work with and render LDraw files efficiently at scale needs to apply significant processing. A new version of the LDraw format should take this processing into account, because it adds significant runtime cost and is ambiguous and difficult to implement correctly.

I’ve worked on implementing and optimizing code for loading and rendering a number of binary and text 3D model formats in the video game modding scene. These are non-issues with popular formats like glTF or COLLADA and with the custom binary formats used by modern games. I’m not suggesting that we simply adopt something like glTF or replace all the .dat files with .obj, given the unique requirements and optimization opportunities of LEGO models.

I’m curious to see what thoughts people have on what it would take to implement these changes. Some of these points may be more immediately obvious than others. I believe using a human readable format is a worthy goal, but we should also recognize that files are mostly read by machines. We should consider not only the needs of people authoring parts by hand but also the needs of developers. Many usability issues can be resolved with proper libraries and tooling.

Geometry
Modern GPUs are built to render indexed triangle lists. Applications that support quads will need to triangulate them before rendering. This also applies to renderers like Blender. Requiring triangles for LDraw models would simplify application code and also avoid the ambiguities that arise from quads. Future ray tracers will also likely want to work with triangles.
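As a sketch of what the triangulation step looks like, assuming well-formed LDraw quads (convex and planar) and a split along one diagonal:

```python
def triangulate_quad(quad):
    """Split a quad (4 vertex indices) into two triangles.

    Assumes the quad is convex and planar, as well-formed LDraw
    quads are required to be. Splitting along the 0-2 diagonal
    preserves the winding order of the original quad.
    """
    a, b, c, d = quad
    return [(a, b, c), (a, c, d)]
```

Non-planar or concave quads are exactly where the ambiguity comes from: the two possible diagonals produce visibly different results, which a triangles-only format avoids by construction.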

Applications currently need to calculate their own vertex indices. The naive approach is very slow, requiring each vertex to be compared with every other vertex to detect duplicates. Hashing is unreliable due to rounding errors. A distance threshold can be made faster with spatial tree structures, but it still takes significant processing. Many popular model formats already include indices, such as glTF, COLLADA, and Wavefront OBJ. Vertices exported from 3D modeling applications are typically already indexed, so users wouldn’t need to manually specify indices when authoring parts in those programs.
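A minimal sketch of the indexing step, using exact hashing, which (as noted above) only merges vertices that match bit-for-bit and misses positions that differ by rounding error:

```python
def index_vertices(triangles):
    """Build an indexed vertex list from per-face vertex positions.

    'triangles' is a list of 3-tuples of (x, y, z) position tuples.
    Returns (vertices, indices) where 'indices' holds one entry per
    input vertex. Uses exact dict hashing, so only bit-identical
    positions are merged; positions differing by rounding error
    stay separate, which is the reliability problem described above.
    """
    vertices = []  # unique positions, in first-seen order
    indices = []   # one index per input vertex
    seen = {}      # position tuple -> index into 'vertices'
    for tri in triangles:
        for pos in tri:
            if pos not in seen:
                seen[pos] = len(vertices)
                vertices.append(pos)
            indices.append(seen[pos])
    return vertices, indices
```

Two triangles sharing an edge collapse from six input vertices down to four unique ones, which is the reuse that makes indexed rendering cheap.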

Normals
Normals are a major issue with LDraw files. Figuring out the best way to deal with line types is best left to the threads already covering those topics. For GPU rendering, normals are usually defined per vertex instead of per face. LDraw files are unusual in that vertices are not indexed, and smooth normals have to be inferred purely from edge information. 3D modeling applications like Blender typically just have users define edges as “hard” or “smooth” and calculate the vertex indexing for you behind the scenes. Normals can be inferred from the vertex positions and indices, so defining custom normals is only necessary for effects like stylized character shading. LDraw wouldn’t need to explicitly include normals as long as the vertices were indexed correctly and parts were not authored with custom normals in mind.
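A sketch of how smooth normals can be inferred from indexed geometry alone, by accumulating each face's normal into its shared vertices and normalizing; real tooling would additionally split vertices along edges marked as hard:

```python
import math

def smooth_normals(vertices, triangles):
    """Compute per-vertex normals for an indexed triangle list.

    Accumulates the (area-weighted) cross product of each face's
    edge vectors into the face's three vertices, then normalizes.
    Vertices shared by several faces get an averaged, smooth normal;
    splitting vertices along hard edges is left out of this sketch.
    """
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for i0, i1, i2 in triangles:
        a, b, c = vertices[i0], vertices[i1], vertices[i2]
        u = [b[k] - a[k] for k in range(3)]
        v = [c[k] - a[k] for k in range(3)]
        # Unnormalized face normal; its length is twice the triangle
        # area, which weights large faces more heavily.
        face = [u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0]]
        for i in (i0, i1, i2):
            for k in range(3):
                normals[i][k] += face[k]
    for n in normals:
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        for k in range(3):
            n[k] /= length
    return normals
```

Note that this only works once vertices are correctly indexed; run on unindexed LDraw geometry, every face would get isolated vertices and therefore flat shading everywhere.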

Subparts
Indexing won’t work well with the current subpart usage. Reducing the number of objects is also beneficial in 3D modeling applications, since they need to manage a lot of per-object state like unique names, bounding boxes, etc. Subparts also add a lot of small files, which hurts IO performance for file reads as well as for compressing, decompressing, or moving the LDraw parts library. Even newer NVMe SSDs are still much faster when working with fewer, larger files.

I’m not advocating completely removing subparts. Subfile references in parts files can still be helpful if a part is made up of multiple sections like joints or hinges, and there may be other cases where adding another file reference is justified. At some point I’d like to measure how much bigger the parts library would be without subparts, both with and without zip compression.

Binary Formats
I don’t think binary files are worth the loss in readability. The space savings, in my experience, come from using reduced precision for attributes like normals. It’s going to be hard to beat the size of a text file defined in integer LDraw units, and even harder after compression. The advantage of binary formats is reduced processing when loading, which matters a lot for games. For a good implementation example that’s close to what graphics APIs work with, see glTF. It’s worth noting that GPUs prefer attributes interleaved per vertex for cache locality, but applications prefer separate lists for attributes like positions and UV maps for flexibility. We can’t optimize for both in the same format, so the benefits of binary files are mixed.
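To illustrate that layout tradeoff, here is a sketch that packs separate position and normal lists into a single interleaved per-vertex buffer, the shape GPUs tend to prefer; editing tools would usually keep the separate lists and only interleave at export time:

```python
import struct

def interleave(positions, normals):
    """Pack separate position and normal lists into one interleaved
    little-endian float buffer: (x, y, z, nx, ny, nz) per vertex.

    This is the GPU-friendly layout; the separate input lists are
    the application-friendly layout, which is the tradeoff the text
    above describes. 24 bytes per vertex (6 floats).
    """
    buf = bytearray()
    for pos, nrm in zip(positions, normals):
        buf += struct.pack("<6f", *pos, *nrm)
    return bytes(buf)
```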

Level of Detail (LOD)
I don’t think this is an issue on modern hardware for an optimized renderer. A bigger concern is object count, since lots of small objects add a lot of overhead and don’t fully utilize the GPU. Indexing vertices greatly reduces the actual number of vertices processed by the GPU due to vertex reuse. Techniques like frustum and occlusion culling provide significant FPS improvements without requiring any LOD changes. The move towards real-time ray tracing and virtualized geometry (UE5 Nanite) will make polygon counts even less of an issue. Path tracers like Blender Cycles already scale very well with polygon counts and benefit from instancing. Memory isn’t a concern since LDraw models instance the same parts many times.

I still think level of detail could be desirable in some cases like selecting a different stud type for CAD applications than final renders. For very large scenes, it may be beneficial to use reduced geometry for distant objects. I plan to investigate this at some point for the ldr_wgpu renderer I’ve been working on as well as other optimizations.

File Format
I’m intentionally not suggesting any specific format changes in this initial post. I’d like to incorporate my rendering library into a file viewer application. This would allow displaying LDraw files in a custom text format within the application. People could use the application to open and view existing LDraw files in the current format and see what it would look like in a different text format. I may also put together a tool for converting the entire parts library and outputting to a new folder. I would want to compare things like file sizes, file count, performance, and ease of implementation in different languages with a new format before I propose any specific changes to LDraw itself.
RE: Rethinking LDraw for modern hardware and software
#2
(2023-08-05, 16:19)Jonathan N Wrote: The LDraw file format has held up surprisingly well in spite of its age. I’ve outlined a number of limitations when trying to achieve the best performance and results with modern hardware and applications.

I've always seen the library as data only, so no particular rendering target.

That said, I know first hand about the 'pain' some of these issues cause.

Especially the finding identical vertices issue (LDCad still has problems with that in high detail stickers etc).

But I don't see the format changing any time soon.

Especially as most of your issues need single file solutions instead of the highly recursive approach it uses now.

Your own text proves the raw LDraw data can be processed (in bulk) to convert them to something more suitable for modern render hardware.

In other words instead of reworking the entire library, it seems more practical to 'fork' these preprocessed libraries into a new/additional project.

That said I would be open to formalising meta extensions supplying hints to processing software in order to make some of these issues less ambiguous.

my 2cts.
RE: Rethinking LDraw for modern hardware and software
#3
(2023-08-05, 20:38)Roland Melkert Wrote: I've always seen the library as data only, so no particular rendering target.

That said, I know first hand about the 'pain' some of these issues cause.

Especially the finding identical vertices issue (LDCad still has problems with that in high detail stickers etc).

But I don't see the format changing any time soon.

Especially as most of your issues need single file solutions instead of the highly recursive approach it uses now.

Your own text proves the raw LDraw data can be processed (in bulk) to convert them to something more suitable for modern render hardware.

In other words instead of reworking the entire library, it seems more practical to 'fork' these preprocessed libraries into a new/additional project.

That said I would be open to formalising meta extensions supplying hints to processing software in order to make some of these issues less ambiguous.

my 2cts.

LDraw wouldn't need to assume any rendering target, in the way glTF assumes OpenGL conventions. You could inline all the geometry in the .dat files for parts, which would eliminate the need for any commands for winding or inverting. MPD files can still be defined recursively as normal; inlining entire MPD models into a single file would use too much memory and be harder to edit. This wouldn't solve the problems with indexing or normals, which are the main issue. I'm not sure how you would make normals and floating-point vertex indexing less ambiguous without defining a reference implementation and testing it on the entire library.
RE: Rethinking LDraw for modern hardware and software
#4
(2023-08-05, 21:54)Jonathan N Wrote: You could inline all the geometry in the .dat files for parts. This would eliminate the need for any commands for winding or inverting. MPD can still be defined recursively as normal.
I meant single part file vs recursive part files.

(2023-08-05, 21:54)Jonathan N Wrote: I'm not sure how you would make normals and indexing floating point values less ambiguous without defining a reference implementation and testing it on the entire library.
Not sure myself :D

Only thing I can think of is .dat files supplying a 'vertex resolution' value you could use to pull rounding errors together (very high for sticker patterns, very low for studs, etc.).
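A sketch of how such a per-file resolution value could be used, quantizing positions to a grid before hashing. The 'vertex resolution' name is hypothetical, following the suggestion above; note the caveat that two vertices straddling a grid-cell boundary can still land in different cells:

```python
def weld_key(position, resolution):
    """Quantize a position to a grid of the given resolution so that
    nearly identical vertices hash to the same dict key.

    'resolution' is the hypothetical per-file value suggested above:
    small for high-detail sticker patterns, large for studs. Two
    vertices just either side of a cell boundary can still get
    different keys, so this is a heuristic, not a guarantee.
    """
    return tuple(round(c / resolution) for c in position)
```

Vertices can then be indexed with an ordinary dict keyed on `weld_key(pos, resolution)` instead of the raw floating-point tuple.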
RE: Rethinking LDraw for modern hardware and software
#5
From my own point of view, with the little experience I have working with LDraw as a software developer, I can say that mostly I don't see many problems with the current approach.

On the point about quads: with the way the spec allows using them, not much ambiguity can really arise.

About normals: mostly they can be easily calculated using the edge lines, using them either to separate normals (normal edge line) or to merge them (condline). One thing that really bothers me, though, is how heavily LDraw relies on geometry colors for making printed parts. It isn't a problem for flat parts, or when you aren't using any lighting aside from ambient light, but in pretty much all other cases it produces very ugly calculated normals, so the way parts like minifigure heads are done should preferably be rethought.

Marbled parts go in the same bin, by the way. For them, I think textures could be applied by defining a META command that uses a black/white texture as a map for where to put the color being mixed in.
RE: Rethinking LDraw for modern hardware and software
#6
(2023-10-15, 13:15)Max Murtazin Wrote: One thing that really bothers me, though, is how heavily LDraw relies on geometry colors for making printed parts. It isn't a problem for flat parts, or when you aren't using any lighting aside from ambient light, but in pretty much all other cases it produces very ugly calculated normals, so the way parts like minifigure heads are done should preferably be rethought.

I very much agree with that. While I appreciate the preference for geometric rather than textured patterns (vectorization and all that), this quality is rather compromised by the difficulty of merging curved and flat patterned geometries.

My envisioned solution, and I don't know how feasible this is from a processing standpoint of course, is to have a way to designate any given face as either curved or flat (flat being the fallback, obviously). Maybe that isn't specific enough, and you'd have to indicate the exact geometry to map the surface onto (like a cylinder for minifig heads). But the idea is just to "force" patterned tris and quads to assume the curvature of a larger whole.
RE: Rethinking LDraw for modern hardware and software
#7
(2023-10-15, 13:15)Max Murtazin Wrote: by defining a META command that uses a black/white texture as a map for where to put the color being mixed in

I think I mentioned it before but I'm very much in favor of only using geometry based patterns on flat surfaces. Favoring the use of textures for curved ones. 

And allowing PNGs to use LDraw colors as their palette (including #16) would solve the one downside of this.
RE: Rethinking LDraw for modern hardware and software
#8
(2023-10-16, 7:06)Roland Melkert Wrote: I think I mentioned it before but I'm very much in favor of only using geometry based patterns on flat surfaces. Favoring the use of textures for curved ones.

And allowing PNGs to use LDraw colors as their palette (including #16) would solve the one downside of this.
One thing not currently addressed by texmap is printed metallic colors. I'm also not sure it works well for blended opaque+transparent parts.
RE: Rethinking LDraw for modern hardware and software
#9
(2023-10-16, 11:51)Philippe Hurbain Wrote: One thing not currently addressed by texmap is printed metallic colors.

This should be handled by GLOSSMAP, but no existing LDraw-compliant software supports GLOSSMAP.
RE: Rethinking LDraw for modern hardware and software
#10
(2023-10-16, 7:06)Roland Melkert Wrote: Allowing PNGs to use LDraw colors as their palette (including #16) would solve the one downside of this.

This isn't disallowed but could be explicitly encouraged. Not sure how you'd do color 16 though.
RE: Rethinking LDraw for modern hardware and software
#11
(2023-10-16, 15:51)Orion Pobursky Wrote: This isn't disallowed but could be explicitly encouraged. Not sure how you'd do color 16 though.

The shader can 'paint' the pixels using the current LDraw colour table when using a PNG with a palette.

The PNG's internal palette would only be a placeholder for the RGBs at the time of its generation.

Only problem with this is the high-numbered colours.

An alternative would be to use the RGB PNG values as if they were LDraw colour numbers, but it would look very weird when interpreted as normal RGB values :)
RE: Rethinking LDraw for modern hardware and software
#12
(2023-10-16, 7:06)Roland Melkert Wrote: I think I mentioned it before but I'm very much in favor of only using geometry based patterns on flat surfaces. Favoring the use of textures for curved ones.

And allowing PNGs to use LDraw colors as their palette (including #16) would solve the one downside of this.

How would you handle smoothing the borders, though? They would be really jagged if we used the palette.

Also, a benefit of using a texture map that the current geometry-color approach doesn't allow, and neither would the palette, is gradient pattern borders. A lot of dual-injected parts have colors blending into each other, which can be properly displayed if a black/white image texture is used to map the colors (black keeping colour 16, white painting over with the new color).
RE: Rethinking LDraw for modern hardware and software
#13
(2023-10-16, 19:37)Max Murtazin Wrote: How would you handle smoothing the borders, though? They would be really jagged if we used the palette.

Also, a benefit of using a texture map that the current geometry-color approach doesn't allow, and neither would the palette, is gradient pattern borders. A lot of dual-injected parts have colors blending into each other, which can be properly displayed if a black/white image texture is used to map the colors (black keeping colour 16, white painting over with the new color).

Interesting point.

The texture meta does allow for additional textures, but currently we have only documented GLOSSMAP (not really used/supported by any software at the moment, as far as I know).

We could consider extending that with additional types.
RE: Rethinking LDraw for modern hardware and software
#14
(2023-10-15, 13:15)Max Murtazin Wrote: From my own point of view, with the little experience I have working with LDraw as a software developer, I can say that mostly I don't see many problems with the current approach.

On the point about quads: with the way the spec allows using them, not much ambiguity can really arise.

About normals: mostly they can be easily calculated using the edge lines, using them either to separate normals (normal edge line) or to merge them (condline). One thing that really bothers me, though, is how heavily LDraw relies on geometry colors for making printed parts. It isn't a problem for flat parts, or when you aren't using any lighting aside from ambient light, but in pretty much all other cases it produces very ugly calculated normals, so the way parts like minifigure heads are done should preferably be rethought.

Marbled parts go in the same bin, by the way. For them, I think textures could be applied by defining a META command that uses a black/white texture as a map for where to put the color being mixed in.

There are applications that define normals by marking edges as sharp or smooth. It's important to only use a single attribute for smooth vs sharp edges to avoid ambiguities where an edge is both sharp and smooth or unspecified. This still doesn't solve the issue of indexing the vertices.

In order to average face normals across edges, you need to know which faces are adjacent. Comparing vertices currently requires floating-point comparisons that may not be exact. This is especially problematic on patterned parts. Calculating normals given the correct adjacency information isn't hard, as you mentioned. It becomes difficult when you have neither normals nor vertex adjacency information.

It's possible to use textures as an integer "ID map" and resolve the texture colors in a shader. 32 bits per pixel is more than enough to encode all current and future LDraw colors. This would require writing custom shaders and using the appropriate texture fetch functions to avoid any filtering or unwanted conversions to floating point. If resolution is an issue, the source ID maps could be defined as SVG and rendered to some desired resolution.
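As a CPU-side sketch of what resolving such an ID map would look like (the function and parameter names here are illustrative, not part of any spec), treating code 16 as the inherited colour:

```python
def resolve_id_map(id_pixels, colour_table, current_colour):
    """Resolve an integer ID-map texture to RGBA pixels.

    'id_pixels' is a flat list of LDraw colour codes, one per texel;
    'colour_table' maps colour codes to RGBA tuples; code 16 resolves
    to 'current_colour', the colour inherited from the referencing
    file. A shader would do the same lookup per fragment using
    unfiltered integer texture fetches.
    """
    return [current_colour if code == 16 else colour_table[code]
            for code in id_pixels]
```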

Another approach is to use the RGBA color channels as masks or weights for up to 4 defined LDraw colors. More than 4 colors would require additional RGBA mask textures. Separate grayscale mask textures for each color would also work. This assumes the mask values add up to 1.0 for all pixels.
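A sketch of that mask-weight blend for a single pixel, assuming the weights sum to 1.0 as described (names are illustrative):

```python
def blend_masked(mask_rgba, colours):
    """Blend up to four colours using an RGBA mask pixel as weights.

    'mask_rgba' holds four weights in 0.0..1.0 (assumed to sum to
    1.0, per the text above); 'colours' is a list of four RGB tuples,
    one per mask channel. Returns the weighted RGB result.
    """
    out = [0.0, 0.0, 0.0]
    for weight, colour in zip(mask_rgba, colours):
        for k in range(3):
            out[k] += weight * colour[k]
    return tuple(out)
```

This kind of blend also naturally handles the gradient borders on dual-injected parts mentioned earlier, since intermediate weights produce intermediate colours.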

The easiest option for applications is for patterned parts to use UV coordinates and RGBA images for the colors. If parts have UV coordinates defined, it's possible to bake vertex colors to a texture. Thankfully, both the ID map and the mask textures can be easily preprocessed by applications into regular RGB textures for easier rendering. Baking vertex colors or calculating UV coordinates at runtime would be nontrivial and potentially error-prone.