The adventures of building a web renderer
#1
This thread is for sharing my learnings and war stories from the LDraw web-rendering project buildinginstructions.js.
You can see how it evolves on BrickHub.org.

Here is a current example of how it renders a LEGO model:

[Image: 14.png]

It was not always like this. While there are at least two other web renderers that I know of, I still decided to start this project from scratch. This way I would get practical experience with the technologies involved (WebGL, Three.js, GLSL, etc.) and could keep performance in mind from the very beginning. While the project itself hopefully ends up being of practical use for many, my goal with this thread is to share my experiences, wins and losses, and perhaps even get some good feedback to help drive the project forward.

The project started in July 2018, with the first breakthrough on August 1. Back then I had finished an MVP of the .ldr parser while trying to adhere to the Three.js best practices for building a Loader. By modifying one of the Three.js sample files, I was able to get it to render:

[Image: p4xuzyP.png]

As you can see, there were some massive BFC issues, but that was alright for a start. The important part was to get started and get something, anything really, up and running.
My top 3 takeaways from this early stage are:

- Ignore everything that is not absolutely needed in order to get started. This includes conditional lines, quads, BFC, colors, metadata, viewport clipping, etc, etc. While it is important to do things right, the proof of concept both gave me something tangible and came with a morale boost.

- Three.js and the LDraw file format work well together from the perspective of placing things in 3D. It is obvious that James Jessiman knew what he was doing when designing the specification.

- Depending on your design approach, BFC can be very difficult to get right. There is pseudocode in the spec, but unfortunately it did not fit the data model I had chosen. The pseudocode assumes a single pass that computes BFC information and triangle winding together, while my code handles the BFC computation in a separate initial step that creates reusable components.
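To give a feel for the kind of bookkeeping BFC requires in a separate pass, here is a hypothetical sketch (not the project's actual code): a triangle's effective winding flips whenever the accumulated placement matrix mirrors the geometry (negative determinant), or an odd number of INVERTNEXT statements applies. The function names are illustrative.

```javascript
// Hypothetical sketch of BFC winding bookkeeping, not buildinginstructions.js code.
// m is a row-major 3x3 rotation/scale matrix.
function det3(m) {
  return m[0]*(m[4]*m[8] - m[5]*m[7])
       - m[1]*(m[3]*m[8] - m[5]*m[6])
       + m[2]*(m[3]*m[7] - m[4]*m[6]);
}

// Return the triangle (an array of vertex references) with its winding fixed
// so it stays counter-clockwise after placement by 'matrix', given whether an
// odd number of accumulated INVERTNEXTs applies.
function fixWinding(triangle, matrix, inverted) {
  const flips = (det3(matrix) < 0) !== inverted; // XOR of the two flip sources
  return flips ? [triangle[2], triangle[1], triangle[0]] : triangle;
}
```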

That is all for the first post. I will try to keep this thread alive with more war stories.
RE: The adventures of building a web renderer
#2
Geometries vs Buffer Geometries in Three.js

I found it to be a good idea to build the initial POC using Geometry objects in Three.js. They offer a very simple interface to get started. However, if you have read any of the "Geometry vs BufferGeometry" threads on StackOverflow, etc., then BufferGeometry should be your choice if you want to render anything but the most basic of scenes.

I used this very "heavy" model to test the limits of my code base:

[Image: tWgxL6T.png]

It was not able to render in either Firefox or Chrome when using "Geometry". With "BufferGeometry" it took 10 minutes and 1.2GB of memory to render!

The rendering time has since then been reduced to 6.6 seconds in Firefox and 8.8 seconds in Chrome. The memory usage has seen an even more dramatic decrease to 8.5MB!

Here are some steps and realizations that led to this performance improvement.


Reducing the Number of Geometries

The first data model which used BufferGeometries had two geometry objects for each part: one for the lines and one for the triangles (quads are split into triangles to simplify the data model; besides, WebGL no longer supports quad primitives anyway).

The "Blueberry" from the first post was used to compare performance. For this data model it took 103MB of memory.

[Image: 200x200_14.png]
I changed the model to have geometries on "step"-level: Each step had a line geometry and a triangle geometry for each color of elements added in the step. This change meant nothing to the rendering of building instructions, since all parts of a step were added simultaneously.
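A minimal sketch of this step-and-color grouping, with illustrative names rather than the project's actual data model: all triangles added in the same step with the same color are concatenated into one vertex array, which would then back a single BufferGeometry.

```javascript
// Illustrative sketch: group triangle data per (step, color) so each pair
// becomes one geometry instead of one geometry per part.
function groupByStepAndColor(parts) {
  // parts: [{step, color, triangles: [x0,y0,z0, x1,y1,z1, ...]}, ...]
  const groups = new Map();
  for (const part of parts) {
    const key = part.step + '/' + part.color;
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(...part.triangles);
  }
  // Each value could then feed new THREE.BufferAttribute(new Float32Array(v), 3)
  return groups;
}
```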

The memory was reduced to 70MB!

The overhead of geometry objects is massive! Based on this result I started looking into other ways of reducing memory usage.

Naive Indexing

Trawling Google searches for other common ways to optimize Three.js-based modeling led me to the topic of "indexing". The idea is simple: rather than storing the points of lines and triangles as triplets of 32-bit floats (x,y,z), the points are stored in a "static" data structure, with the "dynamic" index being offsets into the static structure that identify the points. Furthermore, if two points are identical (such as when two triangles share a corner), the points can be reused.
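The naive version of this idea (no point sharing yet) can be sketched as follows; this is an illustrative example, not the project's code, of the kind of position/index pair that THREE.BufferGeometry.setIndex() accepts.

```javascript
// Naive indexing sketch: every vertex keeps its own slot, but triangles now
// reference vertices through an index buffer, as THREE.BufferGeometry expects.
function buildIndexed(triangleSoup) {
  // triangleSoup: flat [x,y,z, x,y,z, ...], three vertices per triangle
  const positions = Float32Array.from(triangleSoup);
  const indices = new Uint32Array(triangleSoup.length / 3);
  for (let i = 0; i < indices.length; i++) indices[i] = i; // 1:1 mapping for now
  return {positions, indices};
}
```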

The data model had to be changed slightly to accommodate indexing. In the previous data model I used a simple algorithm to detect when lines had common endpoints. This algorithm had to be removed to allow for indexed lines, resulting in a render time of 27 seconds and 270MB of data.

Adding the indexing (but still no sharing of points) resulted in a reduction in render time to 2 seconds, while the memory usage was reduced to 36MB! 

This came as quite a surprise to me. I was effectively using more memory when setting up the data model, because the number of points remained the same while a list of indices was added. The reason for the improved performance can be found in how Three.js handles attributes that are sent to shaders. Optimizing shaders will be the subject of a later post. I will leave you with this for now and continue with sharing of points in the next post.
RE: The adventures of building a web renderer
#3
Not being a software developer, many of these things go over my head, but it's nonetheless an enlightening read!
RE: The adventures of building a web renderer
#5
(2018-11-07, 13:52)Philippe Hurbain Wrote: Not being a software developer, many of these things go over my head, but it's nonetheless an enlightening read!

Yeah, I realize the audience for this thread might not be as wide as what we usually see.

You and many others have already helped this project a lot by contributing models to the LDraw all-in-one installer (not to mention the LDraw parts library itself!). I have used models from it to debug the software and make it more resilient. If I only tested it on my own models, I would be making many assumptions which do not always hold.
RE: The adventures of building a web renderer
#4
Hello Lasse,

I must say I'm impressed - can I test your code in my own page? I looked at GitHub but see neither a working example nor any instructions. And your pages contain a lot of in-page code in addition to your library.

Regarding the renderer itself: it produces very nice results. The only problem I see is that edges are drawn with 1-pixel-wide lines, which disappear completely in places. On the other hand, there are big positives, like the transparent part rendering, which is very, very good for this renderer category - I mean renderers with (near-to-)immediate response. And, BTW, I see thicker edge lines in the "screenshots" you post here in this thread, so it might be that you know how to force your renderer...
RE: The adventures of building a web renderer
#6
(2018-11-07, 17:20)Milan Vančura Wrote: Hello Lasse,

I must say I'm impressed - can I test your code in my own page? I looked at GitHub but see neither a working example nor any instructions. And your pages contain a lot of in-page code in addition to your library.

Regarding the renderer itself: it produces very nice results. The only problem I see is that edges are drawn with 1-pixel-wide lines, which disappear completely in places. On the other hand, there are big positives, like the transparent part rendering, which is very, very good for this renderer category - I mean renderers with (near-to-)immediate response. And, BTW, I see thicker edge lines in the "screenshots" you post here in this thread, so it might be that you know how to force your renderer...

Hi Milan. Thanks a lot. You should be able to test it by running sample_view.htm in a browser. The browser probably needs security disabled so it can async load files from the local drive. I have added a guide in readme.md.

As for the edge lines, I see different browsers render them differently. All lines should be 1 pixel wide, but when lines intersect triangles, they might appear thinner.

This is how it normally looks when lines and triangles intersect:

[Image: OJK3sks.png]
Notice how the lines at the bottom of the studs on the roof are very thin due to this.

This was remedied by making custom shaders. In particular, the vertex shader for lines moves each point a tiny bit toward the camera - it works on most devices I have tested it on, and is the reason why the lines look so clear in the first screenshot.
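In the real project this offset happens in GLSL inside the line vertex shader; the math itself can be sketched in plain JavaScript. The epsilon value here is an assumption for illustration - the point is only that each line vertex is moved a small fraction of the way toward the camera so it wins the depth test against coplanar triangles.

```javascript
// Sketch of the depth-bias idea behind the custom line shader: nudge a
// vertex a small fraction toward the camera position. In the actual
// renderer this runs per-vertex in GLSL; epsilon is an illustrative value.
function nudgeTowardCamera(p, camera, epsilon = 0.001) {
  return [
    p[0] + (camera[0] - p[0]) * epsilon,
    p[1] + (camera[1] - p[1]) * epsilon,
    p[2] + (camera[2] - p[2]) * epsilon,
  ];
}
```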

Let me just link to that one again, so it is easier to compare:

[Image: 14.png]
The screenshot is made by clicking "VIEW 3D" on this page.

Edit
A simple render example has been added and README.md updated with a guide on how to get started. The new sample is less than 100 lines, so I hope it is easy to get started with.
RE: The adventures of building a web renderer
#10
(2018-11-07, 22:54)Lasse Deleuran Wrote: The screenshot is made by clicking "VIEW 3D" on this page.
This is strange, I cannot achieve the same look in my browser, neither in Firefox nor in Chromium. Edge lines in your screenshot are thicker and antialiased. In my browser, they are exactly 1px wide with no antialiasing. Sometimes it's hard to even understand what they mean when two edges are too near. See the green person's right arm (the Tan hinge plate).
Edit: You need to click on the image to see it in the original size with no antialiasing made by browser on ldraw.org page.
   
(2018-11-07, 22:54)Lasse Deleuran Wrote: Edit
Simple render example added and README.md updated with guide of how to get started. The new sample is less than 100 lines, so I hope it is easy to get started with it.
Thanks a lot, I put this on my TODO list. Currently, I'm busy with an exhibition model preparation...
RE: The adventures of building a web renderer
#11
(2018-11-13, 16:20)Milan Vančura Wrote:
(2018-11-07, 22:54)Lasse Deleuran Wrote: The screenshot is made by clicking "VIEW 3D" on this page.
This is strange, I cannot achieve the same look in my browser, neither in Firefox nor in Chromium. Edge lines in your screenshot are thicker and antialiased. In my browser, they are exactly 1px wide with no antialiasing. Sometimes it's hard to even understand what they mean when two edges are too near. See the green person's right arm (the Tan hinge plate).
Edit: You need to click on the image to see it in the original size with no antialiasing made by browser on ldraw.org page.

(2018-11-07, 22:54)Lasse Deleuran Wrote: Edit
Simple render example added and README.md updated with guide of how to get started. The new sample is less than 100 lines, so I hope it is easy to get started with it.
Thanks a lot, I put this on my TODO list. Currently, I'm busy with an exhibition model preparation...

Thanks for the screenshot. I have been able to recreate it on one of my devices. Anti-aliasing is actually enabled - even in your screenshot. The sampling rate has just bottomed out because canvas size, CSS size, and physical device pixel ratios can differ. I will try to fix the renderer. It seems to work fine when viewing building instructions - it is only the preview that is currently messy on some devices.
RE: The adventures of building a web renderer
#7
Merging points efficiently

From my post regarding indexing, you could see how using 'indexes' can help reduce the number of stored points.

As an example, consider a 3D box. It has 8 corners. All lines and triangles use these 8 corners, but a box is constructed from 12 lines and 12 triangles. Each line has 2 points and each triangle has 3. With each point taking 3 numbers, the number of numbers stored to show a box is:

(12*2 + 12*3)*3 = 180 numbers.

If we store the 8 corner points separately (8*3 = 24 numbers) and simply store offsets/indices, the "*3" from the previous equation can be removed, resulting in:

24 + (12*2 + 12*3) = 84 numbers.
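The box arithmetic above can be written out directly; this snippet simply restates the two equations as code.

```javascript
// Sanity check of the box example: 12 lines x 2 points plus 12 triangles x 3
// points, each point being 3 floats, versus storing the 8 unique corners once
// and referencing them through indices.
const pointsUsed = 12*2 + 12*3;      // 60 point references
const unindexed  = pointsUsed * 3;   // 180 numbers stored without indexing
const indexed    = 8*3 + pointsUsed; // 24 coordinates + 60 indices = 84 numbers
```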

Parts in the LDraw library (especially standard parts) have a lot of common points, so it makes sense to use this trick to save memory, and thereby also rendering time. In our example above we save roughly 50%, so let us take a look at how much we can save in our test model.

I would also like to introduce you to an additional test model. This is the very first LDraw model I ever built. It is quite big (3500+ parts) and is good for stress testing:

[Image: 112.png]


Here are the baseline numbers for just showing triangles and not using our trick to combine points. I call the two models 'Psych' (the blue car) and 'Executor':

Psych: Memory usage: 36.3MB. Rendering time: 1.039ms. Number of points: 375.432.
Executor: Memory usage: 862MB. Rendering time: 1.8185ms. Number of points: 11.333.253.

This is what happens when you combine points for the full models:

Psych: Memory usage: 16.6MB. Rendering time: 3.121ms. Number of points: 99.687.

Executor: Memory usage: 313MB. Rendering time: 99.495ms. Number of points: 2.700.145.


That rendering time is completely unacceptable. Here is what happens when points are only combined for the individual parts (not the full model):

Psych: Memory usage: 20.5MB. Rendering time: 1.584ms. Number of points: 100.339.

Executor: Memory usage: 414MB. Rendering time: 17.096ms. Number of points: 2.751.714.
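A hedged sketch of what per-part point merging can look like (illustrative code, not the project's implementation): duplicates are found with a string-keyed map, which is cheap for the few hundred points in one part but, as the full-model numbers show, far too slow across millions of points.

```javascript
// Illustrative per-part point merging: deduplicate vertices via a map so
// identical points share one slot in the position array.
function mergePoints(soup) { // soup: flat [x,y,z, x,y,z, ...] per vertex
  const seen = new Map();
  const positions = [];
  const indices = [];
  for (let i = 0; i < soup.length; i += 3) {
    const key = soup[i] + ',' + soup[i+1] + ',' + soup[i+2];
    let idx = seen.get(key);
    if (idx === undefined) {
      idx = positions.length / 3;              // next free vertex slot
      positions.push(soup[i], soup[i+1], soup[i+2]);
      seen.set(key, idx);
    }
    indices.push(idx);                         // reuse slot for duplicates
  }
  return {positions, indices};
}
```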



These tradeoffs are much more acceptable. Next up was adding normal lines to the mix.
RE: The adventures of building a web renderer
#8
(2018-11-09, 11:44)Lasse Deleuran Wrote: Psych: Memory usage: 36.3MB. Rendering time: 1.039ms. Number of points: 375.432.
Executor: Memory usage: 862MB. Rendering time: 1.8185ms. Number of points: 11.333.253.

This is what happens when you combine points for the full models:

Psych: Memory usage: 16.6MB. Rendering time: 3.121ms. Number of points: 99.687.

Executor: Memory usage: 313MB. Rendering time: 99.495ms. Number of points: 2.700.145.


That rendering time is completely unacceptable. Here is what happens when points are only combined for the individual parts (not the full model):

I don't understand how it can become slower. Or are you counting the preparations too?

In LDCad each part gets prepared for rendering separately and the result is stuffed into a VBO.
Also finding the unique points is not only useful for indexed meshes but also very helpful during smoothing.
RE: The adventures of building a web renderer
#9
(2018-11-09, 19:38)Roland Melkert Wrote:
(2018-11-09, 11:44)Lasse Deleuran Wrote: Psych: Memory usage: 36.3MB. Rendering time: 1.039ms. Number of points: 375.432.
Executor: Memory usage: 862MB. Rendering time: 1.8185ms. Number of points: 11.333.253.

This is what happens when you combine points for the full models:

Psych: Memory usage: 16.6MB. Rendering time: 3.121ms. Number of points: 99.687.

Executor: Memory usage: 313MB. Rendering time: 99.495ms. Number of points: 2.700.145.


That rendering time is completely unacceptable. Here is what happens when points are only combined for the individual parts (not the full model):

I don't understand how it can become slower. Or are you counting the preparations too?

In LDCad each part gets prepared for rendering separately and the result is stuffed into a VBO.
Also finding the unique points is not only useful for indexed meshes but also very helpful during smoothing.

Yes, I am counting the full time to sort all points of the model. Sorting 11 million points takes forever in the browser, but I had to see it in practice in order to rule out the approach.

Three.js puts an abstraction layer on top of what I actually have as individual VBOs. I am planning to read up on this and try to take advantage of it, rather than letting it be a black box. I can also see that BrickSmith has had huge performance boosts by turning parts into VBOs, as you mention, so the rendering can simply be "draw this VBO/part here, there and there". If I can figure out how to do this as well, then I might get a similarly big performance benefit.
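The part-as-VBO idea can be sketched abstractly: keep one geometry per part type plus a list of placements, so drawing becomes "bind the part's buffer, then issue one draw per placement". This is a conceptual sketch with illustrative names, not how either LDCad or this project structures its data.

```javascript
// Conceptual sketch of part reuse: one geometry upload per part type,
// many transforms per part. Drawing then iterates the map, binding each
// part's buffer once and drawing it at every recorded placement.
function buildDrawList(placements) {
  // placements: [{part, matrix}, ...]
  const byPart = new Map();
  for (const {part, matrix} of placements) {
    if (!byPart.has(part)) byPart.set(part, []);
    byPart.get(part).push(matrix);
  }
  return byPart;
}
```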