The adventures of building a web renderer
#1
This thread is for sharing my learnings and war stories from the LDraw web-rendering project buildinginstructions.js.
You can see how it evolves on BrickHub.org.

Here is a current example of how it renders a LEGO model:

[Image: 14.png]

It was not always like this. While there are at least two other web renderers that I know of, I still decided to start this project from scratch. This way I would get practical experience with the technologies involved (WebGL, Three.js, GLSL, etc.) and could focus on performance from the very beginning. While the project itself hopefully ends up being of practical use for many, my goal with this thread is to share my experiences, wins and losses, and perhaps even get some good feedback to help drive the project forward.

The project started in July 2018, with the first breakthrough coming on August 1. Back then I had finished an MVP of the .ldr parser while trying to adhere to the Three.js best practices for building a Loader. By modifying one of the Three.js sample files, I was able to get it to render:

[Image: p4xuzyP.png]

As you can see, there were some massive BFC issues, but that was alright for a start. The important part was to get started and get something, anything really, up and running.
My top 3 takeaways from this early stage are:

- Ignore everything that is not absolutely needed in order to get started. This includes conditional lines, quads, BFC, colors, metadata, viewport clipping, etc, etc. While it is important to do things right, the proof of concept both gave me something tangible and came with a morale boost.

- Three.js and the LDraw file format work well together from the perspective of placing things in 3D. It is obvious that James Jessiman knew what he was doing when designing the specification.

- Depending on your design approach, BFC can be very difficult to get right. There is pseudocode in the spec, but unfortunately it did not fit the data models I had chosen. The pseudocode assumes a single pass that computes both BFC information and triangle winding, while my code handles the BFC computation in a separate initial step that creates reusable components.

That is all for the first post. I will try to keep this thread alive with more war stories.
RE: The adventures of building a web renderer
#2
Geometries vs Buffer Geometries in Three.js

I found it to be a good idea to build the initial POC using geometry objects in Three.js. They offer a very simple interface to get started. However, if you have read any of the "Geometry vs BufferGeometry" threads on StackOverflow, etc., then BufferGeometries should be your choice if you want to render anything but the most basic of scenes.

I used this very "heavy" model to test the limits of my code base:

[Image: tWgxL6T.png]

It was not able to render in either Firefox or Chrome when using "Geometry". With "BufferGeometry" it took 10 minutes to render and 1.2GB of memory!

The rendering time has since then been reduced to 6.6 seconds in Firefox and 8.8 seconds in Chrome. The memory usage has seen an even more dramatic decrease to 8.5MB!

Here are some steps and realizations that led to this performance improvement.


Reducing the Number of Geometries

The first data model which used BufferGeometries had two "geometry" objects for each part: One for the lines and one for the triangles (quads are split into triangles to simplify the data model; besides, WebGL doesn't support quad primitives anymore anyway).

The "Blueberry" from the first post was used to compare performance. For this data model it took 103MB of memory.

[Image: 200x200_14.png]
I changed the model to have geometries on "step"-level: Each step had a line geometry and a triangle geometry for each color of elements added in the step. This change meant nothing to the rendering of building instructions, since all parts of a step were added simultaneously.

The memory was reduced to 70MB!

The overhead of geometry objects is massive! Based on this result I started looking into other ways of reducing memory usage.
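For readers who want to see what the step-level merging looks like in code, here is a minimal sketch using the BufferGeometryUtils helper from the three.js examples. The variable and function names are my own for illustration, and depending on the three.js version the helper is called mergeBufferGeometries or mergeGeometries:

    import * as THREE from 'three';
    import { mergeBufferGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

    // One merged geometry per (step, color) instead of one geometry per part.
    // 'partGeometriesInStepWithColor' is an array of BufferGeometry objects for
    // all parts of a given color added in the step.
    function buildStepMesh(partGeometriesInStepWithColor, materialForColor) {
      const merged = mergeBufferGeometries(partGeometriesInStepWithColor);
      return new THREE.Mesh(merged, materialForColor);
    }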

Naive Indexing

Trawling Google searches for other common ways to optimize Three.js-based modeling led me to the topic of 'indexing'. The idea was simple: Rather than storing points of lines and triangles as triplets of 32 bit floats (x,y,z), the points would be stored in a "static" data structure, with the "dynamic" index being offsets into the static structure in order to identify points. Furthermore, if two points are identical (such as when two triangles share a corner), then the point could be reused.
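As a minimal sketch of the idea (not the project's actual code), this is how an indexed BufferGeometry is set up in three.js; older releases use addAttribute instead of setAttribute:

    import * as THREE from 'three';

    // Two triangles sharing an edge: 4 unique points instead of 6.
    const positions = new Float32Array([
      0, 0, 0,   // point 0
      1, 0, 0,   // point 1
      1, 1, 0,   // point 2
      0, 1, 0    // point 3
    ]);
    const indices = [0, 1, 2,  0, 2, 3]; // both triangles reuse points 0 and 2

    const geometry = new THREE.BufferGeometry();
    geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
    geometry.setIndex(indices);
    const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0xff0000 }));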

The data model had to be changed slightly to accommodate indexing. In the previous data model I used a simple algorithm to detect when lines had common endpoints. This algorithm had to be removed to allow for indexed lines, resulting in a render time of 27 seconds and 270MB of data.

Adding the indexing (but still no sharing of points) resulted in a reduction in render time to 2 seconds, while the memory usage was reduced to 36MB! 

This came as quite a surprise to me. I was effectively using more memory when setting up the data model, because the number of points remained the same while a list of indices was added. The reason for the improved performance can be found in how Three.js handles attributes that are sent to shaders. Optimizing shaders will be the subject of a later post. I will leave you with this for now and continue with sharing of points in the next post.
RE: The adventures of building a web renderer
#3
Not being a software developer there are many things that go over my head, but it's nonetheless an enlightening read!
RE: The adventures of building a web renderer
#4
Hello Lasse,

I must say I'm impressed - can I test your code in my own page? I looked at github but I see no working example nor any instructions. And your pages contain a lot of in-page code in addition to your library.

Regarding the renderer itself: it produces very nice results. The only problem I see is that edges are drawn with 1 pixel wide lines, sometimes disappearing completely. On the other hand, there are big positives, like the rendering of transparent parts, which is very, very good in this renderer category - I mean renderers with (near-to-)immediate response. And, BTW, I see thicker edge lines in the "screenshots" you post here in this thread, so it might be that you know how to force your renderer...
RE: The adventures of building a web renderer
#5
(2018-11-07, 13:52)Philippe Hurbain Wrote: Not being a software developer there are many things that go over my head, but it's nonetheless an enlightening read!

Yeah, I realize the audience for this thread might not be as wide as what we usually see.

You and many others have already helped this project a lot by contributing models to the LDraw all-in-one installer (not to mention the LDraw parts library itself!). I have used models from it to debug the software and make it more resilient. If I only tested it on my own models, I would be making many assumptions which do not always hold.
RE: The adventures of building a web renderer
#6
(2018-11-07, 17:20)Milan Vančura Wrote: Hello Lasse,

I must say I'm impressed - can I test your code in my own page? I looked at github but I see no working example nor any instructions. And your pages contain a lot of in-page code in addition to your library.

Regarding the renderer itself: it produces very nice results. The only problem I see is that edges are drawn with 1 pixel wide lines, sometimes disappearing completely. On the other hand, there are big positives, like the rendering of transparent parts, which is very, very good in this renderer category - I mean renderers with (near-to-)immediate response. And, BTW, I see thicker edge lines in the "screenshots" you post here in this thread, so it might be that you know how to force your renderer...

Hi Milan. Thanks a lot. You should be able to test it by running sample_view.htm in a browser. The browser probably needs security disabled so it can async load files from the local drive. I have added a guide in readme.md.

As for the edge lines, I see different browsers render them differently. All lines should have "1" pixel as width, but when lines intersect triangles, they might appear thinner.

This is how it normally looks when lines and triangles intersect:

[Image: OJK3sks.png]
Notice how the lines at the bottom of the studs on the roof are very thin due to this.

This was remedied by making custom shaders. In particular, the vertex shader for lines moves points a tiny bit toward the camera - it works on most devices I have tested it on, and it is the reason why the lines look so clear in the first screenshot.
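A sketch of the trick, assuming a three.js ShaderMaterial (the bias value and names are illustrative, not the project's exact shader):

    import * as THREE from 'three';

    // Pull line vertices slightly toward the camera in clip space so edge lines
    // win the depth test against the triangles they lie on.
    const lineMaterial = new THREE.ShaderMaterial({
      uniforms: { color: { value: new THREE.Color(0x333333) } },
      vertexShader: `
        void main() {
          vec4 clipPos = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
          clipPos.z -= 0.001 * clipPos.w; // tiny bias toward the camera
          gl_Position = clipPos;
        }
      `,
      fragmentShader: `
        uniform vec3 color;
        void main() {
          gl_FragColor = vec4(color, 1.0);
        }
      `
    });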

Let me just link to that one again, so it is easier to compare:

[Image: 14.png]
The screenshot is made by clicking "VIEW 3D" on this page.

Edit
Simple render example added and README.md updated with guide of how to get started. The new sample is less than 100 lines, so I hope it is easy to get started with it.
RE: The adventures of building a web renderer
#7
Merging points efficiently

From my post regarding indexing, you could see how using 'indexes' could help reduce the amount of points.

As an example, consider a 3D box. It has 8 corners. All lines and triangles use these 8 corners, but a box is constructed from 12 lines and 12 triangles. Each line has 2 points and each triangle has 3. With each point taking 3 numbers, the number of values stored to show a box is:

(12*2 + 12*3)*3 = 180 numbers.

If we store the 8 corner points separately (8*3 = 24 numbers) and simply store offsets/indices, the "*3" from the previous equation can be removed, resulting in:

24 + (12*2 + 12*3) = 84 numbers.

Parts in the LDraw library (especially standard parts) have a lot of common points, so it makes sense to use this trick to save memory, and thereby also rendering time. In our example above we save roughly 50%, so let us take a look at how much we can save in our test model.
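A minimal sketch of the kind of point merging described above (illustrative, not the project's actual code):

    // Map each (x,y,z) triplet to a single index so shared corners are stored once.
    // 'vertices' is a flat array of x,y,z values, three per point.
    function buildIndexed(vertices) {
      const keyToIndex = new Map();
      const positions = []; // unique points, three numbers each
      const indices = [];   // one index per original point
      for (let i = 0; i < vertices.length; i += 3) {
        const key = vertices[i] + '_' + vertices[i + 1] + '_' + vertices[i + 2];
        let idx = keyToIndex.get(key);
        if (idx === undefined) {
          idx = positions.length / 3;
          positions.push(vertices[i], vertices[i + 1], vertices[i + 2]);
          keyToIndex.set(key, idx);
        }
        indices.push(idx);
      }
      return { positions: new Float32Array(positions), indices };
    }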

I would also like to introduce you to an additional test model. This is the very first LDraw model I ever built. It is quite big (3500+ parts) and is good for stress testing:

[Image: 112.png]


Here are the baseline numbers for just showing triangles and not using our trick to combine points. I call the two models 'Psych' (the blue car) and 'Executor':

Psych: Memory usage: 36.3MB. Rendering time: 1.039ms. Number of points: 375.432.
Executor: Memory usage: 862MB. Rendering time: 1.8185ms. Number of points: 11.333.253.

This is what happens when you combine points for the full models:

Psych: Memory usage: 16.6MB. Rendering time: 3.121ms. Number of points: 99.687.
Executor: Memory usage: 313MB. Rendering time: 99.495ms. Number of points: 2.700.145.

That rendering time is completely unacceptable. Here is what happens when points are only combined within the individual parts (not the full model):

Psych: Memory usage: 20.5MB. Rendering time: 1.584ms. Number of points: 100.339.
Executor: Memory usage: 414MB. Rendering time: 17.096ms. Number of points: 2.751.714.



These tradeoffs are much more acceptable. Next up was adding normal lines to the mix.
RE: The adventures of building a web renderer
#8
(2018-11-09, 11:44)Lasse Deleuran Wrote: Psych: Memory usage: 36.3MB. Rendering time: 1.039ms. Number of points: 375.432.
Executor: Memory usage: 862MB. Rendering time: 1.8185ms. Number of points: 11.333.253.

This is what happens when you combine points for the full models:

Psych: Memory usage: 16.6MB. Rendering time: 3.121ms. Number of points: 99.687.
Executor: Memory usage: 313MB. Rendering time: 99.495ms. Number of points: 2.700.145.

That rendering time is completely unacceptable. Here is what happens when points are only combined within the individual parts (not the full model):

I don't understand how it can become slower, or are you counting the preparations too?

In LDCad each part gets prepared for rendering separately and the result is stuffed in a VBO.
Also finding the unique points is not only useful for indexed meshes but also very helpful during smoothing.
RE: The adventures of building a web renderer
#9
(2018-11-09, 19:38)Roland Melkert Wrote:
(2018-11-09, 11:44)Lasse Deleuran Wrote: Psych: Memory usage: 36.3MB. Rendering time: 1.039ms. Number of points: 375.432.
Executor: Memory usage: 862MB. Rendering time: 1.8185ms. Number of points: 11.333.253.

This is what happens when you combine points for the full models:

Psych: Memory usage: 16.6MB. Rendering time: 3.121ms. Number of points: 99.687.
Executor: Memory usage: 313MB. Rendering time: 99.495ms. Number of points: 2.700.145.

That rendering time is completely unacceptable. Here is what happens when points are only combined within the individual parts (not the full model):

I don't understand how it can become slower, or are you counting the preparations too?

In LDCad each part gets prepared for rendering separately and the result is stuffed in a VBO.
Also finding the unique points is not only useful for indexed meshes but also very helpful during smoothing.

Yes. I am counting the full time to sort all points of the model. Sorting 11 million points takes forever in the browser, but I had to see it in practice in order to rule out the approach.

Three.js puts an abstraction layer above what I actually have as individual VBOs. I am planning on reading up on this and trying to take advantage of it, rather than allowing it to be a black box. I can also see that BrickSmith has had huge performance boosts by treating parts as VBOs, as you mention, so the rendering can simply be "draw this VBO/part here, there and there". If I can figure out how to do this as well, then I might get a similarly big performance benefit.
RE: The adventures of building a web renderer
#10
(2018-11-07, 22:54)Lasse Deleuran Wrote: The screenshot is made by clicking "VIEW 3D" on this page.
This is strange, I cannot achieve the same look in my browser, neither in Firefox nor Chromium. Edge lines in your screenshot are thicker and antialiased. In my browser, they are exactly 1px wide with no antialiasing. Sometimes it's hard to even understand what they mean when two edges are too near. See the green person's right arm (the Tan hinge plate).
Edit: You need to click on the image to see it in the original size with no antialiasing made by browser on ldraw.org page.
   
(2018-11-07, 22:54)Lasse Deleuran Wrote: Edit
Simple render example added and README.md updated with guide of how to get started. The new sample is less than 100 lines, so I hope it is easy to get started with it.
Thanks a lot, I put this on my TODO list. Currently, I'm busy with an exhibition model preparation...
RE: The adventures of building a web renderer
#11
(2018-11-13, 16:20)Milan Vančura Wrote:
(2018-11-07, 22:54)Lasse Deleuran Wrote: The screenshot is made by clicking "VIEW 3D" on this page.
This is strange, I cannot achieve the same look in my browser, neither in Firefox nor Chromium. Edge lines in your screenshot are thicker and antialiased. In my browser, they are exactly 1px wide with no antialiasing. Sometimes it's hard to even understand what they mean when two edges are too near. See the green person's right arm (the Tan hinge plate).
Edit: You need to click on the image to see it in the original size with no antialiasing made by browser on ldraw.org page.

(2018-11-07, 22:54)Lasse Deleuran Wrote: Edit
Simple render example added and README.md updated with guide of how to get started. The new sample is less than 100 lines, so I hope it is easy to get started with it.
Thanks a lot, I put this on my TODO list. Currently, I'm busy with an exhibition model preparation...

Thanks for the screenshot. I have been able to recreate it on one of my devices. Anti-aliasing is actually enabled - even in your screenshot. The sampling rate has just bottomed out because of how the canvas size, the CSS size and the physical device pixel ratio can differ. I will try to fix the renderer. It seems to work fine when viewing building instructions - it is only the preview that is currently messy on some devices.

Update on Nov 19, 2018. I have now fixed the issue on my own device. It was caused by improper handling of canvas size vs canvas element size vs size of parent of canvas vs device pixel ratio. It took a lot of tries to get right!
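For anyone fighting the same issue, a sketch of the kind of sizing code involved (this is illustrative, not the exact fix in the project):

    // Keep the drawing buffer in sync with the CSS size of the canvas' parent
    // and the physical device pixel ratio.
    function resizeRenderer(renderer, camera, canvas) {
      const parent = canvas.parentElement;
      const width = parent.clientWidth;   // CSS pixels
      const height = parent.clientHeight;
      renderer.setPixelRatio(window.devicePixelRatio); // physical pixels per CSS pixel
      renderer.setSize(width, height);    // also updates the canvas element's CSS size
      camera.aspect = width / height;
      camera.updateProjectionMatrix();
    }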
RE: The adventures of building a web renderer
#12
The results from my previous post (Merging points efficiently) were only for rendering triangles. I started out by just focusing on getting triangles right because it wasn't a cakewalk. Here is the result of my first attempt at merging points:

[Image: exeWsqW.png]


The issues were fixed and it was time to move on to

Also make the lines indexed

Adding non-indexed lines meant slightly different baselines. Here are the results from rendering with non-indexed lines and non-indexed triangles:

Psych: Memory usage: 36.0MB. Rendering time: 1.961ms.

Executor: Memory usage: 1.015MB. Rendering time: 32.913ms.


By using indexes the results were:

Psych: Memory usage: 22.0MB. Rendering time: 2.381ms.

Executor: Memory usage: 457MB. Rendering time: 27.040ms.



I believe the massive reduction in space usage for the big model offsets the poor results for the small one. Note how we still get an improvement in rendering (and setup) time for the large model. This is for the same reason as pointed out previously.

That was enough of not addressing the elephant in the room. In the next post I will start discussing conditional/optional lines.
RE: The adventures of building a web renderer
#13
(2018-11-22, 17:00)Lasse Deleuran Wrote: The results from my previous post (Merging points efficiently) were only for rendering triangles. I started out by just focusing on getting triangles right because it wasn't a cakewalk. Here is the result of my first attempt at merging points:

[Image: exeWsqW.png]

Don't fix it! Start rendering your models this way for the next 10 years ... meanwhile tweet a render every day from a fake account ... don't reveal your identity ... organize an exhibition (don't forget the champagne). By 2028 Sotheby's or Christie's will auction them for a million :-)

w.
LEGO ergo sum
RE: The adventures of building a web renderer
#14
Bricksmith doesn't just put each part in a VBO - it also builds (on the fly per render) an instance VBO - that is, a VBO full of transform data and color - for each part (e.g. a 2x4 brick), the VBO is drawn once* with an instance buffer describing where and what color each use of that part is.  The entire instance buffer is built once and sent to GPU memory, then each part is drawn once using part of the instance buffer.  This gives Bricksmith both minimal draw calls (one draw call per change of part) and minimal per-render GPU memory overhead (because we write all of the instance data into one big buffer).

The one exception is translucent parts, which are drawn later in Z order from far to near, even if this means changing VBOs per part.
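Bricksmith itself uses raw OpenGL, but for readers following along in three.js, a comparable instanced setup might look like this sketch (InstancedMesh requires a recent three.js release; names are illustrative):

    import * as THREE from 'three';

    // One geometry per part type, one InstancedMesh carrying a transform and a
    // color per placement; a single draw call renders every copy of the part.
    function buildPartInstances(partGeometry, placements /* [{ matrix, color }] */) {
      const mesh = new THREE.InstancedMesh(partGeometry, new THREE.MeshLambertMaterial(), placements.length);
      placements.forEach((p, i) => {
        mesh.setMatrixAt(i, p.matrix); // where this copy of the part goes
        mesh.setColorAt(i, p.color);   // which color this copy has
      });
      mesh.instanceMatrix.needsUpdate = true;
      if (mesh.instanceColor) mesh.instanceColor.needsUpdate = true;
      return mesh;
    }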
RE: The adventures of building a web renderer
#15
There was a helpful post in this thread from someone who had tried the website. Unfortunately I can't see the post anymore, but here are the main areas of improvement as I remember them:
  • Size of PLI's can be too big.
  • Parts, such as cables on motors, do not render correctly
  • I'm sure there was a third point...
I think the post was by the one who was unlucky and uploaded while I was pushing a major update to the site... which broke the "take a snapshot" functionality for a day!

In any case. Thank you very much for testing the code! All your points seemed completely valid when I read them:
  • The size of PLI's was recently changed so they attempt to fill as much as possible while allowing the model to take up at least 55% of screen space. I must add some additional limits so that we don't see a single 1x1 plate being blown up to massive proportions.
  • It is not just cables. It is all kinds of assembled parts which are causing me major headaches. Minifig legs and torsos give similar issues. It has been on my TODO-list for a while because it is not easy to create automation for "merging" these assembled parts into single parts to be shown in PLI's. Right now you can simply have 'pants.dat' submodels in the LDraw file to force the PLI to show assembled pants, but I can't expect users to do this, and it will not work for all the files that people have already made. This needs more time in the thinking box.
RE: The adventures of building a web renderer
#16
Conditional Lines

[Image: 123.png]

From a technical perspective, 'conditional lines', or 'optional lines', seem to cause the most headaches. The concept is rather simple: Optional lines highlight the outline of parts when the standard lines do not suffice. My go-to example is a stud:

[Image: 233.png]

The line on the right side shows one of the two conditional lines which are currently visible in order to highlight the cylindrical section of the stud.

Try it for yourself by going to this page where the cylindrical part of a stud is the subject (And BFC mode is enabled).

Click on one of the 'Optional' icons in order to highlight a particular line and see how it only shows when the blue and purple dots are on the same side of the line (it will appear between the green and orange dots).

Naive implementation

My first shot at conditional lines was on October 7: evaluate all the conditional lines in each draw call.

This is obviously not going to perform well, but it is a good start.

The math behind whether a conditional line should be shown is quite primitive. Consider a line from 'lineStart' to 'lineEnd'. A point 'p' is on the left side of the line if the following is negative:

(lineEnd.x-lineStart.x)*(p.y-lineStart.y) - (lineEnd.y-lineStart.y)*(p.x-lineStart.x)

This function is basic matrix algebra 101, and it is based on screen coordinates, that is, how the 'camera' is seeing things. The following method from the Three.js 3D library can be used to get a point 'p' from 3D to screen coordinates:

p.project(camera)
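Putting the two pieces together, a sketch of the CPU-side test might look like this (illustrative, not the project's actual code; p1 and p2 are the line's end points, p3 and p4 the control points, all THREE.Vector3):

    function sideOfLine(lineStart, lineEnd, p) {
      return (lineEnd.x - lineStart.x) * (p.y - lineStart.y) -
             (lineEnd.y - lineStart.y) * (p.x - lineStart.x);
    }

    function conditionalLineVisible(p1, p2, p3, p4, camera) {
      // project() maps world coordinates to normalized device (screen) coordinates.
      const a = p1.clone().project(camera);
      const b = p2.clone().project(camera);
      const c = p3.clone().project(camera);
      const d = p4.clone().project(camera);
      return sideOfLine(a, b, c) * sideOfLine(a, b, d) > 0; // same side => show the line
    }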

The performance results are predictably rather dire since the function has to be computed for each of the conditional lines:

Psych
- Memory usage: 324MB
- Rendering time 12s
- Conditional lines to evaluate: 33.650

Executor
- Rendering crashes
- Conditional lines to evaluate: 1.181.960


Conditional Line Evaluator

The naive implementation needs to be drastically improved. One idea for improving performance is to do less work for the same results. Consider the standard LDraw 1x2 plate (shown here using the 'harlequin mode' on the parts page):

[Image: 241.png]

It has 3 cylinders: Two for the studs and one for the underside pin. Each cylinder has 16 sides, and thus 16 conditional lines.
The two studs on top are oriented the same way, so whenever a conditional line shows on one of them, the same conditional line will show on the other. The conditional lines of the first stud can thus be used as representatives for both. This is the idea behind the 'conditional line evaluator': You only have to compute the lines for the representatives and copy the results to all others. For the 32 conditional lines of the two studs, we thus only have to run the function for 16 of them.

But we can do better. Whenever a conditional line is shown, the line on the opposite side should be shown as well, so they should have the same representative. We can obtain this by observing that instead of looking at lines, we can see them as vectors, and the vectors to the two 'conditional points' can be reversed without changing the outcome. The vectors are said to be 'normalized' by giving them a preferred direction. There are also some technical details with the ordering of these vectors, but I will spare you the gritty details.

Instead we should take a look at the underside pin.

[Image: 240.png]

The angles involved are the same as for the studs - only the distances are different. Recall that the function is only interested in the fact that points lie on certain sides of lines - not how far from the line. We can thus further normalize the vectors by changing them to unit vectors, and do so for the lines as well. By doing this we reduce the 48 conditional lines of the 1x2 plate to 8 representative conditional lines that need to be handled in each draw call.

The results are now:

Psych
- Memory usage: 167MB
- Rendering time 7.789ms
- Conditional lines to evaluate: 8.011
Executor
- Still crashing
- Conditional lines to evaluate: 144.024

These are not the results I was hoping for. Luckily I found a way to tweak the 'window' of the sweep line algorithm that searches for representatives to obtain a better tradeoff between performance and matches. By changing this and reintroducing indexing (see the previous post), the results were improved to:

Psych
- Memory usage: 86.3MB
- Rendering time 3.795ms
- Conditional lines to evaluate: 8.829
Executor
- Memory usage: 1.219MB
- Rendering time 58.693ms
- Conditional lines 61.401

A small sacrifice in the number of conditional lines to evaluate for the small model was worth the ability to render the large one.

This was with the code changes of October 14.

In the next post I will explain how to delete all of the 7 days of work lying behind this post and obtain even better results.
RE: The adventures of building a web renderer
#17
I promised to delete those 7 days of work with conditional lines. In that code I reduced the 1.181.960 conditional lines to 61.401 to be evaluated in each draw call. How about going the other way and evaluating 1.363.920 in each call?

That is essentially what happened when I moved this code to:

Custom Shaders

The idea is to perform the costly calculations on the GPU instead of the CPU. The control points (and the opposite line point) are pushed to the GPU and the calculation is performed for each of the two end points of each conditional line (hence the doubling of the calculations).

The technical details behind this are to write raw shader materials ('RawShaderMaterial' in three.js) which use custom WebGL shaders written in the shader language GLSL.

Moving the code was not pain free:

[Image: 235.png]

But after fixing the initial rendering problems I was able to get it to work. The main piece of code is in the vertex shader, where the alpha component of the conditional line's color is used to determine if the line should be shown or not:

     vColor.a *= sign(dot(d12, d13)*dot(d12, d14));

Here 'd12' is the vector difference between points 1 and 2 for the conditional line, 'd13' is between points 1 and 3, and so forth. The dot-operator computes the dot product of two vectors and the sign-operator returns '1', '0' or '-1' depending on the sign of the product.
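To give an idea of the surrounding vertex shader, here is a sketch assuming a three.js ShaderMaterial (which supplies projectionMatrix, modelViewMatrix and position automatically). The attribute names are assumptions for illustration, and 'd12' is rotated to the screen-space normal of the line so the quoted dot products act as a same-side test; the project's exact shader may differ:

    const conditionalLineVertexShader = /* glsl */ `
      attribute vec3 otherPoint;    // the other end point of the conditional line
      attribute vec3 controlPoint1; // first control point
      attribute vec3 controlPoint2; // second control point
      uniform vec4 color;
      varying vec4 vColor;

      void main() {
        mat4 m = projectionMatrix * modelViewMatrix;
        vec4 p1 = m * vec4(position, 1.0);
        vec4 p2 = m * vec4(otherPoint, 1.0);
        vec4 p3 = m * vec4(controlPoint1, 1.0);
        vec4 p4 = m * vec4(controlPoint2, 1.0);

        vec2 xy1 = p1.xy / p1.w;              // screen-space positions
        vec2 d12 = normalize(p2.xy / p2.w - xy1);
        d12 = vec2(-d12.y, d12.x);            // normal of the line on screen
        vec2 d13 = p3.xy / p3.w - xy1;
        vec2 d14 = p4.xy / p4.w - xy1;

        vColor = color;
        vColor.a *= sign(dot(d12, d13)*dot(d12, d14)); // same side => alpha stays positive
        gl_Position = p1;
      }
    `;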

The 'fragment shader' can now discard conditional lines that should not be shown:

      if(vColor.a <= 0.001)
         discard;
      gl_FragColor = vColor;

The results on the test models are:

Psych
- Memory usage: 9.2MB
- Rendering time: 2.519ms

Executor
- Memory usage: 15.2MB
- Rendering time: 47.794ms
RE: The adventures of building a web renderer
#18
very nice!
Can you do something about the missing portions in the above render?
To me it looks like this trouble is simply caused by wrong winding / BFC orientation of surfaces,
so that the OpenGL renderer discards them.
Do you correctly parse the "0 BFC CERTIFY CW" vs "0 BFC CERTIFY CCW" statements?
RE: The adventures of building a web renderer
#19
(2019-01-04, 8:24)Steffen Wrote: very nice!
Can you do something about the missing portions in the above render?
To me it looks like this trouble is simply caused by wrong winding / BFC orientation of surfaces,
so that the OpenGL renderer discards them.
Do you correctly parse the "0 BFC CERTIFY CW" vs "0 BFC CERTIFY CCW" statements?

You are absolutely right that the source of the error is BFC-related. The error you see in the screenshot is due to rotation matrices inverting the winding. The solution is to invert the winding whenever the determinant of the rotation matrix is negative. This is a problem other LDraw renderer authors have stumbled into before me, so it was easy to detect and fix.
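For reference, a sketch of the check (illustrative, not the project's actual code):

    import * as THREE from 'three';

    // A sub-file reference with a mirroring transformation has a negative
    // determinant in its 3x3 rotation part, so the triangle winding must be
    // flipped to keep BFC intact.
    function windTriangle(rotation /* THREE.Matrix3 from the type 1 line */, a, b, c) {
      const mirrored = rotation.determinant() < 0;
      return mirrored ? [a, c, b] : [a, b, c]; // swapping two vertices inverts the winding
    }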

Another good source of improvements is BrickSmith where there are some good tips and tricks from the author on the net. One of these tips is regarding drawing transparent polygons after drawing the solid ones.

I think the next area of interest should be VBOs, since BrickSmith sees a big benefit from using them. Hopefully my next post will be with findings on VBOs and Three.js.

Edit. I almost forgot. Moving the transparent triangles to be rendered last wasn't pain free either!

There was some art to be found in the intermediary codebase:

[Image: 236.png]
RE: The adventures of building a web renderer
#20
I have implemented the suggestion by Roland Melkert with VBO's as he wrote above (draw this VBO here, there and there) in order to see if there would be any benefits when doing this with three.js. 

The initial results were some really hairy models:

[Image: JcIPdqs.png]
Joking aside (although that was actually the result after the first code rewrite), using VBOs with three.js boils down to reusing 'Mesh' objects. I have decided to build one mesh for each part in color 16. A model using a certain part in different colors will thus only use (and reuse) a single VBO. The color handling is performed in custom shaders.

If you remember some of my first posts in this thread, you might think "This sounds familiar... isn't this what you did before bringing down the space usage and improving the render time?" and you would be right.

So what has changed? 

Well. Pretty much the whole code base. By using the tricks from all the other posts (such as reusing points and using custom shaders), the memory usage is kept under control. Performance is similarly improved by the following two changes:

Bottom-up Constructions of Parts

I am now building the parts "bottom-up" in order to improve the reusability of points. Previously a part, such as 3001, would be built like this:

Build 3001.dat by first building all line types 2, 3, 4 and 5. Then build each primitive (line type 1), such as stud.dat, the same way.
Finally sort all the points collected for 3001.dat in order to reuse those that overlap.

Now parts are built like this:

First build all primitives without any inner primitives (such as 4-4edge.dat, which only consists of 16 normal lines). Sort the points of these primitives immediately.
Then build all primitives that only depend on primitives that have already been built. The effort spent on sorting is now reduced since the other primitives already have their points sorted.
Continue until all parts are built.

The improvements from this method of building come from having to sort less. There are also fewer points to sort each time anything needs to be sorted, which leads to better detection of overlapping points.
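A sketch of the bottom-up build order (the data layout is made up for illustration; it assumes the dependency graph is acyclic, which holds for LDraw part references):

    // parts: Map from id to { deps: string[], build: () => void }
    function buildBottomUp(parts) {
      const built = new Set();
      let progress = true;
      while (built.size < parts.size && progress) {
        progress = false;
        for (const [id, part] of parts) {
          if (built.has(id) || !part.deps.every(dep => built.has(dep))) {
            continue;
          }
          part.build(); // all dependencies already have their points merged and sorted
          built.add(id);
          progress = true;
        }
      }
    }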


Client Storage

The second big improvement does absolutely nothing for performance... unless you have IndexedDB enabled in the browser. Parts are being stored in the IndexedDB of the client, which allows for much quicker rebuilding of parts. Retrieving parts from IndexedDB takes less than 2ms for all models I have tested with. This technology didn't work for me right away:
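A sketch of how prepared parts can be cached in IndexedDB (the store name and record layout are made up for illustration; the object store is assumed to be created with 'id' as its key path):

    function savePart(db, partID, geometryData) {
      const tx = db.transaction('parts', 'readwrite');
      tx.objectStore('parts').put({ id: partID, data: geometryData });
      return new Promise((resolve, reject) => {
        tx.oncomplete = resolve;
        tx.onerror = () => reject(tx.error);
      });
    }

    function loadPart(db, partID) {
      return new Promise((resolve, reject) => {
        const request = db.transaction('parts', 'readonly').objectStore('parts').get(partID);
        request.onsuccess = () => resolve(request.result ? request.result.data : null);
        request.onerror = () => reject(request.error);
      });
    }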

[Image: l3dBWfV.png]

Performance Improvements

The following two models are added to the test suite for variety:

UCS MF (10179) with 5163 parts

[Image: 201.png]
and this MAN TGS Cement Truck with 1523 parts

[Image: 111.png]

The results for performance are split into two times. The first is for building parts and storing them in IndexedDB. This number is less than 2ms on repeated visits, but is paid on the first visit and by everyone who does not use IndexedDB (such as Safari in private browsing mode):

Psych
- Memory usage: 8.8MB -> 9.7MB
- Rendering time: 3.325ms -> 1.270ms + 1.730ms = 3.000ms
MAN Cement Truck
- Memory usage: 20.2MB -> 18.5MB
- Rendering time: 12.090ms -> 4.619ms + 4.432ms = 9.051ms
Executor
- Memory usage: 16.4MB -> 27.5MB
- Rendering time: 44.922ms -> 4.008ms + 4.024ms = 8.032ms
UCS MF (10179)
- Memory usage: 20.5MB -> 37.8MB
- Rendering time: 134.709ms -> 4.561ms + 5.214ms = 9.775ms

This is a great success for rendering times with only a moderate penalty in memory usage. Remember that these performance measures are from my now 10-year-old mid-range Sony Vaio laptop. A modern phone is quicker than this!
RE: The adventures of building a web renderer
#21
Improving load time for building instructions

The previous posts have all focused on rendering full models. The user experience for browsing building instructions should also be nice. As an example, opening the building instructions on the first page should display the image and BOM for the step with little to no delay.

As an example, here is how BrickHub.org today displays the first step of the mod of 5580. With the contrast set to 'high' in settings, it is very similar to how LEGO originally displayed the instructions back in 1990 (or was it 1986 in some markets?)

[Image: vaeJzwP.png]

Unfortunately the construction of bottom-up geometries causes all parts of the model to be built in the browser before the first step is displayed. Yesterday evening I changed that to only include the parts that are necessary for the construction step that is displayed.

The rendering time for the first step has improved as follows:

Psych
- Cold cache: 2900ms -> 1500ms
- Warm cache: 1700ms -> 1500ms
MAN Cement Truck
- Cold cache: 9500ms -> 3300ms
- Warm cache: 3400ms -> 2900ms
Executor
- Cold cache: 6800ms -> 2700ms
- Warm cache: 3000ms -> 2500ms
UCS MF (10179)
- Cold cache: 7900ms -> 3600ms
- Warm cache: 4300ms -> 3000ms

Here 'cold cache' refers to the very first time the page is loaded into the browser (no cached parts in the browser storage), while 'warm cache' has the parts already stored.

The loading times for 'cold cache' are improved as expected. That the 'warm cache' numbers are also improved is due to not all the parts being fetched for this first step. I mentioned earlier that this process should take at most 2ms in total, which these numbers contradict. However, that was just for retrieving the parts from storage. Additional code is run to prepare those parts for being used in instructions, hence the larger performance improvement observed here.
RE: The adventures of building a web renderer
#22
It has become time to throw all the performance improvements out the window and create a more realistic renderer.

At first I simply copied some sample files to get started... the result is not quite there yet:

[Image: Z5DZUXi.png]

By turning the 'metalness' effect off for the parts that are standard color, you can see that the edges have all become rounded:

[Image: IDarxYs.png]
In order to only have the intended edges rounded, I dug up some old wisdom from this forum: "Only round the edges without hard lines of type 2".

This is the result when doing so:

[Image: VGrGuN3.png]
This makes all the edges soft since all vertices (corners) are also being used by the hard lines!

Second attempt was to use optional lines (line type 5) to indicate that a vertex should appear "soft":

[Image: DVDLxMD.png]
Zooming in on the long sloped piece shows us some failures:

[Image: jcs8GPl.png]
Third attempt was to use conditional lines to mark soft edges instead of soft vertices:

[Image: pvmSvfu.png]
This seems to do the trick.
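In code, the rule boils down to something like this sketch (the data layout is made up for illustration): two triangles sharing an edge only get an averaged normal when a type 5 line covers that edge.

    import * as THREE from 'three';

    function edgeKey(a, b) {
      const k1 = `${a.x}_${a.y}_${a.z}`, k2 = `${b.x}_${b.y}_${b.z}`;
      return k1 < k2 ? k1 + '|' + k2 : k2 + '|' + k1;
    }

    // n1, n2: face normals of the two triangles sharing the edge (a, b).
    // conditionalEdges: Set of edgeKey strings built from the type 5 lines.
    function blendedNormal(n1, n2, a, b, conditionalEdges) {
      if (!conditionalEdges.has(edgeKey(a, b))) {
        return null; // hard edge: keep the flat face normals
      }
      return n1.clone().add(n2).normalize(); // soft edge: average the face normals
    }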

And turning on the environment map helps the chrome fuel cap become better lit:

[Image: fZvNUmL.png]
I am currently satisfied with how edges are shown. Now the work continues with the surfaces, which have to be more realistic. Textures, bump maps, lights, shadows and more to come!
RE: The adventures of building a web renderer
#23
I'm taking a detour from trying to get transparency to render correctly.

I want to be able to import LDraw files as they are exported from Studio 2.0. This is a challenge as texmaps are not done using the standard. Making sure that all standard parts work correctly took some days:

[Image: 543.png]

It is minifigs that are posing a challenge. You can place up to 10 pictures onto a minifig in their Part Designer:

[Image: q5ULlsr.png]

I have to decode which parts of a combined texture should be projected onto which of the 721KB of LDraw data. This is going to take a while:

The LDraw file for the minifig has an unknown subpart (probably for the neck), and a lot of type-3 lines onto which the texture has to be projected. I thus have to compute the TEXMAP commands to ensure proper placement of the textures according to the official specification. Right now there is a long way to go:

[Image: hAluDAx.png]
RE: The adventures of building a web renderer
#24
The road seems indeed long and winding, but that looks promising! I guess that when you've done the viewer it wouldn't be too hard to get a working exporter to a true LDraw file?
RE: The adventures of building a web renderer
#25
(2020-01-05, 18:29)Philippe Hurbain Wrote: The road seems indeed long and winding, but that looks promising! I guess that when you've done the viewer it wouldn't be too hard to get a working exporter to a true LDraw file?
Bingo! With this I can both implicitly make a converter by allowing export to (standard) LDraw .mpd files, and set up a simple conversion page that allows you to upload and download directly. I can just reuse the code from PatternFolder and remove all the folding stuff.
RE: The adventures of building a web renderer
#26
(2020-01-05, 21:36)Lasse Deleuran Wrote: Bingo! With this I can both implicitly make a converter by allowing export to (standard) LDraw .mpd files, and set up a simple conversion page that allows you to upload and download directly. I can just reuse the code from PatternFolder and remove all the folding stuff.
Note that... the opposite converter to convert a regular LDraw file to a file compatible with Studio texturing would be very interesting too. Studio photorealistic renderer remains awesome Angel
RE: The adventures of building a web renderer
#27
(2020-01-05, 21:36)Lasse Deleuran Wrote: Bingo! With this I can both implicitly make a converter by allowing export to (standard) LDraw .mpd files, and set up a simple conversion page that allows you to upload and download directly. I can just reuse the code from PatternFolder and remove all the folding stuff.
Another thing to address in a converter is the parts that are NOT compatible because of different origin/orientation. See for ex. this discussion: https://forums.ldraw.org/thread-23821-post-35258.html#pid35258
RE: The adventures of building a web renderer
#28
(2020-01-06, 6:01)Philippe Hurbain Wrote: Note that... the opposite converter to convert a regular LDraw file to a file compatible with Studio texturing would be very interesting too. Studio photorealistic renderer remains awesome Angel

The opposite converter is online: https://brickhub.org/i/apps/ldraw2studio.htm

It requires that you either have an inlined texture, or reference one that is in the /textures folder. Due to this, I recommend hosting locally, and I have added an option to the README of the project (https://github.com/LasseD/buildinginstructions.js) to help people doing just that.
RE: The adventures of building a web renderer
#29
(2020-01-08, 10:17)Philippe Hurbain Wrote: Another thing to address in a converter is the parts that are NOT compatible because of different origin/orientation. See for ex. this discussion: https://forums.ldraw.org/thread-23821-post-35258.html#pid35258
Oddly enough, it seems like Studio 2.0 now takes care of this. When I created a Studio-compatible parts file without moving the origin, it was still accepted and snapped properly! - See https://brickhub.org/i/554 as an example.
RE: The adventures of building a web renderer
#30
(2020-02-05, 18:28)Lasse Deleuran Wrote: The opposite converter is online: https://brickhub.org/bh/i/ldraw2studio.htm

It requires that you either have an inlined texture, or reference one that is in the /textures folder. Due to this, I recommend hosting locally, and I have added an option to the README of the project (https://github.com/LasseD/buildinginstructions.js) to help people doing just that.

Wait…you mean, no more building all my sticker parts twice just so I can render them? Hooray!!  Big Grin
RE: The adventures of building a web renderer
#31
(2020-02-06, 4:38)N. W. Perry Wrote: Wait…you mean, no more building all my sticker parts twice just so I can render them? Hooray!!  Big Grin
That is the intent.

Please shout if you find any issues while using the tools - any fix should help everyone.
RE: The adventures of building a web renderer
#32
(2020-02-05, 18:28)Lasse Deleuran Wrote: The opposite converter is online: https://brickhub.org/bh/i/ldraw2studio.htm

It requires that you either have an inlined texture, or reference one that is in the /textures folder. Due to this, I recommend hosting locally, and I have added an option to the README of the project (https://github.com/LasseD/buildinginstructions.js) to help people doing just that.

I’ll see what I can do to get this converter online on LDraw.org. Right now textures are broken on the PT due to some issue with my code (probably the path prefetch script) so I’ll have to troubleshoot.
RE: The adventures of building a web renderer
#33
(2020-02-06, 16:15)Orion Pobursky Wrote: I’ll see what I can do to get this converter online on LDraw.org. Right now textures are broken on the PT due to some issue with my code (probably the path prefetch script) so I’ll have to troubleshoot.
I am also considering adding an "upload texture" option to the converter, so that you do not have to rely on inlined textures, known textures on the website, or local hosting.
RE: The adventures of building a web renderer
#34
This might seem a bit silly, but I think it is important to represent colors as accurately as possible. For this reason I have added support for colors with luminance, or, "glow in the dark" as it is better known.

In the physical renderer you can simply turn down the hemisphere light and remove the other light sources:

[Image: 20200617.png]
The effect is achieved by adding an outline pass, similar to the part highlights in the instruction steps. It has a glow parameter which I have adjusted to give the elements just a slight glow, similar to what you see in real life. The parts use StandardMaterial from three.js with metalness=1.0 and enough roughness to hide the metallic effect. The environment map takes care of the part itself being visible when there is no light source.
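As a sketch, the material could be set up along these lines (the color and roughness values are illustrative, not the project's exact settings):

    import * as THREE from 'three';

    function glowInTheDarkMaterial(environmentMap /* pre-loaded THREE.Texture */) {
      return new THREE.MeshStandardMaterial({
        color: 0xd4e8b0,        // assumed glow-in-the-dark green
        metalness: 1.0,         // full metalness as described...
        roughness: 0.75,        // ...with enough roughness to hide the metallic look
        envMap: environmentMap  // keeps the part visible when the lights are turned off
      });
    }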

The instructions and parts views simply use the outline pass, as there are no lights to control there.
RE: The adventures of building a web renderer
#35
(2018-11-07, 13:52)Philippe Hurbain Wrote: Not being a software developer there are many things that go over my head, but it's nonetheless an enlightening read!

Yes it is.
RE: The adventures of building a web renderer
#36
(2020-02-06, 16:06)Lasse Deleuran Wrote: That is the intent.

Please shout if you find any issues while using the tools - any fix should help everyone.

I can't find the reverse converter (404 error)—has it moved?
RE: The adventures of building a web renderer
#37
(2022-05-12, 3:03)N. W. Perry Wrote: I can't find the reverse converter (404 error)—has it moved?

I moved all the converters into the "apps/" folder about 2 years ago. Sorry for the inconvenience.
RE: The adventures of building a web renderer
#38
(2022-05-12, 7:59)Lasse Deleuran Wrote: I moved all the converters into the "apps/" folder about 2 years ago. Sorry for the inconvenience.

Oh, there it is. I did find that before—but how do I execute the htm file? Does it run online or am I supposed to download a bunch of stuff?
RE: The adventures of building a web renderer
#39
(2022-05-12, 14:16)N. W. Perry Wrote: Oh, there it is. I did find that before—but how do I execute the htm file? Does it run online or am I supposed to download a bunch of stuff?

You can either go to the pages hosted on my website. See links on: https://c-mt.dk/ 
Or host locally using the instructions in the README.md file. See the "Hosting locally" section: https://github.com/LasseD/buildinginstructions.js
RE: The adventures of building a web renderer
#40
(2022-05-12, 14:20)Lasse Deleuran Wrote: You can either go to the pages hosted on my website. See links on: https://c-mt.dk/ 
Or host locally using the instructions in the README.md file. See the "Hosting locally" section: https://github.com/LasseD/buildinginstructions.js

Thanks, that's the key I was missing!

Does it work with primitives or only quads/tris? I tried the maxifig faces (685p01 through 685p05) but the Studio file comes up blank.

(Also 685p04 threw an undefined error but I assume that's because it's unofficial?)
RE: The adventures of building a web renderer
#41
(2022-05-12, 18:17)N. W. Perry Wrote: Thanks, that's the key I was missing!

Does it work with primitives or only quads/tris? I tried the maxifig faces (685p01 through 685p05) but the Studio file comes up blank.

(Also 685p04 threw an undefined error but I assume that's because it's unofficial?)

It only works on triangles and quads, since it has to map to triangles, and it doesn't completely split up a sub-model.

685p04 was not on BrickHub, so it threw an error. I have updated the files now, but there is still an issue showing texmaps on sub-part files, which has to get fixed.
RE: The adventures of building a web renderer
#42
(2022-05-12, 20:10)Lasse Deleuran Wrote: It only works on triangles and quads, since it has to map to triangles, and it doesn't completely split up a sub-model.

685p04 was not on BrickHub, so it threw an error. I have updated the files now, but there is still an issue showing texmaps on sub-part files, which has to get fixed.

Ah, okay. I should be able to do it manually in PartDesigner then. Just got to read up on the above thread and get the right size/position conversion. :-)
RE: The adventures of building a web renderer
#43
The work on these online rendering tools continues. It is clear to me that I do not give enough visibility into progress, as most requests come in through emails and Github issues.

I will try to keep you all updated on progress by writing in this thread.

Currently being worked on

1) Thick lines: This should solve rendering issues on several screen types, but might cost some rendering performance.

2) Animations: Support of animations will allow for nice transitions between steps, as well as opening up for play features in the future.

Prioritised Backlog

3) Github issues: Link to Github issues list

4) Minifig Generator (online version of the beloved tool)

5) Scenes: Have default scenes for displaying loaded models similarly to old school LEGO box art

6) X-ray feature showing mechanical parts as cutouts

7) SNOT heat map: Another feature showing the complexity of LEGO models



Please tell me if you have things to add to the list, or if you disagree with the prioritisation.
RE: The adventures of building a web renderer
#44
I'd really like bi.js to be a module. I don't have the knowledge or the time to do it myself. Right now I'm using some work-arounds but integration with the library software would be far easier if it were.
RE: The adventures of building a web renderer
#45
Yes. I saw that in Github, but have not gotten to it yet. It sounds like a good idea to increase its priority. I have moved all the Github issues out and prioritised them:

Currently being worked on

1) Thick lines: This should solve rendering issues on several screen types, but might cost on rendering performance.

2) JS Modules: Github.com issue 60

Prioritised Backlog

3) Animations: Support of animations will allow for nice transitions between steps, as well as opening up for play features in the future.

4) Minifig Generator (online version of the beloved tool)

5) Scenes: Have default scenes for displaying loaded models similarly to old school LEGO box art

6) Handle new Studio Texmaps: Github issues 52 and 62

7) X-ray feature showing mechanical parts as cutouts

8) SNOT heat map: Another feature showing the complexity of LEGO models

9) Improved documentation: Github.com issue 61

10) Buffer Exchange: Github.com issue 34

11) Texmap transparency issue: Github.com issue 57

12) All other Github issues (unless others create pull requests)