T-Junctions re-visited
#1
The recent activity around automatic calculation of surface normals to produce smooth rendering in other programs (similar to what LDView has done for a long time, but via a different, better algorithm) led me to investigate places where LDView's current smoothing fails, and that in turn led to this post: a revisit of the subject of T-Junctions.

The point is that, while I never thought about it before, T-Junctions prevent smooth shading from working. If you don't know what T-Junctions are, please see this article. I just updated that article to point out that T-Junctions on curved surfaces prevent smoothing algorithms from working on those surfaces.

Since, as far as I know, this has never been mentioned before, I don't think parts authors in the past have had any particular impetus to go to the significant extra work needed to avoid T-Junctions on curved surfaces. However, given that they prevent just about any reasonable smoothing algorithm from functioning, I think they should be avoided more strongly.

If someone can come up with a simple smoothing algorithm that works with T-Junctions, I will gladly stand corrected, but any such algorithm would also require per-pixel lighting in order to work with T-Junctions, and I think even with per-pixel lighting, the algorithm would end up being downright nasty.

The only alternative I can think of is to detect the T-Junctions during file loading, and automatically split the geometry involved. That doesn't seem very easy either, though. Does anyone know of a good, fast algorithm to do this? If so, we could potentially recommend that in all LDraw renderers, and remove the recommendation against T-Junctions in parts entirely.
Re: T-Junctions re-visited
#2
Hi Travis,

I'm not sure what the hardest stage is, but if one computes all valid edges (on triangles or quads), then one can efficiently check if points lie within lines as follows:
Code:
# O (N_Edges N_Points)
for I,J in Edges
  for K in Points
    DIJ=Dist(P(I),P(J))
    DIK=Dist(P(I),P(K))
    DJK=Dist(P(J),P(K))
    if (abs(DIJ-(DIK+DJK))<TOL)
      # K is on the line between I & J
      PushLineTrio(I,J;K)
    endif
  endfor
endfor

After that, one needs to split all tris and quads that have a point K in the line trios.
Code:
# O(N_LineTrios log(N_LineTrios))
SortLineTriosBy(I,J)
# O(N_LineTrios)
foreach I,J in LineTrios # Count on I,J first to split along a line
  if (NumberLineTriosWith(I,J) is 1)
    SplitIJAtKTri/Quad # For quads there are two options here, perhaps split into three tris here rather than trying to choose
  else
    SplitIJAtMultipleKTri/Quad # Hard
  endif
endfor

Code:
routine SplitIJatKTri(I,J;K)
foreach Tri with I,J as an edge # This is slow but won't be called very often
  NewTri(I,K,Tri.L)
  NewTri(J,K,Tri.L)
  DeleteTri(I,J,Tri.L)
endfor

routine SplitIJatMultipleKTri(I,J;KS)
foreach Tri with I,J as an edge # This is slow but won't be called very often
  NewTri(I,KS(1),Tri.L)
  for IK in 1..(NKS-1)
    NewTri(KS(IK),KS(IK+1),Tri.L)
  endfor
  NewTri(KS(NKS),J,Tri.L)
  DeleteTri(I,J,Tri.L)
endfor

routine SplitIJatKQuad(I,J;K)
# Quad has I,J,L,M going around
foreach Quad with I,J as an edge # This is slow but won't be called very often
  NewTri(I,K,Quad.M)
  NewTri(J,K,Quad.L)
  NewTri(K,Quad.L,Quad.M)
  DeleteQuad(I,J,Quad.L,Quad.M)
endfor

One thing this may not deal with well is quads with splits on multiple edges. That could perhaps be checked for if the results from the above turn out badly.
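
For concreteness, here's a rough runnable Python translation of the detection loop and the triangle split (an untested sketch; TOL, the containers, and the (I, J, L) winding of the input triangles are assumptions):
Code:
# O(N_Edges * N_Points) brute force, matching the pseudocode above;
# a spatial index over the points would cut this down considerably.
TOL = 1e-3  # absolute tolerance in LDU (an assumed value)

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def find_line_trios(edges, points):
    """Return (I, J, K) trios where point K lies on edge (I, J)."""
    trios = []
    for i, j in edges:
        dij = dist(points[i], points[j])
        for k in range(len(points)):
            if k in (i, j):
                continue
            dik = dist(points[i], points[k])
            djk = dist(points[j], points[k])
            if abs(dij - (dik + djk)) < TOL:
                trios.append((i, j, k))
    return trios

def split_tri(tri, i, j, ks, points):
    """Replace triangle tri = (i, j, l) by a fan along edge (i, j),
    split at the interior points ks; keeps the (i, j, l) winding."""
    l = next(v for v in tri if v not in (i, j))
    chain = [i] + sorted(ks, key=lambda k: dist(points[i], points[k])) + [j]
    return [(chain[n], chain[n + 1], l) for n in range(len(chain) - 1)]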

Tim
Re: T-Junctions re-visited
#3
Hi Guys,

If I may jump on the "T junctions are evil" bandwagon:

- If you have a smooth surface, you probably don't have lines covering the edges. But with a T junction, the GPU may leave a 'crack' in the texturing of the surface. So T junctions are a general modeling no-no for this reason even when smoothing isn't considered.

- In order for the lighting to match at the T point, you have to ensure that the calculated normal of the vertex touching the triangle midpoint is the same interpolated normal from the two edges. This pretty much goes against what a smoothing algorithm is trying to do (calculate a normal based on the incident faces).

Even if we can band-aid around T junctions for smoothing, I do not think we should: there exists a way to author the part to fix this (subdivide the 'long' triangle or quad into two triangles or quads at the T point) that will result in un-cracked, smoothly shaded meshes.

I think it would be more productive for developers to write tools to detect and/or repair parts that have T junctions than to code around them in smoothing code; in the latter case we endure the development cost for every LDraw app going forward, the performance cost of the more complex algorithm, and the lighting artifacts of having to pick weird normals at every T junction to avoid discontinuities.

I apologize for sounding like a broken record/big fat jerk, but... :-)

...the rules for mesh construction for 3-d graphics and games are already solved for the rest of the 3-d modeling world...we should match the graphics industry's rules for "water tight" meshes; the result will be better visuals and better performance.

Cheers
Ben
Re: T-Junctions re-visited
#4
T-junctions are a big party spoiler indeed. Tim's detection code could probably be made fast enough for small parts, but on the bigger ones it could take ages, and on most parts it isn't even needed.

Instead of brute-forcing everything, we should only apply this to problem parts like the minifig heads (by means of a meta hint). This would keep loading times acceptable even on lesser hardware.

I posted an example of using a meta in this message:

http://forums.ldraw.org/showthread.php?tid=8597&pid=8597#pid8597

The same meta could be used to indicate the need for t-junction removal.
Re: T-Junctions re-visited
#5
I agree, but there is the small issue of editing over 5000 parts :-)

So 'simple' real-time fixing, instead of waiting for the library to catch up, is the preferred way of doing things at the moment. We can always get rid of the real-time correcting once the library does catch up.
Re: T-Junctions re-visited
#6
Hi Roland,

I seem to keep running into "but we can't edit 5000 parts". :-(

My concern about simple real-time fixing is that having these algorithms alive in code is a very different proposition from having their results saved to a file. If we decide the algorithm does a bad job, revising _all_ apps is nearly impossible. But re-processing library files is possible.

The library is big, but it's not homogeneous: some parts are way more important than others in terms of frequency of use, and some won't even need processing.

Are there specific authoring tools that I could write that would make the parts library more accessible? For example, we could write code to detect T junctions, or even code to detect T junctions that probably _should_ have been smooth shaded, or even code that only subdivides those T junctions that affect smooth shading. (The goal with this is minimal editing of the parts while solving the maximum smoothing problems.)

I am not afraid to write big complex slow algo code to solve these things -- I just hate to see it in BrickSmith's part loader. :-(

cheers
Ben
Re: T-Junctions re-visited
#7
The thing is, large changes have to go through the reviewing process, which is currently a bottleneck even for new parts.

Chris Dee is currently the only one who can 'sneak' stuff through without re-reviewing, so maybe if you write a very smart t-junction removal script he could apply it to the whole library and roll it out in the next release if he finds the results acceptable.
Re: T-Junctions re-visited
#8
Hi Roland,

That is sort of my idea - apply the review and quality control to the tool - carefully check some of its output, design it to create sane output, and then use it to batch-apply to parts. I feel bad asking part authors to manually do such tedious work!

cheers
ben
Re: T-Junctions re-visited
#9
The "have to go through the reviewing process" bit is a purely artificial requirement. It exists because some people are afraid that subverting a very complicated, very slow review process will somehow damage the quality of the part library. In individual instances, I'm sure the concern is well-founded. But on the whole, I feel the policy has demonstrably had the exact opposite effect. Simple problems which could have been algorithmically corrected remain enshrined (bow-tie quads, for example). Authors create parts, then don't even bother submitting them to the Part Tracker because it's a waste of their time. Contributors fade away. New ideas are almost invariably dead on arrival.

I would like to see, at the very least, an expansion of admin power on the part tracker and a recognition that some changes can be made in bulk without individual review of every single line.

Reflexively shooting down an idea like this is just unhealthy. We should be fixing this administrative problem before talking about software client workarounds.

Allen
Re: T-Junctions re-visited
#10
I like this idea very much. If we had code that could fix T-junctions, I would be very interested in integrating it into DATHeader.

Regarding the bulk change of files in the library (I like that idea), I think we need to address this change in the !History line due to our license.
Re: T-Junctions re-visited
#11
Allen Smith Wrote:Reflexively shooting down an idea like this is just unhealthy. We should be fixing this administrative problem before talking about software client workarounds.

I'm not against mass changes, I'm just stating the current policy.

We are both in the LSC; can't we start something to improve this? Although technically it has nothing to do with the file format, so it might be a library-admin-only thing; I'm not sure on that.

But in short: if it's so 'hated', why don't we change the policy?
Re: T-Junctions re-visited
#12
Ben Supnik Wrote:That is sort of my idea - apply the review and quality control to the tool - carefully check some of its output, design it to create sane output, and then use it to batch-apply to parts. I feel bad asking part authors to manually do such tedious work!

I'm not sure we could get rid of all t-junctions even by processing the whole library. Some of them might be the result of recursive primitive use; we can't correct those without adding (similar-looking) primitives or inlining the modified primitive (which is messy).

Also, while trying a rough implementation of Tim's algorithm, I noticed that unique-point detection becomes very important, so mass-correcting for that might be a required first step.
Re: T-Junctions re-visited
#13
Hi Michael,

What format(s) of code would be most useful for integrating such a thing into the existing tool chain?

- Cross-platform? Mac, Windows, Linux? Std-C? Command line?
- A standalone tool that can be run by a script?
- C or C++ or some other language?

Is there already code out there that is (1) cross-platform, (2) suitable for batch processing, and (3) does LDraw round-trip file parsing and/or geometric analysis?

EDIT: can you point me to some kind of project page for LDTools? Is the source available and/or is the API documented?

cheers
Ben
Re: T-Junctions re-visited
#14
We can't really write a tool that gets rid of T-Junctions that span multiple sub-files, but someone could presumably write a tool that gets rid of T-Junctions in all individual files. That could leave T-Junctions around the edges where minifig face sub-files meet the minifig head parent file, but such a tool would likely still have a significant positive impact on the library as a whole.
Re: T-Junctions re-visited
#15
A fast text-centric scripting language would probably be best, especially one that allows dropping down to a lower-level compiled language for the tricky bits. You probably don't want perl doing those loops!

Tim
Re: T-Junctions re-visited
#16
Hi Ben,
many questions, and I can only answer a very few :-(

Personally I am coding in VisualBasic .NET, so it is by far not cross-platform by default. I have built my own library (xmpd.dll) with many functions already built in, so writing a tool that handles .dat files is very quick. For some tasks, depending on the specific task, I sometimes need to write extra code for speed, as at the very beginning I only knew the basics of the .NET language, so I often did not use the best objects where speed is required. I am not in the mood to rewrite the complete dll, and I do not see the necessity.

As far as I know there is no code currently available that helps us in the case of t-junctions.
Re: T-Junctions re-visited
#17
Before going to patch part definitions to avoid T-J gaps, please consider the following:
The LDraw part library is made of recursive part loading, where the position of each part is determined by a MATRIX.
Even with the most precise mathematical approach, when you multiply a vertex position by a matrix some kind of rounding will occur, so even the most precise vertex positioning or the best possible triangle subdivision will cause a gap!

The only way to solve this (and I have personally tested this with my SR 3DBuilder application) is to perform a vertex position check at part loading to see if another vertex with the same position (plus a tolerance) has already been loaded; if this is the case, the new vertex changes its coordinates to those of the already existing one.

This approach has many advantages:
- allows the usage of indexed primitives, saving at least 60% of vertex definitions
- allows a fast (even if not precise) computation of triangle normals
- comparing the normal of an existing vertex with the normal of the new one lets your application decide whether the surface must be smoothed (i.e. if the angle between the normals is > 45°, avoid smoothing; otherwise compute the new normal for the vertex)

The only disadvantage is loading time: I have to cache loaded parts in memory to allow quick model loading...
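
A minimal sketch of such a load-time weld (Python; the grid hash keeps lookups roughly O(1) per vertex, and the 0.05 LDU snap radius is just an assumed value):
Code:
from itertools import product

def weld(vertices, eps=0.05):
    """Snap each vertex to the first previously seen vertex within eps.
    Returns (points, remap): remap[i] is the index into points of the
    representative of input vertex i."""
    grid = {}                 # cell -> indices into points
    points, remap = [], []
    for v in vertices:
        key = tuple(int(c // eps) for c in v)
        rep = None
        # a match can only live in this cell or one of its 26 neighbours
        for d in product((-1, 0, 1), repeat=3):
            cell = tuple(k + o for k, o in zip(key, d))
            rep = next((i for i in grid.get(cell, ())
                        if sum((a - b) ** 2 for a, b in zip(v, points[i]))
                        <= eps * eps), None)
            if rep is not None:
                break
        if rep is None:
            rep = len(points)
            points.append(tuple(v))
            grid.setdefault(key, []).append(rep)
        remap.append(rep)
    return points, remap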

byes to all

Sergio
Re: T-Junctions re-visited
#18
Roland Melkert Wrote:But in short: if it's so 'hated', why don't we change the policy?

As I recall the policy is somewhat flexible. For example I'm fairly certain (although I may misremember) that my scripted flex parts bypassed the usual review policy after getting an example reviewed.

But removing T-junctions should definitely be checked by eye as it is very much a non-trivial process, although some parts might be done more quickly (e.g. the minifig heads, which should definitely be prioritised).

Tim
Re: T-Junctions re-visited
#19
Sergio Reano Wrote:The only way to solve this (and I have personally tested this with my SR 3DBuilder application) is to perform a vertex position check at part loading to see if another vertex with the same position (plus a tolerance) has already been loaded; if this is the case, the new vertex changes its coordinates to those of the already existing one.

I think most of the newer tools are doing this in one way or another, but like I wrote in an earlier message, even after indexing there will probably still be some unnecessary duplicate points, which will mess with the t-junction detection and fix algorithm proposed by Tim.

This is mainly caused by the varying decimal precision in the .dat files in combination with different matrix stacks.

For example, when detecting duplicates using 3-digit precision, some point in a .dat using 2.12 or so might end up as 2.121 after matrix transformations. It will then NOT be joined with some other point at 2.12 (2.120, arrived at via a different matrix stack).
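
In code the failure mode looks like this (Python; welding by 3-digit decimal keys):
Code:
key = lambda c: round(c, 3)
print(key(2.120), key(2.121))  # -> 2.12 2.121: different keys, never joined
# a distance check joins them: |2.121 - 2.120| = 0.001 <= snap radius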

Simply lowering the precision threshold isn't a real option though, because it will ruin some of the more complex parts.

I'm thinking about changing my detection code to use distances to existing unique points instead of the decimal-precision comparisons; the first 'unique' point will then be used at full 0.001 (or even more) precision, but any later point that has a distance to it of less than e.g. 0.1 will be snapped to it. This way you keep the resolution, but throw a wider 'net' over closely grouped points that should be joined.

I'm hoping this reduces the false positives so I can take another go at implementing Tim's T-Junction removal method. I'm gambling on the assumption that the precision is needed for placement and not for (very) closely grouped triangles making up some very high-res detail.
Re: T-Junctions re-visited
#20
Roland Melkert Wrote:I'm thinking about changing my detection code to use distances to existing unique points instead of the decimal-precision comparisons; the first 'unique' point will then be used at full 0.001 (or even more) precision, but any later point that has a distance to it of less than e.g. 0.1 will be snapped to it. This way you keep the resolution, but throw a wider 'net' over closely grouped points that should be joined.

I'm hoping this reduces the false positives so I can take another go at implementing Tim's T-Junction removal method. I'm gambling on the assumption that the precision is needed for placement and not for (very) closely grouped triangles making up some very high-res detail.

When I made my algorithm I worked on the assumption that points (and thus lines) had already been reduced via a minimum distance algorithm. I'd personally consider that to be a vital first step in any modern LDraw renderer.

Tim
Re: T-Junctions re-visited
#21
Hi Y'all,

I did try to write a spec for smoothing that doesn't require some kind of rounding/snapping process, and I was not able to do it.

Meanwhile, with actual code and actual parts, adding rounding (even poorly written rounding to just 'get started') addresses almost all of the normal errors not induced by T junctions. So I feel like maybe there aren't that many other shoes to drop. :-)

But at the same time, fixing T junctions in-app still feels weird to me - my gut says I'm writing code to fix parts that aren't authored up to standard (albeit maybe only a standard in my imagination :-) as opposed to compensating for the limits of floating point.

The minifig faces present a problem - to impose triangulation on the library, we would have to induce into the 'rest of the head' parts enough triangles to ensure uniform connectivity to a variety of actual face pieces, all of which may have different current edge triangulations.

For example, 3626bpa2.dat contains a head suture whose triangles go right up to the 'cut-over' line to the primitive that builds the standard forehead area. The result is a T junction that would require modifying the primitive.

Sergio, your note on caching is well taken - I currently have an algorithm that's tolerable in real-time, but if I have to add T-junction removal that's probably going to go out the window. :-(

(Am I the only one going 'this would be so much easier if the minifig heads were texture-mapped'? ;-) Also, it blows my mind the level of detail at which people have built colored triangle meshes for so many faces.)

cheers
Ben
Re: T-Junctions re-visited
#22
I've managed to get T-Junction removal going based on Tim's algorithm above. Performance is acceptable (even in its current state: unsorted, doing duplicate work, etc.), but only when using parts like the minifig heads. I'll probably make this either an 'HQ option' or limit its use (hard-coded) to parts that really benefit from it.

This is what I currently have:

[Image: TJunctionFixTest.png]

I'm using the colors to indicate splits (gray: single split, bl/gr: multiple splits). Also, my current implementation does not allow for inserting new triangles into the current triangle sets (I have to redo most of the vertex buffer preparation for that to work nicely, but I was planning to do that anyway), so I couldn't use the real colors in this test.

The function is very optimistic; I've put in a 1x1 plate to demonstrate why :-)

@Ben: I've put in 3626bpa2.dat to give you an indicator for deciding whether the t-junction stuff is worth it.

There are still some artifacts, but I think those are from parts being non-BFC; I have to check that to be sure.
Re: T-Junctions re-visited
#23
Hi Roland,

It's cool to see the T-junction tris right above the scar. :-) :-)

Did your algorithm induce a gajillion triangles on the top of the 1x1 plate?

And...how long does the 48x48 baseplate take to process? :-)

cheers
Ben
Re: T-Junctions re-visited
#24
Wow. Awesome coding skills.

Out of interest, what did you do about multi splits on quads? I left that in the 'too hard' basket :-D

Tim
Re: T-Junctions re-visited
#25
Ben Supnik Wrote:Did your algorithm induce a gajillion triangles on the top of the 1x1 plate?

Actually I'm not quite sure what's going on there, because the stud sits on top of a quad. Might be false positives of some kind.

As for the 48x48, it would take ages; I didn't even dare to open it, so I tried a 24x32, which took almost 3 minutes :-) The function could be optimized quite a bit though; for example, it's using way too many lists and unnecessary loops at the moment.
Re: T-Junctions re-visited
#26
I get 'rid' of quads in the early stage of loading, so I only have to deal with triangles at this level. I must confess I didn't really understand your multi-triangle split pseudocode, so I ended up sorting on the DIK values (per edge of the to-be-fixed triangles), followed by building a new list of points for the needed polygon.

So if you had a triangle a b c that needs a split on e.g. ab=d and ca=e, you would get a list of a, d, b, c, e, which in turn is used in this loop (semi-pseudo, 1-based):

for i = 1 to newPnts.cnt - 2
  addTriangle(newPnts[i], newPnts[i+1], newPnts[lastone])

Boils down to the same principle I guess..
Re: T-Junctions re-visited
#27
Roland Melkert Wrote:Boils down to the same principle I guess..

Yep. Think that's the same as my routine.

I suspect you'd get slightly better quality by keeping quads until later and processing them specially. But probably not worth the hassle :-)

Tim
Re: T-Junctions re-visited
#28
Mmm...this brings up an optimization consideration.

Not every sub-part needs to be smoothed against every other sub-part in a given part. In fact, sub-parts smoothing against each other (and thus needing T-junction checking, point welding, and all of the other fun things we have discussed) is perhaps, by the numbers, the exception. For example, there sure are a lot of basic studs in the part library, and to my knowledge none of them need to be smoothed against their surrounding part. :-)

We could include a meta command indicating that a part does not need to be smoothed against its usage context. This could be used for:
- any sub-part where we have fairly certain knowledge that this isn't necessary (e.g. studs), and
- parts that are fundamentally squared/boxy (e.g. the various primitives that make up rectilinear shapes).

There are a few 'killer' parts - for example, the painted Dragon body (75174p01.dat) contains 15,658 faces and probably needs to be smoothed as a whole. On the other hand, the 32x32 baseplate has 32,826 faces, only 58 of which are non-studs.

cheers
Ben
Re: T-Junctions re-visited
#29
I suspect that the gray triangles inside the stud holes on top of some of the heads are also false positives.
Re: T-Junctions re-visited
#30
Those are the result of the internal cross support beams touching the cylinder primitive, like so (top view):

Code:
split
  |

______
  /   \   -split
/    /\    
/    /  \
Re: T-Junctions re-visited
#31
This may be the result of an oversight I made: for small triangles the algorithm might find false hits. Try replacing
Code:
# O (N_Edges N_Points)
for I,J in Edges
  for K in Points
    DIJ=Dist(P(I),P(J))
    DIK=Dist(P(I),P(K))
    DJK=Dist(P(J),P(K))
    if (abs(DIJ-(DIK+DJK))<TOL)
      # K is on the line between I & J
      PushLineTrio(I,J;K)
    endif
  endfor
endfor
by (here RTOL should be something like 1e-4, adjust to taste)
Code:
# O (N_Edges N_Points)
for I,J in Edges
  for K in Points
    DIJ=Dist(P(I),P(J))
    DIK=Dist(P(I),P(K))
    DJK=Dist(P(J),P(K))
    if ((abs(DIJ/(DIK+DJK)-1.0)<RTOL) && (abs(DIJ-(DIK+DJK))<TOL))# CHANGED TO INCLUDE A RELATIVE CHECK TOO
      # K is on the line between I & J
      PushLineTrio(I,J;K)
    endif
  endfor
endfor
Re: T-Junctions re-visited
#32
There's a possible problem with subdividing quads into triangles: in a quad cylinder the normals get 'tilted' toward the diagonal line if you weight the input triangles' normals equally. With diffuse lighting it's not visible, but if you add some specularity you'll see the highlights are skewed.

I haven't played with different normal weightings; conceivably a heuristic like angle spanned could fix this. My code preserves quads, but that's easily done since I haven't fixed T junctions yet.
Re: T-Junctions re-visited
#33
Tim, isn't there a risk that pre-welding points via your algorithm could allow two points that are arbitrarily far apart to lock together due to a cascading effect? Or is this just a "you shouldn't have done that" kind of thing for authors?
Re: T-Junctions re-visited
#34
Yes it is possible. But it would only happen for an extremely badly designed part that should never be used.

Remember we're talking about distances of at most 1/1000 of an LDU, i.e. 2.5 microns. The only time points should be that far apart is when they've been misrounded there.

Tim
Re: T-Junctions re-visited
#35
Right - one of the implications is that the parts need an _absolute_ precision limit, e.g. the author should limit 'meaningful' use of triangles to a spacing between vertices of > X where X is a fraction of an LDU, after considering all transforms. Then X needs to be big enough that the accumulated error of some number of transforms doesn't exceed X.
Re: T-Junctions re-visited
#36
I use very basic lighting at the moment, but I would like to fancy that up at some point. I'm going to rewrite the vertex generation / management code this weekend anyway, so maybe I'll (optionally) keep the quads this time.

The thing is, I kinda thought / was told the driver/hardware converts the quads to triangles anyway at some point, so why not avoid the extra DrawElements call altogether.
Re: T-Junctions re-visited
#37
I think the splits (in this case) are warranted.

Limiting the size of new triangles would help performance, so I'll be adding that. Thanks for the update.
Re: T-Junctions re-visited
#38
When using a 'snapping' distance of 0.1 I noticed a false positive on one of the minifig heads, so I'm using 0.05 at the moment. An official guideline would be nice to have.
Re: T-Junctions re-visited
#39
The driver almost certainly does convert the quad to triangles, but the calculated average surface normal will be different, because with quads the point has exactly two contributing normals, which average to a normal pointing away from the centerline of the cylinder (as desired). However, when the normal smoothing is done on the triangles, one facet will have two triangles contributing to the given point, while the other facet only has one. This will cause the normals on the bottom of the cylinder to get pulled one way (say clockwise), while the top ones get pulled the opposite way (say counterclockwise). This leads to visible artifacts.
Re: T-Junctions re-visited
#40
Yep - this is exactly what I saw when I had triangulation on. But I think in theory a weighting by arc-angle could fix this: two of the three triangles at a given vertex will share 90 degrees of "span" and the other will have 90 to itself, so with this weighting the normals go back to vertical. I don't know what the effect would be on other shapes.
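
For reference, here's a sketch of that kind of weighting (standard angle-weighted vertex normals, in Python; crease/smoothing-group handling omitted):
Code:
import math

def sub(a, b):   return [a[i] - b[i] for i in range(3)]
def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]]
def unit(a):
    l = math.sqrt(sum(c * c for c in a)) or 1.0
    return [c / l for c in a]

def corner_angle(e1, e2):
    d = sum(x * y for x, y in zip(unit(e1), unit(e2)))
    return math.acos(max(-1.0, min(1.0, d)))

def angle_weighted_normals(points, tris):
    """Per-vertex normals where each face contributes in proportion to
    the angle it subtends at that vertex; a quad split into two tris
    then contributes the same total 'span' as the original quad did."""
    normals = [[0.0, 0.0, 0.0] for _ in points]
    for i, j, k in tris:
        a, b, c = points[i], points[j], points[k]
        n = unit(cross(sub(b, a), sub(c, a)))   # face normal
        for v, e1, e2 in ((i, sub(b, a), sub(c, a)),
                          (j, sub(c, b), sub(a, b)),
                          (k, sub(a, c), sub(b, c))):
            w = corner_angle(e1, e2)            # wedge angle at this corner
            for t in range(3):
                normals[v][t] += w * n[t]
    return [unit(n) for n in normals]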

BrickSmith currently uses non-indexed primitives, and in that configuration breaking quads into tris is a net performance loss:
- It increases vertex count.
- We hardware-instance together multiple bricks, so our total draw-call count is low, and calling glDrawArraysXXX twice isn't a huge problem. (The Mac drivers also do a good job of getting the second draw call out quickly if no state has been changed.)

I plan to re-measure this once I have indexing working - the indexer is partly done and it looks like it will cut VBO size in half or better.

cheers
Ben
Re: T-Junctions re-visited
#41
I have some questions regarding indexing. Are you creating a separate indexed list for each part, or one huge list for the whole model? If the latter, do you ever run into problems with having too many points, or do graphics cards widely support very large numbers here? (I seem to remember that in OpenGL ES, indexes are not allowed to be more than 16 bits wide.)
Re: T-Junctions re-visited
#42
I think I have already finished:
1) detecting t-junctions
2) correcting t-junctions
but currently only without subfiles.
Re: T-Junctions re-visited
#43
Hi Michael,

Subfiles and T junctions are the $1,000,000 problem. Within a single part it's not crazy to ask authors to eliminate T junctions, and it's not crazy to use a tool to fix the problem and then view the results. I've been very impressed at how many parts with complex patterns are already correctly triangulated - a real undertaking for both the author and the reviewers. :-)

But fixing T junctions between sub-parts may be impossible. Consider this case:

3626bpn1.dat contains T junction errors at the bottom of the side markings of the face - the bottom trapezoid pattern is subdivided quite a bit more heavily than the Torus primitive used to make the minifig head bottom. To 'fix' this a program would have to analyze these parts together (3626bpn1.dat and the sub-part to4o6250.dat) and chop up the top of the torus to match the face pattern.

But if we do that in pre-processing, we've really jumped the shark: now the torus has some crazy cut pattern in its top, and _every_ minifig face now needs to be subdivided to match.

The net result would be that every part that shares a common edge would have every vertex that every other part has. :-( Even if this is possible from a pre-processing standpoint (with floating-point rounding problems, this would need to be solved comprehensively, not iteratively), it's not desirable - we're talking about minifig faces that have been chopped to bits.

I _think_ this case means that fixing T junctions in-app (once parts are 'flattened') is the only sane way to go. :-(

(I do still think that fixing T junctions _within_ single parts like you are doing _is_ valuable and a good idea. I think I'm just arguing that between parts we're going to need app processing.)

cheers
Ben
Re: T-Junctions re-visited
#44
Hi Travis,

Bricksmith 'flattens' library parts (e.g. a 2x4 brick is a single geometry VBO and index VBO with a single pile of tris, lines and quads, even though it was built out of studs and boxes and tubes). Models are then drawn by making one draw call per part, with a transform set to position the part.

The result is that indices and VBOs don't expand unbounded. We definitely have parts larger than 65k indices - the 48x48 baseplate is something like 400k vertices unmerged and about 200k vertices merged. But with 32-bit indices all the major cards will handle that without blowing up -- the days of 32-bit indices causing havoc are pretty much gone unless you're supporting Win95 users with Voodoo 2's. :-)

GLES will support 16 or 32 bit indices - the motivation for 16 is that most of the GLES driver stacks support small-data-size optimizations, and mobile devices are always bandwidth constrained. :-(

We do a few things to cope with the large number of draw calls that one-draw-per-part and one-VBO-per-library-part induce:
- We draw by part type, to cut down on VBO/pointer binds - changing VBOs is a big CPU cost! (See the sketch after this list.)
- We use hardware instancing to draw more than one brick per draw call where supported in hardware.
- We actually pass the transform matrix as immediate-mode attributes instead of uniforms (which is what the native glRotate would give you). On Windows YMMV but on OS X replacing uniforms with attributes can get you about a 2x win if (1) you don't have too many attributes (e.g. only do this for the transform, not everything) and (2) you change the transform matrix with every draw call.
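
The first two bullets amount to something like this (a Python-flavored sketch; bind_part_vbo and draw_part_instanced are hypothetical stand-ins for the real GL calls):
Code:
from collections import defaultdict

def bind_part_vbo(part_id):
    pass  # stand-in for the real VBO/pointer bind (the expensive part)

def draw_part_instanced(part_id, transforms):
    pass  # stand-in for one instanced draw with per-instance transforms

def draw_model(placements):
    """placements: iterable of (part_id, 4x4 transform)."""
    by_part = defaultdict(list)
    for part_id, xform in placements:
        by_part[part_id].append(xform)
    for part_id, xforms in by_part.items():
        bind_part_vbo(part_id)                # bind each part's VBO once
        draw_part_instanced(part_id, xforms)  # one draw for all placements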

cheers
Ben
Re: T-Junctions re-visited
#45
Thank you very much for your thoughts.

I'm currently trying to implement subfile processing too :-)

Let's see where I end up. I like to discuss things based on facts and not only on thoughts :-)

Another point about this (no t-junctions) is that the files will probably grow in size as a result of our tweaking (making them t-junction free) even where it is not necessary because the fix has no visual impact. So maybe we fix one thing and get trouble on the other side.
Maybe we need a decision about which t-junctions should be eliminated because they have visual impact, and which ones we could leave?
Re: T-Junctions re-visited
#46
Right - Travis' original concern was T junctions where smoothing is needed - basically an app that does per-pixel lighting and smooth normals _will_ show a nasty artifact at T junctions that were smoothed. So that's a big one.

For flat shaded intersections, there's no lighting advantage, and the artifact is 'cracking' - if you don't see it now in today's programs, you probably never will; the fact that lines get drawn on top of the seams tends to hide some sins.

The problem with pre-processing, I think, is that you'll induce vertices not just for T junctions, but for the -potential- for T junctions with other parts...see the example above of the torus section: it would become more vertex-intensive for everyone who uses it, just to fix a problem with one minifig face.
Re: T-Junctions re-visited
#47
I think you underestimate the will of the part authors to correct their parts.
Give me a good tool, and I'll start correcting the oriental dragon tomorrow...;-)
Re: T-Junctions re-visited
#48
I do pretty much the same thing in both LD4DStudio and LDCad; the biggest downside is deciding what is a part and what isn't, which results in the code trying to flatten a whole model in some cases :-) In the end I put a vertex count limit on the prep function.

@Travis:
I've been using 32-bit indices for years, and as far as I know this hasn't caused any problems for users at all.

Are you thinking about shifting LDView to a per part approach? Performance will almost certainly improve.

ps: Don't underestimate the VBO buffer changes; I've made changes concerning this in my LDCad dev code and it resulted in a performance gain of over 35%, or even higher in certain circumstances.
Re: T-Junctions re-visited
#49
Ben Supnik Wrote:3626bpn1.dat contains T junction errors at the bottom of the side markings of the face - the bottom trapezoid pattern is subdivided quite a bit more heavily than the Torus primitive used to make the minifig head bottom. To 'fix' this a program would have to analyze these parts together (3626bpn1.dat and the sub-part to4o6250.dat) and chop up the top of the torus to match the face pattern.

I agree that in light of this, it's not possible to eliminate cross-file T-Junctions in the library. However, since I don't think it will ever be possible to efficiently auto-detect and remove T-Junctions at load time (on things like the 48x48 baseplate, for example), I wonder if perhaps something else can be added to the library to indicate their presence. I could see either having a meta-command indicating the presence of T-Junctions, or a separate computer-generated file for the whole library that does the same thing.

The meta-command would be an indicator that the following line introduces a T-Junction. Something like:

Code:
0 !TJUNC 1 3
3 16 0 0 0 1 0 0 1 1 0

The above example would be interpreted to mean, "The following triangle might introduce T-Junctions at vertices 1 and 3." Ideally this meta-command would be auto-generated (at library publishing time?), so that part authors wouldn't have to worry about it. Programs would then only do T-Junction detection on the vertices indicated by the meta-command. This would seriously decrease the load-time performance hit. Note that the presence of the meta-command doesn't guarantee a T-Junction, since the sub-file could be used in multiple parent files, and only introduce T-Junctions in some configurations. But programs could completely ignore the possibility of T-Junctions for all geometry not having such a meta-command.
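
A loader might consume the hint roughly like this (Python sketch; the meta itself is of course only proposed here):
Code:
def collect_tjunc_vertices(lines):
    """Collect the vertices flagged by the proposed !TJUNC meta so that
    T-Junction detection can be restricted to them."""
    flagged = []        # (x, y, z) vertices that may introduce T-Junctions
    pending = None      # 1-based vertex numbers named by the last !TJUNC
    for line in lines:
        tok = line.split()
        if not tok:
            continue
        if tok[0] == '0' and len(tok) > 1 and tok[1] == '!TJUNC':
            pending = [int(t) for t in tok[2:]]
        elif tok[0] in ('3', '4'):              # triangle or quad line
            coords = [float(t) for t in tok[2:]]
            verts = [tuple(coords[n:n + 3]) for n in range(0, len(coords), 3)]
            if pending:
                flagged.extend(verts[n - 1] for n in pending)
            pending = None
    return flagged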

The problem with this meta-command approach is that it ideally needs to be automated, and the library isn't really set up for automated behavior like this. Hence, my alternative suggestion, which is a file that sits alongside the library and contains all the information for all T-Junctions in the library. It would definitely be computer-generated, and this would presumably happen as part of the process of publishing the library (although users would be allowed to regenerate the file themselves on their own library if desired). I really don't have a very good idea how long it would take to scan all files in the library and note all the T-Junctions, but I can't see it being more than an hour or two tops on a modern computer (and hopefully quite a bit less than that).

While ideally this separate file would be part of the official library, it doesn't have to be, as long as it can be made available to users of programs that depend upon it. We would definitely want to make sure the file was easy to make use of though, so some thought would have to go into its format. And I definitely think there should be one single format for this file (possibly extensible).

Another alternative is for programs that do T-Junction detection to have their own cache of detected T-Junctions. The information would be similar to what I describe above, but the program would automatically do the T-Junction scan any time it loads a part that either doesn't have cached T-Junction data or whose timestamp has changed. (This is complicated by the fact that the part file itself won't necessarily update; any one of its sub-files could update, so the timestamp information would have to be for each sub-file, not just for the whole part.) This kind of caching could be done by any program, but it's a fair bit of extra work that every program author would then have to undergo. I've actually seriously considered giving LDView an option to auto-cache loaded parts in order to speed up file loading, but never did so due to the complexity involved.
Re: T-Junctions re-visited
#50
Hi Travis,

Here's another possible META command to help solve the T-junction performance problem: sub-part smoothing hints.

The meta command would categorize a part, sub-part, or primitive into one of three categories:

SMOOTH - the part needs to be smoothed in the context of its neighbors. A torus quarter would need this since it must be smoothed against the rest of a minifig head. This is the most expensive category because smoothing must be done on the larger structure, not the sub-part or primitive. This is the default for non-annotated parts.
FLAT - the part does not need any smoothing because it's all angular. The part also does not need to be smoothed against other parts near it, and can be exempted from the smoothing of the parts that contain it.
SMOOTH_ATOMIC - the part needs to be smoothed against itself, but does not create smooth contours with other parts. The part can thus be smoothed once and then be exempted from smoothing in a parent part. Stud.dat would be in this category. (See the example below.)
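
In a part header that might look something like this (purely hypothetical syntax, to be decided):
Code:
0 // hypothetical smoothing hint: stud.dat smooths against itself only
0 !SMOOTH_HINT SMOOTH_ATOMIC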

Looking at our friend the 48x48 baseplate: if studs are marked smooth-atomic, then the baseplate is trivially cheap to smooth. :-)

These meta directives can of course be ignored; the FLAT hint is not meant to force flat shading of a round surface - it is meant to allow a program that supports smoothing to skip it. Putting the FLAT meta tag on a minifig head would be an authoring error.

The idea here is that by breaking out the sub-parts and primitives that are self-contained and/or angular relative to other parts, we can cut the amount of 'stuff' in the expensive smoothing calculations, which aren't going to get any better than O(N log N) under ideal circumstances.

What's good about this proposal is:
- It isn't required - we can apply it tactically to a few parts to make some of the biggest problem pieces faster. Just marking stud.dat as smooth-atomic and a few of the box primitives as flat will make a big difference.
- It doesn't require analysis of the part itself - a human can look at the part, go "that's a box", apply the META, and go home happy...no geometry required.

What is not so good is that it is only useful if smoothing can be applied iteratively during part construction. This is currently not true for BrickSmith (I smooth after Allen flattens the parts), but it wouldn't be a hard refactor, and it would be worth the perf win.

Also, this proposal totally ignores the problem of T junctions when smooth shading is not desired. Technically, if we wanted a 'water tight' baseplate in OpenGL, we'd have to subdivide the top face of the baseplate itself at every stud and re-triangulate. I am pretty sure we do not want to do that... :-) So the spec intentionally doesn't try to solve T's unless smoothing is desired.

Finally, I am hoping to get a T-finding algo written some time in the next week or two to get some 'real' performance numbers, e.g. if the algo uses good search structures in a low level language, how slow is it...

Cheers
Ben