Question about edges

Question about edges
#1
Question to the part experts:

Do the edge lines' end points (nearly) always share a point with normal triangle and quad vertices, or do they often 'float' between normal geometry points?

I would like to know this in relation to the current library, not the latest authoring specs.
Re: Question about edges
#2
Generally they are placed on surface edges. A possible exception is when they are used as decoration. Though this is not a very good practice, I have committed this a few times, such as the grip on the latch of the NXT cover, or to simulate the complex surface on the back of the Power Functions XL motor. It has also been used to mark the separation between parts in an assembly, where separate parts should have been used.
Re: Question about edges
#3
Thanks. So aside from the decorative ones, do normal edges always follow triangle and quad sides?

For example could something like this happen in the current library

Code:
```
x~~~~0~~~~~~~~~0~~~~~x
      \       /
       \     /
        \   /
         \ /
          0
```
This being a single type 2 line and a single type 3 line (there will be more triangles in real parts of course; this is a 'zoom in' if you like).

Or will it always be guaranteed to be like:

Code:
```
x~~~~X~~~~~~~~~X~~~~~x
      \       /
       \     /
        \   /
         \ /
          0
```
Being 3 type 2 lines and a single triangle.

The reason I would like to know this is for the smoothing function I'm working on. It would be very helpful to be able to assume any two triangles can be split by an edge line that uses the same shared points as the triangles.

I need this information to calculate normals for the triangle vertices. If I cannot assume this, the alternative would be to do line-point intersection tests for all triangle and line data, which would be much, much slower, and as a result I would need to seek an alternative approach.

ps: I'm looking into this at the moment completely unhindered by knowledge of how apps like LDView handle this, just to see if I can come up with something on my own.
Re: Question about edges
#4
Code:
```
x~~~~0~~~~~~~~~0~~~~~x
      \       /
       \     /
        \   /
         \ /
          0
```
Sorry, I forgot this case. Yes, that probably happens a lot. Actually my own Isecalc does create this quite frequently.
Speaking of Isecalc, it is also used to create the edge line that marks the intersection between two interpenetrating surfaces. In that case, edge lines are indeed "floating" in the middle of surfaces (but there is no need for smoothing anyway!)
Re: Question about edges
#5
A bit disappointing; this means my current approach isn't usable unless you go use OpenCL or something.

Quote:(but there is no need for smoothing anyway!)

My plan was to take the indexed triangle-soup data and average the normals of triangles sharing a point, unless one or more of them are separated by an edge line, in which case a second vertex using a different normal would be added.

So if a certain point is used by 3 triangles, the normal for the resulting vertex would be the average of those three triangle (flat shading) normals. But if one of those triangles is separated from the other ones by an edge, only the other two would be used to average a normal, with the 3rd one kept by itself (also correcting the indices etc., of course).
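As a rough sketch of that averaging step (all names hypothetical; this is not code from any of the apps mentioned here), each vertex takes the mean of the flat-shading normals of the triangles grouped at it, and a triangle cut off by an edge line would simply be excluded from the group before averaging:

```python
import math

def face_normal(a, b, c):
    # Flat-shading normal: normalized cross product of two triangle edges.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    l = math.sqrt(sum(k * k for k in n)) or 1.0
    return tuple(k / l for k in n)

def averaged_normal(normals):
    # Sum the flat-shading normals of the grouped triangles, re-normalize.
    s = [sum(n[i] for n in normals) for i in range(3)]
    l = math.sqrt(sum(k * k for k in s)) or 1.0
    return tuple(k / l for k in s)
```

A triangle separated by an edge line would get its own duplicate vertex whose normal is just its `face_normal`.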

But I have to look at it in a different way now.
Re: Question about edges
#6
Hi Roland,

I peeked at the LDView code (and Travis gave me some hints) but I won't spoil the mystery for you. :-)

I will add that it would be nice to have a semi-official smoothing algorithm; the flip side of smoothing is that if several programs implement smoothing, an author might want to modify a part to get high quality visual output; if we all pick different smoothing heuristics, it won't be possible to make the library look good everywhere.

To that end, I think it wouldn't be crazy to declare that smoothing _requires_ that lines and triangles share vertices in order for the line to create a 'crease' - it would seriously lower the computational work required to calculate the smoothed normals because you could rely on a bitwise index of vertex coordinates; without that you have to do a geometry test and at best use a spatial index - a lot more work for the CPU and developer.
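A bitwise vertex index of the kind described here could be as simple as a hash map keyed on the exact coordinate tuples; a hypothetical sketch:

```python
def build_vertex_index(triangles):
    # Map each exact (bitwise-equal) coordinate triple to the list of
    # triangle indices that use it -- no epsilon comparisons needed.
    index = {}
    for ti, tri in enumerate(triangles):
        for v in tri:
            index.setdefault(v, []).append(ti)
    return index
```

The whole point of the bitwise requirement is that this lookup is exact: two vertices are "the same" if and only if they hash to the same key, with no geometry test at all.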

If there were some official-ish policy on smoothing, we could code something manageable and authors could make parts work if desired.

Cheers
Ben
Re: Question about edges
#7
Totally agree with you!
My only reference now is LDView, and I try to get things looking nice with it. It would be great if all viewers behaved consistently.
Re: Question about edges
#9
It would be nice to have a universal guideline.

As a result of Philippe's answer above I decided to try my original approach, but without using any edge information at all. So by just looking at angles (acos'ed dot products actually) to decide whether to group things or not.

It works remarkably well, so well in fact I'm kinda glad I dropped the edges approach.

This is using the (ludicrously wide) angle threshold of 60 degrees; I could probably lower that to 30 or so, but haven't decided on that yet.
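The angle test described here amounts to something like the following (a hedged sketch; function names are made up):

```python
import math

def crease_angle_deg(n1, n2):
    # Angle between two unit face normals via the dot product,
    # clamped to avoid acos() domain errors from rounding.
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(d))

def smooth_together(n1, n2, threshold_deg=60.0):
    # Group two faces for normal averaging only below the threshold.
    return crease_angle_deg(n1, n2) < threshold_deg
```

With a 60 degree threshold, coplanar faces group together while a 90 degree corner stays creased.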

The downside is that things go 'weird' with high detail on a curved surface, like minifig heads, which seem to have a crumpled paper thing going on. I'm not completely sure whether this is the result of the general algorithm or a minor bug on my side.

But in general this approach's results are very acceptable given the cost. I could probably optimize things to bring it to a loading time only slightly slower than the current LDCad loading times.

If we are serious about helping smoothing by introducing rules on the library, some kind of hint system would be most useful. Like: the following triangles all lie on a curved quad. A bit like the texture projection hints. This would make it possible to generate normals based on a common (cylinder) center instead of the surrounding triangle normals.
Re: Question about edges
#10
Looks very good indeed!
Quote:If we are serious about helping smoothing by introducing rules on the library, some kind of hint system would be most useful. Like: the following triangles all lie on a curved quad. A bit like the texture projection hints. This would make it possible to generate normals based on a common (cylinder) center instead of the surrounding triangle normals.
Very interesting idea! If it could be retrofitted easily - at least on minifig heads - it would be a great improvement.
Re: Question about edges
#11
Hi Roland,

I think the artifacts in the minifig-heads pic are induced by bugs and not by the limits of crease angles, because there appear to be flat-shading errors at very low angles (e.g. nearly smooth sides). But it is also possible that the meshes aren't manifold, or contain tiny cracks that throw your code off.

Could you post a screenshot of part 6085.dat with the crease-angle smoothing? If I understand correctly, it may look weirdly rounded due to the soft angles.

Also, how are you handling BFC? Does your algorithm assume BFC or cope with the case where the winding order of adjacent triangles is opposite (thus inducing opposite normals)?

cheers
Ben
Re: Question about edges
#12
This one?

It looks fine even though it isn't BFC; but because all its subparts are, it's a non-issue I think.

At the moment I assume a perfect library, so I handle everything as if it were BFC. I figured the angles stay more or less the same (because of the smallest-angle result of the dot products) and re-normalizing will take care of most other problems resulting from non-BFC meshes. I realize this isn't optimal, but this is just an experimental implementation, so there's much room for improvement.

Low angle stuff is indeed a problem at the moment (using a whopping 60 degree threshold), but the nice thing is the edge lines will fix most of that problem (due to the optical illusion/misdirection they supply). Take a look at the slope bricks in the 5580 model for example.

I'm also hoping per-pixel lighting will fix things even more by removing the sometimes weirdly shaped hot spots.
Re: Question about edges
#13
Hi Roland,

Yep - exactly what I expected with 6085: a rounder-than-real-life shading. :-)

I think you may find per-pixel lighting makes the lighting effect _worse_...particularly if you use per-pixel lighting for what it's really good for: shininess. (The problem with per-vertex lighting is that the fall-off from specular highlights is very steep, so the highlight tends to be contained entirely within one triangle. Once you have that 'sharp' highlight, the induced roundness will make lighting that really looks...well...round.) I hadn't noticed the error on the slope bricks because they look nice despite the roundness of the lighting.

I just coded a smoothing algorithm (which, having not run it yet, I must assume works perfectly :-), but it takes the opposite approach: it uses only lines, rather than only angle, to do smoothing/creasing. I'll post some pics once I have it integrated, which may not be for a few days.

Having gone through the coding exercise, I think I can state a 'wish list' of assumptions about the library for a smoothing algorithm:

1. The bit-wise locations of all colocated vertices match exactly - that is, no floating point jitters between vertices that should touch.
2. All faces that form a smooth edge share the (bitwise) same coordinate values and the (bitwise) same transform stack. (In other words, if you want two sub-parts to mesh smoothly, they must be transformed in the same way.)
3. All parts are BFC valid. (Because of this, two adjacent faces will have edges going in opposite directions along the triangle.)
4. Any lines used to indicate a crease have a start and end point that (bitwise) matches the location of the corners of the triangle edge that they crease. (The line does not have to go in any particular direction, as it will always match one of the two triangle faces.)

This wish list allows apps that smooth meshes to avoid any epsilon math checks, computational geometry tests, etc. Smoothing behavior is (theoretically) predictable:
- every exactly manifold edge that doesn't have a manifold line is smooth.
- every exactly manifold edge that has a manifold line is creased.
- non-manifold edges are creased (or rather, aren't eligible for smoothing because they aren't even considered to be connected).
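Under those wish-list assumptions the classification becomes a few lines of exact-match bookkeeping; a hypothetical sketch (names made up, not from any real implementation):

```python
def classify_edges(triangles, edge_lines):
    # Edge key = unordered vertex pair; assumes wish-list item 4,
    # i.e. lines reuse the exact vertices of the triangle edge they crease.
    lines = {frozenset(l) for l in edge_lines}
    count = {}
    for tri in triangles:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            e = frozenset((a, b))
            count[e] = count.get(e, 0) + 1
    return {
        e: ("not connected" if c != 2 else "creased" if e in lines else "smooth")
        for e, c in count.items()
    }
```

Exactly two adjacent faces make an edge manifold; a matching line turns it into a crease; anything else is left unconnected.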

Cheers
Ben
Re: Question about edges
#14
Unfortunately, getting bit-wise floating point exactness is incompatible with using primitives (or even, to a certain extent, subfiles). Much of the time, curved primitives meet other curved primitives, but sometimes they have to meet part-level geometry, and when they do, the floating point values won't match. (Note: parts aren't allowed to use more than a limited number of decimal places when specifying geometry.) Also, even when primitives meet up with each other, they're only likely to have the same bit-wise floating point value if both primitives have the same transformation matrix. Otherwise, it's quite likely that round-off will result in slightly different values.
Re: Question about edges
#15
Hi Travis,

I smacked into that wall pretty hard with my own smoothing code tonight - 3963.dat uses a mix of hand-positioned faces and sub-part line rings to build the nozzles and the result is inexact vertices. The problem can be resolved in code by (1) also using a crease angle to catch the case where we 'lost' our line or (2) snapping the geometry to a grid.

Is there a minimum precision to the library (e.g. 1/16th of an LDU or something) that would allow programs to quantize part positions to catch such floating point problems without breaking small details? Or is there a minimum tolerance that library parts are checked to?

Cheers
Ben
Re: Question about edges
#16
While I realise bitwise matching is quickest, three subtractions, three squares, an addition, an absolute and a less-than (the minimum needed for a distance comparison) is hardly that much slower for a quadratically scaling operation like comparison. And you can easily save work here too. As a scientific programmer of many years, I've learned the hard way that it's too easy to get caught up trying to save CPU in the easy bits when there are much better places to save it.

And you should never, ever, ever rely on floating point bitwise matches. The moment you add, subtract, multiply or divide, you will inevitably run into errors. This even applies if you quantise, as 1/16+1e-15 and 1/16-1e-15 are only 2e-15 apart, but land in different quanta. Furthermore you need to choose an integer type that is large enough to cover the whole range, which would have to be at least 32 bits to be safe (since 16 bits at a 1/16 bucket gives a range of only -2048 to 2048).

Pseudo-code for efficient matching of points

Code:
```
# Tolerance squared for the final distance test
TOL2=TOL*TOL
# Precompute squared and plain radius for every vertex P(i) - O(N)
for i from 1 to noPoints
  P(i).r2=P(i).x*P(i).x+P(i).y*P(i).y+P(i).z*P(i).z
  P(i).r=sqrt(P(i).r2)
end
# Compare points - O(N^2)
for i from 1 to noPoints
  for j from (i+1) to noPoints
    # Radius pre-filter: points within TOL of each other must have radii
    # within TOL (comparing r rather than r2 keeps this correct for points
    # far from the origin) - one abs, one <
    if abs(P(i).r-P(j).r)<TOL
      # Now check their distance apart (squared) - 8 FLOPS, one <
      if (P(i).r2+P(j).r2-2*(P(i).x*P(j).x+P(i).y*P(j).y+P(i).z*P(j).z))<TOL2
        # Points are the same to TOL
        SetPointsSame(i,j)
      endif
    endif
  endfor
endfor
```
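A runnable version of the same idea (comparing plain radii in the pre-filter, which keeps the shortcut valid by the triangle inequality even for points far from the origin; names are hypothetical):

```python
import math

def find_matches(points, tol):
    # O(N^2) pairwise matching with a cheap radius pre-filter:
    # two points within tol of each other must also have
    # distances-to-origin within tol of each other.
    r = [math.sqrt(x*x + y*y + z*z) for x, y, z in points]
    matches = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if abs(r[i] - r[j]) < tol:
                # Full squared-distance test only for surviving pairs.
                d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                if d2 < tol * tol:
                    matches.append((i, j))
    return matches
```

The pre-filter costs one subtraction, one abs and one compare per pair, and skips the 8-FLOP distance test for most non-matching pairs.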
Re: Question about edges
#18
Hi Tim,

My hope for 'bitwise sameness' isn't so that we can use fast bit comparisons - it is so that we can have transitivity of equal points. I think your code, by pre-processing, may do that; what does "SetPointsSame" do?

For example, if 3 vertices A, B, and C are near each other so that the distance from A to B is < TOL and the distance from B to C is < TOL but the distance from A to C is > TOL, are all three points the same? If not, do we say that A == B and B == C but A != C? :-)

My smoothing algorithm assumes transitivity of equality for point locations, so a fuzzy match has to be transitive too. As an experiment, I implemented a simple and stupid grid-snap, which does result in transitive behavior - in the case above, if A, B, and C all snap together, then A == B == C. If the snap separates B and C but keeps A and B together then A == B, A != C, B != C and things are still sane.

As an aside, OpenGL (maybe D3D too??) has some pretty specific rules about invariance, water tight meshes, and cracks; if the part library has bit-wise exact transform data and point locations then there will be no mesh cracks, but if that isn't true then I think there could be rendering artifacts..that's why I thought there might be some hope of having bitwise comparable data.* :-) The case that hosed me was a line and triangle not having the same bitwise definitions, which isn't surprising; no one needs watertight rendering between a tri and a line.

cheers
Ben

* For X-Plane we basically require bit-wise exact vertices to get manifold rendering in our model files, because the graphics cards require it...but the output is coming from 3-d modeling programs that more or less do this for free. Thus we ensure the same mathematical function is applied on the same input bits, so while we don't know what the floating point output is, we know it's the same every time.
Re: Question about edges
#19
I've been using the next piece of code for years to do 'fast' duplicate point detection in my renderers.

Code:
```
kX=(int)floor(v.x*appDef_decCntMul);
kY=(int)floor(v.y*appDef_decCntMul);
kZ=(int)floor(v.z*appDef_decCntMul);
v.x=(GLfloat)kX/appDef_decCntMul;
v.y=(GLfloat)kY/appDef_decCntMul;
v.z=(GLfloat)kZ/appDef_decCntMul;
```

v.x etc. are still floats, but their contents should now be bitwise comparable while using 4-decimal precision on all part-level transformed coordinates.

I also stuff this in a hash list to speed up look ups.

This has worked pretty well for years (reducing vertex count by ~60% when only using positional data)
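The same quantize-then-hash scheme, sketched in Python with hypothetical names (floor each coordinate onto a 4-decimal grid, then use the snapped tuple as a dictionary key):

```python
import math

DEC_MUL = 10000.0  # 4 decimal places, like appDef_decCntMul above

def quantize(v):
    # Snap each coordinate down to the grid so near-equal points
    # become bitwise-identical dictionary keys.
    return tuple(math.floor(c * DEC_MUL) / DEC_MUL for c in v)

def dedup_vertices(vertices):
    # Hash-based duplicate detection: returns the unique snapped points
    # plus an index remap for the original vertex list.
    index, unique, remap = {}, [], []
    for v in vertices:
        key = quantize(v)
        if key not in index:
            index[key] = len(unique)
            unique.append(key)
        remap.append(index[key])
    return unique, remap
```

The hash lookup makes the whole dedup O(N) on average, instead of the O(N^2) pairwise comparison.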
Re: Question about edges
#21
Hi Roland,

You'll still (admittedly extremely rarely, if you set appDef_decCntMul small enough) end up with false negatives from that (see below). Which may not be a problem. Although I suppose that by going through twice you could offset by half of appDef_decCntMul in the second pass, and add anything that matches there to your list of matches.

Code:
```
appDef_decCntMul=10000  # 1e-4 precision
# Before flooring
Point1=(0.44499...,0.99434...)
Point2=(0.44500...,0.99434...)
# floor
Point1F=(0.4449,0.9943)
Point2F=(0.4450,0.9943)
# compare
bitwise=(Point1F==Point2F)    # is false
distance=norm(Point1-Point2)  # 1e-5 < precision
```
The code below will avoid false negatives, but gives (arguably) false positives...
Code:
```
appDef_decCntMul=10000  # 1e-4 precision
# Before flooring
Point1=(0.44499...,0.99434...)
Point2=(0.44500...,0.99434...)
# floor
Point1F=(0.4449,0.9943)
Point2F=(0.4450,0.9943)
# round(x) = floor(x+0.5)
Point1R=(0.4450,0.9943)
Point2R=(0.4450,0.9943)
# compare
bitwise=(Point1F==Point2F) || (Point1R==Point2R)  # is true
distance=norm(Point1-Point2)                      # 1e-5 < precision
```

Tim
Re: Question about edges
#23
Very interesting, especially since this issue is most likely the reason for the 'crumpled paper' look I'm getting on some parts with my current angle-only smoothing.

I'm going to try these improvements to see if they help the minifig heads etc. Too bad I won't be able to use the below function of my vector template anymore, though.

Code:
`const bool operator==(const TGLVector3 &b) { return memcmp(comp, b.comp, sizeof(TGLVector3))==0; }`

comp is the xyz float (or double) array, so memcmp takes only a few clock cycles to compare all three in one go.

Up till now one or two extra (almost) identical points were not visible, so I never really tested my code against such requirements; only speed counted.

It might also be interesting to lower the decimal count precision in these comparisons to 3 instead of 4.
Re: Question about edges
#24
Hi Roland,

You'll only double the check. I'd be very surprised if it was a limiting factor. Although I admit that my gut instinct could be wrong.

For more acceleration, I suggest the following:

For each point determine and store the following (here floora and rounda act as floora(x)=floor(x/tol)*tol and rounda(x)=floor(x/tol+0.5)*tol):
Code:
```
xyzF=floora(xyz)  # triplet
xyzR=rounda(xyz)  # triplet
r2F=floora(x*x+y*y+z*z)
r2R=rounda(x*x+y*y+z*z)
```
for a comparison you can now do:
Code:
```
if ((r2R[1]==r2R[2]) || (r2F[1]==r2F[2])) is false, return false
else return ((xyzF[1]==xyzF[2]) || (xyzR[1]==xyzR[2]))
```

Tim
Re: Question about edges
#25
I did some additional tests, using this very slow but simplifying code:

Code:
```
index=-1;
for (int i=0; i<initStats.triPairCnt; i++)
{
  if (v.fuzzyCompare(triVertices[i], 0.0005))
  {
    index=i;
    break;
  }
}
```

Note this uses 3 decimal precision, and no bit wise comparisons.

The results seem to be (visually) the same as when using the bitwise solution (but also with 3 digits). The reason I went from 4 to 3 decimals is that it actually makes a difference. For example the visible false split on the technic bush part in my 2nd screencap goes away.

It doesn't help the minifig heads at all, though; the problem with them (imho) isn't unique-position related. It's caused by the surrounding triangles having different offsets on the cylinder curve, which in turn results in different (fragment color) interpolations by OpenGL (as a result of different arcs with respect to the normal 16-face primitive).

A solution would be to author the whole face using an even grid of (tiny) triangles, so the surrounding normals for any of them would result in the correct vertex normal. But that wouldn't be very effective authoring.

This whole thing could also be fixed by using the center axle of the minifig head to project a normal to any point on the face. But the renderer has no way of knowing this without either hints in the LDraw files or hard-coding this information for certain part families (e.g. 3626bp*).
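The center-axle projection could look something like this (a sketch of the idea only; the hint that would supply the center and axis does not exist in the library, and all names are hypothetical):

```python
import math

def cylinder_normal(p, center, axis):
    # Project p onto the axis line through 'center', then use the unit
    # vector from that axis point out to p as the smoothed normal.
    al = math.sqrt(sum(a * a for a in axis))
    axis = tuple(a / al for a in axis)
    rel = tuple(p[i] - center[i] for i in range(3))
    t = sum(rel[i] * axis[i] for i in range(3))          # distance along axis
    radial = tuple(rel[i] - t * axis[i] for i in range(3))  # axis -> p
    rl = math.sqrt(sum(k * k for k in radial)) or 1.0
    return tuple(k / rl for k in radial)
```

Every vertex on the curved face then gets a normal consistent with the ideal cylinder, regardless of how the detail triangles are laid out.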

Re: Question about edges
#26
Roland Melkert Wrote:The results seem to be (visually) the same as when using the bitwise solution (but also with 3 digits). The reason I went from 4 to 3 decimals is that it actually makes a difference. For example the visible false split on the technic bush part in my 2nd screencap goes away.

It doesn't help the minifig heads at all, though; the problem with them (imho) isn't unique-position related.

That doesn't surprise me. The error I mention would only occur a TOL fraction of the time, which is pretty small.
Quote:This whole thing could also be fixed by using the center axle of the minifig head to project a normal to any point on the face. But the renderer has no way of knowing this without either hints in the LDraw files or hard-coding this information for certain part families (e.g. 3626bp*).

Or in my dream world, by having a decent class of primitives for minifig heads.

Tim
Re: Question about edges
#27
In general, you'll never be able to get good looking minifig heads (at least not with Gouraud shading), because (from what I've seen) they are full of T-junctions. With Phong shading, I suppose it's theoretically possible to get good results even with the T-junctions, but good luck calculating appropriate normals purely from the geometry. The only way I can think of to calculate appropriate normals to make that work would be to have hard-coded recognition of minifig heads, and then code that automatically sets the normals based on an idealized head shape.
Re: Question about edges
#22
Hi Ben,

Gotcha. No, my code won't guarantee transitive points as written, but you can make it guarantee that.

Ben Supnik Wrote:For example, if 3 vertices A, B, and C are near each other so that the distance from A to B is < TOL and the distance from B to C is < TOL but the distance from A to C is > TOL, are all three points the same? If not, do we say that A == B and B == C but A != C? :-)

My smoothing algorithm assumes transitivity of equality for point locations, so a fuzzy match has to be transitive too. As an experiment, I implemented a simple and stupid grid-snap, which does result in transitive behavior - in the case above, if A, B, and C all snap together, then A == B == C. If the snap separates B and C but keeps A and B together then A == B, A != C, B != C and things are still sane.

In SetPointsSame(i,j) you'd look through all pairs already set to see if i or j already had a match. So if i1==i2 and i1==i3, then it will set i3==i2.
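That pair-merging bookkeeping is exactly what a union-find structure gives for free; a hypothetical sketch (not from either implementation being discussed):

```python
class PointUnion:
    # Union-find makes point equality transitive: after union(a, b)
    # and union(b, c), all three share one representative.
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        # Walk to the root, halving the path as we go (keeps trees flat).
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)
```

SetPointsSame(i,j) would simply call union(i, j); two points are "the same" when find() returns the same representative.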

Quote:As an aside, OpenGL (maybe D3D too??) has some pretty specific rules about invariance, water tight meshes, and cracks; if the part library has bit-wise exact transform data and point locations then there will be no mesh cracks, but if that isn't true then I think there could be rendering artifacts..that's why I thought there might be some hope of having bitwise comparable data.* :-) The case that hosed me was a line and triangle not having the same bitwise definitions, which isn't surprising; no one needs watertight rendering between a tri and a line.

* For X-Plane we basically require bit-wise exact vertices to get manifold rendering in our model files, because the graphics cards require it...but the output is coming from 3-d modeling programs that more or less do this for free. Thus we ensure the same mathematical function is applied on the same input bits, so while we don't know what the floating point output is, we know it's the same every time.

I'd be very surprised if you can ever really guarantee bitwise compatibility with floats after any operations. Even Roland's code cannot do so*. Indeed, if you ever compile with Intel's compilers with warnings on it will highlight every bitwise float match as a potential error.

Tim

* Although, as I note in my comment below, it can be made to do so.
Re: Question about edges
#31
Hi Tim,

I implemented this code, but non-iteratively... that is, I collect 'chains' of points that are all near each other, and when all chains are gathered, each set of points in a chain is set to the point cloud's centroid, which effectively locks up the points.

As we discussed, a chain of points all within TOL of each other 'cascade' into one collapsed point, even though the extrema of the point chain are more than TOL apart.

But what I also discovered is that my non-iterative approach (find all chains first, "edit" all geometry second) will result in geometry where some points are _within_ TOL of each other.

For example:
Code:
```
E
A     1      E
B    2      D
         C
```

Points A, B, C, D and E are on a circular arc whose center is (1) and whose radius is slightly smaller than TOL.
Point E is directly above C, far enough "up" that its distance to each of A, B, C and D is > TOL.

Point 2 is the 'centroid' of ABCD - that is, A, B, C and D will all end up at point 2 after locking up points.

When this happens, the distance from 2 to E can easily be below TOL.
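The cascade effect is easy to reproduce in one dimension (hypothetical helper; chains collapse to their centroid even when the chain's extrema are more than TOL apart):

```python
def collapse_chains(points_1d, tol):
    # Gather chains where consecutive sorted points are within tol,
    # then move every member of a chain to the chain's centroid.
    if not points_1d:
        return []
    pts = sorted(points_1d)
    chains, cur = [], [pts[0]]
    for p in pts[1:]:
        if p - cur[-1] < tol:
            cur.append(p)      # extend the current chain
        else:
            chains.append(cur)
            cur = [p]
    chains.append(cur)
    # One centroid value per original point.
    return [sum(c) / len(c) for c in chains for _ in c]
```

With tol = 0.01, the points 0.0, 0.009 and 0.018 form one chain and all collapse to 0.009, even though the outer two are more than tol apart to begin with.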

I think my question is: do I care??? :-)

That is, at this point am I done snapping and I go home, or do I then need to re-run a snap to detect that 2 and E are close and merge them?

Is the above pattern a legitimate authoring technique, or 'too much detail in too small a place'?

cheers
ben
Re: Question about edges
#32
Ben Supnik Wrote:I think my question is: do I care??? :-)

That is, at this point am I done snapping and I go home, or do I then need to re-run a snap to detect that 2 and E are close and merge them?

Is the above pattern a legitimate authoring technique, or 'too much detail in too small a place'?

cheers
ben

Probably you do not care. If that situation comes up, it's very likely to be a result of bad part design. Any detail finer than (say) 1/100th of an LDU is just plain wrong, and setting TOL=1e-2 is likely to catch all but the most extreme rounding errors. The one notable exception is if a part used a rotation on something really long to match with another detail, which is unreliable design and best caught at the design/review stage.

You could get rid of the problem by iterating until no matches are found. But that seems like overkill to me and is possibly more likely to result in false joins.

Tim
Re: Question about edges
#33
Hi Tim,

Tim Gould Wrote:The one notable exception is if a part used a rotation on something really long to match with another detail. Which is unreliable design and best caught at the design/review stage.

That's the one thing that's strange: I get failures to lock up on the 6x6 dishes if my TOL is < about 0.05. For example, 0.01 will lock up some but not all of the dishes.

And to make it worse, some pattern parts have markings intentionally authored with triangles < 0.05. (My guess is that these triangles were induced to remove T junctions.)

Quote:You could get rid of the problem by iterating until no matches are found. But that seems like overkill to me and is possibly more likely to result in false joins.

Right - I thought about that, but if the choice is between two somewhat arbitrary outputs for a wrong input, I might as well pick the one that requires less processing time. :-) I'm screening now to see which parts have 'close' outputs despite not having tiny input triangles. One technic part that failed this test had some authoring problems.

My hope was that, with no points closer than TOL after running, I could write the T-splitting with no concern for numeric precision (because TOL would be significantly bigger than the real floating point limits I would face).

But it may instead be the case that where the points are close together, the input was funky and the output can be funky too.

Cheers
Ben
Re: Question about edges
#34
Quote:And to make it worse, some pattern parts have markings intentionally authored with triangles < 0.05. (My guess is that these triangles were induced to remove T junctions.)
Or it may be the result of an automatic tool (SlicerPro...).
Re: Question about edges
#17
The Numeric Precision and Format section of the File Format Restrictions for the Official Library specifies that parts and non-scalable primitives are not allowed to use more than 3 decimal places, and scalable primitives aren't allowed to use more than 4 decimal places. It's worth noting, however, that just because that level of precision is allowed, it doesn't mean that relying on it will work.
Re: Question about edges
#20
Ben Supnik Wrote:Yep - exactly what I expected with 6085: a rounder-than-real-life shading. :-)

Yes, but I kinda expected that using a 60 degree threshold. Although this part seems to be very round indeed; I didn't realize it because I've never seen this part in real life or used it in modelling etc.

When I lower the threshold to 30 degrees (6085 becomes square then), it messes up (much more often used) parts like the antenna or loudspeaker. So in the end I'd rather have these 'rare' parts rendering wrong.

Ben Supnik Wrote:I think you may find per-pixel lighting makes the lighting effect _worse_...particularly if you use per-pixel lighting for what it's really good for: shininess. (The problem with per-vertex lighting is that the fall-off from specular highlights is very steep, so the highlight tends to be contained entirely within one triangle. Once you have that 'sharp' highlight, the induced roundness will make lighting that really looks...well...round.) I hadn't noticed the error on the slope bricks because they look nice despite the roundness of the lighting.

It's probably better to first get the smoothing thing to work at an acceptable level before I start playing with my own lighting model then.

Ben Supnik Wrote:Having gone through the coding exercise, I think I can state a 'wish list' of assumptions about the library for a smoothing algorithm:

1. The bit-wise locations of all colocated vertices match exactly - that is, no floating point jitters between vertices that should touch.
2. All faces that form a smooth edge share the (bitwise) same coordinate values and the (bitwise) same transform stack. (In other words, if you want two sub-parts to mesh smoothly, they must be transformed in the same way.)
3. All parts are BFC valid. (Because of this, two adjacent faces will have edges going in opposite directions along the triangle.)
4. Any lines used to indicate a crease have a start and end point that (bitwise) matches the location of the corners of the triangle edge that they crease. (The line does not have to go in any particular direction, as it will always match one of the two triangle faces.)

Like Travis wrote, I don't think 1 and 4 are realistic demands; not only from a technical point of view, but it also wouldn't be fair to the part authors/reviewers - they already have enough to nitpick over, I think.

3 is a very realistic demand, and the current library is well on its way on this one. So if some parts smooth weirdly, the first helping hand from part authors could be to make the part in question BFC.

4 is pretty much the point I started this thread for (excluding the bitwise part). I'm not sure how many parts will smooth weirdly when assuming this while it isn't the case. Depending on the number of parts, it might be fixable quite easily by some of the advanced part authors.

I'm going to implement a very rough version of the number 4 based edge handling in my code next weekend or so; maybe this, in combination with the 60 degree approach, will result in e.g. 99% correct smoothing. I could then compile a list of 'problem' parts that would benefit the most from BFC / nr 4 corrections.

Or maybe someone else has ideas for a completely different approach?
Re: Question about edges
#28
(replying to a higher branch, because the text area is getting too narrow)

On the t-junctions and minifig head things:

Last night I tried a very quick and dirty implementation of using a meta for the smoothing direction of curved planes with details.

Although it looks somewhat better, it's still not 'perfect', mostly due to the t-junctions indeed.

edit: the top ones use default smoothing on standard library parts, the lower ones use a modified .dat.

But even with no t-junctions I'm expecting a somewhat 'flattened/dented' facial area, due to the distance of the detail vertices in relation to the 'cylinder' radius. Once all t-junctions are resolved (or at least those in the face geometry), you could correct that by scaling all vertices using the same guide cylinder. I would expect a (nearly) perfect minifig head in pretty much all situations after that.
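That 'radius push' correction amounts to moving each vertex radially (perpendicular to the cylinder axis) until it sits exactly on the guide cylinder. A minimal sketch, assuming an axis given as a unit vector; none of this is from the actual quick-and-dirty implementation:

```python
import math

def push_to_cylinder(v, center, axis, radius):
    """Move vertex v radially so it lies exactly on the guide
    cylinder defined by center, unit axis, and radius."""
    # Split (v - center) into an axial part and a radial part.
    d = tuple(vi - ci for vi, ci in zip(v, center))
    along = sum(di * ai for di, ai in zip(d, axis))
    radial = tuple(di - along * ai for di, ai in zip(d, axis))
    r = math.sqrt(sum(x * x for x in radial))
    if r == 0.0:
        return v  # vertex on the axis; push direction undefined
    scale = radius / r
    return tuple(ci + along * ai + ri * scale
                 for ci, ai, ri in zip(center, axis, radial))
```

The remaining t-junction caveat is visible here too: if a large filler triangle's edge midpoint is not a vertex, it cannot be pushed, and the neighbouring detail vertices that are pushed open up a gap.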

But it will need a new meta; my quick implementation uses this, but it could be made much more powerful:

0 !SMOOTH CEN 0 0 0 0 -1 0
and
0 !SMOOTH NOCEN

I use these metas around the essence of the minifig face (everything except the "0 // replacing s\3626bs01.dat" part). But in practice it's probably better to enclose the whole cylindrical mesh (to prevent further normal smoothing on those vertices).
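For illustration only, here is how a parser might toggle that state; I'm guessing that the six CEN arguments are a center point followed by an axis vector, since the post doesn't spell the format out:

```python
def parse_smooth_meta(line, state):
    """Update guided-smoothing state from a '0 !SMOOTH ...' meta line.
    ASSUMPTION: the six CEN arguments are center x y z then axis x y z;
    that reading of the quick-and-dirty format above is a guess."""
    parts = line.split()
    if parts[:2] != ["0", "!SMOOTH"]:
        return state          # not a smoothing meta; state unchanged
    if parts[2] == "CEN":
        nums = [float(x) for x in parts[3:9]]
        return {"center": tuple(nums[0:3]), "axis": tuple(nums[3:6])}
    if parts[2] == "NOCEN":
        return None           # guided smoothing switched off
    return state
```

Any triangle read while the state is non-None would then get its vertex normals from the center/axis instead of from averaged face normals.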

The presence of the meta could also be used to decide to do auto t-junction corrections or not (see my other post in the t-junctions re-visited thread).

Another approach I was thinking about is random tessellation of the larger 'filler' triangles in combination with the 'radius push'; this might decrease the effect of t-junctions without the costly removal of them. I haven't tested this yet, though.
Re: Question about edges
#29
Hi Roland,

Those heads look great, and I have a new-found respect for smooth heads: I went looking at a bunch of minifig/starwars heads with my smoothing code and saw a lot of artifacts like your previous pictures. T junctions, non-water-tight connections, transforms...it's a jungle out there. :-)

I was going to mention this in response to "other ideas" but re: the meta command.

There is a totally different approach we could take, which would be consistent with the 3-d industry: we could write normal vectors into the LDraw files themselves.

This would mean that any smooth surface could have normals matching its underlying geometric shape, regardless of what that shape is. A smooth center meta command solves the problem for one shape, but then we'll need cylinders, cones, etc. Some shapes may be complex composites for which a pile of meta commands to get mathematically correct normals becomes quite painful.

Per-triangle and per-quad normals would cut down on the processing time to load a smooth part, and it would cut down (completely) on variance between implementations - we'd use the processed normals "as is". :-)

Such a scheme need not be ubiquitous - I am _not_ proposing any given syntax, but as a stupid straw-man example, we could have a meta command that, when preceding a triangle, sets the normals for that triangle. This would allow authors to "spot fix" particularly tricky cases like minifig heads, while leaving the simpler cases (cones, studs, etc.) up to the automatic algorithms. We could even code one of the apps to save out smoothed shapes _with_ the normal metas to act as a seed for authors who want to then tune smoothing.
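Sticking with the post's explicitly-stupid straw-man, a reader could imagine a `0 !NORMALS` meta carrying nine numbers (three per-vertex normals) that applies to the triangle line immediately following it. This sketch is purely illustrative; no such meta exists in the LDraw spec:

```python
# Straw-man only: a hypothetical "0 !NORMALS nx ny nz nx ny nz nx ny nz"
# meta that supplies per-vertex normals for the type 3 line after it.

def read_with_normals(lines):
    """Yield (triangle_line, normals_or_None) pairs. Normals come from
    the hypothetical preceding meta; None means 'smooth automatically'."""
    pending = None
    for line in lines:
        parts = line.split()
        if parts[:2] == ["0", "!NORMALS"]:
            nums = [float(x) for x in parts[2:11]]
            pending = [tuple(nums[i:i + 3]) for i in (0, 3, 6)]
        elif parts and parts[0] == "3":  # type 3: triangle
            yield line, pending
            pending = None
        else:
            pending = None  # the meta must immediately precede its triangle
```

Triangles without a preceding meta fall through with None, which is exactly the "spot fix only the tricky cases" behaviour described above.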

Stepping back a little bit, I would like to see a system that:
1. Does not rely too heavily on heuristics to 'guess' what the author meant and
2. Provides ways for the author to specify exact behavior when needed without too much difficulty.

My concern about "heavy guessing" with 1 is that a complicated heuristic may produce wrong results such that the 'fix' introduces other wrong results. Better to have 2 - a system where authors can spend a little bit of time and get exact results when desired.

This is why I like rules like:
- Two triangles with exactly matched corners and an angle > 60 degrees _will_ be creased.
- Two triangles with exactly matched corners and a line covering those corners exactly _will_ be creased.
The rules are easy to use, straightforward to code, etc.

cheers
Ben
Re: Question about edges
#30
I forgot to mention, only the lower ones use the meta.

Normals in the LDraw format would be the best way to go, but as said before, editing all existing parts is going to take ages; just look at the timespan BFC is taking.

For new parts, the combination of >60 deg and matching type 2 lines (I think type 5 lines don't matter for smoothing) would be a very good and reasonable requirement.

Don't get me wrong, I'm thankful to the part editors that we're getting all these parts in whatever form.
Re: Question about edges
#8
When implementing this for my POV export in LDView, I went with infinitely long representations of all edge lines, and checked if triangle edges lined up with those. Assuming a match is found, you can then see if the triangle edge is within any of the segments that correspond to the infinitely long line. (I think I actually skipped this second step, but doing so is probably bad.)
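The "infinitely long line" test reduces to a colinearity check on both endpoints of the triangle edge. A small sketch of that idea (my own illustration, not LDView's actual code; the epsilon is an assumption and would need tuning for real part scales):

```python
def on_infinite_line(p, a, b, eps=1e-6):
    """True when point p lies on the infinite line through a and b,
    tested via the cross product of (b - a) and (p - a)."""
    d = tuple(bi - ai for bi, ai in zip(b, a))
    e = tuple(pi - ai for pi, ai in zip(p, a))
    cx = d[1] * e[2] - d[2] * e[1]
    cy = d[2] * e[0] - d[0] * e[2]
    cz = d[0] * e[1] - d[1] * e[0]
    return cx * cx + cy * cy + cz * cz < eps * eps

def edge_matches_line(p, q, a, b):
    """A triangle edge (p, q) lines up with the edge line (a, b) when
    both endpoints fall on the infinite line through a and b. The
    second step (is the edge inside an actual line segment?) is
    deliberately skipped here, mirroring the post."""
    return on_infinite_line(p, a, b) and on_infinite_line(q, a, b)
```

Skipping the segment check means a type 2 line can crease coplanar-colinear edges elsewhere on the part that it doesn't actually cover, which is why the post calls the shortcut "probably bad".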

Note: LDView's realtime 3D view uses conditional lines to indicate "smooth", and does its smoothing based on that. LDView's POV export does what you propose (which is probably better).