Hi Roland,

You'll still end up with false negatives from that (admittedly extremely rarely if you set appDef_decCntMul small enough); see below. Which may not be a problem. Although I suppose that by running through twice you could offset by half of appDef_decCntMul in the second loop and add anything that matches there to your list of matches.

Code:

```
appDef_decCntMul=10000  # 1e-4 precision

# before flooring
Point1=(0.44499...,0.99434...)
Point2=(0.44500...,0.99434...)

# floor
Point1F=(0.4449,0.9943)
Point2F=(0.4450,0.9943)

# compare
bitwise=(Point1F==Point2F)    # is false
distance=norm(Point1-Point2)  # 1e-5 < precision
```

The following will avoid false negatives, but give (arguably) false positives...

Code:

```
appDef_decCntMul=10000  # 1e-4 precision

# before flooring
Point1=(0.44499...,0.99434...)
Point2=(0.44500...,0.99434...)

# floor
Point1F=(0.4449,0.9943)
Point2F=(0.4450,0.9943)

# round(x) = floor(x+0.5)
Point1R=(0.4450,0.9943)
Point2R=(0.4450,0.9943)

# compare
bitwise=(Point1F==Point2F) || (Point1R==Point2R)  # is true
distance=norm(Point1-Point2)                      # 1e-5 < precision
```
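The two schemes above can be sketched in runnable form. This is a minimal illustration (the function names `snapFloor`/`snapRound`/`sameFloor`/`sameFloorOrRound` are mine, not from anyone's actual code), showing how floor-only snapping produces a false negative for two points that straddle a grid boundary, while the floor-or-round variant matches them:

```cpp
#include <cassert>
#include <cmath>

// appDef_decCntMul = 10000 gives 1e-4 snapping precision, as above.
const double appDef_decCntMul = 10000.0;

// Snap a coordinate down to its grid cell (floor) or to the nearest cell (round).
double snapFloor(double x) { return std::floor(x * appDef_decCntMul) / appDef_decCntMul; }
double snapRound(double x) { return std::floor(x * appDef_decCntMul + 0.5) / appDef_decCntMul; }

// Floor-only comparison: gives false negatives near a cell boundary.
bool sameFloor(double ax, double ay, double bx, double by) {
    return snapFloor(ax) == snapFloor(bx) && snapFloor(ay) == snapFloor(by);
}

// Floor OR round comparison: avoids those false negatives,
// at the cost of some (arguable) false positives.
bool sameFloorOrRound(double ax, double ay, double bx, double by) {
    bool f = snapFloor(ax) == snapFloor(bx) && snapFloor(ay) == snapFloor(by);
    bool r = snapRound(ax) == snapRound(bx) && snapRound(ay) == snapRound(by);
    return f || r;
}
```

For instance, points with x coordinates 0.44499 and 0.44501 (distance 2e-5, well under the 1e-4 precision) floor to different cells but round to the same one.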

Tim

Hi Ben,

Gotcha. No, my code won't guarantee transitive points as written, but you can make it guarantee that.

Ben Supnik Wrote:For example, if 3 vertices A, B, and C are near each other so that the distance from A to B is < TOL and the distance from B to C is < TOL but the distance from A to C is > TOL, are all three points the same? If not, do we say that A == B and B == C but A != C? :-)

My smoothing algorithm assumes transitivity of equality for point locations, so a fuzzy match has to be transitive too. As an experiment, I implemented a simple and stupid grid-snap, which does result in transitive behavior - in the case above, if A, B, and C all snap together, then A == B == C. If the snap separates B and C but keeps A and B together then A == B, A != C, B != C and things are still sane.

In SetPointSame(i,j) you'd look through all pairs already set to see if i or j already had a match. So if i1==i2 and i1==i3 then it will set i3==i2.
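Instead of scanning all previously set pairs on every call, the same transitive behavior can be had from a union-find (disjoint set) structure. This is a sketch of that alternative, not the thread's actual code; the names are hypothetical:

```cpp
#include <cassert>
#include <vector>

// Transitive point equality via union-find: SetPointSame(i, j) merges the
// two points' sets, so after SetPointSame(i1, i2) and SetPointSame(i1, i3),
// i2 and i3 automatically compare equal as well.
struct PointMerger {
    std::vector<int> parent;

    explicit PointMerger(int n) : parent(n) {
        for (int i = 0; i < n; ++i) parent[i] = i; // each point starts alone
    }
    int Find(int i) {                        // find set representative,
        while (parent[i] != i) {             // with path halving
            parent[i] = parent[parent[i]];
            i = parent[i];
        }
        return i;
    }
    void SetPointSame(int i, int j) { parent[Find(i)] = Find(j); }
    bool IsSame(int i, int j) { return Find(i) == Find(j); }
};
```

This keeps each merge and query near constant time, which matters once a part has many thousands of vertices.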

Quote:As an aside, OpenGL (maybe D3D too?) has some pretty specific rules about invariance, watertight meshes, and cracks; if the part library has bit-wise exact transform data and point locations then there will be no mesh cracks, but if that isn't true then I think there could be rendering artifacts...that's why I thought there might be some hope of having bitwise comparable data.* :-) The case that hosed me was a line and triangle not having the same bitwise definitions, which isn't surprising; no one needs watertight rendering between a tri and a line.

* For X-Plane we basically require bit-wise exact vertices to get manifold rendering in our model files, because the graphics cards require it...but the output is coming from 3-d modeling programs that more or less do this for free. Thus we ensure the same mathematical function is applied on the same input bits, so while we don't know what the floating point output is, we know it's the same every time.

I'd be very surprised if you can ever really guarantee bitwise compatibility with floats after any operations. Even Roland's code cannot do so*. Indeed, if you ever compile with Intel's compilers with warnings on, they will flag every bitwise float comparison as a potential error.

Tim

* Although as I note in my comment below that it can be made to do so.

Very interesting, especially since this issue is most likely the reason for the 'crumpled paper' look on some of the parts I'm getting with my current angle-only smoothing.

I'm going to try these improvements to see if they help the minifig heads etc. Too bad I won't be able to use the below function of my vector template anymore, though.

Code:

`bool operator==(const TGLVector3 &b) const { return memcmp(comp, b.comp, sizeof(comp))==0; }`

comp is the xyz float (or double) array, so memcmp takes only a few clock cycles to compare all three in one go.

Up till now one or two extra (almost) identical points were not visible, so I never really tested my code against such requirements; only speed counted.

It might also be interesting to lower the decimal count precision in these comparisons to 3 instead of 4.

Hi Roland,

You'll only double the check. I'd be very surprised if it was a limiting factor. Although I admit that my gut instinct could be wrong.

For more acceleration, I suggest the following:

For each point, determine and store the following (here floora and rounda act as floora(x)=floor(x/tol)*tol and rounda(x)=floor(x/tol+0.5)*tol):

Code:

```
xyzF=floora(xyz)  # triplet
xyzR=rounda(xyz)  # triplet
r2F=floora(x*x+y*y+z*z)
r2R=rounda(x*x+y*y+z*z)
```

for a comparison you can now do:

Code:

```
if ((r2R[1]==r2R[2]) || (r2F[1]==r2F[2])) is false, return false
else return ((xyzF[1]==xyzF[2]) || (xyzR[1]==xyzR[2]))
```
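A sketch of that precomputed-key comparison in C++ (the struct and function names are mine; `floora`/`rounda` follow the definitions above with tol = 1e-4). The snapped squared radius serves as a cheap early-out before the per-component test:

```cpp
#include <cassert>
#include <cmath>

const double tol = 1e-4; // snapping precision

double floora(double x) { return std::floor(x / tol) * tol; }
double rounda(double x) { return std::floor(x / tol + 0.5) * tol; }

struct PointKeys {
    double xyzF[3], xyzR[3]; // floored and rounded coordinate triplets
    double r2F, r2R;         // snapped squared radius, for quick rejection
};

PointKeys MakeKeys(double x, double y, double z) {
    PointKeys k;
    double xyz[3] = {x, y, z};
    for (int i = 0; i < 3; ++i) {
        k.xyzF[i] = floora(xyz[i]);
        k.xyzR[i] = rounda(xyz[i]);
    }
    double r2 = x * x + y * y + z * z;
    k.r2F = floora(r2);
    k.r2R = rounda(r2);
    return k;
}

bool SamePoint(const PointKeys &a, const PointKeys &b) {
    // Cheap rejection first, as in the pseudocode above: if neither snapped
    // squared radius matches, give up early. (Note this is a heuristic; near
    // cell boundaries it can reject pairs the full test would accept.)
    if (a.r2R != b.r2R && a.r2F != b.r2F) return false;
    bool f = true, r = true;
    for (int i = 0; i < 3; ++i) {
        f = f && (a.xyzF[i] == b.xyzF[i]);
        r = r && (a.xyzR[i] == b.xyzR[i]);
    }
    return f || r;
}
```

The point of caching the keys per vertex is that the common case (most pairs are far apart) is settled by one or two double compares.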

Tim

I did some additional tests, using this very slow but simplifying code:

Code:

```
index=-1;
for (int i=0; i<initStats.triPairCnt; i++)
{
    if (v.fuzzyCompare(triVertices[i], 0.0005))
    {
        index=i;
        break;
    }
}
```

Note this uses 3-decimal precision, and no bitwise comparisons.
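The thread doesn't show `fuzzyCompare` itself; a minimal sketch consistent with the call site above (using a stand-in struct `GV3` for TGLVector3, not the actual class) would be a per-component tolerance test:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of a fuzzyCompare matching the call
// v.fuzzyCompare(triVertices[i], 0.0005): true when every component
// differs by less than the given tolerance.
struct GV3 {
    double comp[3]; // xyz, as in TGLVector3

    bool fuzzyCompare(const GV3 &b, double tol) const {
        for (int i = 0; i < 3; ++i)
            if (std::fabs(comp[i] - b.comp[i]) >= tol)
                return false;
        return true;
    }
};
```

Worth noting: a per-pair tolerance test like this is not transitive, which is exactly the A==B, B==C but A!=C situation discussed earlier in the thread.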

The results seem to be (visually) the same as when using the bitwise solution (but also with 3 digits). The reason I went from 4 to 3 decimals is that it actually makes a difference. For example, the visible false split on the Technic bush part in my 2nd screencap goes away.

It doesn't help the minifig heads at all, though; the problems on them (imho) aren't unique-position related. They're caused by the surrounding triangles having different offsets on the cylinder curve, which in turn results in different (fragment color) interpolations by OpenGL (as a result of different arcs relative to the normal 16-face primitive).

A solution would be to author the whole face using an even grid of (tiny) triangles, so the surrounding normals for any of them would result in the correct vertex normal. But that wouldn't be very efficient authoring.

This whole thing could also be fixed by using the center axle of the minifig head to project a normal to any point on the face. But the renderer has no way of knowing this without either hints in the LDraw files or hard-coding this information for certain part families (e.g. 3626bp*).


Roland Melkert Wrote:The results seem to be (visually) the same as when using the bitwise solution (but also with 3 digits). The reason I went from 4 to 3 decimals is that it actually makes a difference. For example, the visible false split on the Technic bush part in my 2nd screencap goes away.

It doesn't help the minifig heads at all, though; the problems on them (imho) aren't unique-position related.

That doesn't surprise me. The error I mention would only occur in roughly a TOL-sized fraction of cases, which is pretty small.

Quote:This whole thing could also be fixed by using the center axle of the minifig head to project a normal to any point on the face. But the renderer has no way of knowing this without either hints in the LDraw files or hard-coding this information for certain part families (e.g. 3626bp*).

Or in my dream world, by having a decent class of primitives for minifig heads.

Tim

In general, you'll never be able to get good-looking minifig heads (at least not with Gouraud shading), because (from what I've seen) they are full of T-junctions. With Phong shading, I suppose it's theoretically possible to get good results even with the T-junctions, but good luck calculating appropriate normals purely from the geometry. The only way I can think of to calculate appropriate normals to make that work would be to have hard-coded recognition of minifig heads, and then code that automatically sets the normals based on an idealized head shape.

(Replying to a higher branch, because the text area is getting too narrow.)

On the t-junctions and minifig head things:

Last night I tried a very quick and dirty implementation of using a meta for the smoothing direction of curved planes with details.

Although it looks somewhat better, it's still not 'perfect', mostly due to the t-junctions indeed.

edit: the top ones are using default smoothing on standard library parts, the lower ones use a modified .dat.

But even with no t-junctions I'm expecting a somewhat 'flattened/dented' facial area, due to the distance of the detail vertices relative to the 'cylinder' radius. When all t-junctions are resolved (or at least those for the face geometry) you could correct that by scaling all vertices using the same guidance cylinder. I would expect a (nearly) perfect minifig head in pretty much all situations after that.

But it will need a new meta. My quick implementation uses this, but it could be made much more powerful:

0 !SMOOTH CEN 0 0 0 0 -1 0

and

0 !SMOOTH NOCEN

I use these metas around the essence of the minifig face (everything except the "0 // replacing s\3626bs01.dat" part). But in practice it's probably better to enclose the whole cylindrical mesh (to prevent further normal smoothing on those vertices).
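Assuming the six values of "0 !SMOOTH CEN ox oy oz dx dy dz" are an axis origin and (unit) direction, the guided normal at each enclosed vertex could be computed by projecting the vertex onto the axis and taking the perpendicular direction, i.e. the normal of the guiding cylinder. A hypothetical sketch (names and interpretation are mine, not the actual implementation):

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };

// Normal at vertex v for a cylinder around the axis (origin, dir):
// the direction from v's projection on the axis out to v itself.
// dir is assumed to be unit length.
V3 AxisNormal(V3 v, V3 origin, V3 dir) {
    double px = v.x - origin.x, py = v.y - origin.y, pz = v.z - origin.z;
    double t = px * dir.x + py * dir.y + pz * dir.z;  // axial component
    double nx = px - t * dir.x;                       // strip it off,
    double ny = py - t * dir.y;                       // leaving the radial part
    double nz = pz - t * dir.z;
    double len = std::sqrt(nx * nx + ny * ny + nz * nz);
    return {nx / len, ny / len, nz / len};
}
```

For the example meta "0 !SMOOTH CEN 0 0 0 0 -1 0", a vertex anywhere on the head's surface would get a normal pointing straight out from the vertical axis, regardless of which triangle it belongs to.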

The presence of the meta could also be used to decide to do auto t-junction corrections or not (see my other post in the t-junctions re-visited thread).

Another approach I was thinking about is random tessellation of the larger 'filler' triangles in combination with the 'radius push'; this might decrease the effect of t-junctions without the costly removal of them. Haven't tested this yet, though.

Hi Roland,

Those heads look great, and I have a new-found respect for smooth heads: I went looking at a bunch of minifig/Star Wars heads with my smoothing code and saw a lot of artifacts like your previous pictures. T-junctions, non-watertight connections, transforms...it's a jungle out there. :-)

I was going to mention this in response to "other ideas" but re: the meta command.

There is a totally different approach we could take, which would be consistent with the 3-d industry: we could write normal vectors into the LDraw files themselves.

This would mean that any smooth surface could have normals matching its underlying geometric shape, regardless of what that shape is. A smooth center meta command solves the problem for one shape, but then we'll need cylinders, cones, etc. Some shapes may be complex composites for which a pile of meta commands to get mathematically correct normals becomes quite painful.

Per-triangle and per-quad normals would cut down on the processing time to load a smooth part, and it would cut down (completely) on variance between implementations - we'd use the processed normals "as is". :-)

Such a scheme need not be ubiquitous - I am _not_ proposing any given syntax, but as a stupid straw-man example, we could have a meta command that, when preceding a triangle, sets the normals for that triangle. This would allow authors to "spot fix" particularly tricky cases like minifig heads, while leaving the simpler cases (cones, studs, etc.) up to the automatic algorithms. We could even code one of the apps to save out smoothed shapes _with_ the normal metas to act as a seed for authors who want to then tune smoothing.

Stepping back a little bit, I would like to see a system that:

1. Does not rely too heavily on heuristics to 'guess' what the author meant and

2. Provides ways for the author to specify exact behavior when needed without too much difficulty.

My concern about "heavy guessing" with 1 is that a complicated heuristic may produce wrong results such that the 'fix' introduces other wrong results. Better to have 2 - a system where authors can spend a little bit of time and get exact results when desired.

This is why I like rules like:

- Two triangles with exactly matched corners and an angle > 60 degrees _will_ be creased.

- Two triangles with exact corners and a line covering those corners exactly _will_ be creased.

It's easy to use the rules, straightforward to code, etc.
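The first rule is cheap to implement: compute each triangle's face normal from its corners, and crease the shared edge when the angle between the two normals exceeds 60 degrees. A minimal sketch (names are mine, not from any of the implementations discussed here):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 Cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
double Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Unnormalized face normal from the triangle's corners (winding matters).
Vec3 FaceNormal(Vec3 p0, Vec3 p1, Vec3 p2) {
    return Cross(Sub(p1, p0), Sub(p2, p0));
}

// Crease test: angle between normals > 60 deg  <=>  cos(angle) < 0.5.
bool ShouldCrease(Vec3 n1, Vec3 n2) {
    double c = Dot(n1, n2) / std::sqrt(Dot(n1, n1) * Dot(n2, n2));
    return c < 0.5; // cos(60 deg) = 0.5
}
```

Two coplanar triangles (parallel normals, cos = 1) get smoothed; two perpendicular ones (cos = 0) get creased, with the cutoff exactly at 60 degrees.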

cheers

Ben

I forgot to mention, only the lower ones use the meta.

Normals in the LDraw format would be the best way to go, but as said before, editing all existing parts is going to take ages; just look at the time span BFC is taking.

For new parts, the combination of >60 deg and matching type 2 lines (I think type 5 lines don't matter for smoothing) would be a very good and reasonable requirement.

Don't get me wrong, I'm thankful to the part editors; we're getting all these parts in whatever form.