Sergio Reano Wrote:The only way to solve this (and I personally tested this with my SR 3DBuilder application) is to perform a vertex position check at part loading to see whether another vertex with the same position (plus a tolerance) has already been loaded; if so, the new vertex takes the coordinates of the already existing one.
I think most of the newer tools are doing this in one way or another, but like I wrote in an earlier message, even after indexing there will probably still be some unnecessary duplicate points, which will mess with the T-junction detection and fix algorithm proposed by Tim.
This is mainly caused by the varying decimal precision in the .dat files in combination with different matrix stacks.
For example, when detecting duplicates at 3 digit precision, a point written as 2.12 in a .dat might end up as 2.121 after matrix transformations. It will then NOT be joined with another instance of that same 2.12 (which arrived at 2.120 via a different matrix stack).
Simply lowering the comparison precision (using fewer decimals) isn't a real option though, because it would ruin some of the more complex parts.
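To make the failure mode concrete, here is a rough C++ sketch (not taken from any of the mentioned tools, all names made up) of the usual "round to N decimals and use the result as a map key" style of duplicate detection, showing why 2.120 and 2.121 stay separate at 3 decimals:

Code:
#include <cmath>
#include <cstdio>
#include <map>
#include <tuple>

// Hypothetical rounded-key duplicate detection: scale each coordinate,
// round it to an integer, and use the triple as a lookup key.
using VertexKey = std::tuple<long, long, long>;

VertexKey makeKey(double x, double y, double z, int decimals)
{
  const double scale = std::pow(10.0, decimals);
  return { std::lround(x * scale), std::lround(y * scale), std::lround(z * scale) };
}

int main()
{
  std::map<VertexKey, int> index;     // key -> index of the first vertex seen there

  // The same logical point, arrived at via two different matrix stacks.
  double a[3] = { 2.120, 0.0, 0.0 };
  double b[3] = { 2.121, 0.0, 0.0 };  // tiny error introduced by the transform

  index.emplace(makeKey(a[0], a[1], a[2], 3), 0);
  bool merged = index.count(makeKey(b[0], b[1], b[2], 3)) > 0;

  // At 3 decimals the keys differ (2120 vs 2121), so the points are not joined;
  // dropping to 2 decimals would join them, but also far too much else.
  std::printf("merged at 3 decimals: %s\n", merged ? "yes" : "no");
}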
I'm thinking about changing my detection code to use distances to existing unique points instead of the decimal precision comparisons. The first 'unique' point will then be stored at full 0.001 (or even higher) precision, but any later point at a distance of less than e.g. 0.1 will be snapped to it. This way you keep the resolution, but throw a wider 'net' over closely grouped points that should be joined.
I'm hoping this reduces the false positives so I can take another go at implementing Tim's T-junction removal method. I'm gambling on the assumption the precision is needed for placement and not for (very) closely grouped triangles making up some very high res detail.
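A minimal sketch of that distance-based idea (again just illustrative, the 0.1 LDU threshold and all names are my own assumptions, and a real implementation would need a grid/bucket lookup instead of the linear scan to stay fast on big parts):

Code:
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

// Distance-based snapping: the first point seen in a neighbourhood keeps its
// full-precision coordinates; anything that lands within 'snapDist' of an
// existing unique point is snapped onto (reuses) that point.
struct VertexPool
{
  std::vector<Vec3> unique;
  double snapDist = 0.1;

  // Returns the index of the unique vertex this position maps to.
  std::size_t add(const Vec3& p)
  {
    for (std::size_t i = 0; i < unique.size(); ++i)
    {
      const Vec3& q = unique[i];
      const double dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
      if (dx * dx + dy * dy + dz * dz <= snapDist * snapDist)
        return i;                   // close enough: reuse the earlier point
    }
    unique.push_back(p);            // genuinely new point, kept at full precision
    return unique.size() - 1;
  }
};

One caveat with this kind of snapping is that it is order dependent (whichever point is loaded first wins the neighbourhood), which is exactly the "keep the resolution" trade-off described above.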