DATHeader 3.0.18.3 - bugfix version
#1
DATHeader 3.0.18.3 is out now.

BUGFIXES:
1) Typo: 'supbart' instead of 'subpart'.
2) Star Wars filenames ending in ..ps1.dat were wrongly detected as subparts.
3) In special cases, collinear vertices were not detected.
4) Error if the first character in the title is a number.
5) Wrong comments in official files were not correctly detected.

I hope these errors are now gone.
If you find anything else not working as expected, please report it as an answer here or send a PM.

Please download as usual from http://ldraw.heidemann.org/index.php?page=datheader

Have Fun

mikeheide
Re: DATHeader 3.0.18.3 - bugfix version
#2
DATHeader has a function to detect flat files that are scaled in their flat direction.
This is not an error with any visual effect, but it might cause problems for some other calculations. That is why I introduced this function in DATHeader, using a tolerance value to minimize false notifications.
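
For illustration, the idea boils down to something like the following sketch (hypothetical code, not DATHeader's actual implementation; the 0.0005 tolerance is the value DATHeader currently uses, as discussed below): a subfile that is flat on, say, Y is scaled in its flat direction when the Y row of the transformation matrix deviates from unit length by more than the tolerance.

Code:
// Hypothetical sketch, not DATHeader's actual code: for a subfile that is
// flat on Y, scaling along Y has no visual effect, so we check whether the
// Y row of the reference matrix still has (roughly) unit length.
final class FlatScaleCheck {
    static final double TOLERANCE = 0.0005; // DATHeader's current tolerance

    static boolean isScaledOnFlatY(double m10, double m11, double m12) {
        double rowLength = Math.sqrt(m10 * m10 + m11 * m11 + m12 * m12);
        return Math.abs(rowLength - 1.0) > TOLERANCE;
    }
}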

LPE now finds errors in files that DATHeader accepts as correct.

I think both applications should use the same approach for finding these errors, to avoid confusing the users.

Please see: http://www.ldraw.org/cgi-bin/ptdetail.cg...ts/272.dat

Any comments are very welcome.

cu
Mike
Re: DATHeader 3.0.18.3 - bugfix version
#3
As I answered previously, I think checking for scaled flat primitives is a waste of time: I can see no benefit in checking this... except for the code size, e.g. 1 is two characters shorter than 100, but that's pretty useless.
Of course DH autocorrects this error, so the penalty for the parts author is limited too (except for the increased DH execution time).
Scaled Flat Subfiles
#4
LDPartEditor uses arbitrary precision with an exact epsilon value of 0.000001.
The detection algorithm works for files that are flat on the X, Y, or Z axis and is similar to Mike's implementation.
It's nearly the same solution.

Code:
// M## = entries of the local matrix (BigDecimal, arbitrary precision);
// Sqrt computes a square root with 50+ decimals, Abs the absolute value.
final BigDecimal lengthX = Abs(Sqrt(M00.multiply(M00).add(M01.multiply(M01)).add(M02.multiply(M02))).subtract(BigDecimal.ONE));
final BigDecimal lengthY = Abs(Sqrt(M10.multiply(M10).add(M11.multiply(M11)).add(M12.multiply(M12))).subtract(BigDecimal.ONE));
final BigDecimal lengthZ = Abs(Sqrt(M20.multiply(M20).add(M21.multiply(M21)).add(M22.multiply(M22))).subtract(BigDecimal.ONE));

final BigDecimal epsilon = new BigDecimal("0.000001");

if (flatOnX && epsilon.compareTo(lengthX) < 0) {
    // The X row deviates from unit length: warn about scaling on X
    result.add(new ParsingResult(I18n.VM_FlatScaledX, "[W02] " + I18n.DATPARSER_Warning, ResultType.WARN));
}
if (flatOnY && epsilon.compareTo(lengthY) < 0) {
    // The Y row deviates from unit length: warn about scaling on Y
    result.add(new ParsingResult(I18n.VM_FlatScaledY, "[W03] " + I18n.DATPARSER_Warning, ResultType.WARN));
}
if (flatOnZ && epsilon.compareTo(lengthZ) < 0) {
    // The Z row deviates from unit length: warn about scaling on Z
    result.add(new ParsingResult(I18n.VM_FlatScaledZ, "[W04] " + I18n.DATPARSER_Warning, ResultType.WARN));
}
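
The Abs and Sqrt helpers are not shown above; as an assumption (LPE's actual implementation may differ), on Java 9+ they could be thin wrappers around BigDecimal's own methods:

Code:
import java.math.BigDecimal;
import java.math.MathContext;

// Hypothetical helpers matching the calls above; LPE may implement them
// differently. BigDecimal.sqrt(MathContext) exists since Java 9.
final class BigMath {
    private static final MathContext MC = new MathContext(50); // "50+ decimals"

    static BigDecimal Sqrt(BigDecimal v) {
        return v.sqrt(MC);
    }

    static BigDecimal Abs(BigDecimal v) {
        return v.abs();
    }
}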

Philippe Hurbain Wrote:As I answered previously, I think checking for scaled flat primitives is a waste of time: I can see no benefit in checking this... except for the code size, e.g. 1 is two characters shorter than 100, but that's pretty useless.
Of course DH autocorrects this error, so the penalty for the parts author is limited too (except for the increased DH execution time).

I share your opinion.
At first I thought DATHeader had a bug, but DATHeader simply prefers a greater epsilon value...

I strongly recommend not setting the epsilon value to zero. There might be scenarios which are correct but not numerically stable. It is better to be less strict here and prefer an epsilon greater than zero, so that deviations smaller than epsilon are still accepted.
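
A small self-contained example of the problem (my own illustration, not LPE code): even a mathematically exact rotation picks up a tiny deviation once its entries are stored with finitely many digits, so an epsilon of exactly zero would warn on almost every rotated subfile.

Code:
import java.math.BigDecimal;
import java.math.MathContext;

public class EpsilonDemo {
    public static void main(String[] args) {
        MathContext mc = new MathContext(50);
        // One row of a 45° rotation: (sqrt(0.5), sqrt(0.5), 0).
        // Exactly unit length in real arithmetic, but sqrt(0.5) has to be
        // truncated to finitely many digits here.
        BigDecimal c = BigDecimal.valueOf(0.5).sqrt(mc);
        BigDecimal lengthSq = c.multiply(c).add(c.multiply(c));
        BigDecimal deviation = lengthSq.sqrt(mc).subtract(BigDecimal.ONE).abs();
        // Tiny (around 1e-50) but greater than zero: epsilon == 0 would warn.
        System.out.println(deviation);
    }
}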
Re: Scaled Flat Subfiles
#5
Hi Nils, we should use the same value to get the same results.
Currently DATHeader works with the value 0.0005, whereas you work with 0.000001.

Which value should we use for the future?
Re: Scaled Flat Subfiles
#6
Hey Mike. I would suggest going with 0.000001 as the threshold, if you like. As a result, your detection algorithm becomes more accurate. Otherwise, I can easily adjust LPE to your value. It's not a big deal.
Re: Scaled Flat Subfiles
#7
I would vote for 0.000001, as we usually use a maximum of 6 decimals in our files. So why not adopt it here? The result gets even more exact, and from my point of view I prefer exact solutions.

Just my two cents...

/Max
Re: Scaled Flat Subfiles
#8
Quote:I would vote for 0.000001 as we usually use a maximum number of 6 decimals
The problem is the term "maximum"! What happens when the transformation matrix is reduced to the usual 3 or 4 decimals? I would suggest, on the contrary, relaxing the threshold to 0.001 to try to avoid problems. Or removing this requirement completely, since it brings absolutely nothing!!!
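
To put numbers on that (a worked example of mine, not taken from either tool): a 45° rotation written with 4 decimals has a row like (0.7071, 0.7071, 0). Its length is sqrt(2 * 0.7071^2) ≈ 0.9999904, a deviation of roughly 9.6e-6 from 1. So a perfectly legitimate matrix fails the 0.000001 threshold, yet passes 0.0005 or 0.001.

Code:
import java.math.BigDecimal;
import java.math.MathContext;

public class RoundedMatrixDemo {
    public static void main(String[] args) {
        // One row of a 45° rotation, written with the usual 4 decimals
        BigDecimal m = new BigDecimal("0.7071");
        BigDecimal length = m.multiply(m).add(m.multiply(m)).sqrt(new MathContext(50));
        BigDecimal deviation = length.subtract(BigDecimal.ONE).abs();
        // Prints about 9.59e-6: above 0.000001, below 0.0005 and 0.001
        System.out.println(deviation);
    }
}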
Re: Scaled Flat Subfiles
#9
Hi Nils

I have great respect for Philo's words (http://forums.ldraw.org/showthread.php?t...5#pid17345), and so far nobody has claimed that the values used in DATHeader for this purpose are wrong. So my suggestion is to use the value from DATHeader.

I also wanted to adjust this value depending on the vertices given in the file, according to this documentation (http://www.physik.uni-jena.de/pafmedia/s...ht_PDF.pdf). It is only available in German, but I think you can read it :)
My attempt to implement it just failed. So if you have a solution that works, I would be glad to code it for DATHeader as well.
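
What I had in mind was something along these lines (only a sketch of the general idea; it is an assumed reading of the paper, not working code from it): make the tolerance relative to the magnitude of the coordinates in the file, so large parts are not flagged by noise that is negligible at their scale.

Code:
// Sketch of the idea only (an assumed reading, not the paper's method and
// not DATHeader code): scale the base tolerance with the largest coordinate
// magnitude, so the allowed deviation is relative rather than absolute.
final class RelativeEpsilon {
    static double toleranceFor(double baseEpsilon, double[] vertexCoords) {
        double maxAbs = 1.0; // never drop below the absolute base tolerance
        for (double c : vertexCoords) {
            maxAbs = Math.max(maxAbs, Math.abs(c));
        }
        return baseEpsilon * maxAbs;
    }
}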

cu
Mike
Re: Scaled Flat Subfiles
#10
I adjusted the value to 0.0005, as used in DATHeader, to avoid confusion in the future.
There is no more work to do.
Re: Scaled Flat Subfiles
#11
Thank you very much :)
Re: Scaled Flat Subfiles
#12
You are welcome.
You've got mail :)