LDraw.org Discussion Forums

Full Version: OMR + TEXMAPped unofficial file = ???
(2019-09-23, 17:07)Orion Pobursky Wrote: [ -> ]Travis, can you formalize this into a proposed change to the spec. I'd like to go this route or just change the OMR to state that an image download is required.

Done.
(2018-03-09, 5:02)Travis Cobbs Wrote: [ -> ]Going mostly with Roland's suggestion (but without any header data), I created an MPD with an embedded checker texture (attached). It looks like this when opened:

[Image: Ybt9n2P.png]

Some notes about my implementation:
  • I changed BEGIN to START to be consistent with !TEXMAP.
  • I only support START/END, not NEXT.
  • I'm only officially supporting !DATA in MPD files (since that's the whole point).
  • Only one !DATA entry is allowed per 0 FILE entry in the MPD.
  • When parsing the BASE64 text, I ignore any characters that aren't in the BASE64 character set, so a space after the 0 !: shouldn't break my parser.
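The BASE64-filtering behaviour described in the last bullet can be sketched as follows; `decode_data_lines` is a name invented for this illustration, not part of any viewer's actual code:

```python
import base64
import re

def decode_data_lines(lines):
    """Concatenate the payload of '0 !:' data lines and decode it,
    ignoring any character outside the BASE64 alphabet (so stray
    whitespace after '0 !:' cannot break the parser)."""
    payload = "".join(line.split("!:", 1)[1] for line in lines if "!:" in line)
    payload = re.sub(r"[^A-Za-z0-9+/=]", "", payload)  # drop non-BASE64 chars
    return base64.b64decode(payload)
```

For example, `decode_data_lines(["0 !: aGVs", "0 !: bG8="])` yields the bytes of "hello", regardless of any extra spaces in the data lines.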

I am trying to make this example work with the web-renderer and I have an issue with the cylindrical definition - particularly in how the third point is placed.

In the example you share, the points are:
p1 = (0,10,0)
p2 = (0,75,0)
p3 = (0,0,-40)

According to the definition on: https://www.ldraw.org/documentation/ldraw-org-file-format-standards/language-extension-for-texture-mapping.html

The points should be interpreted as defining the cylinder with bottom center being p1, top center being p2. p3 is a position on the cylinder where the bottom center of the texture resides. 

However, since p3 has 0 as the Y-coordinate, and the cylinder is defined to be between y=10 and y=75, this point is not 'on the cylinder'. My question is how to interpret this position and compute the texture correctly: should I use p3 in the calculations as if everything is fine, or should I project p3 onto the plane that has normal p1p2 and contains p1?

It turns out that projecting p3 onto the base of the cylinder produces a result consistent with other viewers, but this should really be mentioned in the standard.
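The projection just described can be sketched like this (a minimal illustration with plain tuples; `project_onto_plane` is a name invented for this post):

```python
def project_onto_plane(p, plane_point, normal):
    """Orthogonal projection of p onto the plane through plane_point
    with the given (not necessarily unit-length) normal."""
    d = sum((pi - qi) * ni for pi, qi, ni in zip(p, plane_point, normal))
    n2 = sum(ni * ni for ni in normal)
    return tuple(pi - d * ni / n2 for pi, ni in zip(p, normal))

# With the example's cylinder, the axis p1->p2 is (0, 65, 0), so p3 is
# projected onto the base plane through p1 = (0, 10, 0) with that normal:
# project_onto_plane((0, 0, -40), (0, 10, 0), (0, 65, 0)) -> (0.0, 10.0, -40.0)
```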


Edit: It is fixed now. It turns out that flipY=true is default when importing textures using Three.js. Now I just have to fix the UV wraparound issue mentioned elsewhere which you have fought with back in 2011.
(2019-12-14, 14:41)Lasse Deleuran Wrote: [ -> ]However, since p3 has 0 as the Y-coordinate, and the cylinder is defined to be between y=10 and y=75, this point is not 'on the cylinder'. My question is how to interpret this position and compute the texture correctly: should I use p3 in the calculations as if everything is fine, or should I project p3 onto the plane that has normal p1p2 and contains p1?

Neither interpretation gives me the result that you are showing, so I am wondering how you arrived at it.

The spec is a bit vague, but during LDCad's development I discovered it doesn't really matter where p3 is Y-wise.

Just use p3 to calculate a direction vector perpendicular to the p1 p2 vector.

And then use that vector to project the texture onto the cylinder's surface by taking the angle between it and a similar direction vector pointing at each vertex's position.
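Roland's recipe can be sketched as follows (a rough illustration with invented names; the unsigned angle from `acos` would still need a sign test, e.g. via a cross product with the axis, to pick the winding direction):

```python
import math

def cyl_u_angle(p1, p2, p3, v):
    """Unsigned angle around the axis p1->p2 between the direction derived
    from p3 and the direction toward vertex v. Both directions are taken
    perpendicular to the axis, so where p3 sits along the axis (its
    Y-position in the example) is irrelevant."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def unit(a):
        l = math.sqrt(dot(a, a))
        return tuple(x / l for x in a)
    axis = unit(sub(p2, p1))
    def radial(p):  # component of p - p1 perpendicular to the axis
        d = sub(p, p1)
        k = dot(d, axis)
        return unit(tuple(di - k * ai for di, ai in zip(d, axis)))
    return math.acos(max(-1.0, min(1.0, dot(radial(p3), radial(v)))))
```

With the example's values, a vertex a quarter-turn around from p3 gives an angle of pi/2, independent of its height on the cylinder.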
(2019-12-14, 19:20)Roland Melkert Wrote: [ -> ]The spec is a bit vague, but during LDCad's development I discovered it doesn't really matter where p3 is Y-wise.

Just use p3 to calculate a direction vector perpendicular to the p1 p2 vector.

And then use that vector to project the texture onto the cylinder's surface by taking the angle between it and a similar direction vector pointing at each vertex's position.
This was also the conclusion I ended up at. Thankfully the test cases from you and others have helped me a lot in arriving at this.

The issue with the specification regarding p3 is not that it is vague; it says that p3 corresponds to the bottom center of the texture, which is incorrect. It would be better if the specification mirrored what you just wrote.
(2018-03-09, 5:02)Travis Cobbs Wrote: [ -> ]Going mostly with Roland's suggestion (but without any header data), I created an MPD with an embedded checker texture (attached).
I have now implemented the projections in the web renderer project (buildinginstructions.js - branch 'texmap' on github)

In order to create this screenshot:
[Image: Zozugf9.png]
I had to deviate from the specification regarding the V-projection for spherical. Instead of having the plane P2 contain p1, p2, and a point along the normal of P1, P2 is computed per point, using p and the projection of p onto P1.

Should the spec be updated, or are we all implementing this incorrectly?
(2019-12-16, 22:02)Lasse Deleuran Wrote: [ -> ]I had to deviate from the specification regarding the V-projection for spherical. Instead of having the plane P2 contain p1, p2, and a point along the normal of P1, P2 is computed per point, using p and the projection of p onto P1.

Should the spec be updated, or are we all implementing this incorrectly?

I'm not sure I fully understand. I think perhaps the existing wording is ambiguous, but not necessarily wrong. P1 has an infinite number of normals. Unless I'm misremembering, the intent is to take one of those normals that originates along the line between p1 and p2, move along that normal some arbitrary distance, and use that point (along with p1 and p2) as the third point for P2. So, for example, add the normal vector for P1 to p1 to obtain the third point to construct P2.
(2019-12-16, 23:50)Travis Cobbs Wrote: [ -> ]I'm not sure I fully understand. I think perhaps the existing wording is ambiguous, but not necessarily wrong. P1 has an infinite number of normals. Unless I'm misremembering, the intent is to take one of those normals that originates along the line between p1 and p2, move along that normal some arbitrary distance, and use that point (along with p1 and p2) as the third point for P2. So, for example, add the normal vector for P1 to p1 to obtain the third point to construct P2.
That is also how I understand it. However, it doesn't matter which normal of P1 we take; P2 will still be well-defined regardless of that choice. This is the current content of the spec:

Quote:
Code:
x1 y1 z1 x2 y2 z2 x3 y3 z3 a b
The first point represents the center of the sphere. The second point represents a point on the sphere where the center of the texture map will touch. The third point is used to form a plane (P1) that is perpendicular to the texture and bisects it horizontally. An additional plane (P2) can be computed by using points 1 and 2 and generating a 3rd point along the normal of P1. P2 will be perpendicular to both P1 and the texture and will bisect the texture vertically. The two angles indicate the extents of the sphere that get mapped to. These are –a/2 to a/2 and –b/2 to b/2 as measured relative to the vector from point 1 to point 2 and within the planes P1 and P2 respectively.

Now to map world coordinates to texture coordinates, U is given by the angle between the vector formed by points 1 and 2 and the vector formed by point 1 and a world point that has been projected to the plane P1. The angle is divided by "a" to normalize it to between 0 and 1. V is given by the angle between the vector formed by points 1 and 2 and the vector formed by point 1 and a world point that has been projected to the plane P2. The angle is divided by "b" to normalize it.

As I understand it we have the three points p1, p2 and p3 and the two planes P1 and P2:

- p1 is the center of the sphere.

- p2 is the point on the surface on the sphere from where the center of the texture will be mapped.

- p3 is used to define P1 by having p1, p2 and p3 lie on P1. We thus get U for a point p by taking the projection point q1 of p onto P1 and computing the angle between p1p2 and p1q1 before normalising by dividing by the factor a. Furthermore, I can compute a normal of P1 as the cross product between p1p2 and p1p3. Let's call such a normal N1. P1 can thus be defined by N1 and a point on it (any of p1, p2 and p3).

- P2 is constructed similarly by having p1 and p2 on it, but the third point is taken along a (non-zero) normal of P1. Since "P2 will be perpendicular to both P1 and the texture and will bisect the texture vertically", P2 is well defined: You can compute a normal N2 as the cross product between p1p2 and N1.

- Given a point p, we let q2 be its projection onto P2. The value V is then computed as the angle between p1p2 and p1q2 divided by the scaling factor b.
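The spec-as-written reading laid out above can be condensed into a sketch (Python with plain tuples; all names are invented for this post, and P2 here is the single fixed plane from the spec, not the per-point variant):

```python
import math

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def _unit(a):
    l = math.sqrt(_dot(a, a))
    return tuple(x / l for x in a)

def _angle_in_plane(p1, p2, n, p):
    """Project p onto the plane through p1 with unit normal n, then return
    the angle between p1->p2 and p1->projection."""
    d = _sub(p, p1)
    k = _dot(d, n)
    q = tuple(di - k * ni for di, ni in zip(d, n))
    return math.acos(max(-1.0, min(1.0, _dot(_unit(_sub(p2, p1)), _unit(q)))))

def sphere_uv_literal(p1, p2, p3, a, b, p):
    n1 = _unit(_cross(_sub(p2, p1), _sub(p3, p1)))  # N1, normal of P1
    n2 = _unit(_cross(_sub(p2, p1), n1))            # N2, normal of the fixed P2
    return (_angle_in_plane(p1, p2, n1, p) / a,
            _angle_in_plane(p1, p2, n2, p) / b)
```

For a point in P1 at 45 degrees from p1p2 (with a = b = pi), this returns U = 0.25 and V = 0, as expected for the horizontal bisecting plane.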


If we follow the logic above, the vertical lines of the texture on the sphere will move downward as we move from the middle toward the red dot. In order to get the screenshots posted here, we will have to place P2 so that p1, p and q1 lie on it. However, according to the specification, P2 should be placed independently of p.
I'm sorry, but I'm still not picturing what you are trying to say, and I still don't see the problem. I'm not saying that you are wrong, I'm just saying that I don't understand. I have attached a file with some extra geometry. The red triangle uses p1, p2, and p3 from the TEXMAP definition. The green square extends that out into an easily seen square around the whole sphere representing P1 from the spec. The blue square represents P2 from the spec as I read the spec. Could you edit the model to move any of the extra geometry to represent what you think the spec says?
(2019-12-17, 4:38)Travis Cobbs Wrote: [ -> ]I'm sorry, but I'm still not picturing what you are trying to say, and I still don't see the problem. I'm not saying that you are wrong, I'm just saying that I don't understand. I have attached a file with some extra geometry. The red triangle uses p1, p2, and p3 from the TEXMAP definition. The green square extends that out into an easily seen square around the whole sphere representing P1 from the spec. The blue square represents P2 from the spec as I read the spec. Could you edit the model to move any of the extra geometry to represent what you think the spec says?

Good idea. Consider the two points A and B on the surface of the sphere:

[Image: h8TIz9e.png]
The 'V' components of the UV mapping for these two points are computed by taking their projections onto P2. Let qA be the projection of A onto P2 and qB be the projection of B onto P2:

[Image: A1WdyZT.png]

Let VA be the V component of the UV mapping for A, and VB be the V component of the UV mapping for B where we ignore the scaling factor b:

VA = angle between p1p2 and p1qA

VB = angle between p1p2 and p1qB

As you can see in the second figure, these angles are not equal. The projection of the texture should thus not portray the two points at the same height. In fact, following the specification to the letter will result in this projection:

[Image: H940Uu5.png]
OK, I finally think I understand. I think what we really want to do is rotate the arbitrary p1pn vector around the sphere's axis until it lies on P2, and then measure the resulting angle between p1p2 and p1pn'. Does that sound right? And if so, do you have a suggestion on the best way to describe that in the spec?
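For what it's worth, the two readings can be compared numerically. Placing p1 at the origin, p1->p2 along +X, P1 as the XY-plane (axis normal N1 = +Z) and the fixed P2 as the XZ-plane (normal +Y), the rotate-then-measure rule reduces to the latitude of the point, while the literal projection rule gives a larger angle for points off the P2 plane (function names invented for this post):

```python
import math

def v_literal(n2, p):
    """V angle per the spec as written: angle between +X (p1->p2, p1 at the
    origin) and the projection of p onto the plane P2 with unit normal n2."""
    k = sum(a * b for a, b in zip(p, n2))
    q = tuple(pi - k * ni for pi, ni in zip(p, n2))
    return math.acos(max(-1.0, min(1.0, q[0] / math.sqrt(sum(x * x for x in q)))))

def v_rotated(n1, p):
    """V angle per the proposed fix: rotate p1->p about the sphere's axis
    (unit normal n1 of P1) until it lies in P2, then measure -- which is
    simply the latitude of p relative to P1."""
    h = sum(a * b for a, b in zip(p, n1))                   # height along the axis
    r = math.sqrt(max(0.0, sum(x * x for x in p) - h * h))  # radial distance
    return math.atan2(abs(h), r)

# Two points at the same latitude (30 degrees) but different longitudes:
phi, theta = math.pi / 6, math.pi / 3
A = (math.cos(phi), 0.0, math.sin(phi))
B = (math.cos(phi) * math.cos(theta), math.cos(phi) * math.sin(theta), math.sin(phi))
# v_rotated gives phi for both points; v_literal gives phi for A but a
# larger angle for B, which is the height mismatch shown in the screenshots.
```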