UVs versus normals

Hi,

I'm a bit confused about setting positions, indices and UVs on a mesh.

My problem is that I want to reuse vertices to get good normals on a circular or closed mesh (a cylinder or torus, for example) AND stretch a texture uniformly over this mesh.

Say I have a path A of four Vector3 points: A0, A1, A2, A3,

and another path B of four Vector3 points: B0, B1, B2, B3.

For convenience, imagine they all have z = 0, A has y = 1, B has y = 0, and only x varies along A and B.

If you draw two horizontal parallel lines A and B and then mark A0...A3 on one and B0...B3 on the other, you'll see what I mean.

My final mesh is the surface between A and B.

But I also want only 3 segments per path: [A0, A1], [A1, A2], [A2, A0], so A0 and A3 are the same Vector3.

The same with B.

When setting my positions, I push the coordinates of the four points (A0 to A3) into the positions array. So I actually declare 4 vertices.

Now, UVs :

V has value 0 for every A vertex and value 1 for every B vertex.

For the Us, I compute each point's distance from A0 divided by the total path length.

(Caution: this is not JavaScript notation, please read it as natural language.)

example :

uA0 = 0,

uA1 = length[A0, A1] / length[path],

uA2 = (length[A0, A1]+length[A1, A2])/length[path],

etc

then uA3 = 1

The same for B.
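As an illustration, this cumulative-distance computation could be sketched in plain JavaScript like this (this is not the actual playground code; `dist` is a hypothetical Euclidean-distance helper):

```javascript
// Hypothetical helper: Euclidean distance between two {x, y, z} points.
function dist(p, q) {
  var dx = p.x - q.x, dy = p.y - q.y, dz = (p.z || 0) - (q.z || 0);
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// u for each point = distance from the first point / total path length.
function computeUs(path) {
  var cumulative = [0];
  for (var i = 1; i < path.length; i++) {
    cumulative.push(cumulative[i - 1] + dist(path[i - 1], path[i]));
  }
  var total = cumulative[cumulative.length - 1];
  return cumulative.map(function (d) { return d / total; });
}
```

For four evenly spaced points, this gives u = 0, 1/3, 2/3, 1, matching the example above.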

This means the texture height maps 1:1 onto the mesh height, and the texture width is stretched along the mesh width, each u being the ratio of the vertex's distance along the path over the total path length.

As I read in some OpenGL docs, UVs are per vertex. So here I have 4 vertices per path, hence 4 × 3 = 12 position values and 4 × 2 = 8 UV values per path.

Now, indices.

To create the surface between A and B, I create two triangles (faces) per pair of facing segments.

Let's consider segment [A0, A1] facing [B0, B1].

My first two triangles will be: [A0, B0, A1] and [B1, A1, B0] (different order on the second triangle to keep the faces oriented in the same direction, just trust me).

OK? Still following?

So the next two are: [A1, B1, A2] and [B2, A2, B1].

As you can guess, the last two would be: [A2, B2, A3] and [B3, A3, B2].

But remember, I want my mesh to be circular, as if each of your lines were actually a circle (or a triangle if you prefer, but that has nothing to do with the face triangles; I'm just talking about the indices).

If I want the normals to be computed the right way, so the light reflects realistically on my mesh with no artifacts, I need to re-use some vertices and join the last face to the first one.

So my last pair of face triangles must be: [A2, B2, A0] and [B0, A0, B2] => I re-used the A0 and B0 vertices and don't care about A3 and B3 (which have the same values as A0 and B0).
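The closed index construction above, generalized to n points per path, could be sketched like this (a hedged sketch, assuming the vertices are ordered A0..A2 then B0..B2 in the positions array, with the duplicated A3/B3 dropped; `closedIndices` is a hypothetical helper, not the actual code):

```javascript
// Build a closed triangle strip between two paths of n points each,
// wrapping the last segment back to the first vertices (no duplicates).
function closedIndices(n) {
  var indices = [];
  for (var i = 0; i < n; i++) {
    var a0 = i, a1 = (i + 1) % n;          // wrap A's last segment to A0
    var b0 = n + i, b1 = n + (i + 1) % n;  // wrap B's last segment to B0
    indices.push(a0, b0, a1);              // first triangle of the quad
    indices.push(b1, a1, b0);              // second, wound the same way
  }
  return indices;
}
```

With n = 3, the last quad's triangles reference vertices 0 (A0) and 3 (B0) again, exactly the re-use described above.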

When I test with a lit standard material, I can check that the normals are correct: nice reflection, no artifacts.

When I texture my mesh material, I notice the texture is stretched only onto the first two segments' faces; on the last one, the texture is re-mapped from zero.

It's as if the UVs apply only per vertex referenced in the indices array. Am I right?

If yes, how do I combine vertex re-use (for nice normals) and complete texturing?

Share on other sites

Before someone asks for a repro in the playground : http://www.babylonjs-playground.com/#1AEYBA

As you can see :

- this is a quasi-cylindrical mesh,

- the vertices at the start and at the end of the rotation are re-used, so the normals are good (put the light on the edge),

- the texture is stretched over the circular part of the mesh but is re-mapped entirely on the last planar segment.

=> I want it to be stretched around the full cylinder: this is the case if I don't re-use vertices in the indices array, but then my normals are ugly.

I'm afraid the algorithm isn't simple to hack, but my question is more about the right way to do it.

Share on other sites

ouurggg

It seems nobody understood what I meant.

So I will rephrase it with a progressive playground example (as you all love them )

From line 23 to 45,

I simply populate two arrays, pathA and pathB, with points following a circle function.

I also compute each point's distance from the first point (distance = 0) and each path's total length, for later use.

The ratio pointDistance / totalPathDistance will be each point's u for the UVs.

The pathA points will all have their v set to 0.0, and the pathB points will all have their v set to 1.0.

You can see the corresponding createLines(). Nothing big so far.

From these two pathX arrays, I then populate a positions array, an indices array and a uvs array.

positions = [A0, B0, A1, B1, A2, B2, ...]

indices = [ triangle(A0-B0-A1), triangle(B1-B0-A1), ...]

uvs = [ (uA0, 0.0), (uB0, 1.0), (uA1, 0.0), (uB1, 1.0), ...]

With these three arrays and an extra normals array populated with ComputeNormals(), I can now create a mesh (line 95 to 103): this is the surface constructed by successive triangles between the two pathX.
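For illustration, the three arrays described above could be built like this (a hedged sketch in plain JavaScript, not the actual playground code; `buildStrip` is a hypothetical helper, and `usA`/`usB` are the precomputed per-point u values):

```javascript
// Build positions/indices/uvs for the strip between two paths of equal
// length (n points each). Vertices are interleaved: [A0, B0, A1, B1, ...].
function buildStrip(pathA, pathB, usA, usB) {
  var positions = [], indices = [], uvs = [];
  var n = pathA.length;
  for (var i = 0; i < n; i++) {
    positions.push(pathA[i].x, pathA[i].y, pathA[i].z);  // vertex 2*i
    positions.push(pathB[i].x, pathB[i].y, pathB[i].z);  // vertex 2*i + 1
    uvs.push(usA[i], 0.0);   // v = 0 along path A
    uvs.push(usB[i], 1.0);   // v = 1 along path B
  }
  for (var j = 0; j < n - 1; j++) {
    var a0 = 2 * j, b0 = 2 * j + 1, a1 = 2 * j + 2, b1 = 2 * j + 3;
    indices.push(a0, b0, a1);   // first triangle of the quad
    indices.push(b1, a1, b0);   // second, wound the same way
  }
  return { positions: positions, indices: indices, uvs: uvs };
}
```

For two paths of 4 points each, this produces 8 vertices, so 24 position values, 16 uv values and 18 indices (3 quads × 2 triangles).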

http://www.babylonjs-playground.com/#S9WBW#1

Right.

Let's give it some colored material : http://www.babylonjs-playground.com/#S9WBW#2

The circle is not closed because, when populating the pathX arrays, I used lower-than (< 2*PI) instead of lower-or-equal (<= 2*PI). This is done on purpose.

You can notice that, if I texture the material, the texture is stretched along each path according to each point's distance: http://www.babylonjs-playground.com/#S9WBW#4

So now, let's close the circle.

line 46 :

I add an extra last Vector3 to each pathX array: the path's first Vector3. So the last point is now the same as the first point on each path.

I also compute the distance for these new points.

http://www.babylonjs-playground.com/#S9WBW#3

Now you can notice the texture stretches correctly along the new length (distance) of each path: http://www.babylonjs-playground.com/#S9WBW#5

Let's go back to the former view : http://www.babylonjs-playground.com/#S9WBW#3

As you can see, the edge between the start and the end of the path is visible in the light. The normals, which are used to compute the light reflection, are computed separately for the last points and the first points.

In other words, the last points and first points don't belong to the same face (triangle).

The classical way to correct this is to re-use vertices. This means I will construct the two last triangles of each pathX with both the last vertices and the first vertices.

Go to line 78.

I don't change the existing algorithm, I just undo the last triangles (not the smartest way to do it, but it's easy to understand) and redo them re-using the first vertices:

• delete last 6 indices elements (2 triangles)
• add 6 new elements referencing first vertices (2 new triangles)

http://www.babylonjs-playground.com/#S9WBW#6

Now the normals are right. The light reflects in a realistic way.

But what if I now texture my mesh ?

What is happening here ?

The texture is still stretched along the pathX, but only up to the next-to-last vertex. Then the texture is mapped from scratch between the next-to-last vertex and the last vertex.

Why ?

I guess the UVs don't rely on the last vertex declared in the positions array (the one with the right u in the uvs array), but on the last vertex referenced in the indices array.

As I re-use the first vertex in the indices array, the texture gets the u value of this first vertex, which is 0.0 and not 1.0 (remember, the distance of the first vertex was zero). And it must be 0.0, because it is also the path's first index!

Now do you understand, readers, what I meant ?

More generally: we can (must) re-use vertices in the indices array to get right normals, but we can't re-use them in the UVs, because each vertex carries only one UV pair; a vertex that needs different UVs must be re-declared as many times as needed.

So what is the right way to have both right normals and full stretched texture ?

Share on other sites

mmmh... driving back home, I thought back to all of this

I guess there's actually no solution, because a vertex may belong to many faces but has only one UV pair associated with a given texture.

So a re-used vertex can't represent two different coordinates (uv) on the same texture map.

Two full days spent trying to solve this, grrrr, with useless workarounds.

This was the problem I had with some kinds of textured closed ribbons.

So I think I will arbitrarily make a choice: priority to normals for the closeXXX parameters.

If someone needs a fully stretched texture, they will just have to close the shape themselves (rotating the paths until <= 2*PI with closePath = false, instead of < 2*PI with closePath = true): not such a big deal, imho.

Share on other sites

That's not an easy question. I think artists can also provide a texture that can adapt, using a texture atlas.

Share on other sites

yep, a workaround through art

I'm now convinced : priority to normals !

A texture applied to a closed/circular mesh would probably show some kind of separation edge between start and end (or between the image's left and right, etc.) anyway, even when stretched along its full length, unless it is expressly designed not to.

Whereas we can choose (I mean thru code) to avoid light reflection artefacts by reusing the vertices.

On one hand, the code can master an option; on the other hand, we would just have to speculate about the expected design of an asset (knowing there is still a workaround: closing the mesh oneself)...

So definitely: priority to normals.

Share on other sites

I can't agree more. Look for instance at how 3dsmax applies a texture to a sphere.

Share on other sites

Linux user here, sorry, I can't check 3DS Max, but I trust you.

So, an easy workaround for the developer using this kind of closed mesh is :

- to let the algorithm compute the normals for the general case,

or

- to manually add an extra Vector3 at the end of the path array, this way:

`path.push(path[0]);`

just after the array-populating for(var i){...} loop, if he expressly wants the texture to be stretched to the end and accepts giving up the nice normal reflection.

Share on other sites

Have you considered alternative normal generation algorithms? Some let you supply a separate structure with adjacency information, or have thresholds to treat vertices on edges which are close together as the same for the purpose of generating normals.

Editors like 3ds Max actually do not suffer from the issue you are experiencing. They don't rely on the representation of a mesh meant for the graphics pipeline when working with normals and UVs. Instead, such programs typically represent meshes internally in structures that include adjacency information, like winged-edge or half-edge, which carry the extra information about the mesh that modeller algorithms need.
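The threshold idea could look something like this (a hedged sketch, assuming flat positions/normals arrays as Babylon.js uses; `weldNormalsByPosition` is a hypothetical helper, not an existing BJS or 3ds Max API): after normals are computed, group vertices whose positions coincide within epsilon and give each group the normalized average normal.

```javascript
// Weld normals across coincident vertices: vertices whose positions match
// within epsilon get the same averaged, re-normalized normal.
function weldNormalsByPosition(positions, normals, epsilon) {
  var groups = {};
  var count = positions.length / 3;
  for (var i = 0; i < count; i++) {
    // Quantize the position so near-coincident vertices share a key.
    var key = [0, 1, 2].map(function (c) {
      return Math.round(positions[3 * i + c] / epsilon);
    }).join('|');
    (groups[key] = groups[key] || []).push(i);
  }
  Object.keys(groups).forEach(function (key) {
    var ids = groups[key];
    if (ids.length < 2) return;              // nothing to weld
    var sum = [0, 0, 0];
    ids.forEach(function (i) {
      sum[0] += normals[3 * i];
      sum[1] += normals[3 * i + 1];
      sum[2] += normals[3 * i + 2];
    });
    var len = Math.sqrt(sum[0] * sum[0] + sum[1] * sum[1] + sum[2] * sum[2]) || 1;
    ids.forEach(function (i) {               // write the average back
      normals[3 * i] = sum[0] / len;
      normals[3 * i + 1] = sum[1] / len;
      normals[3 * i + 2] = sum[2] / len;
    });
  });
  return normals;
}
```

Run after the usual normal computation, this would smooth the lighting across a seam while keeping the duplicated vertices (and so their distinct UVs) intact.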

Share on other sites

Jerome, who the heck are you talking-to?  Are you having a conversation with yourself, here in this thread?  (JcPalmer does that, too, and I love it.  He's a genius just like you, Jerome.)

You understand that BJS (what this forum is about)... is a JS layer between webGL and happy game-writers, right?

The things you are talking about... are really OpenGL, right?  I have a feeling that you might need to visit some OpenGL forums... to get the answers.  The odds that others have struggled with this subject and written about it... are quite high, I would think.

You under-estimate your genius levels, you know?  I, and others, would LOVE to answer your questions and engage in conversation... but you are about to take the tops off of the graphics chips on the motherboard, and start digging around in there.  You're scaring the dog!

Now get your butt over-to the OpenGL mesh-wranglers forums and see if anyone has already lived through your hell.     There's GOT TO BE answers to these questions, somewhere... but maybe not here.

It's fun listening to your narration of your thinking process, though.  SUPER fun!

Share on other sites

As I said, the solution is to compute the normals 'better', using an algorithm that either takes extra adjacency info (e.g. D3DXComputeNormals) or that can treat triangles whose edges line up as joined, even if they don't share the same vertex indices.

Share on other sites

@chg: Actually, I hadn't thought about recomputing the normals taking the adjacency information into account. This might be very smart, imo.

I just use BJS tools like ComputeNormals(). I don't feel smart enough (or feel too lazy ) to implement another normals computation just for this exotic case.

Maybe it will be implemented someday, by someone braver than me, in the BJS core depths.

@Wingy: it's not about OpenGL but genuine BJS.

I only use the BJS layer and BJS tools here.

I just face a specific border case (a textured closed mesh) and I couldn't find a smart workaround for it. So I stop fighting and settle it by making an arbitrary choice of behavior: normals take priority over textures.

A lazy way to solve it, I know, but efficient.

As there is no real bug, just a behavior different from the one I initially wanted and no simple way to achieve it, I've decided to change my mind and accept the current behavior.

Then I just have to justify it

So here is the Rule (my justification ): on textured closed ribbons, normals take priority over textures.

Chg proposed another way to solve it, using a new (not yet existing, and I won't code it!) ComputeNormals() method which could take into account vertices that are very close but not joined by a face. Too complex for me!

I leave these considerations to mythological half-gods like DK or Davrous and go back to my humble concerns.

Share on other sites

Nod.  Thanks for the info.  Funny, too!

Trouble at the seams.  Border trouble.

Get the normals right, and the UVs suck. Get the UVs right, and the normals suck. It's based upon whether you re-use the starting vertices, or use ending vertices placed atop the starting vertices. How to close the seam. At least I think that's the situation, Wingy-simplified.

But, the same problem would be experienced when using OpenGL graphics ONLY, right?

I understand that you are working in the NOW, but I foresee a time when hardware does "seam-sensing" and UVs/normals compensations.  The hardware would likely make the UVs perfect, and then do a fuzzy-logic "averaging" or some other fancy words... to fix the lighting normals across the seam.

And I think I understand the "extra information" thing, too.  Vertices that are flagged as "starter" or "finisher" would be treated differently, and this might apply at hardware, OpenGL, and/or BJS layers.  They are, in a way, a different kind of vertex, so they could affect ANY layer.

Interesting.  I HOPE seam-management and seam-compensations happen at hardware level, someday.  You shouldn't have to concern yourself with these problems/options.  That's what we have computers-for.    Hardware should know what we want, and ignore what we asked-for.  heh

In a way, this is "snap" (align) for vertices. (Lately, I have been thinking about "snap" for assembling complex models using only Babylon basic shapes.  It's a future feature of a Babylon scene editor, but WHICH editor?  I think we have two basic scene editors, and I don't think either has snap/align or object arrays.  All in all, a different subject.)

Share on other sites

I understand what is going on with this from a 3D modelling perspective, and I can picture the UVs as green lines on the mesh when they're calculated out.

How about we change the way the seam is closed? Rather than trying to fill a gap, make the last indices the same position as the first (so that it is a full cylinder), then create another set of indices (the same co-ords as the last). Now these ones you can break and create as a join, to weld the mesh shut.

I haven't tested this, but I think this should work.

Share on other sites

@spitefire : that's what I did.

The problem is not really how to join the edges of a closed mesh, but that:

- normals are per vertex

- a vertex may belong to many faces; if you want right light reflection, it MUST belong to the contiguous faces,

- texture UVs apply per vertex used in the mesh construction (so only per vertex referenced in the indices array),

- UVs are just 2D coordinates; each uv pair is mapped to a vertex reference in the indices array.

So you:

• either declare two different vertices (with the same Vector3 coordinates), and the texture is mapped right but the normals aren't, because you don't re-use this vertex for the first face contiguous to the last face of the mesh,
• or re-use the vertex common to the first and last contiguous faces; then its normal is right, but you can't give this shared vertex two different uv pairs (first and last).

I tried it many ways...

If you want realistic light reflection, you really need to re-use vertices for contiguous faces (unless you compute special, different normals for these very edge vertices, so they won't be real normals anymore).

If you try any workaround with another set of indices on the same mesh, artifacts appear in the light reflection on the surface.

Maybe the right way would be to have two full sets of indices, so two meshes superposed on each other: one for light reflection with right normals (and thus re-used vertices), and another one for texturing with right uvs (no re-used vertex) and no reflection... I don't even know if that could work.

Share on other sites

I would vote that this is most likely due to non-contiguous vertex ordering or equidistant surface bias when drawing triangles, as I often experience many variations of this depending on which software renderer is calculating the UV vertex indexing and surface bias. I haven't yet played with this surface vertex and line info, but this is usually a problem with how triangles and normals are ultimately calculated through non-contiguous vertex ordering; I haven't taken the time to reconstruct and run a quick trial.

BTW, you're correct, it is quite entertaining to listen to jerome offer up his own dual conversations. I often feel like a mere observer when reading many of his threads.

Also, if surface bias is scalable within BJS, then I believe problem solved.

Share on other sites

On the train back from Paris TechDays, instead of sleeping last night, I thought I could eventually add some modification to the computeNormals() method.

Something like an optional array of indices pairs parameter.

If this array is defined, it will say: please consider each pair of referenced vertices as one and the same vertex for the normal computation, even if they aren't declared on the same face.

So we could have contiguous faces, with no vertex re-use (so right uv and right texturing), AND right light reflection.

In brief, have a different normal computation for the given vertices.
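A minimal sketch of what such an optional pairs parameter might do after the usual computation (`weldPairedNormals` is a hypothetical helper, not an existing computeNormals() signature): each listed pair of vertex indices is forced to share one normal, the normalized average of the two.

```javascript
// Force each listed pair of vertices to share the same lighting normal.
// `normals` is a flat [x, y, z, ...] array; `pairs` is e.g. [[0, 10], ...].
function weldPairedNormals(normals, pairs) {
  pairs.forEach(function (pair) {
    var i = 3 * pair[0], j = 3 * pair[1];
    var nx = normals[i] + normals[j];
    var ny = normals[i + 1] + normals[j + 1];
    var nz = normals[i + 2] + normals[j + 2];
    var len = Math.sqrt(nx * nx + ny * ny + nz * nz) || 1;
    normals[i] = normals[j] = nx / len;         // both vertices now carry
    normals[i + 1] = normals[j + 1] = ny / len; // the same re-normalized
    normals[i + 2] = normals[j + 2] = nz / len; // averaged normal
  });
  return normals;
}
```

This way the seam vertices can stay duplicated (so each carries its own u), while the lighting sees them as a single vertex.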

Share on other sites

On the train back from Paris TechDays, instead of sleeping last night, I thought I could eventually add some modification to the computeNormals() method.

Something like an optional array of indices pairs parameter...

BTW, the D3DX lib function is what Microsoft considers open source (MS-PL license), if you want to check the C++ implementation of the alternative I mentioned before. (EDIT: btw, you may want to focus on edges rather than faces or vertices. I know you are thinking of ribbons now, but fans are common too; for geometry that can be smooth-shaded, each edge should belong to only two triangles.)

...unless computing especially different normals for these very edge vertices, so it won't be real normals anymore).

I prefer the terms "lighting normals" vs "surface normals" (the latter being what I think you mean by "real" normals) for making the distinction. Lighting normals are used for lighting and shading meshes, but they need not be the same as the normals to the geometry. As you note, lighting normals are no less valid: they describe the approximated surface being considered, with respect to, say, Gouraud or Phong shading as projected onto the polygonal mesh, for the purpose of making it appear smoother than it really is. That is to say, I think of lighting normals as normals to the higher-order shape the mesh approximates, with the surface normals to the actual mesh not necessarily being the best representation.
