
ComputeNormals on a Blender-exported mesh produces a strange material/shading effect


Ned

Hi,

I'm a newbie with BJS.

Recently, I exported a TOB .js file from Blender.

When I use the MeshFactory class to import the mesh in BJS, everything is alright. (pic1)

[pic1: before.png, the mesh as first loaded, with correct shading]

When I start to apply the vertexDeform function in QueuedInterpolation.1.1.js, the morphing works well for the positions.

However, the shading comes out discontinuous. (pic2: triangular patches of dark shadow)

It seems like something is incorrect. Maybe the normals, or shadows, or something else?

 

[pic2: after.png, the same mesh after vertexDeform, with triangular dark artifacts]

 

After tracing the code in QI, I found that ComputeNormals is involved in the vertexDeform-related functions.

So I tried cutting out the problematic mesh part, removing the shape key group in Blender, and exporting again, just to test whether ComputeNormals works well or not.

But the new TOB .js shows the same outcome.

(The procedure is simple: (1) instance the mesh, (2) re-run ComputeNormals in BJS. See the sketch below.)
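Roughly, in code (the factory and mesh names here are placeholders from my own scene, not library API):

```javascript
// (1) instance the mesh via the TOB-generated factory
// ("MeshFactory" / "Mesh" are placeholder names from my own export)
var mesh = new MeshFactory.Mesh("mesh", scene);

// (2) re-run ComputeNormals in BJS and push the result back to the GPU
var positions = mesh.getVerticesData(BABYLON.VertexBuffer.PositionKind);
var indices   = mesh.getIndices();
var normals   = mesh.getVerticesData(BABYLON.VertexBuffer.NormalKind);
BABYLON.VertexData.ComputeNormals(positions, indices, normals);
mesh.updateVerticesData(BABYLON.VertexBuffer.NormalKind, normals);
// the dark triangular artifacts appear after this update
```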

(A tiny clue: the failing area is right around the boundary between two different materials. I don't know whether that's related.)

 

Calling for help from the experts in this forum:

1. What causes this issue?

2. Are there any recommended solutions to deal with this problem?

    (I'm afraid the vertexDeform/morphing of a Blender-exported mesh will always have to run the ComputeNormals function.)

 

The playground is as follows (Mesh.js without the shape key group, just to check whether ComputeNormals works well or not):

https://nedbenson.github.io/BJS_game/index.html

 


Ah yes, I am well aware of the lines coming from the borders of Tower of Babel shape key groups. See the line under the chin, here. As you might know, unlike BJS morph targets, which span the entire mesh, you can have multiple groups on the same mesh, e.g. face, left / right hands, etc. In addition to being much smaller than the whole mesh, being on the CPU instead of the GPU also allows many more targets, due to vertex shader parameter limits. I rather doubt you could have 24 GPU-based "targets" like the Automaton tester scene uses.

What you are using is pretty old. I have not posted in about a year; I am actively working on QI 2.0 & TOB 6.0. TOB 6.0 will require QI, which makes the generated code clearer and pushes some code out of the export and into QI. TOB 6 will also be for Blender 2.80, where materials with textures must be made using nodes. Unlike BJS, I will be breaking a lot of compatibility. Forcing all meshes to be QI.Mesh subclasses means I can redo how mesh clone factories are implemented, with almost none of the code living in the generated file. It will be more like generated code calling into a QI runtime.

The problem here, I believe, is that for each state of a shape key group, ComputeNormals is called to get the corresponding end-point normals. So as not to interfere with other shape key groups, the same vertices isolated for the positions are used to cull the vertices used for the normal state. Then, every frame of the morph, the normals of the two involved states (prior & next) are interpolated, pasted into the normals of the entire mesh, and sent up to the GPU. The sketch below shows the idea.
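In rough JS (a sketch of the idea only, not QI's actual code; all names here are invented):

```javascript
// `affected` holds the vertex indices this group deforms; priorNormals /
// nextNormals are the pre-computed end-state normals for just those vertices.
function pasteGroupNormals(mesh, affected, priorNormals, nextNormals, ratio) {
    var normals = mesh.getVerticesData(BABYLON.VertexBuffer.NormalKind);
    for (var i = 0; i < affected.length; i++) {
        var d = affected[i] * 3;  // offset into the whole-mesh normal buffer
        var s = i * 3;            // offset into the group's state buffers
        for (var c = 0; c < 3; c++) {
            // linear interpolation between the prior and next states
            normals[d + c] = priorNormals[s + c] + ratio * (nextNormals[s + c] - priorNormals[s + c]);
        }
    }
    // paste back into the normals of the entire mesh & send up to the GPU
    mesh.updateVerticesData(BABYLON.VertexBuffer.NormalKind, normals);
}
```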

I think the list of affected vertices for a group needs to be expanded a little, to also include the vertices shared by faces adjacent to the ones that have a position change. This list of vertices is determined by the exporter, and I am looking to see if this fixes the problem for TOB 6.0.

I do not know when this will be published. I am also splitting time with Kinect data-capture functionality for a Blender addon for the next MakeHuman release. If you only have the one group, and just a few keys, you might just use the .babylon exporter / morph targets. The export will be bigger, but it should work for a low number of keys.


Hi @JCPalmer @Sebavan,

Thank you for the great explanation.

Really looking forward to the new version you are working on!

 

The demo you've shown seems to have the same issue.

 

As you can see in my playground, the shape key group is already removed; however, the issue still appears after ComputeNormals runs.

So it is not caused by the shape key group. Possibly the material groups, or something else?

 

In our situation, it's really sad, but the .babylon exporter is probably not a suitable choice, because our mesh is going to have plenty of keys and shape groups.

 

Can you suggest any workarounds for this?

 

Thank you again!


For your playground, I am pretty sure you must have exported without shape key groups, or you would not have been able to use a mesh clone factory. You cannot share geometry across mesh clones and also morph the vertices on the CPU.

Plus, based on your URL, I kind of cheated & checked :D. The mesh extends BABYLON.Mesh, not QI.Mesh. Wow, it is not even generating ES6 classes for meshes yet; the TOB in the repo is older than I thought.

While my long talk about shape keys still applies (they do create normal lines of their own), that is not the issue here, which means nothing I am planning would help. For meshes with multi-materials, Blender vertices are duplicated when exported (by both the .babylon and .js exporters) wherever a Blender mesh face is right next to another face with a different material. Blender assigns materials by face, so the 2 adjacent faces can share the vertex; BJS requires multiple vertices in that case. So this is not an exporter problem.
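A toy illustration (the numbers and indices are made up):

```javascript
// In Blender, faces A (material 0) and B (material 1) share the point P.
// The exporter has to emit P twice, once per material's index range:
var positions = [
    // ...
    0.5, 1.2, 0.0,   // index 10: P, referenced only by material-0 faces
    // ...
    0.5, 1.2, 0.0    // index 57: duplicate of P, referenced only by material-1 faces
];
// ComputeNormals averages face normals per vertex index.  Index 10 only sees
// material-0 faces and index 57 only sees material-1 faces, so the two copies
// of P end up with different normals.  That difference is the visible seam.
```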

I am not sure why you need to compute normals, but if you separate your mesh by materials and then parent the pieces under one mesh, that might fix it. This would not increase overhead much, since each material is still going to be a draw call anyway.
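Something along these lines (a sketch only; it assumes you re-export one single-material mesh per Blender material, and the mesh names are invented):

```javascript
// an empty mesh used purely as a transform parent
var root = new BABYLON.Mesh("characterRoot", scene);
skinPart.parent  = root;   // single-material piece #1
shirtPart.parent = root;   // single-material piece #2
// Each piece now has contiguous, single-material geometry, so ComputeNormals
// never averages across a material boundary; the boundary no longer exists
// inside any one vertex buffer.
```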

Alternatively, bake the different materials into one, so that it is not a multi-material mesh.


Hi @JCPalmer

 

Thanks for your great insight. :)

Sorry I didn't explain it clearly.

I am trying to solve the ComputeNormals issue because shape key morphing in the TOB-output .js is done with the help of the ComputeNormals function.

Is there any way to work around that? (i.e. vertex deformation with correct material display, without computing normals)

 

I followed the suggestions and ran two experiments independently:

(A) Separate the meshes and parent them under one mesh.

(B) Bake all the materials into one.

Sadly, I still see the normal-line condition in both.

 

I referenced your great reply from the past here. You said you had faced some normal-calculation problems and provided a good method. Do you think our case could be solved with a different calculation method?

 

I also studied the great post by @jerome, which explains the algorithm in an excellent way.

I tried some of the [options] of ComputeNormals, as sketched below, but it still does not work.

Maybe there is some way to make it work and I am just failing through my own misunderstanding. haha
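One of my attempts looked like this (the option name is taken from the ComputeNormals documentation I was reading; I may well be misusing it):

```javascript
// recompute with an explicit option instead of the defaults
BABYLON.VertexData.ComputeNormals(positions, indices, normals, {
    useRightHandedSystem: false
});
mesh.updateVerticesData(BABYLON.VertexBuffer.NormalKind, normals);
```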

 

Hopefully, shape key morphing can work well regardless of vertex/material group boundaries!

Thanks for your expert advice again :):)


It looks like you're just experiencing, on the edge of a surface, the little artifact described in the linked post (the mystery of computeNormals()). You notice it because this surface touches another one, so the seam looks discontinuous. We would probably never have noticed it on a surface joined to nothing else.

If you want to hack around this to make a smooth seam, you need to manually set the normals of the edge vertices to a better value. A "better value" could be the average of the normals of the coincident edge vertices.
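For example, something along these lines (an untested sketch of that averaging idea; it welds the normals of all vertices that share the same position):

```javascript
function smoothSeamNormals(mesh) {
    var positions = mesh.getVerticesData(BABYLON.VertexBuffer.PositionKind);
    var normals   = mesh.getVerticesData(BABYLON.VertexBuffer.NormalKind);
    var byPos = {};
    // group vertex indices by (rounded) position, so exact duplicates match
    for (var v = 0; v < positions.length / 3; v++) {
        var key = positions[3 * v].toFixed(4) + "|" +
                  positions[3 * v + 1].toFixed(4) + "|" +
                  positions[3 * v + 2].toFixed(4);
        (byPos[key] = byPos[key] || []).push(v);
    }
    for (var key in byPos) {
        var group = byPos[key];
        if (group.length < 2) continue;  // not a duplicated seam vertex
        var x = 0, y = 0, z = 0;
        group.forEach(function (v) {     // sum the normals of all duplicates
            x += normals[3 * v]; y += normals[3 * v + 1]; z += normals[3 * v + 2];
        });
        var len = Math.sqrt(x * x + y * y + z * z) || 1;
        group.forEach(function (v) {     // write back one normalized average
            normals[3 * v] = x / len; normals[3 * v + 1] = y / len; normals[3 * v + 2] = z / len;
        });
    }
    mesh.updateVerticesData(BABYLON.VertexBuffer.NormalKind, normals);
}
```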


Hi @jerome

 

Thanks for your reply. :)

I did compare the normals exported from Blender against the ones derived by ComputeNormals. All the configurations were the same, but the normals came out slightly different.

Maybe it's due to differences between the two algorithms.
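(I checked with a tiny helper of my own, something like this:)

```javascript
// largest per-component difference between the Blender-exported normals
// and the ones ComputeNormals produced
function maxNormalDiff(exported, computed) {
    var worst = 0;
    for (var i = 0; i < exported.length; i++) {
        worst = Math.max(worst, Math.abs(exported[i] - computed[i]));
    }
    return worst;
}
```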

 

From your comments, I take it the smoothness/normalization of the normals is the key to this case?

However, I find it pretty hard to insert the "average value" idea into the normal-computing process without manual work; I don't know how to define where the artifacts take place. Is it possible to apply a normalization pass over the entire normals array, or is there another approach that would solve it?

 

Another question: why does the badly shaded part of the mesh show such a regular triangle pattern?

(half-light triangle + half-dark triangle, repeated ...)

 

Thanks again and again for your great insights :)

Ned 
