
animate or morph mesh dynamically


jerome

thank you

When talking about poor verbose messages, I meant error-dedicated messages: try giving wrong indices in a submesh and you'll get the raw WebGL error, not a user-friendly message explaining that you probably didn't provide enough indices for the number of vertices.

 

that's what I meant. ;)


Yup, I've totally understood that, no problem. ^_^

My post was just to say something like "the log is ready to be used; sure, there is no real error handling in many parts of BJS, but it's never too late to start :P" (and in fact, there are parts with good error handling, the whole file and texture loading system and the IndexedDB stuff for instance).


iiiiihaaaaaaa !

 

Finally, DOUBLESIDE problem solved (I hope)...

 

Well, no long post with a new PG link record this time ;), only a short demo:

Here is a plane DOUBLESIDE ribbon with many paths: http://www.babylonjs-playground.com/#16NCF0#3 (please rotate).

Note that backFaceCulling isn't disabled, because disabling it causes z-fighting issues when meshes are created with DOUBLESIDE.

 

The same (please rotate), with its plain material: http://www.babylonjs-playground.com/#16NCF0#2

 

Let's morph it with a simple y-sine function (line 33: updatePath): http://www.babylonjs-playground.com/#16NCF0#1

Now you can check that the light reflects off both sides.

Change the speed at line 89 if you want to play :)

 

This PG code seems pretty long.

So now imagine that, once your ribbon is created, you instead just pass new ribbon paths each frame to one single method in the render loop.

That's what I'm trying to implement ;)

Just wait and see...


Updatable is being set to true.  It is the arg right after scene in CreateRibbon.

 

If there was a Float32Array version of BABYLON.VertexData.ComputeNormals, or one that went both ways, mesh.updateVerticesDataDirectly might help.

 

A nice demo, but a very high price is being paid for being open-ended.  If you had end points for the positions & matching normals, you could just interpolate them.


Gulped, but I haven't had time to test it:

    /**
     * @param {any} - positions (number[] or Float32Array)
     * @param {any} - indices   (number[] or Uint16Array)
     * @param {any} - normals   (number[] or Float32Array)
     */
    public static ComputeNormals(positions: any, indices: any, normals: any) {
        var positionVectors = [];
        var facesOfVertices = [];
        var index;

        for (index = 0; index < positions.length; index += 3) {
            var vector3 = new Vector3(<number> positions[index], <number> positions[index + 1], <number> positions[index + 2]);
            positionVectors.push(vector3);
            facesOfVertices.push([]);
        }

        // Compute normals
        var facesNormals = [];
        for (index = 0; index < indices.length / 3; index++) {
            var i1 = indices[index * 3];
            var i2 = indices[index * 3 + 1];
            var i3 = indices[index * 3 + 2];

            var p1 = positionVectors[i1];
            var p2 = positionVectors[i2];
            var p3 = positionVectors[i3];

            var p1p2 = p1.subtract(p2);
            var p3p2 = p3.subtract(p2);

            facesNormals[index] = Vector3.Normalize(Vector3.Cross(p1p2, p3p2));

            facesOfVertices[i1].push(index);
            facesOfVertices[i2].push(index);
            facesOfVertices[i3].push(index);
        }

        for (index = 0; index < positionVectors.length; index++) {
            var faces = facesOfVertices[index];

            var normal = Vector3.Zero();
            for (var faceIndex = 0; faceIndex < faces.length; faceIndex++) {
                normal.addInPlace(facesNormals[faces[faceIndex]]);
            }

            normal = Vector3.Normalize(normal.scale(1.0 / faces.length));

            normals[index * 3] = normal.x;
            normals[index * 3 + 1] = normal.y;
            normals[index * 3 + 2] = normal.z;
        }
    }

Aarrrg, it runs at only 20 fps at home on my old laptop.

It was 60 fps at work.

 

You are right, computeNormals could maybe be optimized... but I don't feel confident enough to touch something so deep in the BJS core and used everywhere, gasp!

 

Well, this ribbon is quite big: 160 x 40 path points, so 160 x 40 x 2 (double-sided) = 12800 vertices!!!

This means 38400 indices to walk through to compute the normals :-P

 

I could have done a simpler example... I didn't think of it.

It's a special case here: a big ribbon, double-sided, with every point of every path recomputed each frame... maybe not that common.

I don't know.

The same with BACKSIDE only (+ backFaceCulling = false): http://www.babylonjs-playground.com/#16NCF0#4

Much better performance!


I'm thinking about another, complementary way to improve computeNormals() for this case.

 

An existing double-sided mesh has its positions duplicated and twice as many indices as the same single-sided mesh, because the second side is just the first side copied and inverted.

So, if we could tell the computeNormals method that it should compute normals on only half of the indices array, it would speed things up a lot.
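
To picture it, here is a rough sketch of what that duplication looks like (illustrative only, not the actual BJS code, which also handles the normals, UVs, etc.; plain number[] arrays assumed):

    var makeDoubleSided = function (positions, indices) {
        var vertexCount = positions.length / 3;
        // backside : the same points appended after the frontside ones...
        var allPositions = positions.concat(positions);
        // ...and the same triangles with reversed winding, pointing at the copied vertices
        var allIndices = indices.slice();
        for (var i = 0; i < indices.length; i += 3) {
            allIndices.push(indices[i + 2] + vertexCount, indices[i + 1] + vertexCount, indices[i] + vertexCount);
        }
        return { positions: allPositions, indices: allIndices };
    };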

 

Something like

computeNormals(positions, indices, normals, sideOrientation)

If sideOrientation == DOUBLESIDE, then:

  • set the computation limit to indices.length / 2 and compute normals as usual up to this limit of the indices array,
  • after the usual normals computation, just add an extra loop to set the still-uncomputed backside normals (the second half of the normals array), with half = normals.length / 2:
normals[i + half] = -normals[i];

That's half as many computations...

It should be almost as fast as single-sided normal computation, I guess.
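
One way to picture the idea, as a sketch only (it assumes typed arrays like in JC's version above, that the second half of each buffer really is the copied backside, and it simply reuses the existing ComputeNormals on half-views of the buffers):

    var computeNormalsDoubleSided = function (positions, indices, normals) {
        var half = normals.length / 2;
        // usual computation, restricted to the frontside half of the buffers
        BABYLON.VertexData.ComputeNormals(
            positions.subarray(0, half),
            indices.subarray(0, indices.length / 2),
            normals.subarray(0, half)
        );
        // backside normals : same values, negated
        for (var i = 0; i < half; i++) {
            normals[i + half] = -normals[i];
        }
    };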

 

BTW, I can see that the computeNormals() method ( https://github.com/BabylonJS/Babylon.js/blob/master/Babylon/Mesh/babylon.mesh.vertexData.ts#L1116 ) and yours, JC, both have two new array allocations (positionVectors and facesOfVertices).

Maybe the computeNormals method wasn't initially designed to be called in the render loop, so it was simple and obvious to use new arrays.

And it still works well for a big single-sided mesh updated in the render loop.

 

I guess it would be worth thinking about a way to do without these array allocations. I will give it a try... unless someone smarter than me already has the right idea for it :D


Hi J!  I don't mean to interrupt your questions (I have no answers) but...

 

Isn't there SOME (shader?) method that can be used... to avoid EVER using double-sided mesh?  Assuming jMesh.doublesided = true... this dream shader would:

 

Check if THIS normal is aiming towards "the back" per the current viewProjection.  (Is this a dark normal?)

    -  IF the normal is aimed backwards, invert the normal.

    -  IF not aimed backwards... continue normal-ly  (ar ar)

 

I don't know if this would work... and it would be "simulated doublesided" and not true doublesided.

 

Maybe that's how our current backFaceCulling = false... works.  I haven't studied it.


Well, I've read a lot about the double-sided topic.

The issue is that, in WebGL, one normal is associated with one vertex: a 1-to-1 relationship.

So if you want the light to be reflected on both sides of a mesh, there's only one way (as far as I've read on this topic): duplicate the vertices.

 

When updating a mesh (positions update), we need to update twice as many vertices for a double-sided mesh as for a single-sided one... and to re-compute the normals (which is a heavier calculation than just setting new coordinates: iterate over each position, then each index, make associations per face and compute vector cross products).

 

The question is then how to optimize this normal computation for big meshes, as it wasn't initially intended to be used in the render loop, I guess.

 

I don't think a shader could easily do the job, because we need to know everything about all the vertices at once.


After thinking about it, the only changes needed to the TypeScript source for computeNormals() were to pass syntax checking; JavaScript does not do any checking.  I changed one of the hundreds of playgrounds in this thread to store things in typed / native arrays:

  var positions = new Float32Array(mesh.getVerticesData(BABYLON.VertexBuffer.PositionKind));
  var indices = new Uint16Array(mesh.getIndices());
  var normals = new Float32Array(positions.length);

And then using updateVerticesDataDirectly:

  var updateMesh = function(mesh, positions, indices, normals, sideOrientation) {
    mesh.updateVerticesDataDirectly(BABYLON.VertexBuffer.PositionKind, positions);
    BABYLON.VertexData.ComputeNormals(positions, indices, normals);
    mesh.updateVerticesDataDirectly(BABYLON.VertexBuffer.NormalKind, normals);
  };

http://www.babylonjs-playground.com/#16NCF0#7

The change was small, but the frame rate seems much more constant (probably by avoiding all the garbage your way was spewing).  I still think a solution that calls ComputeNormals in the render loop is a bad idea.  Are there 1, 2, or 3 points that represent the maximum extent of the morph?  If so, compute the positions & normals for them ONCE, use them as morph targets, and interpolate between 2 of them at a time.
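
For illustration, a minimal sketch of that morph-target approach (target0 / target1, positions, normals & mesh are assumed to be prepared beforehand; this is not an existing BJS feature, just the idea):

    // the two extreme shapes are computed ONCE (positions + normals), then only lerped per frame
    var lerpBuffers = function (from, to, t, out) {
        for (var i = 0; i < out.length; i++) {
            out[i] = from[i] + (to[i] - from[i]) * t;
        }
    };

    scene.registerBeforeRender(function () {
        var t = (Math.sin(Date.now() * 0.001) + 1) * 0.5;            // back & forth between 0 and 1
        lerpBuffers(target0.positions, target1.positions, t, positions);
        lerpBuffers(target0.normals, target1.normals, t, normals);  // strictly, lerped normals should be re-normalized
        mesh.updateVerticesDataDirectly(BABYLON.VertexBuffer.PositionKind, positions);
        mesh.updateVerticesDataDirectly(BABYLON.VertexBuffer.NormalKind, normals);
    });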


Very nice change JC :)

 

Meanwhile, I made changes in my own direction. Maybe we could then mix both, as they are complementary?

 

http://www.babylonjs-playground.com/#E0HF5

 

What did I do ?

A big refactor and a thorough cleanup of global variables, functions, etc.

Now things are clearer.

I will tag parts with <under the hood> for people who want to understand how it works, and <user friendly> for people who just want to use it and can ignore the former part.

 

<under the hood>

 

line 128 :

I coded a generic updateMesh() function that knows nothing about the mesh type to be updated. It just needs to receive a positionFunction as a parameter.

This positionFunction will then be called inside updateMesh() and will set the positions array.

You can also notice (line 135) that I don't use the classic computeNormals() method but a localComputeNormals() instead, and that I give it an extra sideOrientation parameter.

This part of the code can be shared by any mesh update function (a rough sketch follows this block).

 

</under the hood>
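
Roughly, the pattern looks like this (a sketch; the argument list is simplified and may differ from the actual PG code, and localComputeNormals is the method described further down):

    var updateMesh = function (mesh, positions, indices, normals, positionFunction, sideOrientation) {
        positionFunction(positions);                                        // mesh-type specific : fills the positions array
        localComputeNormals(positions, indices, normals, sideOrientation);  // half the work if DOUBLESIDE (see below)
        mesh.updateVerticesDataDirectly(BABYLON.VertexBuffer.PositionKind, positions);
        mesh.updateVerticesDataDirectly(BABYLON.VertexBuffer.NormalKind, normals);
    };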

 

 

 

<under the hood>

 

line 100 :

I coded the expected positionFunction() as a closure, inside the ribbon-dedicated positionsOfRibbon() method.

Thus the updateMesh() method can remain generic and usable for any mesh type.

The mesh positions computation depends only on the mesh type, so we just have to describe, in this kind of dedicated method, how to compute positions for each kind of mesh: tube, extrusion, lines, etc.

This part of the code is written once per mesh type in the API (see the sketch right after this block).

 

</under the hood>
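
Something in this spirit (a simplified sketch, frontside positions only; the real PG code also fills the duplicated backside half):

    var positionsOfRibbon = function (paths) {
        // returns a positionFunction for the generic updateMesh() above
        return function (positions) {
            var i = 0;
            for (var p = 0; p < paths.length; p++) {
                for (var j = 0; j < paths[p].length; j++) {
                    positions[i] = paths[p][j].x;
                    positions[i + 1] = paths[p][j].y;
                    positions[i + 2] = paths[p][j].z;
                    i += 3;
                }
            }
        };
    };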

 

 

 

 

<user friendly>

 

line 88 : custom morphing function

This function computes a new paths array for the ribbon. This is the only part left to be coded by the user: how is my mesh updated?

The user only has to deal with a paths array to create or update his ribbon. He doesn't need to know more (a small example follows this block).

 

</user friendly>
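
For example, something like the y-sine morph used in the earlier PG (the amplitude and frequency here are made up):

    var updatePaths = function (paths, k) {
        // only the y coordinate moves : a sine wave travelling along each path
        for (var p = 0; p < paths.length; p++) {
            for (var i = 0; i < paths[p].length; i++) {
                paths[p][i].y = 5 * Math.sin(i / 4 + k);
            }
        }
    };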

 

 

 

<under the hood>

 

line 23 : localComputeNormals

I just copied/pasted the original computeNormals() method here.

Then I changed a few little things: I added a new sideOrientation parameter.

Why?

Well, we know a double-sided mesh has its positions, indices and normals replicated twice. So why not compute the normals on only half of the positions/indices, and then just set the same normals, negated (backside), for the rest of the uncomputed normals?

That's half as many normal computations, roughly the same as for a single-sided mesh, isn't it? (A sketch follows this block.)

Seems to work quite well :) ... I need to test tonight on my old laptop at home to check the real gain.

For now, on my work computer, there is no difference between a single-sided and a double-sided mesh.

 

</under the hood>
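
For reference, a sketch along those lines (not the exact PG code: BABYLON.Mesh.DOUBLESIDE is assumed as the flag value, and Vector3.FromArray is used instead of pre-building a positionVectors array):

    var localComputeNormals = function (positions, indices, normals, sideOrientation) {
        var doubleSided = (sideOrientation === BABYLON.Mesh.DOUBLESIDE);
        // for a double-sided mesh, only walk the frontside half of the buffers
        var nbIndices = doubleSided ? indices.length / 2 : indices.length;
        var nbPositions = doubleSided ? positions.length / 2 : positions.length;

        var facesOfVertices = [];
        var facesNormals = [];
        var index;
        for (index = 0; index < nbPositions / 3; index++) {
            facesOfVertices.push([]);
        }
        // per-face normals (frontside faces only)
        for (index = 0; index < nbIndices / 3; index++) {
            var i1 = indices[index * 3];
            var i2 = indices[index * 3 + 1];
            var i3 = indices[index * 3 + 2];
            var p1 = BABYLON.Vector3.FromArray(positions, i1 * 3);
            var p2 = BABYLON.Vector3.FromArray(positions, i2 * 3);
            var p3 = BABYLON.Vector3.FromArray(positions, i3 * 3);
            facesNormals[index] = BABYLON.Vector3.Normalize(BABYLON.Vector3.Cross(p1.subtract(p2), p3.subtract(p2)));
            facesOfVertices[i1].push(index);
            facesOfVertices[i2].push(index);
            facesOfVertices[i3].push(index);
        }
        // per-vertex normals (frontside vertices only) : average of the adjacent face normals
        for (index = 0; index < nbPositions / 3; index++) {
            var faces = facesOfVertices[index];
            var normal = BABYLON.Vector3.Zero();
            for (var f = 0; f < faces.length; f++) {
                normal.addInPlace(facesNormals[faces[f]]);
            }
            normal = BABYLON.Vector3.Normalize(normal.scale(1.0 / faces.length));
            normals[index * 3] = normal.x;
            normals[index * 3 + 1] = normal.y;
            normals[index * 3 + 2] = normal.z;
        }
        // the extra loop : backside normals are the frontside ones, negated
        if (doubleSided) {
            var half = normals.length / 2;
            for (index = 0; index < half; index++) {
                normals[index + half] = -normals[index];
            }
        }
    };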

 

 

 

 

<user friendly>

 

line 149 : render loop and morphing

As you can see here, the user doesn't have much work to do:

  • compute his new paths,
  • pass them to the positionFunction/updateMesh pair, which will be embedded in a single, more pertinent function in the final version: createRibbon(a, b, c, ribbon). A rough sketch of the loop follows this block.

</user friendly>
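
A sketch of what that render loop looks like with the pieces above (k, paths and the position/index/normal buffers are assumed to be set up at creation time; createRibbon(a, b, c, ribbon) will later wrap the positionsOfRibbon/updateMesh pair):

    var k = 0;
    scene.registerBeforeRender(function () {
        k += 0.05;                                       // animation parameter
        updatePaths(paths, k);                           // 1. user code : compute the new paths
        updateMesh(ribbon, positions, indices, normals,  // 2. generic update : positions + normals
                   positionsOfRibbon(paths), BABYLON.Mesh.DOUBLESIDE);
    });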


mmhh...

Something still needs to be done, I guess...

On my fast computer here, I get 60 fps in Chromium with yesterday's dirty code and the original computeNormals() for DOUBLESIDE: http://www.babylonjs-playground.com/#16NCF0#1

and now only 45 fps with the new link: http://www.babylonjs-playground.com/#E0HF5

 

aaargggg

 

Only one piece of good news: the fps rate is the same with a single-sided and a double-sided mesh :(

 

I don't understand why, because the number of normal computations was really halved!

 

Maybe it's something somewhere else in the code reorganization?

 

pffff, need to re-check where something went wrong ...

I spoke too fast, once again.


You just have too much for me to keep up with.  On Ubuntu, I only ever seem to hit 30 fps.  I have an original GTX 480 powering a 30" display (using the nVidia driver).  I almost hit 30 with the one you say is 45 fps.

 

If you are going to use your own computeNormals, I would switch to typed arrays for the internal vars & not re-create them every call. You create them with a fixed size, so index only; no pushing. I am talking about:

 

    var positionVectors = [];
    var facesOfVertices = [];
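
For instance, something along these lines (illustrative names, not existing BJS members): size the scratch buffers once from the mesh and reuse them on every call, writing by index.

    var createNormalsScratch = function (positions, indices) {
        return {
            facesNormals: new Float32Array(indices.length),              // 3 floats per face (indices.length / 3 faces)
            facesOfVerticesCount: new Uint32Array(positions.length / 3)  // how many faces share each vertex
        };
    };
    // each call : scratch.facesOfVerticesCount.fill(0), then accumulate by index - no push(), no new arrays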
 


Ok I understand what you mean.

 

I personally think the computeNormals() method could be used in the render loop... even if it means it needs an optimization (BJS wants to be THE optimized tool, no? ;) )

Your implementation shows we can improve it !

 

Well, mine too, now... pfff... I just retested and now get 60 fps (Chromium fps display) :)

I don't get why the fps varies this way.

 

I guess something smart can really be done for the computeNormals() method:

maybe, as you suggest, use typed arrays and indexes instead of pushing,

not allocate new intermediate arrays (store them once at mesh level?),

and compute the normals for only one side if double-sided, then replicate the negated results to the other side.

 

I have to confess I'm afraid to modify this method, because it is used so heavily everywhere in BJS.

So I won't do it :(

I will just give my opinion: it should be optimized to be usable in render loops, and avoiding array re-allocation, as well as one-side-only normal computation for double-sided meshes, are good leads imho.

And I agree with you: typed arrays and index-based writes are very good leads too.

 

Well, I guess I will implement the mesh update logic using the provided BJS computeNormals(), whatever it is or will be.

Things will only get better and better anyway :)

 

 

BTW,

My platform is a DELL Precision T1700 tower

OS: Ubuntu 14.04 64-bit

RAM: 8 GB

CPU: Intel Xeon E3-1220 at 4 x 3.1 GHz

GPU: Nvidia Quadro K620 with the Ubuntu-provided nvidia driver

plus two 22" screens, each at 1920 x 1080 resolution... extended desktop,

Unity window manager and many Compiz effects: window morphing, transparency, etc.


Tested at home on the old laptop... same result: 20 fps with or without localComputeNormals(),

and 20 fps with JC's typed arrays.

 

Well, this means our "improvements" don't bring much so far :D

 

If you people want to test, here are the links (please open them one after the other, not simultaneously in many tabs):

 

initial Quick&Dirty = http://www.babylonjs-playground.com/#16NCF0#1

JC's typed arrays optimization = http://www.babylonjs-playground.com/#16NCF0#7

Jerome's half normals computation + cleaning = http://www.babylonjs-playground.com/#E0HF5

 

BTW, I noticed I often get a much better fps rate with a script running alone in my browser (served from my local web server) than with the same script running in the PG.

Probably due to the editor program...


mmmh...

After monitoring the 3 versions, it appears that:

  • they run at roughly the same speed,
  • they don't trigger the GC or impact the VM memory allocator.

So these improvements are... above all better, cleaner code, but nothing really noticeable to the end user.

 

Well... how to conclude?

 

In this particular case, the end user will be provided with a way to update a double-sided mesh. That's all.

He is not obliged to update his mesh each frame in the render loop, he's not obliged to make it double-sided, he's not obliged to have about 40 000 indices in the same mesh.

 

But if he still wants all of this at the same time, he will then have to accept that performance may drop a lot on a weak GPU.

That's it.

 

 

I had the same kind of problem with dynamicTexture: 40 text dynamicTextures to update each frame... the fps crumbled.

I just found another way to do it. ;)

Sometimes you can't fight against the JS VM or the GPU.


I didn't know whether I should post this in this old thread: http://www.html5gamedevs.com/topic/11907-spherical-harmonic-wingnuts-challenge/?hl=challenge or here...

 

Well, it doesn't matter: http://www.babylonjs-playground.com/#27QHMX

 

I just wanted to stress-test my mesh update algorithm a bit before starting to code it in TS in BJS ;)

 

The same, public version: http://logiciels.iut-rodez.fr/proto/weathermap/test2/SH.html

 

Lines 40-41:

delay: number of ms between two changes

steps: number of steps when morphing

 

Under the hood: there is now only one mesh, updated each frame during a morph sequence, instead of a new ribbon object being created as in the older script version.


Oh man, that's gorgeous, Jerome!  Way to go!  Morphing spherical harmonics... YAY!!!

 

Friggin-ay.  Nice.

 

What do you think, guys?  Maybe interpolate the firecolors from previous to next... with the same number of steps as the mesh morph?  (ahem)  :)  Maybe that's not important at this time, but...

 

...if you think of each morph step as a path point, then... hmm.  Should the mesh update area include an attempted call to customTextureUpdate, much like tube animating tries to find/call a customRadiusFunction?

