Blender Exporter and IK animation workflow


gryff

@JCPalmer

Just a thought ...

Quote

Make up to 200 poses & sub-poses that can be used / combined over and over.

Jeff, for a couple of reasons you might want to create a number of pose libraries with different names, e.g. run, walk, jump, upper body, etc.:

1. Searching through that little box for a pose to apply might get tedious.

2. There may be a limit to the number of poses per library (24?). You might want to test that.

That said, you can then "append" a pose library to a rig (I assume the same rig) in a new .blend file.

cheers, gryff :)


There is no problem on the Blender side with multiple Blender Skeleton Pose Libraries.  For QI, you would just need to export to a QI.SkeletonPoseLibrary file more than once, then put each .js file in the head section of the html, or gulp them together for production.

Right now, a QI.Skeleton class (BABYLON.Skeleton sub-class) can only have one library assigned at a time.  I wanted to take advantage of the assignment to pre-process any scaling required due to bone length differences.  Had not thought much about either switching or multiple libraries.  Not scaling just yet. 

Could have a field for what you want to call the javascript module right below the Trans/Rot bone field.  Right now, the module name is set to the same value as the file name.  I could just have a multi-file module, where the first js file encountered would actually create the library.  Others following would just append their poses.  An example of a multi-file module is BABYLON.

Head hurting yet?

 


33 minutes ago, JCPalmer said:

Head hurting yet?

Not right now Jeff - but maybe later tonight after I have celebrated the passing of another year. :D

And one day maybe some simple examples of this QI and shapekey stuff :)

cheers, gryff :)


Happy birthday!  Test scenes that I build, I publish.  They are not remotely as complicated as the voice sync page.

Am working on a test scene for the QI.PoseProcessor.  Just a MH mesh, and a ground set to wireframe so that you can see that the mesh is actually changing position.  A button that queues a 2 step sequence.  Perhaps a drop down with a number of sub-poses.

Have a Finger shape key test scene to show QI.ShapeKeyGroup.  Not too complicated.


@JCPalmer -

If you are not already doing so, please create these as a classic move tree, with transition frames which can easily be used to blend the motions. Also, if you can post a link to your latest scene, that would be great.

Thanks,

DB

 


ahh, what's a move tree?  I do not actually have frames, so the fact that you mentioned it is not a good sign.  Frames are up to the system to decide.  Everything I am doing is specified in units of milliseconds.  Frames are a trap(ping) of pre-baked animation.  If things are running late when using a 30 fps baked animation, the BABYLON.Animation "player" can only adjust in increments of 1000 / 30, or 33 millisecond intervals.  QI can adjust in 1 millisecond intervals.

Going the opposite way, if load is light QI can ramp up the fps.  Actually, it is constantly adjusting so that the last target (pose or morph) arrives as close to the requested millisecond as possible.
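A minimal sketch of the millisecond-based idea (hypothetical names, not the actual QI source): each frame, the interpolation ratio comes from the wall clock, so a late frame simply lands further along the curve instead of snapping to the next 33 ms slot of a 30 fps baked animation.

```javascript
// Sketch only: derive the interpolation ratio from elapsed wall-clock
// milliseconds rather than a frame counter.
function interpolationRatio(startMs, durationMs, nowMs) {
    // Clamp so the final target is hit exactly, never overshot.
    var ratio = (nowMs - startMs) / durationMs;
    return Math.min(Math.max(ratio, 0), 1);
}

// A 233 ms PoseEvent sampled 150 ms in is ~64% of the way to the
// target, regardless of how many frames have actually rendered.
var r = interpolationRatio(0, 233, 150);
```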

Hopefully, this is just a terminology issue.  My backgrounds include finance, executive decision support, databases, telecommunications IP infrastructure, & chemical structure searching.


Move trees are hierarchies of character animations used in video games. In order to get to any "move" (character action) you need to pass through a frame common to the branches in your move tree, allowing seamless animation without any "jump" from pose to pose. This can be done using poses, as you should always have the exact same pose at the first and last frame, both for looping and for blending from one motion to another. We call these move trees because, prior to animating your character for any game or application, the move tree is designed on paper to fully understand which animations are permitted to change into one another - and which animations require an intermediate motion to use for blending one animation to another.

Without designing your move tree prior to animating or setting up your poses for runtime interpolation, you'll find considerable gaps in animation as well as issues with blending and looping - as you won't easily be able to add motions if you haven't first designed and determined every motion, which ones blend to others, and which ones loop. These hierarchical charts look similar to tree charts (such as ancestry charts), which is why we call them move trees as a standard term in game development.
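The hierarchy described above can be sketched as a plain adjacency map (hypothetical data, not from either tool): each move lists the moves it may blend directly into; anything not listed needs an intermediate motion (e.g. sit -> idle -> walk).

```javascript
// Hypothetical move tree: which actions may blend straight into which.
var moveTree = {
    idle: ["walk", "sit"],
    walk: ["idle", "run"],
    run:  ["walk"],
    sit:  ["idle"]
};

// True when "from" may blend directly into "to" without an
// intermediate motion.
function canBlend(from, to) {
    return (moveTree[from] || []).indexOf(to) !== -1;
}
```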

DB


Ok, first I agree that in order to take intelligent action when queuing additional poses, you need to know what the preceding pose will be, or which one has already run.  I have modified the PoseProcessor class, adding a function getLastPoseNameQueuedOrRun(), also callable through the mesh.  Tracking that yourself would not be something you would want to do at the application / game level.

Not quite on board with the rest yet.  Whatever might be done will need to be done on the javascript side.  Blender has nothing to relate poses. 

I think there will never be a "jump" issue with an interpolator.  I could make a sitting to handstand pose look smooth, if not believable. 

2nd, there is no need to end a series of grouped poses with the first and last pose being the same.  That would actually be bad.  The class to do these groupings is called QI.EventSeries.  It has a repeats argument, so a matching first/last pose would be run twice in between repeats.  (I could set a last pose to run in 1 milli, I guess, but that is kind of wasteful.)

I did a very early test while still using a BABYLON.Animation object to store my poses.  Here is the code I used to run it.  I added some code showing how I thought this might be implemented:

    function walk(mesh, repeats, doPrep) {
//        mesh.assignPoseLibrary("poses");
//        mesh.addSubPose("headLeft");
        var events = [new QI.PoseEvent("walk@7", 233), // 233 milliseconds
                      new QI.PoseEvent("walk@11",133),
                      new QI.PoseEvent("walk@18",233),
                      new QI.PoseEvent("walk@27",300),
                      new QI.PoseEvent("walk@32",167),
                      new QI.PoseEvent("walk@45",433),
//                    new QI.PoseEvent("walk@7", 1) // wasteful
                     ];
        var eventSeries = new QI.EventSeries(events, repeats);
        
        // here evaluate if something needs to be queued before
        if (doPrep){
            var lastPoseName = mesh.getLastPoseNameQueuedOrRun();
            switch (lastPoseName){
                case "sitting":  stand(mesh, false); break;
                case "flying":  land(mesh, false); break;
            }
        }
        mesh.queueEventSeries(eventSeries);
    }

You could implement this as a group of functions {walk, stand, land, sit, etc.}.  Each function could evaluate the last pose & take action.  The doPrep arg on each function is so there is no infinite prep.  If you want multiple remedial actions coming from certain previous poses, just put multiple statements in the case.

The idea of coming up with something in a data structure gives me the feeling of entering a quagmire. 

 


On 6/17/2016 at 3:04 PM, JCPalmer said:

I think there will never be a "jump" issue with an interpolator.  I could make a sitting to handstand pose look smooth, if not believable. 

If you are able to make a pose interpolator in which the resulting animation looks even reasonably good, my hat is off to you. Even after many years working in character animation, this is not something I could accomplish; and in my previous personal attempts, I've never had any resulting character animation using run-time pose interpolation which was ever useful in production. I have never found this to be as simple as you describe - or even possible for creating useful character animations.

So again, if there's anything I might assist with, I can offer unique experience in character animation - including what I learned from having the task of creating more than 90% of the character animation for the final battle scenes in Return of the King, in which my team had to create more than 6300 unique character animations in 5 months, including every type of character such as bipeds, quadrupeds (even the horses were all CG), and including some 6 and 8 legged creatures. And I mention this because I took the approach of building move trees for each character type, so the pipeline resembled a game more than it did a film; and I had a developer and an artist on one of my teams who proposed building a pose interpolation tool for the many characters which were difficult to motion capture, such as horses, and creatures which didn't exist. So I gave them substantial resources and as much time as they needed. Yet even though they were incredibly talented and experienced and at the top of their industry, it once again demonstrated to me that pose interpolation simply does not provide usable results.

This probably sounds as though I want this to fail, but quite the opposite - if you can accomplish this, it would be a game changer. I'm just trying to caution you to not keep your expectations too high, that's all.

So I'm really interested in seeing what you come up with, and if you are able to accomplish this, then you will have done what no one else has been successful at in the many past attempts I've been personally involved in, as well as others I know who have attempted it in past productions. I do however recommend using MotionBuilder to generate the poses required - as the pose tools in MB will save you an immense amount of time to build what is required. And if you don't have a way to purchase or borrow a copy of MB, the evaluation license is a 30-day fully functioning license - which is sufficient time to generate your pose library.

Please keep this post updated as you progress, and again, if I can help in any way, I'm up for it.

Cheers,

DB


We are gonna see pretty soon.  Blender does real-time pose interpolation in a python interpreter using only the CPU.

Multiple Blender Skeleton Libraries - solved it two ways, neither of which involves the JS module.

  • You specify a library name. If the name already exists when a file loads, validation occurs to ensure that all bones are present & have the same length.  If validation passes, the poses from the subsequent files are appended.
  • There is also an "Include all Blender Pose Libraries" checkbox.  This is probably the preferred way, since fewer <script> tags are required, and the libraries arrive already merged in case you are gulping.
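A hypothetical sketch of the append rule just described (not the exporter's actual code): a later exported file may only add its poses when every bone matches the existing library by name and length.

```javascript
// Sketch: merge a second pose-library file into one already loaded.
// Bone lengths must match exactly, or the merge is rejected.
function mergeLibrary(existing, incoming) {
    for (var boneName in incoming.boneLengths) {
        if (existing.boneLengths[boneName] !== incoming.boneLengths[boneName]) {
            throw new Error("Pose library mismatch on bone: " + boneName);
        }
    }
    for (var poseName in incoming.poses) {
        existing.poses[poseName] = incoming.poses[poseName];
    }
    return existing;
}

// The first file creates the library; a second file appends its poses.
var lib = { boneLengths: { spine: 2.0, femur: 3.1 }, poses: { sitting: {} } };
mergeLibrary(lib, { boneLengths: { spine: 2.0, femur: 3.1 }, poses: { standing: {} } });
```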

Selection_217.png

I just roughed in my test page, complete with 2 models (no scaling reqd, Toddler) & a follow camera.  Tested out the dynamic JS loading of characters, the initial setting of fields on every reload, & the disabling of controls which may not always be valid.  Have exported a skeleton library with 3 stepLeft poses, 3 stepRight poses, standing, sitting, & rest.

All the overhead is complete.  Time to play!

Selection_216.png


@JCPalmer :

Quote

All the overhead is complete.  Time to play!

Well Jeff, when you are really satisfied - perhaps a link :o

And I would like to have a look at the .blend file - if possible - so I can see how you are setting up parameters for export.

cheers, gryff :)


Well, I am going to break for a day or so.  Bootstrapping up, I have solved many problems.  I had to put in temporary manual changes to the computer-generated pose library for translation / rotation to be applied to the mesh.  That means that I cannot clean up issues with hands rotated poorly, and too much spine bone "involvement".  Will just have to look past that right now, or my hand changes will get removed.

Right now there are 2 sources of mesh translation / rotation.  One comes from the root bone, and another is added by javascript.  The latter is how you can make animation that goes straight forward.  If I cannot get the source from python working, the back-up plan is to rely totally on the 2nd source.

I do need some help in getting a better ground.  I really need a black & white checkerboard effect.  I am sure someone would know an easy way to do this.  The diagonal lines are really confusing, and the lack of color change makes it tough to count distance traveled.  Recommendations appreciated.

Need to definitely change the translation of the first step depending on the previously queued or run state.  When coming from 'standing' or 'rest', it needs to be 1/2 of the normal translation used going between stepLeft3 and stepRight1, and vice versa, I think.  If the 2 sources of translation are used, the javascript input would be negative for the first step from a previous 'rest' or 'standing' state.

Here is a 3 step left turn decelerating to 1250 milli from 750. @dbawel, is this looking "usable" for indie game standards?  I think standards have to be a little lower than for a $ billion grossing 3D movie, though BJS can do stereoscopic 3D, resolution can be bumped to 4k, and the QI.TimelineControl class has a master clock that can generate frames based on either a fixed frame rate or real time, & there is also a hook to add a frame capture mechanism.  I had worked briefly on converting a Java based H265 mp4 encoder, but never finished.  Even if I do get that to work, that part is not going to be distributed, at least not for free.


Quote

I really need like a black - white checker board effect.

@JCPalmer : Jeff, Here is a .blend that has a simple floor 50x50 units with B&W checker. Simply File-->Append to any of your files. The texture is packed into the .blend, but I added the checker.png just in case ;)

floor.zip

If you want to create a floor with your own scaling, in layer 2 you will find the Plane_base object (2x2 units). Scale that to your wishes (let us say x), and apply the scale. Now go into Edit mode, select all the verts, then in the UV window, with all verts selected, S --> x

Hope that is what you are after, enjoy the days off :)

cheers, gryff :)

 


Hi @JCPalmer -

4 hours ago, JCPalmer said:

Here is a 3 step left turn decelerating to 1250 milli from 750. @dbawel, is this looking "usable" for indie game standards?  I think standards have to be a little lower than for a $ billion grossing 3D movie, though BJS can do stereoscopic 3D, resolution can be bumped to 4k, and the QI.TimelineControl class has a master clock that can generate frames based on either a fixed frame rate or real time, & there is also a hook to add a frame capture mechanism.  I had worked briefly on converting a Java based H265 mp4 encoder, but never finished.  Even if I do get that to work, that part is not going to be distributed, at least not for free.

This is of course all very subjective. So if the quality of animation meets your standards, then it is fine in all respects. And I'm certain that based upon your previous work, the game will be engaging and fun to play - regardless. But if you really want to know my personal opinion, I wouldn't approve this animation for release - as for me the quality of the animation is distracting enough to detract from the gameplay. Especially the feet sliding on the floor, which you might improve within your pose interpolation tool; this will become more of an issue when you begin adding shadows to your scene, as the placement of the character in the environment will suffer far worse than what is currently observed, and your character will appear to be floating at times. And in my opinion, the time saved in production by using your tool doesn't justify the resulting quality of animation. Any "character" style in your animation is also lost, as there is no unique character to the animation. But again, this is simply my own personal opinion, as I would personally be distracted enough to avoid playing. Especially since we can very quickly create quality character animation and move trees using 2 or more Kinect V2 devices (even one is sufficient) plugged into my laptop, to generate whatever motions I require with little effort in little time - and under $400 for 2 of these currently on Ebay.

But again, this is all very subjective, and I'm simply being honest concerning my personal opinions and experience. But if this works for you, and users find the animation quality to be acceptable, then that is all that matters. I hope you accept my opinion as simply that - my own opinion. And what I think really doesn't matter at all - the only thing that matters is what you and people who might purchase the game think.

Otherwise, the resulting animation is what I expected as a quality standard. This is about as good as you might possibly expect.

Cheers,

DB


Quote

I cannot clean up issues with hands rotated poorly

@JCPalmer : That is a common issue. One approach is to double the number of bones in the upper and lower arms (and the legs too). In each case, one bone becomes the joint bone and the other a "roll" bone. I've seen a couple of MotionBuilder rigs set up that way.

I also remember about two years ago watching a talk about the rigs of the two main protagonists in "The Last Of Us" - 300+ bones, and also shapekeys to get a lot of control over muscles and tissue. Great lengths to get things right :)

cheers, gryff :)


  • 3 weeks later...

I now have enough changes to advance this thread.  No public scene yet, but I have just published my finger shape keys test scene.  Things work well as long as you make it quick, 200 milli (there is a duration field to slow it down).  Definitely worth the removal of 40 bones.  Impact is light.  I am not expecting to include every morph target for each hand in production exports.  The rude gestures are just there for completeness.  Unless you are building a 'Richard Nixon Game', you would not likely need a peace sign on both hands, etc.  (Maybe I should make a thumbs-up target for "The Donald" :lol:.  I would probably have a hard time with the hair though.)

After a few days off, I started by adding the checkered background. @gryff your solution was blurry.  I ended up writing a JS function which creates all the black squares, then merges them into 1 mesh.  I set the scene clear color to white, and made a "checkered ground" check box.  It works really nicely with it off while you are just getting the timing of poses right.  After the timing is right, turn it on to get the corresponding position / rotation worked out.
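The black-squares-only approach might be sketched like this (hypothetical code, not the actual test page): only the black tiles are generated, since the white ones are just the scene clear color showing through, and the tiles would then be merged into a single mesh.

```javascript
// Sketch: compute centers of the black squares of a checkerboard
// laid out around the origin on the X/Z ground plane.
function blackSquareCenters(boardSize, squareSize) {
    var centers = [];
    for (var row = 0; row < boardSize; row++) {
        for (var col = 0; col < boardSize; col++) {
            if ((row + col) % 2 === 0) {        // black squares only
                centers.push({
                    x: (col - boardSize / 2 + 0.5) * squareSize,
                    z: (row - boardSize / 2 + 0.5) * squareSize
                });
            }
        }
    }
    // In BJS each center would become a ground tile, then the tiles
    // get combined with BABYLON.Mesh.MergeMeshes() into 1 mesh.
    return centers;
}
```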

- I got rid of any conversion of root bone translation being done on export from Blender.  Having 2 sources just causes extra work.

- I added the names of sub-poses as arguments to a PoseEvent.  These poses are only effective during the event.  An example is turning the ankle when walking in a turn.

- Also switched the method of interpolation from the one that just lerp'ed each of the 16 numbers in the matrix.  If there was much rotation, e.g. sitting, the feet left the ground for a while.  I always had problems with the decompose - (lerp / slerp the pieces) - recompose method.  Trying to make that efficient is difficult, but worth it, since decomposing every frame is a waste.  Re-doubling my effort got it to work.  Issue resolved.

- Added a poseImmediate('pose name') on the mesh.  Useful to set a pose prior to the mesh being visible.

- For your input about lack of variety, @dbawel, I added an addCompositePose() on SkeletonPoseLibrary.  I also changed the name of the Rest pose in the library to "basis".  It works very similarly to the composite shape key targets like those used to make visemes.  You might have 3 stepRight poses.  You could also have 3 cChaplinRight poses.  You could make a walk that was 80% stock, 20% Charlie Chaplin, and call it 'customRight':

myLib.addCompositePose(['stepRight1', 'cChaplinRight1'], [0.8, 0.2], 'customRight1');
myLib.addCompositePose(['stepRight2', 'cChaplinRight2'], [0.8, 0.2], 'customRight2');
myLib.addCompositePose(['stepRight3', 'cChaplinRight3'], [0.8, 0.2], 'customRight3');

myLib.addCompositePose(['stepLeft1', 'cChaplinLeft1'], [0.8, 0.2], 'customLeft1');
...

I have not tested this yet, but plan to try to make a jog series by mixing step & run (need to make run first).  Suppose you could also have hip sub-poses in the events (like the ankle turns) to simulate a woman's gait, as another way to do customization.
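The rotation half of the decompose - (lerp / slerp the pieces) - recompose switch mentioned above can be illustrated with a plain-object quaternion slerp (a sketch, not QI or BABYLON code): lerping all 16 matrix cells blends two rotation matrices into something that is no longer a rotation, which is what lifted the feet off the ground; slerping the rotation component stays on the unit sphere.

```javascript
// Spherical linear interpolation between two unit quaternions
// {x, y, z, w}.  Plain objects stand in for BABYLON.Quaternion.
function slerp(a, b, t) {
    var dot = a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
    if (dot < 0) {  // take the short way around
        b = { x: -b.x, y: -b.y, z: -b.z, w: -b.w };
        dot = -dot;
    }
    if (dot > 0.9995) {  // nearly parallel: lerp then normalize
        var r = { x: a.x + t * (b.x - a.x), y: a.y + t * (b.y - a.y),
                  z: a.z + t * (b.z - a.z), w: a.w + t * (b.w - a.w) };
        var len = Math.sqrt(r.x * r.x + r.y * r.y + r.z * r.z + r.w * r.w);
        return { x: r.x / len, y: r.y / len, z: r.z / len, w: r.w / len };
    }
    var theta = Math.acos(dot);
    var sA = Math.sin((1 - t) * theta) / Math.sin(theta);
    var sB = Math.sin(t * theta) / Math.sin(theta);
    return { x: sA * a.x + sB * b.x, y: sA * a.y + sB * b.y,
             z: sA * a.z + sB * b.z, w: sA * a.w + sB * b.w };
}

// Halfway between identity and a 90° yaw is a 45° yaw - still a
// pure rotation, unlike a cell-by-cell matrix lerp.
var identity = { x: 0, y: 0, z: 0, w: 1 };
var yaw90    = { x: 0, y: Math.SQRT1_2, z: 0, w: Math.SQRT1_2 };
var half     = slerp(identity, yaw90, 0.5);
```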

Also, I think you are painting too rosy a picture of motion capture.  I saw one of those back-story segments about a competitor on 'American Ninja Warrior' who was a game dev.  I, and most people, could never do any of the things they showed him recording.  I can with IK though.  The BJS animation sub-system is also very uncoordinated.  I can coordinate poses with shape keys with millisecond accuracy.  Have not made any muscle shape keys yet, but you can walk, talk / show expression, and change fingers all at the same time in QI.  (Maybe bouncing boobs for female walking)

- I am having a problem with pose scaling.  I used the same method as @adam did for bones with parents in bone.copyAnimationRange().  The toddler mesh looks good, but it now has the same center of mass as the unscaled one.  This means it is way off the ground (Y), and just a little forward (Z).  I backed into the value the root bone would have to have, but do not see how to make a calculation which returns that.  Going to try adding up all the adjustments I do make.  Maybe they add to that.  I am dead-on in the X axis.  Maybe the left and right sides just cancel each other out for X.


3 hours ago, JCPalmer said:

Unless you are building a 'Richard Nixon Game', you would not likely need a peace sign on both hands,

@JCPalmer : Well in some parts of the world the err ... "V sign" is considered a rude gesture if delivered with the back of the hand towards you, and victory if delivered with the palm towards you.

Legend has it that the former goes back to Welsh and English archers at the Battle of Agincourt using it as a gesture of defiance to their French adversaries - letting them know they still had both their bow fingers :lol:

Apart from that, the fingers work well - I am though curious as to the impact the number of vertices of a full higher-poly figure has on animation speed. But I'm sure we will see more demos.

cheers, gryff :)


Actually, the face and ankles are in the finger test scene.  They just have a transparent material applied.  In general, it is better not to merge clothes, shoes, and hair with the body.  Even though each shape key group (FACE, WINK, L_HAND, & R_HAND) only has the vertices it affects, changing any one of them will cause a memory copy of the entire set of positions & normals to the gpu.  Even if you merged the clothes etc., it would not reduce draw calls, since they are different materials.  There are no teeth, tongue, eyebrows, or eyelashes in that test scene though.

The separate shape key groups also facilitate independent FACE and HAND morphs.  There is still only 1 beforeRender for the mesh, so there is only 1 transmission even if there is more than 1 morph in progress.

