Avatar Animation via Kinect v2



Hi Polpatch :)

 

Polpatch said:

The avatar that I use is native to Blender (http://blog.machinim...atar-workbench/) and the initial pose (luckily) coincides with the one you described.

 

Actually, I'm not sure what you mean by "native of blender". What I see looks remarkably like "Ruth", the Second Life (SL) rig used for creating animations for SL with products like QAvimator.

 

The Avastar Blender version probably originates with the work of Domino Marama of Domino Designs.

 

I might also point out that the stable version of MakeHuman (v1.02) has an SL rig that you can assign to the character you create (same bone names). However, the pose is an A-pose, but the MakeHuman add-on for Blender, MakeTarget, allows you to change that A-pose to a T-pose. There is a video here on how to do that. The lady does chatter on a bit though ;)

 

cheers, gryff :)

 

 


I'm not sure why you are trying to use the CMU Acclaim files, Jeff. There are .bvh files of those same animations here, and they work fine with the MakeHuman add-on for Blender, MakeWalk. There are several issues with the CMU files: they are not centered, there can be offsets, and of course there is the 120 fps frame rate, which I gather from your pics above you have already found. I use BVHacker to clean up those issues in the BVH versions of the files; the "Blue Lady" uses a CMU "Samba Dancing" file.

 

If you want some other Acclaim files to play with (these are 30 fps; I don't know about offsets or centering), there are some free ones at Credo Interactive as part of the MegaMocap download.

 

cheers, gryff :)


gryff said:

Actually, I'm not sure what you mean by "native of blender". What I see looks remarkably like "Ruth", the Second Life (SL) rig used for creating animations for SL with products like QAvimator.

Hi gryff!
You're right, I said "native of blender" meaning that it was a .blend file. My mistake :D
However, I believe that the avatar in the scene is already in a T-pose, or at least satisfies the description.

@Polpatch - by bone center I mean the axis (center) of the bone itself, not the center of the bone mesh - as we only care about driving each object's axis and letting the hierarchy handle the rest of the computations for us. You will want to create your cubes, or a null object (only an additional object with a center axis is necessary), unless you want to view a surface such as a cube to observe behaviors - which I recommend. So bone or joint center refers only to the center of the axis driving the bone. We're not concerned with the facade mesh representing bone size and length, as this is not used in any way to compute skeleton animation or to bind a character mesh to a bone's center.
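
For example, a bare-bones version of what I'm describing might look like this in BJS (a sketch only - the bone index, character mesh, and box names are placeholders, and it assumes a BJS version with the bone manipulation API):

var upperArmBox = BABYLON.Mesh.CreateBox("driver_upperArm", 0.1, scene);
var forearmBox = BABYLON.Mesh.CreateBox("driver_forearm", 0.1, scene);
forearmBox.parent = upperArmBox;  // the hierarchy accumulates the parent transforms for us
forearmBox.rotationQuaternion = BABYLON.Quaternion.Identity();

scene.registerBeforeRender(function () {
    // Only the box's center axis drives the bone - the visible cube is just for inspection.
    targetSkeleton.bones[FOREARM_INDEX].setRotationQuaternion(
        forearmBox.rotationQuaternion, BABYLON.Space.LOCAL, characterMesh);
});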

 

It's great to see you using WebsocketIO to read the Kinect data directly. As I previously mentioned, there are a couple of other users who have been working on this; however, you might be the farthest along in your pipeline at this time. Once you are able to drive a retargeted character skeleton directly in Babylon.js, I can help advise on adding additional features to make this a usable tool for all. As mentioned, I can't believe I was once young and stupid enough to believe you could use Kaydara's software to drive character skeletons when it was stage lighting control software - and now it is MotionBuilder. So I'm sure I can assist anyone with character animation and retargeting, as I worked with Kaydara on software development in Montreal until they sold to Autodesk - lucky boys.

 

Keep going. :)


@benoit-1842 - To use what Polpatch is building, you should only require the Kinect V1, as long as you have the Kinect SDK running with the Kinect V1 plugged into a USB 2.0 port. The Kinect V2 requires USB 3.0, and for motion capture there is very little difference in quality between the V1 and V2. So I'm excited as well, and will do my very best to help Polpatch build a feature set for users to quickly set up and capture - once he has the Kinect motion data retargeted to a character in real time in Babylon.js. And yes, he obviously is a very talented developer - far beyond my abilities. :)

 

@Polpatch - I'm sorry, I don't fully understand your question. If I try, it appears that you are concerned about local and global axes. Which target box (where in the levels of connectivity) are you referring to? Is this the box (object center) whose constraint drives one of the bones in the retargeted skeleton? If so, then there should be no issues, as you are simply using the box's center axis as an offset to drive the center axis of a bone. If this is not clear enough, please outline your concern in more detail, and I can help with whatever it might be. Illustrations are always valuable tools when discussing such complex issues, and perhaps we should Skype if you have any problems, to make certain that your setup is complete. If you'd like to Skype in the next couple of days, please message me and we can set up a time to go online where I can review your setup and assure it is correct to retarget without error.

 

Cheers,

 

DB


Sorry guys, I made a mistake...
When I created the list of sourceBoxes I didn't consider the hierarchy between the boxes.

Now I have issues initializing the rotation of the sourceBoxes (I'm not so good ahahaha).
Unfortunately, all the changes that I apply to an object become local when I assign a parent.
So, here is the list of sourceBoxes:
//I tried an avatar kinect similar to the skeleton, but I failed
var listMesh = [
    BABYLON.Mesh.CreateBox("source_spineBase", scale, scene),      //0
    BABYLON.Mesh.CreateBox("source_spineShoulder", scale, scene),  //1
    BABYLON.Mesh.CreateBox("source_neck", scale, scene),           //2
    BABYLON.Mesh.CreateBox("source_head", scale, scene),           //3
    BABYLON.Mesh.CreateBox("source_elbowLeft", scale, scene),      //4
    BABYLON.Mesh.CreateBox("source_wristLeft", scale, scene),      //5
    BABYLON.Mesh.CreateBox("source_handLeft", scale, scene),       //6
    BABYLON.Mesh.CreateBox("source_elbowRight", scale, scene),     //7
    BABYLON.Mesh.CreateBox("source_wristRight", scale, scene),     //8
    BABYLON.Mesh.CreateBox("source_handRight", scale, scene),      //9
    BABYLON.Mesh.CreateBox("source_kneeLeft", scale, scene),       //10
    BABYLON.Mesh.CreateBox("source_ankleLeft", scale, scene),      //11
    BABYLON.Mesh.CreateBox("source_footLeft", scale, scene),       //12
    BABYLON.Mesh.CreateBox("source_kneeRight", scale, scene),      //13
    BABYLON.Mesh.CreateBox("source_ankleRight", scale, scene),     //14
    BABYLON.Mesh.CreateBox("source_footRight", scale, scene)       //15
];
I created the hierarchy:
var InitializeHierarchy = function(listBox){
    listBox[1].parent = listBox[0];
    listBox[2].parent = listBox[1];
    listBox[3].parent = listBox[2];
    listBox[4].parent = listBox[1];
    listBox[5].parent = listBox[4];
    listBox[6].parent = listBox[5];
    listBox[7].parent = listBox[1];
    listBox[8].parent = listBox[7];
    listBox[9].parent = listBox[8];
    listBox[10].parent = listBox[0];
    listBox[11].parent = listBox[10];
    listBox[12].parent = listBox[11];
    listBox[13].parent = listBox[0];
    listBox[14].parent = listBox[13];
    listBox[15].parent = listBox[14];
}
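
A quick illustration of the local-after-parenting behavior I mentioned above (a throwaway snippet, not part of my project code):

var parentBox = BABYLON.Mesh.CreateBox("parent", 1, scene);
var childBox = BABYLON.Mesh.CreateBox("child", 1, scene);
childBox.parent = parentBox;
parentBox.position.x = 5;
childBox.position.x = 2;   // local: relative to parentBox
parentBox.computeWorldMatrix(true);
childBox.computeWorldMatrix(true);
console.log(childBox.position.x);              // 2 (local)
console.log(childBox.getAbsolutePosition().x); // 7 (global)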
 
I also created some functions to simplify switching between sourceBox / targetBone / sourceJoint etc.
 
This is the function for positioning the list of sourceBoxes (with hierarchy); it works:
var SetPositionFromTarget = function(boxList){
    for(var i = 0; i < boxList.length; i++){
        //compute the global position of the related targetBone
        var bone = targetSkeleton.bones[HelperSourceBoxTargetSkeleton(i)];
        var boneMatrix = bone.getWorldMatrix();
        x = boneMatrix.m[12];
        y = boneMatrix.m[13];
        z = boneMatrix.m[14];
        var positionBone = new BABYLON.Vector3(x, y, z);
        //in case of the rootBox use the global position directly
        if(boxList[i].parent == null){
            boxList[i].position = positionBone;
        }
        //for a non-root box compute the local position relative to the parent
        else{
            //global position of the parentBone
            var parentBone = targetSkeleton.bones[HelperSourceBoxTargetSkeletonL(boxList[i].parent.id)];
            var parentMatrix = parentBone.getWorldMatrix();
            x = parentMatrix.m[12];
            y = parentMatrix.m[13];
            z = parentMatrix.m[14];
            var parentPosition = new BABYLON.Vector3(x, y, z);
            //compute the local position
            var position = positionBone.subtract(parentPosition);
            boxList[i].position = position;
        }
    }
}

[Screenshots: position1.png, position2.png]

 
 
 
But when I try to insert the (local) rotations through the following algorithm, the result does not coincide:
var SetRotationFromSource = function(boxList){
    for(var i = 1; i < boxList.length; i++){
        //global orientation of the related sourceJoint
        var sourceJoint = HelperSourceBoxSourceSkeleton(i);
        var jointOrientation = sourceSkeleton.jointOrientation[sourceJoint].normalize();
        //global orientation of the related sourceJoint of the box's parent (it's so hard to write ahahaha)
        var parentJoint = HelperSourceBoxSourceSkeletonL(boxList[i].parent.id);
        var parentOrientation = sourceSkeleton.jointOrientation[parentJoint].normalize();
        //compute the local rotation
        var localOrientation = BABYLON.Quaternion.Inverse(parentOrientation).multiply(jointOrientation).normalize();
        boxList[i].rotationQuaternion = localOrientation;
    }
}

[Screenshots: rotation1.png, rotation2.png]

I colored the various zones for greater distinction (the parts are mirrored for Kinect):
right arm - green
left arm - red
right leg - magenta
left leg - yellow
 
The rotations aren't random; I think I have to work on the axes, but I don't know how.
 
Or maybe it is my function that calculates the local rotation that is incorrect?
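
One thing I suspect (just a sketch of the idea, I have not verified it): Kinect camera space is right-handed while Babylon.js is left-handed, so the quaternions may need a handedness conversion before I compute the local rotations. Mirroring the x axis would mean something like:

//convert a Kinect orientation (x, y, z, w) into Babylon's left-handed space
var kinectToBabylonQuat = function(q){
    return new BABYLON.Quaternion(q.x, -q.y, -q.z, q.w);
};
//and the matching position conversion
var kinectToBabylonPos = function(p){
    return new BABYLON.Vector3(-p.x, p.y, p.z);
};
//apply these BEFORE Quaternion.Inverse(parentOrientation).multiply(jointOrientation)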
 
As for the Skype call... my English is very, very, very bad :lol:

Yeah, I agree with dbawel... If you could do a kind of YouTube tutorial, that would be very useful for everybody (it can be with subtitles if your English is rusty like mine ;) ). But I think that putting your work in progress on record could be invaluable...

 

If you need any assistance, a tester, etc., I am there... I am totally not a JavaScript guru... but I know how quaternions, rotations, motion capture, retargeting, etc. work...

 

Thanx,

 

Benoit


It's very difficult to see how you have placed your retargeting boxes (axes) - or Kinect boxes - as I cannot tell by just looking at your screenshots. The mapping of global to local or local to global axes shouldn't matter in this case, as you're simply passing rotation transforms from one axis to another. In viewing your screenshots, I'm not certain what I'm looking at, with many of the boxes (green, red, yellow, purple, blue) in different locations than the bone axes. Before you even attempt to retarget to a skeleton which has different bone lengths than your Kinect skeleton, the skeleton to which you are retargeting must be the same size and proportions.

 

It is necessary to do this first test using the same bone count, scale, size, hierarchy, and proportions for both the Kinect skeleton and the skeleton you're retargeting to, as without additional functions any positional offset will produce incorrect transforms. This is why I highly recommend retargeting to a copy of the Kinect skeleton as a first test: you are testing your retargeting application and your Kinect motion capture skeleton for the very first time, so you definitely want to use elements which you know will work best and most simply. You may have issues with your application, and you'll never discover them with so many additional variables potentially causing further problems.

 

Once you know that you are correctly retargeting to an exact copy of your Kinect skeleton, you know your application is working correctly, and you can then begin to change elements such as your retarget skeleton; you will be able to build a reliable application much faster and without issue. Otherwise, you'll most likely be chasing problems which probably aren't problems with your application at all, but issues with your skeletons. So you really need to begin simply, using known and controlled objects - such as retargeting to an exact copy of your Kinect skeleton.

It already appears to me that you are not placing your retargeting boxes (axes) correctly, so please start with an exact copy of your Kinect skeleton as the retarget target. Just build a quick mesh in Blender and bind it to the copied Kinect skeleton. This will simplify all controllable elements of your scene, and will tell you first whether your application is working correctly. After confirming that your application works without apparent errors, you can then add complexity in steps. And I highly recommend that you only advance the complexity of the elements and functions one at a time, as what you are attempting to build is highly complex, and there are so many variables (unknown issues) in your application that you are guaranteed to have problems. Otherwise, if you don't simplify your skeletons and all other scene elements now, in the first version of your application and testing, I'm certain that you will be chasing multiple problems which will not be obvious, caused by retargeting to a skeleton which differs in any way from your Kinect skeleton.
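
To show how small that first identity test can be, retargeting to an exact copy of the Kinect skeleton is essentially this (a sketch only, under the assumption that both skeletons share the same bone count and index order):

scene.registerBeforeRender(function () {
    //identical hierarchies: bone i maps to bone i, so local matrices copy 1:1
    for (var i = 0; i < kinectSkeleton.bones.length; i++) {
        copySkeleton.bones[i].updateMatrix(kinectSkeleton.bones[i].getLocalMatrix().clone());
    }
});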

 

I'm telling you this from experience, as I was very fortunate to work with the very best developers in the world at Kaydara, the pioneers of skeleton retargeting; and we still had many problems to solve using the simplest tests we could possibly design in the early stages of attempting to retarget recorded motion capture data in post - and you're already complicating your process much more by attempting to retarget motion capture data in real time. That said, based on my personal experience, I don't believe you will have greater difficulties, since you have the great advantage of the Kinect SDK providing much of the calculations - which we initially had to build from no previous knowledge or experience. However, in order to identify the problems which will undoubtedly arise, you must begin with as few variables in your testing elements as possible.

 

Also, as benoit has recommended, if you want the very best help that the BJS community can provide, please download and use an application to capture a video of your screen with your setup and application working in real time, so that I and others can see your setup and how the skeletons behave and respond, in order to assist in any troubleshooting. And please keep at this, as you will come out with greater experience than you might ever imagine, and then be able to assist others in this community with information which will certainly advance Babylon.js a great deal in the future.

 

DB


Also, if you're not able to solve this in the next few days, I'm sure I can find time to create a tutorial using MotionBuilder and my microphone - simply to demonstrate how to set up retargeting between skeletons with nothing more than boxes or nulls (objects with an axis), parenting, and constraints. But you appear so close to completing a working method that I believe you will be able to accomplish this very soon - especially if you follow my advice to first retarget from your Kinect skeleton to an exact copy of that skeleton using the process we've discussed. I know this doesn't sound that exciting, but as I mentioned previously, it removes all possible anomalies due to differences in the skeletons, so that you will know you have a working application which is completely reliable. I can then help you build the functions you will need to retarget between skeletons with different attributes such as size and bone count. But these functions are not simple to comprehend until you've actually built them - and only after you are able to retarget to the same or similar skeletons in real time. So let's take this in small steps, and be assured that everything works before progressing further.

 

DB


  • 2 weeks later...

Sorry for my absence; I really thank you for all the help that you have given me.
Unfortunately, at this time (between holidays and family) I am not at home and I don't have easy access to the Internet. I'm still working on the project, hoping to give you good news soon.

Merry Christmas (even if late) and Happy New Year folks!!!  :D  :D


  • 1 month later...

I have suspended working on motion capture itself, now that I have found Inverse Kinetics.  I have not completed that, but have made the blender exporter more friendly towards it.  Meaning if you name your bones with .ik & check ignore ik bones, they do not follow you to BJS.

I also implemented Blender actions as BJS animation ranges, copying animation ranges across skeletons in BJS, and exporting only the key frames of skeleton actions (Tower of Babel only).  I have yet to get my bone interpolator to work right, but feel that even if I got IK & motion capture to work, I could do more with IK than mocap.

I also now export skeleton rest pose & bone lengths from Blender.


Hey Ben,

PolPatch appeared to be extremely close to capturing from the Kinect directly into the Babylon framework. I hope he sees this post and completes his work. I doubt he would require much more time to complete his working code, based upon his most recent posts and questions.

I hope he continues...

DB


  • 6 months later...

Hello everyone!

(Especially @Polpatch and @dbawel, if you are interested...)

I'm working on a VERY similar project.

Kinect v2 data to Babylon.js through a WebSocket. I would like to have some guidance from you guys, and maybe I can help you too in some way.
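
The browser side of my pipeline is basically just this (simplified; the message format is whatever my own server sends, so treat the field names as an example):

var socket = new WebSocket("ws://localhost:8887");
socket.onmessage = function (event) {
    var bodies = JSON.parse(event.data);
    if (bodies.length === 0) return;
    var joints = bodies[0].joints; //field layout depends on the server
    for (var i = 0; i < sourceBoxes.length; i++) {
        var j = joints[i];
        //one source box per Kinect joint
        sourceBoxes[i].position = new BABYLON.Vector3(j.cameraX, j.cameraY, j.cameraZ);
    }
};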

[Screenshot: DevPic 16-9-23.png]

@Polpatch Are you still working on your project too? If so, maybe we can help each other? For your rotation problem: I didn't invert or multiply any of the parents' rotations. I just added the joint's and its parent's x, y, z and w quaternion values together, and it is working, as you can see from the picture. Hope that helps you to continue your project.

 

Also: I'm not yet very experienced in software development or in the English language, so go easy on me! :)

 

Hope you get my message and continue this thread,

Mazax


On 2/22/2016 at 4:38 PM, JCPalmer said:

 

I have suspended working on motion capture itself, now that I have found Inverse Kinetics.  I have not completed that, but have made the blender exporter more friendly towards it.  Meaning if you name your bones with .ik & check ignore ik bones, they do not follow you to BJS.

 

@JCPalmer - I assume you mean Inverse Kinematics and not Kinetics? It's probably a dumb question for me to ask, as I can't imagine anything here in the realm of kinetics - but as you're generally leading the rest of us in developing new features for the BJS framework, I thought I'd make certain this is the case. And if there is any further progress in rendering real-time mocap using BJS, I'm certain most everyone would be interested - especially those of us who use the Kinect (V1 or V2) regularly. And of course @Deltakosh would certainly take notice of this, as he appears to have been key to the development of the Kinect for Microsoft - or for its use with the Xbox. And DK, if there is a video or article which describes what your role was/is specifically in the development of the Kinect, I would be very interested in understanding your contributions; as it was and still is a revolutionary piece of hardware which has contributed greatly to quality motion capture, 3D scanning, and other applications not only within the gaming community, but in industrial, scientific, and other areas which would not otherwise have been able to afford motion capture and z-depth camera/pixel registration.

DB


  • 3 weeks later...

Hi! 

I need to get a 3D model to move according to the Kinect sensor data inside Babylon.js. I'm already getting the body data from the sensor, but I'm having some problems with the movement of the model, as you can see from my previous post.

I have made a simple debug 3D model in Blender with the right skeleton armature (right names, hierarchy, and bone weights (with "limit total" of 4) according to BJS and Kinect). Everything works fine in Blender (moving the skeleton deforms the model), but not in BJS. This is my first time making a rigged 3D model, so I'm not really familiar with all the terms and concepts in 3D modeling and/or 3D models in BJS.

The main problem I'm having is that I don't know how to move the model. The Babylon.js documentation seems to focus only on ready-made animations, not on manual deformation of models.

I've moved and deformed the skeleton in BJS, but cannot get the model to move with the skeleton. 

[Screenshot: notGoodAtAll.PNG]

Here is the code I'm using:

function LoadCustomAsset() {
    BABYLON.SceneLoader.ImportMesh("", "assets/", "boxman.babylon", scene,
        function (meshes, particles, skeletons) {
            mesh = meshes[0];

            mesh.position = new BABYLON.Vector3(0, 5, 20);
            mesh.scaling = new BABYLON.Vector3(1, 1, 1);

            //make skeleton visible
            var skeletonViewer = new BABYLON.Debug.SkeletonViewer(mesh.skeleton, mesh, scene);
            skeletonViewer.isEnabled = true;
            skeletonViewer.color = BABYLON.Color3.Red();

            //Move the skeleton
            var scale = new BABYLON.Vector3(1, 1, 1);
            var rotation = new BABYLON.Quaternion(0, 0, 0, 0);
            var translation = new BABYLON.Vector3(2, 0, 0);

            var matrix = new BABYLON.Matrix.Compose(scale, rotation, translation);

            //Apply the move-matrix to the base bone
            mesh.skeleton.bones[0].updateMatrix(matrix);

            mesh.applySkeleton(skeleton); //this really doesn't do anything...
        });
}

What am I missing? What is the right way to manually deform/animate the model?
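
While writing this post, I noticed two spots in my own snippet that look suspicious (a sketch of the corrected lines; I haven't confirmed this is the whole problem):

var scale = new BABYLON.Vector3(1, 1, 1);
var rotation = BABYLON.Quaternion.Identity(); //(0, 0, 0, 1) - a (0, 0, 0, 0) quaternion is degenerate
var translation = new BABYLON.Vector3(2, 0, 0);

//Matrix.Compose is a static helper, so it is called without `new`
var matrix = BABYLON.Matrix.Compose(scale, rotation, translation);
mesh.skeleton.bones[0].updateMatrix(matrix);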

I got a Blender animation working in BJS, which got me thinking: what if I just do a manual BJS animation of the model, building a BJS animation frame from the Kinect's body data? Is that even possible? I have read the documentation about BJS animations, but I don't really understand it. If manual animation is the right way to go, can you help me understand how it is done?
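
From the docs, what I imagine a manual animation frame would look like is something along these lines (an untested sketch; m0 and m1 stand for bone matrices I would build from the Kinect data):

var boneAnim = new BABYLON.Animation("kinectBone0", "_matrix", 30,
    BABYLON.Animation.ANIMATIONTYPE_MATRIX,
    BABYLON.Animation.ANIMATIONLOOPMODE_CYCLE);
boneAnim.setKeys([
    { frame: 0,  value: m0 },  //bone matrix built from one Kinect body frame
    { frame: 30, value: m1 }   //bone matrix built from a later body frame
]);
mesh.skeleton.bones[0].animations = [boneAnim];
scene.beginAnimation(mesh.skeleton, 0, 30, true);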

 

Thank you for your help.

Mazax


@Mazax -

I can tell you that others have been attempting this, both in Blender and natively in BJS, and I've not yet seen it working, yet a lot of time has been spent. Perhaps with v2.5 this might be more feasible, but only @Deltakosh or @davrous could answer this, most likely - since I know that DK was instrumental in the development of the Kinect at Microsoft, and of course is the author of Babylon.js.

However, I've been running a similar pipeline for almost 2 years now, and I found it well worth spending the $500 and buying Brekel Pro Body V2, which works flawlessly as a plugin for MotionBuilder - my preferred pipeline. At the very least, you can get a fully functioning trial version of MotionBuilder for free for 30 days, and Brekel will give you at least a week or two of trial as well, prior to deciding if you want to purchase. But for me it's a no-brainer: even if you are able to capture into Blender or Babylon, good luck with creating move trees, creating and blending motions, etc. - as it's far more complex than simply recording motions. Recordings will be of little use without a solid motion editing platform; if you're not familiar with MB, it will save you months of character work if you are building a character-based game or project with more than a few bipedal animations.

If you need more help or advice, please ask, as I've been in mocap for 20 years now and developed much of the software currently in use - including MotionBuilder, which was once named Filmbox. I bought the very first license from Kaydara, and it was so primitive at the time (yet also fantastic for 1995) that I was forced to become intimately involved with their development team in Montreal for years, until they were bought by Autodesk. Features such as the TCB sync generator and TCB recording were all mine. :D I hope I'm not telling you about a pipeline you already know. If not, at least try it - you'll thank me... ;)

Cheers,

DB


  • 1 year later...

Hello Devs -

Anyone crack this nut yet? How about it, @Mazax and @Polpatch? And JC, @JCPalmer? I might have a need to implement this for a large electronics company, and welcome any updates. It would really suck to pick this journey up on my own, but if I must, then I must. I find out this week (maybe tomorrow) if we might need this marked as done.

DB


I have my doubts those other people are still around.  I did take a look around very recently into Kinect.  Here is the current SDK manual.  The install is here.  From your description of your current use, that might already be installed by you or by the software you are running.

One major problem I see is that the JavaScript reference for Kinect is for 1.8, and gone for 2.0. That may be overcome by this GitHub repo as a replacement for the Microsoft JavaScript solution.  It was updated just 2 months ago, so it is probably Kinect 2.
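
If that repo works the way its examples suggest, the Node side of a live bridge is pretty small. Untested sketch; the package names ('kinect2', plus 'ws' for the socket), the event name, and the body fields are assumptions taken from that repo's examples:

const Kinect2 = require('kinect2');
const WebSocket = require('ws');

const kinect = new Kinect2();
const wss = new WebSocket.Server({ port: 8887 });

if (kinect.open()) {
    kinect.on('bodyFrame', (bodyFrame) => {
        // relay only tracked bodies to every connected browser
        const payload = JSON.stringify(bodyFrame.bodies.filter((b) => b.tracked));
        wss.clients.forEach((client) => {
            if (client.readyState === WebSocket.OPEN) client.send(payload);
        });
    });
    kinect.openBodyReader();
}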

I assume your requirements are for live transfer rather than capture, since you can already do capture and your client could just buy the stuff you did.  I was primarily seeking a capture capability with a very short workflow pipeline.  I am in progress with my own commercial output at this time.

With both voicesync & lipsync operational, or at least operational enough for me to use, I have switched back to armature experiments.  I have expanded to start to play with IK directly in BJS, as well as exporting poses out of Blender.  I am starting to have a feeling that I have a way to deal with root bone translation which will really solve the "floating" effect.

Also, adding key frames from a Kinect might be worth it if I did not have to run them through some long workflow pipeline.  I cannot really work on this right now, on top of my own work.  Root bone translation is my priority right now.

