Picking parts of a mesh for FPS (headshot, bodyshot, etc)


timetocode

How does one structure a humanoid mesh and then use it within Babylon such that one can ray-pick isolated body parts as they move through animations? Does one just leave each body part as a separate object in Blender?

How will pickResult tell me that a ray traveled through the right hand? Currently the character is 11 separate meshes animated completely in JavaScript, which makes the picking work just fine but is undesirable for adding animations :D. I'm also not sure about the performance ramifications.

[GIF: the character animating in-game]


Yep, using a separate mesh as a body part in Blender should work. One note: picking only works for node animations; skinned-animation picking will pick from the static (bind) position. If your characters are like the one in your image, though, node animations should work fine. If the characters have complex geometry, it is also recommended to use invisible hit boxes for picking instead of the actual mesh to save on performance.
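
A rough sketch of what the picking side could look like, assuming the invisible hit boxes follow a naming convention such as a "-hitbox" suffix (that convention is mine, not something Babylon requires):

// fire a ray from the camera and only consider hit-box meshes
const ray = camera.getForwardRay(1000) // 1000 = hypothetical max range
const pick = scene.pickWithRay(ray, mesh => mesh.name.endsWith('-hitbox'))
if (pick.hit) {
    console.log('hit', pick.pickedMesh.name, 'at', pick.pickedPoint)
}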


Thanks, I'm sure that'll make things much easier.

How do I make something a node animation vs. a skinned animation (are bones/rigs/armatures allowed)?

How would one take one animation and use it on a dozen different models? All my characters have the same proportions but different heads, torsos etc.


@timetocode, If you are thinking of node animation, you are animating the individual parts of the body with no skeleton. You can use parent relationships, but trying to retarget animations to other meshes would be impossible. The best approach to this is what @trevordev mentioned above. Use your normal skeleton/skinning process and animate as you normally would. The thing that you are interested in, however, is creating a cube primitive and making it a child of the important joints in your skeleton.

For example, place a cube as a child of the hand, ensuring that the cube fits the hand mesh as closely as possible. Do this for each body part you need to raycast against, and what you will end up with are cubes following along with your animations. When you export that mesh, you want to use a naming convention that is easy to scrub for in code so you can gather all of them quickly and set them not to render. Then you will only want to raycast against these targets. In this way, you can combine your character's mesh into one mesh to reduce draw calls, but still get the individual targets to cast against.
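
Sketching the code side of that (the "-hitbox" suffix is just an example convention, and this assumes the cubes came through the export as regular meshes): gather them by name after loading, stop rendering them, and keep them pickable. You would then pair this with a pickWithRay predicate matching the same convention, since a default pick will usually skip invisible meshes.

const hitboxes = scene.meshes.filter(mesh => mesh.name.endsWith('-hitbox'))
hitboxes.forEach(mesh => {
    mesh.isVisible = false // never rendered
    mesh.isPickable = true // but still available to ray casts
})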

Then, for sharing animations across models, make sure they are all skinned to the same skeletal structure (naming and hierarchy need to be the same) and you can point one animation at any other mesh you want. So long as the bones follow the exact same conventions as the originating animation, it will animate correctly (see https://doc.babylonjs.com/babylon101/animations). The mesh, textures, and skinning information do not matter for the animation, only the skeletal structure, as the animation will crawl the hierarchy looking for a named joint to set a rotation value on. It has no concept of the skinning information, the mesh, or the materials on top of the skeleton.
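
One hedged sketch of what that reuse might look like in Babylon.js, assuming both asset containers are already loaded, both skeletons share the same bone names and hierarchy, and the source skeleton has a named animation range called 'walk' (all of those names are placeholders):

const sourceSkeleton = zombieContainer.skeletons[0]       // owns the 'walk' range
const targetSkeleton = soccerPlayerContainer.skeletons[0] // same bone names/hierarchy
targetSkeleton.copyAnimationRange(sourceSkeleton, 'walk', true) // true = rescale if needed
targetSkeleton.beginAnimation('walk', true) // loop the copied range on the other model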

I hope this makes sense, but please feel free to ping me if you have more questions.


All of this sounds great! I do have more questions, as the end state sounds really ideal for what I'm making but I'm not quite sure how it all works together.

I'm hearing that using animations from Blender to position a rigged model does not actually change the transforms of each body part on the CPU. Is that true? (Is it a shader...?)

How does attaching a cube to the leg bone produce an object that does actually move on the CPU? At face value I don't see how the cube is any different from the leg itself. Is it simply because the cube would be a separate object (and then no shader gets used)?

If so, then a lot of my confusion may come from not knowing how to correctly make a 3D character to begin with. I'm using a voxel art program called MagicaVoxel for the modeling itself, and I drew each body part and saved it as a separate .obj file (!). This is what I'm using in the current game prototype, where these 11 body parts are parented to each other manually in my code and then animated tediously by functions that rotate each mesh.

Example (no need to read too carefully, just conveying a general approach)


// example; template contains all of the body parts loaded once and positioned
// I have one template per "model" (zombie with suit, and the soccer player)
this.leftUpperArm = template.leftUpperArm.createInstance('leftUpperArm')
this.leftUpperArm.parent = this.torso

this.leftLowerArm = template.leftLowerArm.createInstance('leftLowerArm')
this.leftLowerArm.parent = this.leftUpperArm

this.leftHand = template.leftHand.createInstance('leftHand')
this.leftHand.parent = this.leftLowerArm
// skipped: the 8 other body parts and the weapons


// example of how the animation logic ends up looking...
update(delta) {
    if (this.isWalking) {
        // swing the cute little legs
        this.rightLeg.rotation.x += 4.6 * this.walkToggle * delta
        this.leftLeg.rotation.x -= 4.6 * this.walkToggle * delta

        // flip the direction the legs are moving
        if (this.rightLeg.rotation.x > 0.8) {
            this.walkToggle = 1
        }
        if (this.rightLeg.rotation.x < -0.8) {
            this.walkToggle = -1
        }
    }

    if (this.isPlayingDeathAnimation) {
        // makes the whole character fall down, towards 'death.rotation'
        // 'this.node' is the parent to which everything is connected
        lerpTowards(this.node.rotation, 'x', this.death.rotation.x, delta * 5)
        lerpTowards(this.node.rotation, 'y', this.death.rotation.y, delta * 5)
        lerpTowards(this.node.rotation, 'z', this.death.rotation.z, delta * 5)

        // makes the character sink a little bit into the ground
        lerpTowards(this.node.position, 'y', this.death.position.y, delta * 5)
        if (this.node.position.y === this.death.position.y) {
            console.log('death animation complete')
            this.isPlayingDeathAnimation = false
        }
    }
    // skipped: like 400 more lines like this for animation
    // and 800 for the player logic :(
}

 

So is it really just whether these things are separate meshes/objects (not sure of the term) in Blender that determines whether the individual parts can be hit with rays? Or have I missed the point?

I should mention that the end goal here is to handle lag compensation of shots in a first-person shooter where characters are comprised of multiple hit boxes with animations (it works!). So NullEngine is a significant component (thus no GPU, and all of these questions!). I'm now trying to make this thing bearable to use, which means being able to use art tools for the animation. I guess I've also now learned that these meshes are 11x more expensive to render than they ought to be. Also that, hypothetically, the server should load only a "hitbox" version of the models, while the client loads the actual renderable version (in addition to the hitbox, which it needs for optionally placing a blood effect when predicting a shot).
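
Roughly what I have in mind on the server, as a sketch (assuming Node.js and the babylonjs npm package; the tick rate and names are placeholders, and loading files over URLs in Node may need an XMLHttpRequest polyfill):

const BABYLON = require('babylonjs')

const engine = new BABYLON.NullEngine() // no canvas, no GPU
const scene = new BABYLON.Scene(engine)

// ... load or build the hitbox-only version of each character here ...

// step the scene at a fixed rate so animations and hitbox transforms advance
setInterval(() => scene.render(), 1000 / 30)

// later, when validating a rewound shot against the hit boxes:
const validateShot = (origin, direction) => {
    const ray = new BABYLON.Ray(origin, direction, 1000)
    return scene.pickWithRay(ray, mesh => mesh.name.endsWith('-hitbox'))
}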

Thanks so much for the replies @trevordev and @PatrickRyan

 


I crammed some Blender tutorials, so I can actually attempt to implement this thing now. I did get stuck, though.

[GIF: the zombie animating, with the white TestHitBox on its left forearm]

The white box on the zombie's left forearm is the TestHitBox

[screenshot: the TestHitBox mesh]

It is a little messed up but it is good enough for this test. The whole zombie is one mesh this time.

[screenshot: the forearm hitbox in Blender]

The forearm hit box is a separate mesh and its vertices are assigned to the "forearm left" bone so that it moves with the animation (perhaps this part is wrong).

When playing the animation I'm getting no discernible changes in rotation/position from the TestHitBox. I'm presuming this means I couldn't hit it with a raycast. Or maybe it just needs its matrices updated...

BABYLON.SceneLoader.LoadAssetContainer('http://localhost:8080/models/', 'test-anim.babylon', this.scene, (container) => {
    //console.log(container)
    container.meshes.forEach(mesh => {
        mesh.position.x = 90
        mesh.position.z = 90

        setInterval(() => {
            console.log(mesh.name, '::', mesh.rotation, mesh.position)
        }, 1000)
    })        
    this.scene.beginAnimation(container.skeletons[0], 0, 100, true, 1)
    container.addAllToScene()
}, null, err => { console.log('err', err) })

I also tried mesh.computeBonesUsingShaders = false on just the hitbox mesh -- still, the console log only says:

TestHitBox :: d {x: 0, y: 0, z: 0} d {x: 90, y: 0, z: 90}

Now that everything is set up I feel like we're super close!! Worst case, I could probably parse the animation and position the hitbox via JavaScript in parallel to the GPU doing its thing.

 


You are going along the right path so far, and I just see one issue from your last post. It sounds like you are skinning the white hitbox to the skeleton, which leaves you with the same problem as before: rays will only hit the box's bind position. This is because the vertices take their final position from the translation of the joints they are skinned to, interpolating a position based on an offset between them weighted by the skin.

An example would be a vertex that is skinned to two joints with a 0.7 weight to one and a 0.3 weight to the other. All skin weights must be normalized (add up to 1), and you can have up to 4 joint influences in Babylon.js. When you move that sample skeleton, the vertex will take its final position as a linear interpolation between the two joints: not midway between them, but 20% closer to the 0.7-weight joint, including the offsets from the bones.
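
As a toy illustration of that blend (not real Babylon skinning code; bindPosition and the two matrices stand in for the vertex's bind-pose position and each joint's skinning matrix):

// final position = 0.7 * (jointA's matrix applied to the vertex)
//                + 0.3 * (jointB's matrix applied to the vertex)
const posA = BABYLON.Vector3.TransformCoordinates(bindPosition, jointAMatrix)
const posB = BABYLON.Vector3.TransformCoordinates(bindPosition, jointBMatrix)
const skinnedPosition = posA.scale(0.7).add(posB.scale(0.3))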

What you want from your hitboxes is not to calculate the vertices of the box like you do for skinning, but rather to take the translation, rotation, and scale from a joint and apply it to the transform of the hitbox. The vertices of the hitbox do not change at all and just take their position from the triangle list of the mesh. To do this in Blender, you are looking for a parent relationship like this:

https://docs.blender.org/manual/en/latest/editors/3dview/object/properties/relations/parents.html

Setting a parent on an object confers no skin to it; rather, the transform takes the translation, rotation, and scale from the parent node. In essence, it is a separate mesh with the properties you need for a ray cast, but it follows a joint. In a sense, it's similar to skinning to the skeleton, but the difference is that the mesh won't deform and you can only take the properties of your single parent node. That means if you have a leg mesh parented to the leg joint and the knee bends, your leg mesh won't follow that bend. This could be useful for a simple Minecraft-type character, but again, you would need to carry one mesh per body part rather than one mesh for the whole body, which skinning allows.

For attachments, however, parenting is the best way to go... that could be accessories, hitboxes, or even things like attaching one character to another as a mount.


I've tried a few options now, including:

  • hitboxMesh.attachToBone(forearmLeftBone)
  • Blender: select the hitbox, then select the forearm bone in pose mode, then Parent to:
    • object
    • bone
    • bone relative

[screenshot: Blender parenting options]

Some of these look okay in Blender (some don't). None of them has produced a transform that changes while an animation is playing yet.

In fact, I went on to dig into the skeletons[0].bones objects, and even while a visible animation is playing in BJS, none of these objects is changing at all. So I may be accessing these objects incorrectly. I'm still a little confused about GPU vs. CPU, so I don't know if bones not moving within the JavaScript application (instead only moving on the GPU) is just how it works... but I'm suspicious that I'm just doing something else wrong.

I'm going to be digging through the BJS bone demos, because I distinctly remember one of them was manipulating bone orientation from JavaScript...


Yep, I was definitely wrong about the bones. I just happened to randomly pick bones from my model that weren't rotating. Bones do have their transforms changing during an animation.

I'm not sure how to connect the hitboxes in blender yet, I'll post back later.

If anyone comes across this thread for their own learning, this playground demo has bones, animations, and a tool for visualizing them: https://www.babylonjs-playground.com/#1B1PUZ#15 

Logging the varying bone positions to console or creating additional BoneAxesViewers has been pretty good for learning.
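
For reference, the viewer setup is roughly like this (characterMesh is a placeholder for whatever skinned mesh you loaded):

const viewers = characterMesh.skeleton.bones.map(bone =>
    new BABYLON.Debug.BoneAxesViewer(scene, bone, characterMesh))

scene.onBeforeRenderObservable.add(() => {
    viewers.forEach(viewer => viewer.update()) // keep the axes glued to the bones
})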


As far as I can tell, parenting a mesh to a bone is fine in Blender, but not supported by the Blender-to-Babylon exporter. Does that sound possible?

While I am a newbie with Blender, I think I've tried most sane permutations of parenting relationships, and while several work in Blender animations, they all result in a stationary hitbox in Babylon (often at the foot of the model). The only way that *bones* produce movement from Blender in Babylon is with weighted vertices (I think?).

This is all hypothetically possible though, as @Sebavan's suggestion of attachToBone proves. Both attachToBone and regular old .parent will produce a bone moving a mesh in Babylon, which is all that needs to happen in the end. Though I'll note that this alone does not fully repair any of the models I've tried, because when the mesh gets attached to a bone on the Babylon side it ends up in an orientation that isn't the same as it was in Blender.

I'm a blender/importer newb but that's my conclusion.

I think this thread may come to a similar conclusion: http://www.html5gamedevs.com/topic/22851-blender-exporter-missing-meshes-when-parented-to-bone/ I could totally see why this feature was not valuable for animation, though it is good for attaching weapons, advanced collision checks, and fancy network stuff.

I've got a few final experiments to do, but unless someone chimes in and tells me that this doesn't sound right, I'm going to pursue alternatives (I have a few in mind already: modify the exporter, or make two models and attach them on the Babylon side).

Happy Thanksgiving to those that celebrate.

 

 


I'm using the bone functions getPositionToRef() and getRotationToRef() to sync hitboxes to bones.
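
Roughly like this each frame (the names are illustrative, and the hitboxes here are plain meshes rather than children of the bones):

scene.onBeforeRenderObservable.add(() => {
    hitboxesByBone.forEach(({ bone, hitbox }) => {
        bone.getPositionToRef(BABYLON.Space.WORLD, characterMesh, hitbox.position)
        bone.getRotationToRef(BABYLON.Space.WORLD, characterMesh, hitbox.rotation)
    })
})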

22 hours ago, timetocode said:

......, modify exporter, ......

If you went down this path, you would help others using the Blender exporter as well :P

You would have to somehow use the rotation in Blender to fix the rotation in BabylonJS though, since, as you say, the rotation in BabylonJS is different from that of Blender. I think the model in BabylonJS inherits the rotationQuaternion of the bone. You would either rotate the model/hitbox each frame, or take the difference between Blender and BabylonJS and apply it before attaching the model to the bone.
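
One sketch of the baked-correction idea (the axis and angle below are placeholders; the real offset depends on how the Blender export differs, and this assumes the hitbox's local rotation is applied relative to the bone it is attached to):

hitbox.attachToBone(forearmBone, characterMesh)
// bake a fixed correction into the hitbox's local rotation
hitbox.rotationQuaternion = BABYLON.Quaternion.RotationAxis(BABYLON.Axis.X, Math.PI / 2)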


I took a look at many things, including the exporter, the importer (aka Mesh.Parse), and the alternative of disabling the bone shader for a mesh.

I couldn't make much headway with the exporter as I don't know Python, Blender, or 3D math (lol...), so take what I say with a heap of salt. What I think I learned, though, is that parenting a mesh to a bone does not exist in the Python exporter, nor does it exist in Babylon's Mesh.Parse. The exporter and importer have the concept of meshes being parented to meshes and bones being parented to bones, and each follows a different schema: most of the bone stuff is array-index based, and most of the mesh stuff uses either a dynamically assigned id or the name of the mesh in Blender. Each method makes sense in its own context, but mesh-to-bone would be a bit new.

First off, it is a slightly ambiguous relationship. Blender itself has two parenting modes for this: one called "object" (presumably the normal relationship), which for some reason doesn't move the mesh even when the bone moves, and one called "bone", which does make the mesh move when the bone moves. Babylon can use regular .parent to attach a mesh to a bone, as well as attachToBone; I'm not sure what the difference is, since both seem to move the mesh with the bone.

So at least in the exporter, something fairly new would need to be added saying that a mesh has a parent, that this parent is a bone (special), which armature the bone is in, and which bone index it has. On the Babylon side there would need to be a parser (well, it's just JSON) for this new property on meshes, and then someone would have to decide what the relationship actually is... I believe plain .parent is where it's at, but I'm not sure. And then we get to the part that I don't know how to do: what is the actual LocRot of the mesh? We want the mesh to move with the bone like any object parented to any other; that part is already handled by virtue of transform matrices. But the initial offset of the mesh relative to the bone (whose parent is the armature) needs to be captured in Blender and then expressed in Babylon, and I'm not sure how to even see those numbers in Blender.

Using computeBonesUsingShaders = false was another option I explored. In this scenario one makes a hitbox in Blender and allows it to be transformed just like the skin of the mesh (I set all the vertex weights to 1 for one bone). This *almost* works. Specifically, it seems that using computeBonesUsingShaders = false on the whole zombie mesh did allow the raycaster to adjust to changes in its shape. I was uncertain about how accurate it was, but I was able to register hits on poses that had the arms in different positions. However, when I added multiple new meshes to the arms as hitboxes in Blender and tried the same on them, I would lose them. Everything was okay in the renderer, but when firing rays at the character these hitboxes could only be hit when they were near the neutral pose. I mention this because perhaps a slight change in here might be another way to create a multi-mesh picking system compatible with animations -- performance ramifications unknown.
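
For anyone trying that variation, the setup is roughly as below (the per-frame refreshBoundingInfo(true) is a guess at keeping the bounding box in sync with the pose so the broad-phase test doesn't reject the ray; its performance cost is unknown):

hitboxMesh.computeBonesUsingShaders = false // skin on the CPU so rays see the pose
scene.onBeforeRenderObservable.add(() => {
    hitboxMesh.refreshBoundingInfo(true) // true = apply the skeleton to the bounds
})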

So yea that's my summary. 

What I'm actually going to do to finish this project is create the hitboxes in Babylon. I'm going to iterate through the bones, create my 11 hit boxes using MeshBuilder.CreateBox, parent them to bones, and then fiddle around a fair amount with their positions until these hitboxes are sitting in approximately the same place as the limbs of my meshes. I know that sounds like only a partial solution, and it is, but the time investment is modest. These hitboxes only need to be manually typed in the first time, and then they'll conform to any animations that are added from Blender, which is a big plus because my previous approach was entirely programmatic with no art software involved in animation.


I do want to revisit the exporter after I learn more 3D stuff. For now I'm going to mark this as solved. Thanks for the education, everyone.

After loading the model:

skeleton.bones.forEach(bone => {        
    const hitbox = createHitbox(bone.name, this.scene)
    if (hitbox) {
        hitbox.attachToBone(bone, container.meshes[0])
    }                
})

Example of a manual hit box:

const createHitbox = (name, scene) => {
    let hitbox = null

    if (name === 'torso') {
        hitbox = BABYLON.MeshBuilder.CreateBox(`${name}-hitbox`, {
            width: 15,
            height: 18,
            depth: 10
        }, scene)
        hitbox.position.y = 2
    }
    // etc hardcoded positions for the other 10 hitboxes

    return hitbox // null for bones that don't get a hitbox
}
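
And to close the loop on the original headshot/bodyshot question, resolving a shot against these generated hit boxes might look something like this (the damage table and part names are made up):

const damageByPart = { head: 100, torso: 40, 'forearm left': 15 /* etc */ }

const resolveShot = (origin, direction) => {
    const ray = new BABYLON.Ray(origin, direction, 1000)
    const pick = scene.pickWithRay(ray, mesh => mesh.name.endsWith('-hitbox'))
    if (!pick.hit) return null
    const part = pick.pickedMesh.name.replace('-hitbox', '')
    return { part, damage: damageByPart[part] || 0, point: pick.pickedPoint }
}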

Blender:

[screenshot: the model in Blender]

In-game test (hitboxes 120% scale for visibility):

[GIF: in-game test with the hitboxes visible]

The hit boxes only come out approximate this way, but definitely close enough. It would be a little tedious, but I think this approach would also be viable for realistic models.

I haven't done the network part for this version of the character yet, but I think it is going to be the same as before and no problem. I also haven't figured out how to play a run animation on the lower half while the upper body plays a different set of animations, but I'm sure that between adding an upper-body/lower-body bone, cutting the guy in half, or posting here again, it'll work out.

Thanks again all!

