
CPU vs GPU - movement, animation, bones


timetocode

Could someone explain a little bit about how meshes are positioned and when this information is on the CPU vs the GPU? How do animations affect this?

I used to think that all movement was essentially changing the position, rotation, or scale of a mesh on the CPU, and then that this information would be sent to the GPU for rendering. But I've recently learned that bone-based animations leverage GPU, which implies to me that the rendered position of a mesh (when using bone animations) is not going to be the same as that mesh position as set in our javascript code. Is that true?

I ask because I'm doing a lot of movement of objects in NullEngine (where this is no rendering) and I'm trying to very accurately sync the transforms of meshes across a network. Everything is working great so far, but I am doing my animations somewhat tediously on the CPU to reduce any potential errors (100% transform-based in javascript, no rigs/bones/animations).

 

Here's a rough version of the type of code being used to move things, but I am curious to what degree bones and other types of animations can be used.

https://www.babylonjs-playground.com/#YYH1CJ#13  (shows 100% babylon-based animation... just using the transforms)


For optimization, the GPU is almost always better than the CPU.

Think about an object with more than 100k vertices: do you want to transform all of them on the CPU, with all the per-vertex conditions, every frame? Some mobile devices can't manage that, but a GPU handles it easily.

For rigs/skeletons, the animation is converted into transform matrices on the CPU side each frame; the engine then just sends the bone matrices for that frame to the shader, and that is how the GPU applies them.

** We could build a fully GPU-side animation system too, but that is quite complicated.
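To make that per-frame handoff concrete: the CPU produces one 4x4 matrix per bone, and the vertex shader blends them per vertex using the vertex's bone weights (linear blend skinning). The sketch below does the blend on the CPU in plain JavaScript purely to illustrate the math; it is not Babylon's actual implementation, and the matrix layout (row-major flat arrays) is an assumption for the example.

```javascript
// Multiply a point [x, y, z] (implicit w = 1) by a row-major 4x4 matrix,
// returning the transformed [x, y, z].
function transformPoint(m, p) {
  const [x, y, z] = p;
  return [
    m[0] * x + m[1] * y + m[2]  * z + m[3],
    m[4] * x + m[5] * y + m[6]  * z + m[7],
    m[8] * x + m[9] * y + m[10] * z + m[11],
  ];
}

// Linear blend skinning: the skinned position is the weighted sum of the
// vertex transformed by each influencing bone's matrix. On the GPU this
// runs per vertex in the shader; boneMatrices is what the engine uploads
// each frame.
function skinVertex(position, boneMatrices, boneIndices, boneWeights) {
  const out = [0, 0, 0];
  for (let i = 0; i < boneIndices.length; i++) {
    const p = transformPoint(boneMatrices[boneIndices[i]], position);
    out[0] += p[0] * boneWeights[i];
    out[1] += p[1] * boneWeights[i];
    out[2] += p[2] * boneWeights[i];
  }
  return out;
}
```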

 


Only bones and morph targets are computed on the GPU side. Everything else is done by the CPU, so you should not have big issues synchronizing over the network.

Bones can be forced onto the CPU with mesh.computeBonesUsingShader = false, but this will be a major drawback for performance.


Thanks for the replies.

Perhaps someone could advise, from a Babylon perspective, on lag-compensated shots in a first-person shooter. I'll explain the network part, so we can narrow the topic down to how to use Babylon in this context.

I did something like this in the past and posted on this forum, but this time I've made a much more complicated character with multiple body parts and animations and the hit detection can hit any part of the body and report it accurately.

I save the positions of each body part of the player character, which gives me the ability to rewind to any past position. I have a working prototype of this. It is possible to have multiple players running and jumping and still land shots in the head, or the hand, or anywhere, at pings from 10 ms to 500 ms.
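nengi.js handles this history for me, but the core idea is just a buffer of timestamped snapshots that can be queried at roughly (now - latency). A minimal standalone sketch of the idea, with hypothetical names (not the nengi.js API):

```javascript
// Minimal snapshot history for lag compensation (illustrative only).
class SnapshotHistory {
  constructor(maxAgeMs = 1000) {
    this.maxAgeMs = maxAgeMs;
    this.snapshots = []; // oldest first: { timestamp, state }
  }

  record(timestamp, state) {
    this.snapshots.push({ timestamp, state });
    // drop snapshots older than the window we're willing to rewind
    while (this.snapshots.length &&
           timestamp - this.snapshots[0].timestamp > this.maxAgeMs) {
      this.snapshots.shift();
    }
  }

  // Returns the newest state recorded at or before time t,
  // or null if nothing that old was saved.
  getStateAt(t) {
    for (let i = this.snapshots.length - 1; i >= 0; i--) {
      if (this.snapshots[i].timestamp <= t) return this.snapshots[i].state;
    }
    return null;
  }
}
```

A server-side hit check would then call something like `history.getStateAt(now - latency)` to fetch the pose the shooter actually saw.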

Animations

These have been discussed a little already. They're made entirely by parenting one part to another and moving them in an update loop. Stuff like lowerLeftArm.rotation.x += Math.PI * 2 * deltaTime. From what I'm understanding so far, this isn't the best for performance, but given my goals maybe this is the best way...? Any other way of doing this would have to be compatible with the network rewind stuff down below. In the end my character is very simple and minecraft-esque, but I am hoping to enrich it with a large number of animations. I remain unclear as to the degree to which changing to bones would be superior. Will these actually move in NullEngine or is it GPU only?
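For what it's worth, one way to keep update-loop animations like this rewind-friendly is to compute each animated value as a pure function of elapsed animation time, rather than accumulating += each frame: the same timestamp always reproduces the same pose. A hedged sketch (the cycle length and joint name are made up):

```javascript
// Pose as a pure function of elapsed animation time: deterministic,
// so past poses can be reconstructed exactly from a timestamp.
function swingPose(elapsedSeconds) {
  const cycle = 0.8; // seconds per arm-swing cycle (made-up value)
  const phase = (elapsedSeconds % cycle) / cycle; // 0..1 through the cycle
  return {
    // swings between -45 and +45 degrees, expressed in radians
    lowerLeftArmRotX: Math.sin(phase * Math.PI * 2) * (Math.PI / 4),
  };
}
```

The update loop then assigns `mesh.rotation.x = swingPose(t).lowerLeftArmRotX` instead of incrementing, which avoids drift when frames are replayed in a different order.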

Saving Snapshots of Body Positions

The game doesn't need to save everything about the transforms, just a few things. Every pose+animation so far can be achieved by changing ~20 variables. This is mostly thanks to Babylon's ability to parent meshes to each other, so I don't have to bother with setting every single transform property; usually just the rotation of x or y is sufficient to make something like a bending elbow. ( https://www.youtube.com/watch?v=a8Gdt5Qeo1Q ) My video really doesn't show how much everything can blend into everything else, but there is also jumping, aiming down sights, and the upper body and lower body can be doing very different things. These 20 variables are what get saved/networked and can deterministically put the character into any of its possible poses (and everything in between). I'm on the fence as to whether doing something more conventional with network messages, such as "startAimDownSightsAnimation", would actually be compatible with blending multiple other animations and being rewound/replayed. If I could get something like that fully deterministic, it would certainly save some bandwidth.
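To make the "~20 variables fully determine the pose" idea concrete, a snapshot can just be a plain object holding those joint values, captured on one tick and reapplied later during a rewind. The key names below are hypothetical:

```javascript
// The small set of values that deterministically define the pose
// (hypothetical names; the real game has ~20 of these).
const POSE_KEYS = ['torsoRotY', 'headRotX', 'leftElbowRotX', 'rightElbowRotX'];

// Copy the pose variables out of a character into a snapshot object.
function capturePose(character) {
  const snapshot = {};
  for (const key of POSE_KEYS) snapshot[key] = character[key];
  return snapshot;
}

// Write a saved snapshot back onto the character (e.g. during rewind).
function applyPose(character, snapshot) {
  for (const key of POSE_KEYS) character[key] = snapshot[key];
}
```

Because the snapshot is a handful of numbers rather than full transform matrices, it is cheap to network and to keep in a history buffer.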

Rewinding and Checking Collisions

Here's where things get pretty weird. I run a ray through the *past* state of the game and see what it hits. Using nengi.js (network lib) I obtain the past positions of all characters in the vicinity of the shot. This required some creativity in babylon:

// note: instance comes from the networking lib api
// REWIND
const pastStates = instance.historian.getLagCompensatedArea(latency + 100, area)
pastStates.forEach(pastState => {
    // getting the current version of the entity
    const current = instance.entities.get(pastState.id)

    // don't rewind ourself, that would mess up the shot
    if (isMe(current)) { return }

    // move the entity to a past position
    Object.assign(current, pastState) // uses get/set to modify transforms

    // dummy mesh to which the player model is attached
    current.node.computeWorldMatrix(true)
    // individual body parts of the player model
    current.model.head.computeWorldMatrix(true)
    current.model.torso.computeWorldMatrix(true)
    // etc for all 12 body parts
})

// HIT CHECK
// the predicate allows only hits against
// 1: the map geometry
// 2: players that aren't ourself
const predicate = () => { /* skipped */ }
const hit = scene.pickWithRay(ray, predicate)
console.log('scored a hit in the:', hit.pickedMesh.tag)
// e.g. "scored a hit in the: leftHand"

// NOT shown: unrewind - restore the character to the correct position

Concerns and questions:

I had to invoke computeWorldMatrix on just about everything before the meshes seemed to be in their new positions. All of this occurs essentially instantly and *between* rendering ticks... so that makes sense, right?

Is this an appropriate way to use the scene and quickly check collisions? I figured rewinding the entities in the existing scene and recomputing their matrices was a way to avoid creating entirely new meshes or scenes.

What kind of optimization should I consider as I add buildings and generally a ton more stuff to the scene? Babylon's Octrees? Multiple scenes / some sort of custom spatial structures?

NullEngine is extremely awesome btw.

Thanks :D


15 hours ago, timetocode said:

All of this occurs essentially instantly and *between* rendering ticks...so that makes sense right?

Yes sir! This is a good way to check collisions

Octrees will be your next step if you have a lot of meshes: http://doc.babylonjs.com/how_to/optimizing_your_scene_with_octrees

 

timetocode said:

NullEngine is extremely awesome btw.

Thank you:)


Is there an example of manual usage of the octree? I can't figure out what the _creationFunc needs to be. Also not sure about addMesh(mesh) vs update(start, end, meshes).

Goal would be essentially something like this:

const broadphaseOctree = new Octree(null, 64, 2);
shootableStuff.forEach(mesh => {
  broadphaseOctree.addMesh(mesh);
})

// the meshes contained in the octree nodes touched by the ray
const broadPhaseObjects = broadphaseOctree.intersectsRay(shotRay);

// and then doing ray vs mesh collision checks to what got actually hit

 


Never mind, I finally found the code (for some reason GitHub search does not seem to find things).

Manual usage:
 

// test meshes
const boxes = []
for (let i = 0; i < 10000; i++) {
    const box = BABYLON.MeshBuilder.CreateBox('myBox', { height: 15, width: 15, depth: 15 }, scene)
    box.position.x = Math.random() * 100
    box.position.y = Math.random() * 20
    box.position.z = Math.random() * 100
    boxes.push(box)
}


const octree = new BABYLON.Octree(BABYLON.Octree.CreationFuncForMeshes, 64, 2)

// adding meshes
octree.update(new BABYLON.Vector3(0, 0, 0), new BABYLON.Vector3(100, 20, 100), boxes)

// ray
const stuff = octree.intersectsRay(someRay, false)

// sphere
const stuff2 = octree.intersects(sphereCenterVector3, sphereRadius, false)
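For reference, the broad phase the octree performs against each node's bounding box comes down to ray-vs-AABB tests. A minimal standalone slab-method sketch of that test (not Babylon's actual code):

```javascript
// Slab-method ray vs axis-aligned bounding box test.
// origin/dir/min/max are [x, y, z] arrays; returns true if the ray
// (origin + t * dir, t >= 0) touches the box.
function rayIntersectsAABB(origin, dir, min, max) {
  let tMin = -Infinity;
  let tMax = Infinity;
  for (let axis = 0; axis < 3; axis++) {
    if (dir[axis] === 0) {
      // Ray is parallel to this slab: must already be inside it.
      if (origin[axis] < min[axis] || origin[axis] > max[axis]) return false;
    } else {
      let t1 = (min[axis] - origin[axis]) / dir[axis];
      let t2 = (max[axis] - origin[axis]) / dir[axis];
      if (t1 > t2) [t1, t2] = [t2, t1];
      tMin = Math.max(tMin, t1);
      tMax = Math.min(tMax, t2);
      if (tMin > tMax) return false; // slab intervals don't overlap
    }
  }
  return tMax >= 0; // intersection is at or ahead of the ray origin
}
```

Each octree node that passes this test contributes its meshes to the candidate list, and the narrow phase (ray vs mesh) runs only on those.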

 

