
Support for Point Level Animation (aka PLA, Point Cache Animation, FBX Vertex Cache)


dinos

I am looking for a way to "import/convert" sophisticated vertex animations from Cinema 4D into Babylon. I have done this numerous times in Unity3D, where the process is as follows [link to example Video]:

  • bake the mograph etc. animations into keyframes inside Cinema 4D
  • export as an FBX file, which includes the point cache as an extra ".pla" file
  • import it into Unity (traditionally with a third-party plugin called "MegaFiers" and its "Point Cache" feature). However, Unity3D will add native support for point cache animations in an upcoming release.


Question

How do you guys get Cinema 4D mograph animations into Babylon? Is there perhaps another way?

Could Babylon maybe add PLA support? That would unlock plenty of creative possibilities.

Old & Related Threads


Thank you! I have spent some time building a "simple" example of a typical Cinema 4D -> FBX/PLA -> engine workflow. You can see the example below, where the letter B (for Babylon) is inflated via soft body dynamics in C4D over 90 frames. You can find the FBX with its .pla folder and the important .mc file hosted online (link at the end of the post).

[GIF: babylon-pla-render (the inflating letter B)]

Setup

This is a simple soft body animation without any third-party plugins. I use Cinema 4D Release 20; anybody should be able to open the original file, original.c4d.

[Image: soft body setup in Cinema 4D]

Exporting an FBX with PLA (for anybody finding this post via Google)

The conversion follows this video (https://youtu.be/r80iOjhjh1s?t=584). Basically, the "letter_b" subdivision object is first converted into an Alembic (right click -> Convert to Alembic + Delete), then dragged into the animation timeline and converted into keyframes via Functions -> Bake Objects. Finally, the Alembic is made editable (deleting the reference to the external Alembic file).

[Image: PLA keyframes in the Cinema 4D timeline]

Then the FBX is exported via File -> Export -> FBX with the following settings (selection only: select the object first).

[Image: FBX export settings in Cinema 4D]

Ready for Import via FBX & .pla folder

Once the process is complete, the FBX file, together with a folder called *.pla, is exported. The important file for any vertex animation here is the included (and rather large) *.mc file.

[Image: exported FBX with .pla folder]

The *.mc file is the point cache that, in the case of Unity, you would import into the engine via "MegaFiers -> Point Cache": http://www.west-racing.com/mf/?page_id=1335

[Image: MegaFiers point cache import example]

It's written in C# with source access; it might be a nice way to understand how to parse the relevant files.
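
To get a feel for the container before digging into the MegaFiers source: Maya's .mc cache is an IFF-style file (big-endian chunk sizes, FOR4 group chunks). Below is a very rough TypeScript/Node sketch of walking such a container. The alignment and group layout are my assumptions from the IFF heritage of the format, so verify them against the MegaFiers code or Maya docs before relying on this.

```typescript
// Rough IFF chunk walker for an .mc-style cache file. ASSUMPTIONS:
// big-endian 32-bit sizes, "FOR4" group chunks whose payload starts
// with a 4-byte group type, and 4-byte chunk padding -- verify all three.
import { readFileSync } from "fs";

function walkChunks(buf: Buffer, offset: number, end: number, depth = 0): void {
  while (offset + 8 <= end) {
    const tag = buf.toString("ascii", offset, offset + 4);
    const size = buf.readUInt32BE(offset + 4);
    console.log(`${"  ".repeat(depth)}${tag} (${size} bytes)`);
    if (tag === "FOR4") {
      // group chunk: first 4 payload bytes name the group, rest are children
      const groupType = buf.toString("ascii", offset + 8, offset + 12);
      console.log(`${"  ".repeat(depth + 1)}group: ${groupType}`);
      walkChunks(buf, offset + 12, offset + 8 + size, depth + 1);
    }
    // advance past payload plus padding (assumed 4-byte alignment)
    offset += 8 + size + ((4 - (size % 4)) % 4);
  }
}

const buf = readFileSync("letter_b_pla.mc");
walkChunks(buf, 0, buf.length);
```

Dumping the chunk tree like this should at least reveal where the per-frame vertex data lives inside the cache.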

FBX Example File & Download

The baked files are way too big to upload here, so you can find them for download at the following link:

https://drive.google.com/open?id=1k02wiAWcQzZ1lSavFYlnWp6RjaaIF3CA

Again, the important files are

  • original.c4d --- (animation before baking; aka procedural)
  • baked.c4d --- (animation is baked to keyframes)
  • fbx-export/
    • letter_b_demo.fbx --- (the model)
    • letter_b_demo.pla/ --- (folder)
      • letter_b_pla.mc --- (the point cache)
      • letter_b_pla.xml --- (some meta information... not really relevant)

I hope this helps,
best

Dino


2 weeks later...

Hi, @dinos! Sorry for the delay in response. I dug into the files you supplied and unfortunately what you are doing right now isn't supported in Babylon.js. Let me break this all down for you and offer some alternatives, though it will greatly change your approach to building your assets. 

  • The mesh has 54,334 vertices and 108,672 triangles
  • Per-vertex animation data is used, plus node transform animation on top of it, generated from the mograph simulation
  • 90-frame animation
  • No material

The main problem with bringing this directly into Babylon is that we do not support pure vertex animation without the use of a skeleton or morph target. The core issue is that per-vertex data is very heavy and thus expensive to stream when you are rendering 60 frames a second. With bone animation and morph target animation, we can do the calculation cheaply within the shader, but with vertex data we have to load new data each frame, which has a huge overhead.

You also have the issue of larger file sizes for download on the web, because you are storing 90 frames of transform data for each of the 54K vertices in a file that needs to be downloaded. With bones, you store the transform information on each joint (usually fewer than 50) and a skin for the mesh, which is the influence on each vertex from up to four bones in normalized weights, to determine its final location on a frame. This is far less data to download on the web. With morph targets, you store multiple mesh states and then a target for interpolation per vertex per frame. This carries a larger file size than bone animation, due to the extra triangle lists copying your original mesh, but it is still cheaper than per-vertex animation because the shader can calculate the interpolation between a vertex's start position and the morph target's position while keeping everything stored in memory.
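
To put rough numbers on this asset, here is my back-of-envelope math, assuming uncompressed xyz positions stored as 32-bit floats (a simplification; real files carry extra structure):

```typescript
// Rough data sizes for this particular asset (54,334 vertices, 90 frames),
// assuming raw float32 xyz positions with no compression.
const vertices = 54334;
const frames = 90;
const bytesPerVertex = 3 * 4; // x, y, z as float32

const vertexCache = vertices * bytesPerVertex * frames;
console.log(`raw point cache: ${(vertexCache / 1024 / 1024).toFixed(1)} MB`);
// -> ~56.0 MB of animation data alone

const morphTargets = vertices * bytesPerVertex * 4; // up to 4 extra mesh states
console.log(`4 morph targets: ${(morphTargets / 1024 / 1024).toFixed(1)} MB`);
// -> ~2.5 MB, plus per-frame influence scalars (negligible)
```

So even before runtime cost, the raw point cache is roughly 20x the download of a four-target morph setup for this mesh.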

Think of it this way: with a skeleton, we load one mesh and one joint hierarchy. With morph targets, we load a mesh plus up to four morph target meshes. With per-vertex animation we would load 90 meshes. We haven't had this type of request before because we are rendering 60 frames per second on the web with a single core while also needing to target lower-end machines. Most content created for the web will use game industry optimizations and tricks, as there is a limit to rendering resources and we need to decide where the trade-offs must be made.

Now, some options that would change your workflow but make this possible:

  • Determine if you can achieve the effect you like with a skeleton. You could potentially create a skeleton rig in the letter B that would mimic your balloon animation, but it would be much more work to rig and animate, as you won't be able to rely on the ease of the simulations in C4D.
  • Determine if you can design the motion to utilize morph targets (see the sketch after this list). You could rely on C4D simulations to generate your target meshes at the extremes of the animation, and then use your morphs to get motion that feels similar. The drawback with this approach is that you can only have 4 morph targets and interpolate between them per frame. You can get some interesting mixes this way, but you lose control over the subtleties of your shapes because of the limit on how many morphs you can have. This is where planning the motion around knowing you are using morph targets comes in. You can reuse a portion of your current workflow in generating your morphs, but you are more limited in the motion you can create.
  • Ask yourself if you NEED 3D objects for everything in your scene. If you have some motion that you don't allow the user to move around, you could render out an animated sequence or video that gets rendered on a quad. You are able to move the quad around but don't have the overhead of vertex animation of any type. You can use your current workflow for this type of element and set up a sprite sheet animation or video.
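
For the morph target route, here is a minimal Babylon.js sketch. It assumes you have exported two baked states of the letter with matching vertex counts and already loaded them into the scene; the mesh names "letterB_rest" and "letterB_inflated" are hypothetical stand-ins for whatever your export produces.

```typescript
import * as BABYLON from "babylonjs";

// assumes an existing <canvas id="renderCanvas"> on the page
const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
const engine = new BABYLON.Engine(canvas, true);
const scene = new BABYLON.Scene(engine);

// hypothetical names for the two baked states loaded from your export;
// both meshes must share the same vertex count and order
const rest = scene.getMeshByName("letterB_rest") as BABYLON.Mesh;
const inflated = scene.getMeshByName("letterB_inflated") as BABYLON.Mesh;

// the interpolation runs in the vertex shader: only the scalar
// influence changes per frame, no per-frame vertex data is streamed
const manager = new BABYLON.MorphTargetManager(scene);
rest.morphTargetManager = manager;
const target = BABYLON.MorphTarget.FromMesh(inflated, "inflate", 0);
manager.addTarget(target);

// drive the influence 0 -> 1 -> 0 on a 3-second loop
scene.onBeforeRenderObservable.add(() => {
  const t = (performance.now() % 3000) / 3000;
  target.influence = Math.sin(t * Math.PI);
});

engine.runRenderLoop(() => scene.render());
```

The point of the sketch is the cost model: the download is one mesh plus one target, and per frame the only animated quantity is a single float.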

Some things to keep in mind when building assets for WebGL

  • Try to keep your vertex count low and utilize normal textures for more detail. Use the minimum number of vertices needed to render your silhouette, and let all other high-frequency detail live in your texture set.
  • Look at the overall stats for your scene. Even if you keep your vertex counts low (5-10K), using large textures, or many of them, will also hit performance.
  • Optimize your UV shells so that you can pack more geometry into each texture. If an area isn't important, reduce its texel density to allow for more UV shells on the sheet.
  • Keep your skeletons as simple as you can. Objects that don't move don't need bones.

If you have more questions or would like examples, please let me know. We haven't had a lot of people coming from motion graphics to Babylon.js yet, so we haven't had a lot of requests like this one. Take care and I hope this helps somewhat, even if it's not the answer you were hoping for.


Thank you @PatrickRyan for your extensive reply. I am actually a 3D enthusiast, but a software engineer for web/mobile by trade. In that sense, I understand your points about the drawbacks of WebGL. In fact, at first I wanted to provide a low-poly sphere-to-cube PLA animation, but I knew that the immediate answer to that would have been morph targets.

The contrived inflation example, due to its creases and wrinkles, would be "impossible" to cram into 4 (or 8?) morph targets. I wanted to see what's possible from a workflow perspective, neglecting performance for now.

So, on to my follow-up questions.

Pre-Compile Solution
I think it's clear that a .pla folder with an actual vertex cache data file in one of various formats, such as MCX (Maya), PC2 (3ds Max) or ABC (Alembic), is impractical to serve on the fly on the web. Do you think it's possible to come up with a "smart" transpile solution, i.e. going from the FBX vertex cache to a JS file/format that facilitates per-vertex animation/morphing/interpolation?

Automatic Workflow Alternatives
As you said, creating something like the example above manually could be an extensive endeavour, so I wonder if there is an indirect but semi-automatic alternative. Maya? I.e., are you aware of a way to "convert" a PLA timeline into optimal morph targets (picking the best states by staying truest to the original animation)? I know I could take snapshots manually, but picking the right moments on the timeline seems like a lot of trial and error.

Looking for Approaches
I am wondering if I should investigate the possibility of building a PLA -> morph targets transpiler, or a way to compress vertex animations (interpolation per vertex over time) as a JS/WASM solution. The problem is that I don't have a clue about the shader magic necessary for GPU-based interpolation. What do you think?


I think the reason that no motion designers are showing up is a chicken-and-egg problem rather than a lack of interest. Also see the thread references in my first post and the likes on GitHub for Cinema 4D support (an industry-standard VFX tool).

Best
Dino


Happy to explore more of this topic with you, @dinos. Here are my thoughts on possible approaches, with the understanding that neither glTF nor Babylon currently supports vertex animation, so any direction we explore outside of bone/morph target animation will need support and ratification in those formats.

Pre-Compile Solution
There was a lot of work done for the Actiongram experience for HoloLens that involved streaming mesh data as well as video textures mapped to it. This is basically what you were looking at before; from what I understand, they developed a system to store and stream the deltas to the vertex positions in such a way as to run at 60 fps on HoloLens, which needs to target mobile specs due to power consumption constraints coupled with the real-time tracking and reconstruction system. You can see the product of that work in this video. One trick there, which wouldn't need to be ported here, is syncing the video texture with the streaming mesh. Our team has reached out to the team that created that tech and is trying to start a conversation to see if there is anything we can leverage for future features. In the meantime, beyond the complexity of finding a way to stream the data efficiently, we would need to get support in the formats, which for glTF would mean ratification of a new feature. That is a longer-term goal, but something worth investigating if we can speak with the Actiongram team about their tech.
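
To make the streaming idea concrete, here is a speculative sketch (emphatically not the Actiongram implementation, which I haven't seen) of the simplest form of delta streaming: ship frame 0 in full, then per-frame deltas quantized to int16. That halves the raw size versus float32 and compresses well with gzip, since most deltas sit near zero.

```typescript
// Quantized delta encoding for one frame of vertex positions.
// `scale` trades precision for range: with scale = 1024, positions are
// reconstructed to ~1/1024 of a scene unit. All names are illustrative.
function encodeDeltas(prev: Float32Array, curr: Float32Array, scale: number): Int16Array {
  const out = new Int16Array(curr.length);
  for (let i = 0; i < curr.length; i++) {
    out[i] = Math.round((curr[i] - prev[i]) * scale); // clamp omitted for brevity
  }
  return out;
}

// Runtime decode: accumulate the delta onto the previous frame.
function applyDeltas(prev: Float32Array, deltas: Int16Array, scale: number): Float32Array {
  const out = new Float32Array(prev.length);
  for (let i = 0; i < prev.length; i++) {
    out[i] = prev[i] + deltas[i] / scale;
  }
  return out;
}
```

Note the design cost: because every frame depends on the previous one, quantization error accumulates, so a real system would re-send full keyframes periodically.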

Automatic Workflow Alternative
Finding a way to pick the optimal targets of a simulation to convert to morphs would be the most straightforward approach, but it would also need some experimentation. My initial thought would be to analyze the bounding box of the simulation and look for the axis with the greatest delta in length. Choosing the frames that give you the extremes of that delta would likely give you two good candidates for your first two morphs. Then I would choose the axis with the second-greatest delta and take frames from its two extremes for the other morphs. This would likely get you part of the way there, but you would still need to remap curves to fit the target transitions. Scrubbing the simulation to see where in the timeline each extreme sits would give you a general idea of how to author a curve to match the extremes with the morph animations. You would likely need to manually choose the rest pose for the mesh (your default mesh), as that would be hard to determine procedurally. The system would also likely need a looping option, which would need to invert the curves and animate back to the rest pose for animations that need to loop.
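
A hypothetical sketch of that heuristic, assuming the point cache has already been decoded into per-frame vertex arrays (the decoding itself is out of scope here):

```typescript
// Pick four morph target candidate frames from the frames where the
// simulation's bounding box hits its extremes on the two most-deforming axes.
// `frames` holds per-frame positions as flat [x0, y0, z0, x1, y1, z1, ...].
function pickMorphCandidates(frames: Float32Array[]): number[] {
  // bounding-box length of one frame along one axis (0 = x, 1 = y, 2 = z)
  const extent = (frame: Float32Array, axis: number): number => {
    let min = Infinity, max = -Infinity;
    for (let i = axis; i < frame.length; i += 3) {
      min = Math.min(min, frame[i]);
      max = Math.max(max, frame[i]);
    }
    return max - min;
  };

  // per axis: the frames with the smallest/largest extent, plus the spread
  const axes = [0, 1, 2].map(axis => {
    const extents = frames.map(f => extent(f, axis));
    let minF = 0, maxF = 0;
    extents.forEach((e, i) => {
      if (e < extents[minF]) minF = i;
      if (e > extents[maxF]) maxF = i;
    });
    return { spread: extents[maxF] - extents[minF], minF, maxF };
  });

  // the two axes that deform the most yield the four candidate frames
  axes.sort((a, b) => b.spread - a.spread);
  return [axes[0].minF, axes[0].maxF, axes[1].minF, axes[1].maxF];
}
```

The curve remapping and rest-pose choice described above would still be manual steps on top of this.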

If I were to choose a path to bring a mograph simulation into real-time rendering, it would likely look like the automatic solution above. Regarding the chicken-and-egg problem you posed, at least the morph target method is already supported, so what's missing is just an art pipeline to get the asset into Babylon.js. I have done a bunch of work with morph target animation and am planning to create sample assets for glTF, and I keep coming back to the same conclusion: you have to really plan what you want to do with the animation and target small, achievable results because of the limit of 4 states beyond the rest pose. But with some tinkering, there may be a proof of concept out there to guide very strategic mograph assets into WebGL. I just don't know if this would solve the problem without more investigation.


@PatrickRyan this is certainly a very interesting optimization problem, and I am curious to hear what the Actiongram team has to say. Regarding the "automatic" workflow, there is still a whole bunch of artistic knowledge required (which, by the way, I envy).

Per-Vertex I/O vs. Computation
At this point I am wondering whether the issue is web I/O or computational strength. For example, to get around the enormous file size of the vertex cache at 60 fps, I could imagine trading in computational effort by curve-fitting the vertex positions in batches of 30 frames. It's still a form of interpolation, but on a per-vertex level, leading to much greater flexibility.

[GIF: asymmetric curve-fitting/regression animation]

Tick data for stock and trading applications is streamed in a similar way; a crude sketch of the idea follows below.
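
Here is a crude sketch of that compute-for-bandwidth trade, assuming the cache has been decoded into per-channel sample arrays (one channel per vertex coordinate). Real curve fitting, e.g. least-squares splines, would do better; even greedy keyframe reduction shows the shape of the idea, though:

```typescript
// Greedy keyframe reduction for one animation channel (e.g. the x position
// of a single vertex over 90 frames): keep a frame only when linear
// interpolation from the last kept frame would drift beyond `tolerance`.
function decimateChannel(samples: number[], tolerance: number): number[] {
  const kept = [0]; // indices of the kept keyframes
  let anchor = 0;
  for (let j = 2; j < samples.length; j++) {
    // would the segment anchor..j still reconstruct every sample in between?
    let ok = true;
    for (let i = anchor + 1; i < j; i++) {
      const t = (i - anchor) / (j - anchor);
      const predicted = samples[anchor] + t * (samples[j] - samples[anchor]);
      if (Math.abs(predicted - samples[i]) > tolerance) { ok = false; break; }
    }
    if (!ok) {          // frame j breaks the segment, so keep frame j - 1
      kept.push(j - 1);
      anchor = j - 1;
    }
  }
  if (kept[kept.length - 1] !== samples.length - 1) {
    kept.push(samples.length - 1); // always keep the final frame
  }
  return kept;
}
```

Runtime reconstruction is then plain linear interpolation between the kept keyframes, evaluated per vertex per frame; smooth channels collapse to a handful of keys, while busy ones keep more.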

Stitching Multiple Morph Animations (Phase Function)
I believe the biggest issue with morph targets as a PLA output is the limited resolution of just 4 targets. At this point I am wondering whether it would be possible to come up with a smart pre-processor that slices the mesh and the timeline into multiple segments, i.e. into blobs of vertices and 30-frame windows. In the machine learning space, this sounds a lot like a phase-functioned neural network, which is also used to reduce huge amounts of motion capture data into manageable and compatible chunks.

The advantage here is that no ratification is needed, as one PLA transpilation would just produce an array of morph target sequences. A rough sketch of the slicing idea follows below.
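
A hedged sketch of the "array of morph target sequences" idea (all names are illustrative): slice the timeline into fixed windows, play each window back as a single base-to-target morph, then let the next window's base take over. The obvious loss is that intermediate frames inside a window collapse to linear motion, which is exactly the resolution problem being discussed.

```typescript
// Slice a frame range into morph segments of `windowSize` frames each.
interface MorphSegment {
  baseFrame: number;   // frame used as the segment's rest mesh
  targetFrame: number; // frame used as the segment's morph target
}

function sliceTimeline(frameCount: number, windowSize: number): MorphSegment[] {
  const segments: MorphSegment[] = [];
  for (let start = 0; start < frameCount - 1; start += windowSize) {
    const end = Math.min(start + windowSize, frameCount - 1);
    segments.push({ baseFrame: start, targetFrame: end });
  }
  return segments;
}

// Playback: for a global frame f inside a segment, the morph influence is
// just f's normalized position within that window.
function influenceAt(f: number, seg: MorphSegment): number {
  return (f - seg.baseFrame) / (seg.targetFrame - seg.baseFrame);
}

// e.g. sliceTimeline(90, 30) -> segments (0,30), (30,60), (60,89)
```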

Nonetheless, per-vertex interpolation, instead of per-mesh, would really be much more desirable...

