JCPalmer

Everything posted by JCPalmer

  1. This is not really a Blender question. Basically, all a bone is in BJS is a matrix and a bone length, which is only used as an aid. I am not sure, but I have thought about programmatically removing some of the weight on either side of a joint. Reminds me, @Deltakosh, what happens if the sum of the weights is less than one?
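     If it ever came to adjusting weights in code, a minimal sketch (assuming a skinned mesh variable named mesh, already loaded) to re-normalize each vertex's 4 influences so they sum to 1:
     const kind = BABYLON.VertexBuffer.MatricesWeightsKind;
     const weights = mesh.getVerticesData(kind);
     if (weights) {
         for (let v = 0; v < weights.length; v += 4) {
             const sum = weights[v] + weights[v + 1] + weights[v + 2] + weights[v + 3];
             if (sum > 0) {
                 for (let i = 0; i < 4; i++) {
                     weights[v + i] /= sum; // scale so the 4 influences total 1
                 }
             }
         }
         mesh.setVerticesData(kind, weights, true); // true keeps the buffer updatable
     }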
  2. No, it does not. I am also not sure how compatible they are, and BJS is changing for this current release, I think. I have been exporting particle hair, but from a converted mesh from the particle hair system, not directly from the particle system. Also, these meshes have no faces, only vertices, and the .babylon format exports faces.
  3. First, none of your attachments equals an export log file. 2nd, the exporter does nothing with NLA tracks. It only exports actions. You have the choice of exporting either the currently assigned action of every object that has one assigned, or all actions to every object which has a currently assigned action. Things get a little messy when doing the latter with multiple objects that have an action. In that case, if a certain action only applies to one of the objects, use the name pattern 'object name-action' to restrict that action in the export to only one object. 3rd, babylonJS only supports one animation (though it can be against multiple properties). When actions are exported, they are all contained in that one babylonJS animation. There are some frame gaps between them for sanity purposes, but when exporting more than one action, you cannot just run all the frames of the scene or even the object. You do not, however, need to know the range of frames where everything gets put. You can start each BABYLON.AnimationRange by name, which is also the action name:
     skeleton = scene.getSkeletonByName("bonesCharacter");
     skeleton.beginAnimation("Walk");
     You could also have other problems, but making these adjustments & checking what is actually happening in the log file is a start.
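     If you are not sure which ranges actually made it into the export, a small sketch (the skeleton name above is just an example from my own file) to dump and start them:
     const skeleton = scene.getSkeletonByName("bonesCharacter");
     for (const range of skeleton.getAnimationRanges()) {
         console.log(range.name, range.from, range.to); // one AnimationRange per exported action
     }
     skeleton.beginAnimation("Walk", true); // 2nd arg loops the range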
  4. Looking at your screen shot, _evaluateActiveMeshes (26.9% total) is where computeWorldMatrix (16.7% total) gets called. While this is required if the scaling, location, or rotation changed since the last frame, it can be eliminated for meshes that you know are never going to move. No amount of automated optimization can ever know that. Freezing the world matrix of background meshes is the kind of overhead that can be taken out without sacrificing or redesigning the scene. Another area without sacrifice / redesign is merging meshes of the same material which also do not move, scale, or rotate. After that, much of the low-hanging fruit has been picked.
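     A minimal sketch of both ideas, assuming you have already collected the never-moving meshes into your own array (staticMeshes is a made-up name):
     // stop computeWorldMatrix from being re-evaluated for static scenery
     for (const mesh of staticMeshes) {
         mesh.freezeWorldMatrix();
     }
     // collapse static meshes sharing one material into a single mesh / draw call
     const merged = BABYLON.Mesh.MergeMeshes(staticMeshes, true); // true disposes the originals
     if (merged) {
         merged.freezeWorldMatrix();
     }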
  5. Not sure dragging is obvious. You need to parent the mesh to the armature: right-click to select the mesh, shift right-click to select the armature, then Ctrl+P. Pick 'With Automatic Weights'. This is a general Blender question. Not the best forum for that.
  6. If you wish to get the bones loaded, they have to be attached to a mesh in the JSON file, per what @Deltakosh said above. Then you have to load this same mesh. You might either export it as not visible, or even better, disabled in the custom export properties. If you are just going to throw this mesh away, make it a plane with 2 faces. Your cube is wasting a couple hundred faces (636 / 3 = 212, versus 2 for a plane).
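     A sketch of the run-time side, where "skeletonHolder", the folder, and the file name are all hypothetical:
     BABYLON.SceneLoader.ImportMesh("", "scenes/", "character.babylon", scene,
         function (meshes, particleSystems, skeletons) {
             const skeleton = skeletons[0];
             const holder = scene.getMeshByName("skeletonHolder"); // the throw-away plane
             if (holder) {
                 holder.setEnabled(false); // keep it out of the render loop
             }
             // assign skeleton to the mesh you actually care about here
         });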
  7. Cameras are mostly the same regardless of the platform, except for device orientation or VR. I think the question you are really asking is how to make them work using touch. Add PEP and everything should work:
     <script src="https://code.jquery.com/pep/0.4.3/pep.min.js"></script>
  8. The first thing to check is the exporter.log file. If an armature was exported, it will list it. Next thing to check is that the mesh it is wanted for has skeleton weights & indices exported. Checking at run time, as you are doing, gives the final verdict, but checking your upstream workflow is wise.
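     A compact version of that run-time check ("myMesh" is a placeholder name):
     const mesh = scene.getMeshByName("myMesh");
     console.log("skeleton:", mesh.skeleton ? mesh.skeleton.name : "none");
     console.log("weights :", mesh.isVerticesDataPresent(BABYLON.VertexBuffer.MatricesWeightsKind));
     console.log("indices :", mesh.isVerticesDataPresent(BABYLON.VertexBuffer.MatricesIndicesKind));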
  9. I had the same thing in my animation extension until recently. I mention this in case it might be helpful here. I was stringing together 15 short-duration morph targets synced with sound (talking). It was acceptable when I ran in a live scene. On the final frame of an animation, the next morph target was submitted. That was correct, but the first frame of the next animation "reset the clock to zero", so 0 percent of the next animation was performed on the first frame. This results in 2 frames with the exact same results. The sound kept on going, of course, so I was getting out of sync 1000/60 (16.6 millis) every animation, or a quarter second by the end. It still looked in sync though. When I started producing 24 fps videos, that meant I was out of sync 1000/24 (41.6 millis) every animation, or .625 seconds. Not near close enough. I have an old shoot where this was the result: 24_lagging-vp9.webm I am not familiar with waitAsync, but this "looks" exactly like the duplicate frame issue. I fixed my problem by adding one frame's worth of time into the calculation of how much to animate. I also edited my code, putting in console.logs every frame with the value of the property to prove that was the problem. It sounds like a real pain to have to edit BJS for logging. Being sneaky, maybe use sine-based IN & OUT easing for both animations. I never used easing (stole it though), but if eased on both sides, the property will not be moving very fast at the beginning or end of an animation, so the duplicate frame will be less noticeable.
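     If you try the easing idea, a minimal sketch against a stock BABYLON.Animation (the animation variable is whichever one you already built):
     const ease = new BABYLON.SineEase();
     ease.setEasingMode(BABYLON.EasingFunction.EASINGMODE_EASEINOUT); // slow at both the start & the end
     animation.setEasingFunction(ease);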
  10. First, which OS? iOS could easily be skinning on the CPU if you have more than 25 bones, while Android can handle many more. 2nd, that sounds about right. It can be difficult to profile on remote devices to find your bottlenecks. Use your browser's profiler on the desktop. Even if you cannot visibly see slowness there, any improvements on your desktop should translate. At least the knowledge of your biggest issues gives you a chance. Blindly doing "optimizations" without the faintest idea of where & in what amounts your application is spending its time is not a winning strategy. All you are going to get from what you provided is random suggestions. Posting code is not necessarily better than screen shots of BJS debug layer stats, or a shot of the top time percentages from profiling.
  11. Minor, but there is a plane mesh, which can be seen with a mouse drag. If not intentional, get out your delete button.
  12. Using cornerRadius, you can get close: http://www.babylonjs-playground.com/#WWBIKZ#8. Strange that giving the same dimension for height & width does not give a square, though. If this could be finalized, perhaps a CircleButton class could be made.
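     Until something like a CircleButton exists, a sketch of faking one with the current GUI (assumes the GUI extension is loaded; sizes & colors are arbitrary):
     const ui = BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("UI");
     const button = BABYLON.GUI.Button.CreateSimpleButton("roundBtn", "Go");
     button.width = "100px";
     button.height = "100px";
     button.cornerRadius = 50;   // half the width, so the corners meet in a circle
     button.background = "green";
     ui.addControl(button);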
  13. I recently went through all my source for my QI extension. Changed every reference of var to let or const unless it was absolutely needed, 1 place. Changed to "for of" in every place where "for(let i = 0.." was just indexing, tons of places (TypeScript has a '--downlevelIteration' option for transpiling this to ES3 & ES5). Changed to backquotes (template literals) for all strings concatenated with a "+". Kind of a mechanical process, but helped by using an IDE with good search. Cleaning made the code more readable. Does not replace reading, but for libraries it seemed worth it.
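     For anyone wondering what that looks like in practice, a made-up before / after (the function & list names are only for illustration):
     // before
     function countVerticesOld(meshes) {
         var total = 0;
         for (var i = 0; i < meshes.length; i++) {
             total += meshes[i].getTotalVertices();
         }
         console.log("total: " + total);
     }
     // after
     function countVerticesNew(meshes) {
         let total = 0;
         for (const mesh of meshes) {
             total += mesh.getTotalVertices();
         }
         console.log(`total: ${total}`);
     }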
  14. Might be you wrote it out this way:
     write_array(file_handler, 'indices', list(set(self.vertexGroups[indexgroups])))
     I was not sure what that did. Looking at this entire topic, it was probably for duplicates. That could be put into to_scene_file(). Duplicates can also just be avoided by moving up & indenting the code that appends them, so it only runs when the vertex is added for the first time. Duplicates are created automatically for the vertices which border a material change, but those are implemented as actual duplicate vertices, not through indices. For your animation of vertices, this can get really expensive file-size wise.
  15. Not sure what you are seeing in Blender, but a submesh does not have a name. You might be able to do something to retrieve one by the corresponding material name. Going the opposite direction is:
     let matName = my_model.material.subMaterials[my_model.subMeshes[0].materialIndex].name;
     function getSubMesh(mesh, name) {
         const mats = mesh.material.subMaterials; // a mesh with several submeshes carries a MultiMaterial
         for (const subMesh of mesh.subMeshes) {
             if (mats[subMesh.materialIndex].name === name) {
                 return subMesh;
             }
         }
         return null;
     }
  16. Looks like your gameavatar mesh is finding an armature. I also see only positions, normals, & UVs being written. No matrix weights or indices. Without a .blend or log file, I can only guess: either the Armature modifier on the mesh has been removed (though you say it is animating), or you checked the ignore-skeleton box in the custom properties.
  17. I looked at your mesh.py. I refactored it, creating a small class called BJSVertexGroup, since you were doing an array of arrays, then indexing into it with a loop, and also keeping a matching name. It is very small, and looks a lot like the SubMesh class:
     class BJSVertexGroup:
         def __init__(self, group):
             self.name = group.name
             self.groupIdx = group.index
             self.indices = []
         # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
         def to_scene_file(self, file_handler):
             file_handler.write('\n{')
             write_string(file_handler, 'name', self.name, True)
             write_array(file_handler, 'indices', self.indices)
             file_handler.write('}')
     This adds functionality, so I bumped the version to 5.7. I ran it against a .blend with an armature of 25 bones, but one of the meshes is only affected by 2 bones (vertex groups are used for bones too). It correctly matched only 2 vertex groups. The data looked reasonable. I do not have code to play with this once in BJS, though. Please run this mesh.py against your .blends to verify, before I push up the new .zip file. Also, please add an announcement topic illustrating it, since this is the only change for 5.7
  18. Not at all. Even worse, if you desired that, the vertices are not even directly consulted. A feature in Blender that lets you get a temporary copy of the geometry, with modifiers applied, is used. So if you were using a modifier like Mirror, then twice as many vertices would be exported as are actually in the mesh. In addition to using the mesh copy, vertices are indexed in the export for compactness, & that is how BJS loads them to the gl context.
  19. Saw the pg, and downloaded the last file, but did not diff it against the repo. As it is the largest source file in the add-on, I would need to diff to find the lines added / changed. Not a big deal in Netbeans. I do not have a use for it right now in my own work, but now that I know it is an option, uses might come up. If you are thinking about a PR, a checkbox in the custom properties (default false) would definitely be needed, since if you are not using it, it could really increase the size of the export file.
  20. Thanks, but no, I had not. I can say that I am all about control & that looks to have none (no frame rate, no quality, no resolution). I just completed using toDataURL(). It is the only one in which you can control quality. In the 1.6 sec clip below, the .webm + .wav files combined size is a MASSIVE 8,858 kb. That is a lot for so small a clip, but when the multi-pass VP9 codec conversion & sound track merge is done by ffmpeg, it is only 277 kb. As I am merging the consolidated soundtrack afterward anyway, giving ffmpeg the crispest frames possible as a source to encode as VP9 or H264 is very desirable. It takes a lot of RAM, but I have 16 gb & room for 16 more. The annotations in the cropped black bars were supposed to be just a joke, but it is really helpful to bake settings right into the video during dev. You can easily mix up your files without knowing. Am now starting to work on a clip with actual talking; the work on the recording code is done, unless I find something. Thinking about it, the alpha for VR is probably in the cameras, not the background. Going to throw VR under the bus. Actually, YouTube can show 360 videos. Not going to attempt this right now, but I wonder about having a rig with, say, 300 cameras & viewports. The VR distortion on the combined output is probably wrong for this, though. side-by-side-vp9.webm
  21. Maybe try something like this. I do something similar to make WEBM videos of arbitrary resolution via canvas.toDataURL('image/webp', quality). toDataURL works with 'image/jpeg' too. In the playground, I could not get my canvas sizing to obey, but this does work outside of the PG. I also do not know how to write the .jpg file correctly; it is commented out. If you un-comment the afterRender registration, it takes the capture & puts it on a new page.
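     A minimal sketch of the frame-grab part, stripped of my file-writing code (the 0.95 quality and the frames array are arbitrary; reading the WebGL canvas back may also require the engine to have been created with { preserveDrawingBuffer: true }):
     const frames = [];
     const canvas = engine.getRenderingCanvas();
     scene.registerAfterRender(function () {
         // 'image/webp' honors the quality arg in Chrome; use 'image/jpeg' elsewhere
         frames.push(canvas.toDataURL("image/webp", 0.95));
     });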
  22. I came so close to getting a completely successful test of Canvas.captureStream on Firefox. Whether on Chrome or Firefox, the VR rig worked fine. In either case though, you cannot specify a codec. Firefox puts out VP8, but Chrome does not even put out a true WEBM file; it has an MP4 codec. The killer is you cannot set the size of the capture in code. It is whatever the physical size of the canvas is on the screen. It makes sense, but that is a problem which cannot really be overlooked. Am going to stick with the toDataURL() method, and table the VR rig for now, unless someone knows how to size a physical canvas (probably need to create the canvas in code). I have a 30" high-res display (2560 x 1600), so I could not do UHD (3840 x 2160). Do not know if that is a real problem or just imagined. Code I use to size the canvas:
     // make videos of an exact size, regardless if it looks weird on screen
     function sizeSurface(width, height) {
         const canvas = engine.getRenderingCanvas();
         canvas.width = width;
         canvas.height = height;
         // may not have auto resize; if it does, no harm doing it again
         engine.setSize(width, height);
     }
  23. I am going to do a quick test on Chrome or Firefox using the HTML Canvas.captureStream() instead of toDataURL(). I tried it earlier, but got strange results. If this method does give good results for the VR rig, I think I have found a way to get around the issue I have with this method. That issue is that it is realtime-based. It is much faster than toDataURL(), because it just passes a memory pointer of the canvas to a browser background thread. But you cannot use it to directly render at a true, dependable, settable frame rate, and certainly not at a frame rate greater than what your scene can render at a given resolution on your machine. An example is a complicated scene with many meshes, using 2 sub-cameras & post-processing for 3D, say @ 1080 or Ultra-HD resolution. I think the commandline program ffmpeg has an option which allows you to overwrite whatever the capture said the time was with a fixed increment. Then you can capture perfectly timed frames at given points in time, regardless of when they actually render. I need to merge the final consolidated audio file with the video file anyway. The option is:
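     For reference, the realtime path looks roughly like this (the 60 fps request and 5 second cutoff are just examples; actual frame delivery & mime type support vary by browser, which is the whole problem):
     const canvas = engine.getRenderingCanvas();
     const stream = canvas.captureStream(60);        // request 60 fps; delivery is best-effort
     const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
     const chunks = [];
     recorder.ondataavailable = (e) => chunks.push(e.data);
     recorder.onstop = () => {
         const blob = new Blob(chunks, { type: "video/webm" });
         // hand the blob off for download, then fix timing / codec with ffmpeg afterward
     };
     recorder.start();
     setTimeout(() => recorder.stop(), 5000);        // e.g. record 5 seconds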
  24. Hold on, something just happened. I changed my test to find the start of data from 'VP8X' to 'VP8 ':
     let keyframeStartIndex = webP.indexOf('VP8 ');
     The video is now not all black. The area in the background is black & jagged, but that's a start. I am really thinking this has to do with the alpha of the clear color. My checks for 'VP8X' are still successful, so BOTH must be in the file when using the VR rig. (Am going to have to check the quality 1.0 thing again too.) Still a similar question: can the alpha be taken out of clear, or is this also being used by the edges of what is rendered, so it will not matter?
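     If the alpha really is the culprit, one thing to try (assuming nothing else relies on a transparent canvas) is forcing an opaque clear color:
     // keep the existing RGB; the 4th component is the alpha that seems to end up in the WebP capture
     scene.clearColor = new BABYLON.Color4(scene.clearColor.r, scene.clearColor.g, scene.clearColor.b, 1);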