JCPalmer

Members
  • Content count

    2,340
  • Days Won

    9

JCPalmer last won the day on March 5

JCPalmer had the most liked content!

4 Followers

About JCPalmer

  • Rank
    Advanced Member

Contact Methods

  • Twitter
    whoopstopia

Profile Information

  • Gender
    Male
  • Location
    Rochester, NY
  • Interests
    Power Napping

  1. JCPalmer

    Blender workflow for camera positioning

    Really depends on whether you are going to load multiple .babylons or not. You could dedicate one export to each mesh, and use SceneLoader.Append() without a callback for each, then put a scene.executeWhenReady() at the end of the list. In that case, it could be a kludge figuring out which .babylon has the camera, unless it is an arcrotate or follow type. A camera makes the most sense when the entire scene is a single .babylon.

    Btw, in Blender it can make sense to have lights, but in the multiple .blend per scene scenario, you might not want to export them. You can put lights on a separate layer. When working in Blender, make sure that layer is among those selected, so you can view materials lit. When exporting, set the exporter to selected layers only, and de-select the light layer before exporting.

    One final scenario is when you have all your non-moving meshes together in one .blend, like a background set. It would be a good idea to have the camera & lights in that .blend / .babylon. All the moving meshes / characters then get their own .blend / .babylon.
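    Here is a minimal sketch of the multi-file load, assuming an engine & scene already exist (the file names are made up):

        // load the static set first; it holds the camera & lights
        BABYLON.SceneLoader.Append("./", "background_set.babylon", scene);
        // one export per moving character; no callbacks needed
        BABYLON.SceneLoader.Append("./", "character_a.babylon", scene);
        BABYLON.SceneLoader.Append("./", "character_b.babylon", scene);

        // runs once, after everything appended above is ready
        scene.executeWhenReady(function () {
            engine.runRenderLoop(function () { scene.render(); });
        });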
  2. JCPalmer

    Blender Exporter Error

    From a quick look, there is something odd about the shape keys. If you wish to just get something out now, or you did not even know there were shape keys, just delete them. Also, there are shape keys on both the body & the bottom teeth with the same name. Even if the export worked, having separate meshes is going to require more work to animate. Better to join the bottom teeth to the body prior to export. I do not think morph manager animation can even be put into a .babylon file. The morph animation will have to be done in JS. I will look again at this, probably next week.
  3. 1) Compressed textures can only be url based. This is because the actual texture to use is not known at build time. An iOS system is going to want PVRTC, or ASTC in the future. A Win desktop is probably going to pick DXT. You only have one string, not 5. If you are only saying that you cannot use CreateFromBase64String while compressed textures are enabled, that is true. Whether it is a bug or not, I do not know. If you can PR a change that allows both at the same time, go for it.

     2) When falling back due to a .ktx file not found, it looks like null is being passed when createTexture calls itself. Not sure if this can pass the value from the first call or not, but that is where you might make a change. I think I just did the same thing as the DDS container did.

     3) Yes, I am pretty sure that cube textures in .ktx are all sides in one file, since facesExpected is a constructor argument. Been a while since I wrote this, and never had the time to test cubes. Try using the interactive PVRTexTool for one of those. I did the bat version, but think the node version only did one-face textures too.
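     To illustrate point 1, a sketch of how the url-based selection plays out, using the engine-level setTextureFormatToUse() call (the extension list below is just an example of what an app might ship):

         // tell the engine which compressed containers the app has on the server;
         // it picks the best one the GPU supports, or none
         var available = ['-astc.ktx', '-dxt.ktx', '-pvrtc.ktx', '-etc2.ktx'];
         var formatUsed = engine.setTextureFormatToUse(available);

         // a url texture like this is then fetched as "wood" + formatUsed,
         // e.g. "wood-dxt.ktx" on a desktop; there is no base64 equivalent
         var tex = new BABYLON.Texture("./textures/wood.jpg", scene);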
  4. Now that I think about it, it probably is the children holding a reference to the parent via the BABYLON.Node super class. Combined with the parent holding a reference to children, neither gets to a reference count of 0. In my work, I never did a lot of removal with meshes which were both big & had children, to notice. Try to manually edit the file to remove the references to children. They are not really used by the parent; I just put them there so you could easily get a reference without going through the framework, getChildren, & then sorting out the right one.

         cloning ? child_armour1(scene, this, source.armour1) : child_armour1(scene, this);
         cloning ? child_armour2(scene, this, source.armour2) : child_armour2(scene, this);
         cloning ? child_armour3(scene, this, source.armour3) : child_armour3(scene, this);
         cloning ? child_armour4(scene, this, source.armour4) : child_armour4(scene, this);

     FYI, your example is way too big, so I made a sample to show the code (top level Cube with a ChildPlane). This is version 5.6. I have not committed it. It has one minor problem that I have not gotten around to dealing with. It is now ES6. Looks like now 2 problems. Was going to finalize soon to be the final version pre-Blender 2.8 though. Anyway, the top level mesh is subclassed from the base class you specify, and any children are just instances of the base class. Have not thought of whether I will keep the child references or add lines in the dispose override. There can be many levels of children, so I'll look at the python to decide.

         class Cube extends BABYLON.Mesh {
             constructor(name, scene, materialsRootDir, source) {
                 super(name, scene, null, source, true);

                 if (!materialsRootDir) { materialsRootDir = "./"; }
                 defineMaterials(scene, materialsRootDir); //embedded version check
                 var cloning = source && source !== null;

                 this.position.x = 0;
                 this.position.y = 0;
                 this.position.z = 0;
                 this.rotation.x = 0;
                 this.rotation.y = 0;
                 this.rotation.z = 0;
                 this.scaling.x = 1;
                 this.scaling.y = 1;
                 this.scaling.z = 1;
                 this.ChildPlane = cloning ? child_ChildPlane(scene, this, source.ChildPlane) : child_ChildPlane(scene, this);
                 this.id = this.name;
                 this.billboardMode = 0;
                 this.isVisible = false; //always false; evaluated again at bottom
                 this.setEnabled(true);
                 this.checkCollisions = false;
                 this.receiveShadows = false;
                 this.castShadows = false;
                 this.isPickable = true;
                 this.initComplete = false;

                 if (!cloning) {
                     this.setVerticesData(_B.VertexBuffer.PositionKind, new Float32Array([
                         1,-1,-1,-1,-1,1,1,-1,1,-1,1,1,1,1,-1,1,1,1,-1,-1,-1,-1,1,-1
                     ]), false);

                     var _i; //indices & affected indices for shapekeys
                     _i = new Uint32Array([0,1,2,3,4,5,5,0,2,4,6,0,6,3,1,2,3,5,0,6,1,3,7,4,5,4,0,4,7,6,6,7,3,2,1,3]);
                     this.setIndices(_i);

                     this.setVerticesData(_B.VertexBuffer.NormalKind, new Float32Array([
                         .577,-.577,-.577,-.577,-.577,.577,.577,-.577,.577,-.577,.577,.577,.577,.577,-.577,.577,.577,.577,-.577,-.577,-.577,-.577,.577,-.577
                     ]), false);

                     this.setMaterialByID("Layout.Material");
                     this.subMeshes = [];
                     new _B.SubMesh(0, 0, 8, 0, 36, this);

                     if (scene._selectionOctree) {
                         scene.createOrUpdateSelectionOctree();
                     }
                 }
                 if (this.postConstruction) this.postConstruction();
                 this.initComplete = true;

                 if (matLoaded && !_sceneTransitionName) {
                     if (typeof this.grandEntrance === "function") this.grandEntrance();
                     else makeVisible(this);
                 } else waitingMeshes.push(this);
             }

             dispose(doNotRecurse) {
                 super.dispose(doNotRecurse);
                 if (this.skeleton) this.skeleton.dispose();
             }
         }
         Layout.Cube = Cube;

         function child_ChildPlane(scene, parent, source) {
             var ret = new BABYLON.Mesh(parent.name + ".ChildPlane", scene, parent, source);
             var cloning = source && source !== null;

             ret.position.x = 1.0001;
             ret.position.y = .2351;
             ret.position.z = -.2684;
             ret.rotation.x = 0;
             ret.rotation.y = 0;
             ret.rotation.z = 0;
             ret.scaling.x = 1;
             ret.scaling.y = 1;
             ret.scaling.z = 1;
             ret.id = ret.name;
             ret.billboardMode = 0;
             ret.isVisible = false; //always false; evaluated again at bottom
             ret.setEnabled(true);
             ret.checkCollisions = false;
             ret.receiveShadows = false;
             ret.castShadows = false;
             ret.isPickable = true;
             ret.initComplete = false;

             if (!cloning) {
                 ret.setVerticesData(_B.VertexBuffer.PositionKind, new Float32Array([
                     1,0,-1,-1,0,1,-1,0,-1,1,0,1
                 ]), false);

                 var _i; //indices & affected indices for shapekeys
                 _i = new Uint32Array([0,1,2,0,3,1]);
                 ret.setIndices(_i);

                 ret.setVerticesData(_B.VertexBuffer.NormalKind, new Float32Array([
                     0,1,0,0,1,0,0,1,0,0,1,0
                 ]), false);

                 ret.subMeshes = [];
                 new _B.SubMesh(0, 0, 4, 0, 6, ret);

                 if (scene._selectionOctree) {
                     scene.createOrUpdateSelectionOctree();
                 }
             }
             if (this.postConstruction) this.postConstruction();
             ret.initComplete = true;
             return ret;
         }
  5. I am not really sure. I do not use this browser. I do not know exactly what a back_pointer is. Looking at your very large file, I see you have child meshes as well. That probably has something to do with it. Each child mesh has a reference to the parent. This is BJS, not TOB.

     First, using the existing export, call without using MeshFactory:

         var mesh = new Armour.armour("armour", scene);
         mesh.dispose();
         mesh = null;

     If that does not release memory, then it is either the Armour.armour mesh sub-class or BJS. I doubt it is the child meshes, since they are members of the parent class, and should go out of scope after mesh = null;

         this.armour1 = cloning ? child_armour1(scene, this, source.armour1) : child_armour1(scene, this);
         this.armour2 = cloning ? child_armour2(scene, this, source.armour2) : child_armour2(scene, this);
         this.armour3 = cloning ? child_armour3(scene, this, source.armour3) : child_armour3(scene, this);
         this.armour4 = cloning ? child_armour4(scene, this, source.armour4) : child_armour4(scene, this);
  6. Well, finally went to YouTube. You cannot actually download, AFAIK; not surprised. I'll find a WebM with audio somewhere else to get a visual on the layout. It can display stats from a right-click though. This is from an Incredibles trailer. Output is vp9 / opus. Wondering what happens if you give them vp8? There is at least one way of finding out, I guess. I see 854x480@24 for the resolution. Is that HD+? Think the @24 is for 24 FPS. Anyway, if 24 is good enough for the Incredibles, it is good enough for me.
  7. Thanks, @Magilla. I needed to get an EBML viewer (EBML is the format WebM is built on) anyway, to see samples of how a video & audio track are organized together, and to debug my output. I will try right-clicking a YouTube vid, and hopefully get a menu option to save to file.

     Here is a Java based viewer. Assuming you have a Java runtime on your system, just: download ebml-viewer-2.0-bin.zip, extract it somewhere, then double click ebml-viewer-2.0.jar. On the first launch you will only get a toolbar; it is trying to remember your window size from the last launch, but there isn't one. Drag to the window size you want. Opening the clock.webm from above, I see it is V_VP8 (highlighted row).

     Think I am still going to have to submit samples to YouTube with different frame rates though. WebM does not really even store an FPS in the file. Every frame has its own duration. You can actually change the duration on a frame by frame basis. That is one of the improvements on Whammy that I have done. Though I will not be using this, DoubleWhammy can record using the standard render loop in real time. Also need to see if YouTube is doing some processing to frame rate. If you give them 24 fps, & they modify it to 60, then better to give them 60.

     Btw, @dbawel, I am now pretty sure those 5k rates are for special effects. They can take an explosion that looks like a glorified firecracker and record it at a really fast rate. When they slow it down, it makes it look really BIG & of long duration. That's why I asked if you know the OUTPUT rate of movies. I thought 24 fps makes things look bigger, and 60 made things look like cheap video.
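     To show what I mean about per-frame durations, a sketch in the style of Whammy's add(frame, duration) call (the 0.8 quality value is arbitrary):

         var encoder = new Whammy.Video(); // no fps arg, so durations come per frame
         // each frame carries its own duration in milliseconds
         encoder.add(canvas.toDataURL('image/webp', 0.8), 1000 / 24); // ~41.7 ms frame
         encoder.add(canvas.toDataURL('image/webp', 0.8), 1000 / 60); // ~16.7 ms frame
         var blob = encoder.compile(); // assembles the WebM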
  8. Well, YouTube says it does WebM, but does not break out codecs. I guess I will put out a sample & see what happens. If they support that many input types, I am guessing they run it through some type of conversion, so the site always shows some standard format.

     From my view, a real time render / game engine can produce fixed rate output for video, but let's see how well what the movie industry uses works as a game engine. Much work on my part, but I get a 2fer. With QI's timelineControl, I assign what time BJS thinks it is, so I can do any speed. Here is a test page where you can switch to fixed frame mode. Ignore the mp4 encoding part; this is a replacement for that. It works whether you say 24 fps or 120 fps. I am not going to exceed 60 fps though.

     Sounds like the film industry is recording at such high rates because production is so expensive that they do not want to miss anything. In post, they can pick the best frames if forced to. What is the output rate though?
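     A generic sketch of the fixed frame mode idea (the concept only, not the QI API): drive the clock yourself, so render time is deterministic no matter how long a frame takes to compute:

         var frame = 0, fps = 24; // works the same at 60
         function renderNextFrame() {
             // what time BJS is told it is, independent of the wall clock
             var sceneTime = frame++ * (1000 / fps);
             // ... advance all animation to sceneTime here, then:
             scene.render();
             // grab the canvas for the encoder before the next frame
         }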
  9. If tone.js can produce an AudioBuffer, or you can make one from all of the parts available, then you can supply that instead in the BABYLON.Sound constructor. Things are not looking promising otherwise. Though, can you produce an Audio element?
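     A minimal sketch, assuming tone.js can get you a decoded AudioBuffer (the toneBuffer var below is hypothetical):

         // construct without a url / ArrayBuffer, then hand over the buffer
         var sound = new BABYLON.Sound("toneTrack", null, scene);
         sound.setAudioBuffer(toneBuffer); // marks the sound ready to play
         sound.play();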
  10. 1. Depends on your amount of memory and number of meshes. Keep in mind, if you are using a TOB MeshFactory, it will be creating clones, which save GPU memory, not CPU memory. Also, the default dispose() only releases GPU memory, or only decrements the use count when cloning. More meshes WILL reduce speed though, just not because of memory. There are 2 causes:

      • The CPU time to loop through each mesh in the scene, do computations, see if it will need to be drawn, etc.
      • An OpenGL draw call for each mesh. This can be avoided using BABYLON.InstancedMesh, not clones, but that is not really that useful imho. Instances still suck CPU just like clones. They save on the draw call, BUT they must be of the same material. Clones can change the material. If these meshes do not move, like trees, better to merge them all together after cloning than to use instances; see the sketch after this list. You save on the draw calls & the CPU time. FYI, meshes can only be merged when the material is the same.

      2. You must make sure you hold no references in order for the JavaScript object to be garbage collected. Any heap reference vars should be set to null. There is also a way to delete JavaScript objects, but I have heard that it is slower than GC.

      3. If you are using a MeshFactory, there is a function clean() in it. In each mesh class, dispose() is overridden & clean() is called when you have a mesh factory. This should remove the mesh as a held reference, and thus not be an impediment to GC. The reason meshes have to be kept around in MeshFactory is so a donor mesh, one that has not disposed of its GPU geometry, can be located.
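      A quick sketch of the merge route for static meshes like trees, assuming they all share one material (treeClones is a made-up array of clones):

          // disposeSource = true frees the originals; allow32BitsIndices lets
          // the combined mesh exceed 64k vertices
          var forest = BABYLON.Mesh.MergeMeshes(treeClones, true, true);
          // one mesh now: one draw call & one pass through the scene loop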
  11. Well, I guess I should mark this solved or something. I cannot fix different organizations branching off from one another. This project seems better & better each day, regardless.

      Remember when I refactored 3D right into the BABYLON.Camera base class in 2015 (see topic "Making ALL cameras 3D")? I also added a couple of new rigs, one being stereoscopic side by side parallel. Back then, I was actually trying to screen cast a live scene from an Android tablet to a Sony 3D TV. This recording process can record that as well without changing a thing! @Pixar "look out, I a comin fer yer daughter".

      I am not yet operational for the video, but I took a sample WebM from the Whammy repo (a clock) & put it on a thumb drive. I plugged the drive into the TV & it almost worked. When I selected the file, it said it was an "unknown format", but put up a square video 4 seconds long. There was garbage in the square, but that means it knew what a WebM video was. It just did not know the vp8 codec. I would bet a beer it knows vp9 though (there are only 2 codecs in WebM).

      The full implications of this, I cannot calculate (well, yet). Going to have to see what YouTube actually wants: whether you can submit both 2D & 3D videos on the same channel, what resolutions work, & if 24 FPS is supported. 24 is the one for movies, right @dbawel? If I get the right answers, stereoscopic might get an update.

      I am also looking at a number of low priced programs which will do format conversion / compression once the original is rendered. I will be able to examine changes to the shader more easily, because I can just play one file then compare to another video of the same thing. Much tougher when you have to have multiple directories of the scene with different versions of the framework. I just ran out of time before.

      FYI, while most may think BJS is for games not video, I am pursuing a sort of "Pirates of the Caribbean" business model. The channel supports the game, & the game supports the channel. That is also why I spent so much time on speech.

      clock.webm
  12. @davrous, actually nothing on the d.ts front, unless you can clarify why lib.d.ts, Mozilla, and W3C are out of sync with each other.

      The project is a week in, and the video track rewrite is getting close to completion, with improvements beyond being Typescript & OO. I am beginning to think all integration to BJS should be outside of this actual code base. Args passed like BABYLON.Scene & BABYLON.Sound would be replaced with HTMLCanvasElement & an interface of { getAudioBuffer() & play() }. Would make it more useful to contexts outside of BJS, if I decide to publish the repo. Have kind of put that decision off. It has to work first.

      My initial integration will be with the TimelineControl after render in my QI animation extension, and in the place all sound.play()'s are called. Sound is tightly integrated in QI for sync requirements of speech. For general BJS use, adding a simple after render for adding video frames would require nothing to change in the framework. Sound.play(), on the other hand, might need something like this added:

          if (typeof WEBM.AudioMixer !== "undefined" && WEBM.AudioMixer.recording) {
              WEBM.AudioMixer.addTrack(this);
          }

      Otherwise, the recorder will never know the track was played, & exactly when it started. I do not think the framework currently provides the audit trail needed to determine those 2 things. I have not completely fleshed this out, but will if I decide to publish to the public, which would include a dblWhammy.d.ts if that were the case.
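      For the general BJS case, that simple after render hook could look like this sketch (the recorder object & addFrame() name are my placeholders, not a published API):

          scene.registerAfterRender(function () {
              if (recorder.recording) {
                  // hand the canvas contents to the encoder for this frame
                  recorder.addFrame(engine.getRenderingCanvas());
              }
          });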
  13. JCPalmer

    Apple deprecates openGL

    There are other programs in people's workflows that will also be impacted, if they are not already using Metal or not converting, like: Blender, MakeHuman, & Maya. This is not really a fatal problem. It just means that no one who does not have to develop on OSX should. I was planning a Cordova app which would also run on iOS. As far as I know, you need a Mac to generate one. My 8 year old MBP is starting to fail & is maddeningly slow with only 2 GB of RAM. I was waiting to see what this year's refresh would bring. Now I will just see what Craigslist has. Not giving them any more money.
  14. JCPalmer

    Blender Exporter Total Frames

    Check your log file. I get 3 actions for your armature:

        processing begun of skeleton: Armature, id: 0
        processing begun of bone: root, index: 0
        processing begun of bone: foot_ik.L, index: 1
        . . .
        processing begun of bone: Leg_R.001, index: 86
        processing action AMN_GAME01: in[0 - 1], out[0 - 1]
        processing action ANM_GAME: in[0 - 630], out[10 - 640]
        processing action CameraAction.001: in[0 - 1], out[650 - 651]

    When you are exporting multiple actions, you need to start them as animation ranges (note the out[] values above). Looks like you only wish to export the current action. In that case, select 'Only Currently Assigned Actions' in the Exporter Settings Panel on the Scene Properties tab.
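    Once exported as ranges, each action can be started by name; a sketch, assuming the armature is the scene's only skeleton:

        var skeleton = scene.skeletons[0];
        // play the ANM_GAME range from the log above, looped, at normal speed
        skeleton.beginAnimation("ANM_GAME", true, 1.0);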
  15. I realize that this was pretty much a bitch thread, but checking what Chrome supports for MediaRecorder, I found some interesting stuff in addition to opus support. In order to get codecs=vp8 for the video, I have to execute canvas.toDataURL('image/webp', this.quality) for each frame. Now wondering out loud: could I just make a video-only WebM, like Whammy currently does, then in a second pass add that as another source when mixing the audio sounds? Certainly would be less messy. Could also output as "video/webm;codecs=vp9,opus". vp9 might have interframe compression, which I know vp8 does not have.
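      For anyone curious, a quick probe of what the browser's MediaRecorder will actually accept (these MIME strings are the standard WebM ones; results vary by browser):

          ['video/webm;codecs=vp8',
           'video/webm;codecs=vp8,opus',
           'video/webm;codecs=vp9,opus'].forEach(function (mime) {
              console.log(mime, MediaRecorder.isTypeSupported(mime));
          });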