

About JCPalmer

Profile Information

  • Location
    Rochester, NY
  • Interests
    Power Napping

  1. I just wonder what starting point is being used for those graphics. (Much easier to improve when you suck.) For instance, are they starting from identifying unique vertices, like the Blender / 3ds Max exporters already do? That makes your data size drop off a cliff. Are they starting with bone matrix indexes already packed? One thing I have found is that just calculating the normals on load is really fast (and removes the normals from the file entirely). The Blender exporters have a 'Defer Normals' checkbox. I have made it the default of the Tower of Babel variant, and think it should become the default of the JSON variant too. Not hating, but those types of improvements cannot come just from compression. They probably involve a lot of data reorganization / representation changes. I looked at this earlier. I do not really remember it, but I just recently reorganized my representation of shape keys. Size dropped by about 50% with very little extra CPU cost to load. Think I was inspired by looking at that code.
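A standalone sketch of what deferred normal computation does on load (in BJS this is handled by BABYLON.VertexData.ComputeNormals; the accumulation scheme below is just my illustration of the idea, not the actual BJS internals): accumulate each face's cross-product normal into its three vertices, then normalize.

```javascript
// positions: flat [x,y,z, x,y,z, ...]; indices: 3 per triangle.
// Returns a flat normals array the same length as positions.
function computeNormals(positions, indices) {
  const normals = new Array(positions.length).fill(0);
  for (let i = 0; i < indices.length; i += 3) {
    const a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;
    // two edge vectors of the triangle
    const ux = positions[b] - positions[a], uy = positions[b + 1] - positions[a + 1], uz = positions[b + 2] - positions[a + 2];
    const vx = positions[c] - positions[a], vy = positions[c + 1] - positions[a + 1], vz = positions[c + 2] - positions[a + 2];
    // cross product = face normal (area-weighted)
    const nx = uy * vz - uz * vy, ny = uz * vx - ux * vz, nz = ux * vy - uy * vx;
    for (const base of [a, b, c]) {
      normals[base] += nx; normals[base + 1] += ny; normals[base + 2] += nz;
    }
  }
  // normalize each accumulated vertex normal
  for (let i = 0; i < normals.length; i += 3) {
    const len = Math.hypot(normals[i], normals[i + 1], normals[i + 2]) || 1;
    normals[i] /= len; normals[i + 1] /= len; normals[i + 2] /= len;
  }
  return normals;
}
```

This is cheap enough at load time that shipping normals in the file buys little.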
  2. (<BABYLON.StandardMaterial> mesh1.material).opacityTexture = null; the cast has no impact on the generated JavaScript.
  3. see
  4. I do not know. I would suggest just coding the minimum amount to create a WebCL context, maybe 2 lines. Run it on your hardware / browsers, and you will know for sure (let me know, please). What you describe was done by Olivier Chafik, now at Google London. At my urging, he made some Java bindings for OpenCL, called JavaCL. He also made a demo which calculated the positions of up to 65k grains of sand in 4 different colors as it shifted, pulled the data back to the CPU, then pumped it into an OpenGL context. It was pretty impressive for 2010.
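For what it is worth, those "2 lines" would come from the Khronos WebCL draft API; the probe below is guarded so it also runs outside a browser, since very few builds ever shipped WebCL.

```javascript
// In an environment that actually has WebCL, context creation really is
// about two lines (createContext / createCommandQueue are from the
// Khronos WebCL draft spec):
//   const ctx = webcl.createContext();
//   const queue = ctx.createCommandQueue();
// Here we only detect availability, so the snippet is safe anywhere.
const hasWebCL = typeof webcl !== 'undefined' &&
                 typeof webcl.createContext === 'function';
console.log(hasWebCL ? 'WebCL available' : 'WebCL not available');
```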
  5. Funny, @dbawel. Actually OpenCL IS from Apple / NVidia, which is what WebCL is very likely to be based on. I did not mention it because it did not advance any argument. If you look at the WebGL 2 standard, you will see the co-author byline is the same as the author of the WebGPU blog post. So I guess they actually did write that standard, but how about implementing it, maybe with ASTC exposed like Metal, not just PVRTC? Maybe more than the minimum # of vertex uniforms. Actual customer requirements.
  6. Well, like I said, opinions here probably have a very low impact. Sounds like anyone could join that group if they wanted to. Will say, Apple really needs to put out WebGL 2.0, or they are probably going to face pitchforks. Hard to know where anything is with them, since all the test stuff is under NDA. For a commercial concern, the fees are not that high to buy your way in, though. Since most people, even commercial projects, are not going to use this without a framework, more emphasis should be on our actual problems. The various frameworks are getting better and better; you can spend an enormous amount of time starting from scratch. PlayCanvas seems to think shader compile time is a major bottleneck. Point is, we are the customers.
  7. I read some of the article you reference. I also did not like the "frame based time scale" for animation. I have built my own animation system, primarily for highly integrated / choreographed animations and sounds (think speech). I abandoned anything to do with "frames", like skeleton frame animation, in favor of skeleton pose interpolation. Everything is measured in millis. I also have a master time control (implemented as a scene after-renderer), so all participants know "when" this frame is and can adjust the timescale to speed up / slow down. The master time control also detects / manages tab switches. Think there have also been changes recently to allow some time scaling in BJS animation too. The standard animation system, I thought, did adjust if it found it was running late, albeit coarsely. I have no extensive plans for physics, nor knowledge of it. You may need to resort to some kind of update loop for physics. If you know TypeScript, there may be alternatives to "forking". It is possible to sub-class both Scene and Engine. As long as you stick to referencing / overriding public methods and properties, you can use everything else without being responsible for an entire fork.
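A minimal sketch of the master-time-control idea, in plain JS with hypothetical names (in BJS it would be ticked from scene.registerAfterRender, which is a real Scene hook; this is not my actual system): everything is measured in millis, a timeScale lets participants speed up / slow down, and a large real-time gap is treated as a tab switch rather than a huge animation jump.

```javascript
// Hypothetical master clock: scaled virtual time in milliseconds.
class MasterClock {
  constructor(timeScale = 1.0) {
    this.timeScale = timeScale;   // 0.5 = half speed, 2 = double speed
    this.lastReal = Date.now();
    this.now = 0;                 // virtual time, in millis
  }
  // Call once per frame, e.g. from scene.registerAfterRender(() => clock.tick())
  tick() {
    const real = Date.now();
    let delta = real - this.lastReal;
    this.lastReal = real;
    if (delta > 250) delta = 0;   // long gap => probable tab switch; pause, don't jump
    this.now += delta * this.timeScale;
    return this.now;
  }
}

const clock = new MasterClock(0.5); // run everything at half speed
clock.tick();
```

All participants read clock.now instead of wall time, so they agree on "when" the frame is.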
  8. Yep. Checking, .babylon files can set geometry to updatable (FYI, incremental is hardcoded as non-updatable). Not sure which exporters support this other than the BJS serializer (Blender does not right now). Check your exporter's source. Assuming it does not, doing a new setVerticesData() from the results of a getVerticesData() would seem to be required.
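A sketch of that round-trip (getVerticesData / setVerticesData are real BJS Mesh methods, and the third argument of setVerticesData is the updatable flag; the helper name is mine, and the snippet is guarded so it is a no-op outside a Babylon environment):

```javascript
// Re-add a loaded mesh's positions as updatable geometry.
function makePositionsUpdatable(mesh) {
  const kind = BABYLON.VertexBuffer.PositionKind;
  const positions = mesh.getVerticesData(kind); // pull the current data out
  mesh.setVerticesData(kind, positions, true);  // push it back, updatable = true
  return positions;
}

if (typeof BABYLON === 'undefined') {
  console.log('BABYLON not loaded; sketch only');
}
```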
  9. See your PR. While I do not merge stuff on this repo, one extra line might be good for the general public: if no format arg is supplied, then test for a '.JPG' extension; if JPG, then do RGB. That way you get the benefit without changing everything all the way back to Blender / 3DS Max, nor complicating things.
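The extension test could look something like this (standalone sketch; the function name and the RGB / RGBA labels are just illustration, not the actual BJS internals):

```javascript
// JPEG cannot store an alpha channel, so a .jpg texture can safely
// default to RGB when no explicit format argument was supplied.
function defaultFormatForUrl(url) {
  const dot = url.lastIndexOf('.');
  const ext = dot >= 0 ? url.substring(dot).toLowerCase() : '';
  return (ext === '.jpg' || ext === '.jpeg') ? 'RGB' : 'RGBA';
}

console.log(defaultFormatForUrl('textures/diffuse.JPG')); // RGB
console.log(defaultFormatForUrl('textures/leaves.png'));  // RGBA
```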
  10. The thing looks dead from a discussion POV. Also, OpenGL ES 3.1 has compute shaders. Up until this "web metal" proposal, some assumed there would be a WebGL 2.1, still based on ES. That took some of the emphasis off WebCL, perhaps.
  11. Yep, but the problem may be that geometry coming from a .babylon is always set to non-updatable. For my source code generator for Blender, I only generate updatable geometry source code when there are Blender shape keys on the mesh. Perhaps get the geometry, then do another setVerticesData(), this time setting the last arg to true.
  12. KTX, or the multi-platform compressed textures I added for 2.6, are much smaller in the GPU. Your finding makes my comparisons to regular textures even worse than I thought. If you make this change a parameter, you are going to have to modify a lot more. Maybe just determine that the extension cannot possibly have alpha, and adjust internally. Otherwise, you are probably going to need to go all the way out to the exporters, since few people actually write Texture constructor code. Also, when making this change, remember to do it to createCubeTexture() too.
  13. From a Blender perspective, no, the Blender camera type is not translated into a BJS camera mode. Does not look too big, but you could also just add: scene.activeCamera.mode = BABYLON.Camera.ORTHOGRAPHIC_CAMERA. If camera mode is a valid property that can be put into a .babylon file, I could add it, but not before the end of the month. As far as some of the other things @dbawel is setting, I see no Blender properties to use for top, bottom, left, or right. BJS target is only translated from a Blender 'track to' constraint when the camera is arc rotate or follow. In any case, I will not be changing 4.6.1. Just left that around, but not updating it.
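A guarded sketch of going fully orthographic, including the frustum bounds @dbawel was setting (mode, ORTHOGRAPHIC_CAMERA, and the ortho* properties are real BJS Camera members; the helper name and values are placeholders, and with no Blender properties to import these bounds must be set by hand):

```javascript
// Switch the scene's active camera to orthographic and size its frustum.
function makeOrthographic(scene, halfWidth, halfHeight) {
  const cam = scene.activeCamera;
  cam.mode = BABYLON.Camera.ORTHOGRAPHIC_CAMERA;
  cam.orthoLeft   = -halfWidth;
  cam.orthoRight  =  halfWidth;
  cam.orthoTop    =  halfHeight;
  cam.orthoBottom = -halfHeight;
}

if (typeof BABYLON === 'undefined') {
  console.log('BABYLON not loaded; sketch only');
}
```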
  14. If you are making your own materials post-import, then you will need to disable backface culling on those materials too: mymat.backFaceCulling = false;