JCPalmer last won the day on March 5

JCPalmer had the most liked content!


About JCPalmer

  • Rank
    Advanced Member

Profile Information

  • Location
    Rochester, NY
  • Interests
    Power Napping

  1. Using the concept of layerMask, you can also have at least 4 simultaneous scenes in the same place at the same time. A camera has a layerMask, as do meshes. You could set all the meshes for one scene to one mask, and the meshes for "another scene" to a different mask. If you set the camera's mask to one or the other, then a different "scene" will be shown. If you set the camera's mask to a bitwise OR of both, then both will show. If you are familiar with Blender, it is the same concept: one .blend file / scene. If you wish to show only certain meshes, put them on the same layer. If multiple layers are selected, the meshes for more than one layer will display.
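The mask test described above can be sketched in plain JavaScript. The mask values here are made-up examples; the test itself mirrors how Babylon.js decides visibility, rendering a mesh only when `camera.layerMask & mesh.layerMask` is non-zero:

```javascript
// Hypothetical masks for two groups of meshes sharing one scene.
const SCENE_A = 0x1;
const SCENE_B = 0x2;

// Simplified form of the check the engine applies per camera/mesh pair.
function isRendered(cameraMask, meshMask) {
    return (cameraMask & meshMask) !== 0;
}

// Camera showing only "scene" A:
console.log(isRendered(SCENE_A, SCENE_A)); // true
console.log(isRendered(SCENE_A, SCENE_B)); // false

// Camera showing both at once, via bitwise OR:
const bothMask = SCENE_A | SCENE_B;
console.log(isRendered(bothMask, SCENE_B)); // true
```

Switching which "scene" is visible is then just reassigning the camera's mask, with no meshes added or removed.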
  2. As far as clones go, these are just meshes sharing geometry. Disabling one has no effect on the others. Also, there is no such thing as a MASTER clone, which means the first mesh can also be deleted with no effect on the others. In general, these "tricks" as you call them exist because people perceive that they are operating on meshes. In the GPU / reality, however, the primary thing being operated on is materials, which translate into a vertex / fragment shader pair. This difference between what is really being done and how you THINK you are operating is causing your disconnect about the way "things should be". For instance, having vertices in the CPU does not mean anything to the GPU until a material is created for them to be used as data. This compiling of shaders / GPU programs is what causes the latency. As @JohnK says, adding a mesh & disabling it gets the shader programs compiled, so everything is ready when you want that mesh (really material) to be seen. Not straightforward, unless you look at it from the GPU's point of view.
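The shared-geometry point can be illustrated with a conceptual sketch in plain JavaScript (this is not the Babylon.js API, just the idea): every clone holds a reference to one geometry object, so disposing any clone, including the first one created, leaves the geometry intact for the rest.

```javascript
// One geometry object, shared by reference among all clones.
const geometry = { vertices: [0, 1, 2], refCount: 0 };

// Hypothetical helper: each "clone" just points at the shared geometry.
function makeClone() {
    geometry.refCount += 1;
    return {
        geometry,
        dispose() {
            geometry.refCount -= 1;
            this.geometry = null; // only this clone loses its reference
        }
    };
}

const original = makeClone();
const clone1 = makeClone();

original.dispose(); // no "master": removing the first has no effect on clone1
console.log(clone1.geometry === geometry); // true
```

The design point: because no clone owns the geometry, deletion order never matters.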
  3. I know @Sebavan was trying to do something with Angular in this regard, and stopped. I think the difficulty factor is going to be way up there.
  4. Though a WebGL framework might help you display your findings or take actions with those findings, getting the findings through photo analysis is the really hard part. This has little, if anything, to do with WebGL, though knowing WebGL might benefit you once you have the findings. BabylonJS does zero photo analysis, I think. Getting data from WebRTC and then passing it around is going to be slow. If you are going to use native capabilities anyway, you should probably follow their examples of retrieving the camera data. I will say using Cordova is probably not going to help you without a lot of work. When you access the camera in Cordova using the common plug-ins, those just call either the videocam or photo app for a given OS. When you close those, control returns to your JavaScript. If you write your own plug-in to access the hardware directly, which I am doing, then you have 2 new problems. First, it is platform dependent, so you will have to code for each OS. Second, you are probably going to have to do your analysis in the plug-in itself. The reason is that Cordova plug-ins can only return strings. Converting a frame to base64, passing it back, and converting back to an image will slow you down to about 5 fps. This is even before you start trying to work with the data. You might write a Cordova plug-in which accesses each platform's native AR offerings, or scour the net for a plug-in which already does. The same problem of passing the camera data back in string format is still going to be the bottleneck. Using WebRTC in a browser, plus a second native app which also accesses the camera, is probably the only realistic way today.
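The string round trip that the Cordova plug-in boundary forces can be sketched like this (the function names are hypothetical; only the base64 conversion itself is real browser / Node API). Every frame pays both conversions before any analysis starts:

```javascript
// Raw frame bytes -> base64 string (what a plug-in can return) -> bytes again.
function frameToBase64(bytes) {
    let binary = "";
    for (const b of bytes) binary += String.fromCharCode(b);
    return btoa(binary); // browser global; available in Node 16+ as well
}

function base64ToFrame(b64) {
    const binary = atob(b64);
    return Uint8Array.from(binary, c => c.charCodeAt(0));
}

// Tiny stand-in for a camera frame; a real frame is millions of bytes,
// which is why this round trip caps you at roughly 5 fps.
const frame = new Uint8Array([10, 20, 30]);
const roundTrip = base64ToFrame(frameToBase64(frame));
```

Doing the analysis inside the plug-in and returning only the small result string avoids paying this cost per frame.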
  5. scene.executeWhenReady( () => { console.log('blah'); } ); is also available. I think it works better with Append() than with ImportMesh(). This leaves all the accounting to the framework. Edit: If you are having problems with memory and want to serialize the loads, then you would want to put the next Append inside of the callback:

```javascript
const sections = [
    "background-site",
    "section-1",  "section-2",  "section-3",  "section-4",  "section-5",
    "section-6",  "section-7",  "section-8",  "section-9",  "section-10",
    "section-11", "section-12", "section-13", "section-14", "section-15",
    "section-16", "section-17", "section-18", "section-19", "section-20",
    "section-21", "section-22", "section-23", "section-24", "section-25",
    "section-26", "section-27", "section-28"
];
let i = 0;

function loadSection() {
    if (i === sections.length) {
        allLoaded();
    } else {
        BABYLON.SceneLoader.Append("", "assets/site/", sections[i] + ".babylon", scene, () => {
            i += 1;
            loadSection();
        });
    }
}

function allLoaded() {
    // normal post loading
}

// actually start it
loadSection();
```
  6. Be absolutely sure that is the mesh you think it is. The number of times someone got this wrong is huge. Either console.log the mesh name, or better, use scene.getMeshByName('victim').applyDisplacementMap(...). Next, check the console that your file is actually being found. If this does not work, then a topic in the Q & A might be better, since this is not really a Blender issue.
  7. Since the exporter does not support it, you would have to add it. It is kind of a waste of my time outlining any changes, since it does not seem like it can be serialized. Parse() is what is called to deserialize an exported mesh. I see no place where a url can be taken from the data to call applyDisplacementMap() for you.
  8. The Blender exporter does not export displacement maps at this time. You need to specify true for the forceUpdate parameter. An exported JSON (.babylon) file is essentially a serialized mesh.
  9. JCPalmer

    Babylon.js v3.3 is out!!!!

    Great! Btw, I had not actually done any testing with my older scenes. I just tried one right now & looked at the console, and engine 3.2 was loaded. Has this been submitted to the CDN? Is this part of the "tree shaking" process? I do not actually know what that term means.
  10. JCPalmer

    Blender to babylonJS lighting issues

    Yes. Whether this can be expressed in a JSON (.babylon) file, or further, in one made by Blender, is unknown. Let us know. Good luck.
  11. If someone wants an idea, maybe either pinball or battleship. I am too busy.
  12. JCPalmer

    Motion capture & root rotation

    Doing some more searching, I found some info on using the location data for getting orientation to the camera as well. I had tried to do this myself, but using the hips instead of the shoulders. Mine was a real kludge. The last response was: "Not sure exactly how to do this yet, but I know for a fact that the location data is consistent all the time." Any comments?
  13. I am using a Kinect2 for data capture inside of Blender. For all but the root bone, I am using the absolute joint position method. Trying to use the rotation quaternions is a nightmare, which is not helped by:
      • Blender being right-handed & Kinect2 being left-handed
      • the Y & Z dimensions being switched
      • data capturing & not mirroring
    The quaternion method is all or nothing: complete garbage where you cannot figure out what is wrong. With location data you can at least sort of see what you are doing wrong. I get decent results when facing forward, but if I do a twirl, location data can never work. I was thinking of using the quaternion data solely for the root bone. Unfortunately, the data looks kind of weird. Below is a clockwise twirl. I have not corrected for being a mirror yet. The second group is using Blender's quaternion.to_euler('XYZ'). Can anyone explain this data? Do people even try to data capture stuff like this?
  14. Perhaps you could use a beforeRender, as long as a person cannot see the mirror and the mesh at the same time (highly likely, since angle of incidence == angle of reflection). You should probably use a scene-level beforeRender, because it ALWAYS runs; mesh-level ones only run when the mesh is going to render. The beforeRender might test if the mirror is inFrustum, and set the visibility to 0 if so, else 1.
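The per-frame toggle described above can be sketched in plain JavaScript (these are stand-ins, not the Babylon.js API; in Babylon.js the function body would live inside scene.registerBeforeRender(() => { ... }) with mirror.isInFrustum(...) as the test):

```javascript
// Each frame: hide the mesh whenever the mirror is inside the camera frustum,
// show it otherwise. Returns the visibility actually applied.
function updateVisibility(mirrorInFrustum, mesh) {
    mesh.visibility = mirrorInFrustum ? 0 : 1;
    return mesh.visibility;
}

// Hypothetical mesh object standing in for a Babylon.js AbstractMesh.
const mesh = { visibility: 1 };
console.log(updateVisibility(true, mesh));  // 0 -> mirror visible, mesh hidden
console.log(updateVisibility(false, mesh)); // 1 -> mirror off-screen, mesh shown
```

Running this at scene level rather than mesh level matters because a mesh with visibility 0 may be skipped by mesh-level observers, so it could never turn itself back on.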
  15. JCPalmer

    Going to full screen; mouse button stuck down

    Ah, that jogged my memory. It is only a lack of control in the PG. Not sure why the PG sets requestPointerLock = true. It has to be one or the other. Why would someone want this in the first place, VR? DeviceOrientationCamera works without buttons.