  1. @Deltakosh That was a useful one! Regarding my other ponderings, and specifically Mapbox: there is a discussion about exposing the Mapbox camera coordinate system. The discussion is specifically about Three.js, and in general about feedback on how one would like to have it (I don't know if BabylonJS projections work as in Three.js, for instance). The issue is at and I provide it here in case you or someone else has an opinion in that regard. The Pixi.js one was just freshly implemented in and I provide that too here in case someone finds it useful. There's discussion about coordinate systems, performance, and occasional notes about loading glTF models (likely not implemented), which hints at how engines like BabylonJS could augment geographic engines. One particular topic also seems to be rendering volumetric data. <edit: Also, custom layers examples wanted at
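To make the coordinate-system discussion concrete: Mapbox GL JS maps the world onto a normalized Web Mercator square (0..1 on both axes), and that is what any 3D engine's camera would need to agree with. Below is a minimal dependency-free sketch of that mapping; the function names are mine for illustration, not the Mapbox API (Mapbox exposes similar conversions via `mapboxgl.MercatorCoordinate`):

```javascript
// Web Mercator mapping as used by Mapbox GL JS custom layers:
// the whole world fits in [0, 1] x [0, 1], with lng/lat (0, 0)
// landing at (0.5, 0.5). Names here are illustrative only.
function lngToMercatorX(lng) {
  return (180 + lng) / 360;
}

function latToMercatorY(lat) {
  const rad = (lat * Math.PI) / 180;
  return (180 - (180 / Math.PI) * Math.log(Math.tan(Math.PI / 4 + rad / 2))) / 360;
}

// Rough scale factor: how many mercator units one meter spans at a
// given latitude (equatorial circumference ~40,075,017 m).
function mercatorUnitsPerMeter(lat) {
  const earthCircumference = 40075016.686;
  return 1 / (earthCircumference * Math.cos((lat * Math.PI) / 180));
}

console.log(lngToMercatorX(0), latToMercatorY(0)); // 0.5 0.5
```

The latitude-dependent scale is the part that usually trips up engine integrations: a mesh authored in meters has to be shrunk by that factor before it lines up with the map.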
  2. Spelunking around, UtilityLayerRenderer and gizmos might be something to use here. Difficult to tell! I also wonder: if the Mapbox/ component has these rotation, scrolling and picking helpers, then what happens if BabylonJS code moves the objects, and whether Mapbox would still work after that... There probably is a way to do that, at least by wiring the BJS thingies back to Mapbox/
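Wiring BabylonJS-side movements back to the map would essentially mean inverting the Web Mercator mapping, so a position in mercator-based scene space can be handed back to Mapbox as lng/lat. A hedged sketch of that inverse (names are mine, not a Mapbox or Babylon API):

```javascript
// Inverse Web Mercator: scene position (normalized 0..1 mercator
// units) back to longitude/latitude, e.g. after a gizmo has dragged
// an object. Illustrative names, not a real library API.
function mercatorXToLng(x) {
  return x * 360 - 180;
}

function mercatorYToLat(y) {
  const yDeg = 180 - y * 360;
  return (360 / Math.PI) * Math.atan(Math.exp((yDeg * Math.PI) / 180)) - 90;
}

console.log(mercatorXToLng(0.5), mercatorYToLat(0.5)); // 0 0
```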
  3. Hey, thanks for the note @ssaket. I took a look at Illuminated City and it looks like it could use something rendered on the Mapbox canvas that is then rendered on a BabylonJS canvas, or something like that. 😁 I edited the headline, as I think in general I'm asking whether it's possible to render to a WebGL canvas/buffer and then use BabylonJS to render that. At least I think so; asking and hopefully learning as I go. One has to have a project that is sufficiently long-term and interesting.
  4. Hi! I was looking at some geographic visualizations created with Mapbox GL JS and and I was wondering if someone has done something similar before, or could give directions on how one could go about combining these with BabylonJS. How could other WebGL canvases be used with BabylonJS? Is it possible? Some background wondering follows. Here is one titled Mapbox GL JS + AR.js + A-Frame (VR) = Mapbox AR/VR, and at is an example of Three.js being used to import a glTF model into a scene (this), and more on integrating Mapbox and at , and a note on camera things too. To my brain this feels like a really cool way to visualize geographic data sets, and even cooler would be to add virtual reality things here. I would assume that since Mapbox and work together and render into a WebGL canvas, and the first link demonstrates capturing the buffer as an image and projecting it onto an AR.js cube, something like that could be done with BabylonJS too. Maybe capturing the WebGL buffer as an image could even be avoided altogether. The second link also shows how a Three.js glTF model was imported into map data. I can somewhat follow the examples of how Mapbox and are used together, but I have very little clue how I could bring in BabylonJS in the future to make the stuff even sweeter! That is, I'm thinking to start with data visualization with Mapbox and , as far as the examples go, but sooner or later move towards BabylonJS. I'm not sure how possible it is to use BabylonJS (hopeful due to the AR.js example) or what I should know API-wise or otherwise for this endeavour. Some help or a nudge or whatnot on offer?
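The glTF-into-map examples boil down to one piece of math: building a model matrix that anchors the model at a geographic coordinate and scales it from meters to mercator units. A dependency-free sketch of that matrix (column-major, as WebGL expects); the helper names are illustrative, not an actual Mapbox or Three.js API:

```javascript
// Place a model at a lng/lat anchor in Mapbox's normalized mercator
// space: translate to the anchor, scale meters -> mercator units.
// Illustrative sketch, not the actual custom-layer example code.
function lngLatToMercator(lng, lat) {
  const x = (180 + lng) / 360;
  const y =
    (180 - (180 / Math.PI) * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI) / 360))) / 360;
  return { x, y };
}

function modelMatrixAt(lng, lat, meterScale) {
  const { x, y } = lngLatToMercator(lng, lat);
  // mercator units per meter at this latitude
  const s = meterScale / (40075016.686 * Math.cos((lat * Math.PI) / 180));
  // column-major translate * scale matrix
  return [
    s, 0, 0, 0,
    0, s, 0, 0,
    0, 0, s, 0,
    x, y, 0, 1,
  ];
}

const m = modelMatrixAt(0, 0, 1);
console.log(m[12], m[13]); // 0.5 0.5
```

A renderer (Three.js in the linked example, or in principle BabylonJS) would multiply this with the projection matrix Mapbox hands to the custom layer each frame.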
  5. Thanks, @JCPalmer and @Deltakosh. It appears a hybrid app is the option for doing that sort of stuff on a mobile device without a backend API, until the WebXR thingies roll out a bit more.
  6. I was recently thinking to educate myself a bit more regarding BabylonJS and mixing it with reality, so to speak. I had an idea to create a digital measuring tool, but quickly realized it's a good deal more difficult to pull off than one initially thinks. Hence the question: is it really impossible without access to native APIs currently, and if so, what might be the deal breaker? I saw news such as where both Apple and Google are showcasing their new framework capabilities. From those examples I gather plane detection is at least a very desirable usability feature, but I'm not sure if it's strictly needed. I don't know if BabylonJS can do plane detection, since it looks like it requires access to native capabilities; it could perhaps be achieved by opening a WebRTC camera (using BabylonJS as usual) and sending the frames to a backend, or by hosting BabylonJS inside a web frame in a native app and calling into it. What I think is more important is some sort of depth perception (which, I suppose, would mean access to lens properties such as focal length, or more than one camera, or some other sensor such as a laser?) so that once the beginning and end of some line have been marked, the length of the line could be calculated accurately enough. I wonder if the above is about the right kind of reasoning? Or, put otherwise, does someone have an idea how to implement a digital measuring ruler in BabylonJS and how to go about it? I see there's some new WebXR stuff going on in the GH repo and this might actually pave the way towards it. Naturally it'd be nice to work towards this already now. By the way, I've looked into examples like that combine some virtual object with a real world scene. Instead of a ball, it could be a ruler with one end fixed to some specific point. <Edit: Found also
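However the depth problem gets solved, the last step of the measuring-ruler idea is straightforward: once both endpoints exist as positions in a metric 3D space, the ruler's length is plain Euclidean distance. A minimal sketch; the plain objects below stand in for BABYLON.Vector3, which provides an equivalent static Distance() helper:

```javascript
// Length of the ruler between two marked endpoints in a metric
// (e.g. meters-based) 3D coordinate space. The hard part, getting
// metrically correct endpoint positions, is assumed solved upstream.
function distance(a, b) {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const dz = b.z - a.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

const start = { x: 0, y: 0, z: 0 };
const end = { x: 3, y: 4, z: 0 };
console.log(distance(start, end)); // 5
```

The accuracy of the result is entirely limited by how accurately the endpoints were placed, which is exactly where the plane detection / depth perception discussed above comes in.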