
Hi everyone,

I found quite a few threads that discuss this topic in one way or another, but none of them really helped me understand what I'm still missing :(

I want to use vector map tiles (like these: https://mapzen.com/projects/vector-tiles/) on a sphere, so it's a globe. For that matter, it could also be OSM XML, TopoJSON, or things like that.

The thing I don't really get is the step from the JSON to the image. The examples I found either use image map tiles or don't explain how the conversion was done.

There are libraries like Mapbox, OpenLayers, and Leaflet. Would I combine those with Babylon and use them for the geo part of things?

Or do I really have to write my own 'kind of library' that takes the data from the GeoJSON and draws the lines and polygons itself?

I don't care about extrusion or a 3D effect. I want a sphere with e.g. borders and roads mapped onto it, and it should be efficient, which is why I think vectors would suit this better than images. I also like the adaptive level of detail that map tiles provide.

It would be great to get some opinions.

thanks, Silvio


Hi Silvio, welcome to the forum.  Boy, you sure know how to ask big questions.  :)

The BabylonJS dynamicTexture is the tool to use... to map context2d images onto spheres.  And, there might be some stretching/distortion.

But... there's one detail that you have not disclosed.  Do you want that sphere/texture to be zoom-able, and have varying levels-of-detail (lod), depending upon viewing distances? 

Do you want it to have layers/layer filters (choose to show ONLY roads or show ONLY buildings, etc)?

If so, that is a whole new challenge, and will likely require generating a new dynamicTexture with every change.

Of course, a BJS camera can zoom on a dynamicTexture mapped-onto a sphere, no problems. 

But, according to my short study of the Tangram simple demo,  about every 4th mouse-wheel "notch"... the LOD changes.  At each LOD change, the tiles likely need AT LEAST re-parsing, and perhaps need re-requesting from the server.  Conceivably, this could cause some gruesome delays at each LOD-change point, as the context2d Image() is redrawn from the new tiles, and then applied to the BabylonJS dynamicTexture.
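One way to picture that LOD trigger: a small, entirely hypothetical helper (these names are not a Tangram or BabylonJS API) that maps the camera's distance to the sphere onto a slippy-map zoom level, so the "every 4th mouse-wheel notch" behavior becomes a threshold you can test against.

```javascript
// Hypothetical sketch: pick a slippy-map zoom level from the camera's
// distance to a unit sphere. Each halving of the camera's altitude above
// the surface gains one zoom level, like slippy maps do. All names and
// the scale constant (8) are illustrative assumptions, not a real API.
function zoomForDistance(cameraDistance, sphereRadius = 1, maxZoom = 16) {
  const altitude = Math.max(cameraDistance - sphereRadius, 1e-6);
  const zoom = Math.floor(Math.log2((sphereRadius * 8) / altitude));
  return Math.min(Math.max(zoom, 0), maxZoom); // clamp to valid tile zooms
}
```

You would call this on every mousewheel event and only re-request/re-parse tiles when the returned integer actually changes, which is exactly the "IF/WHEN" bookkeeping discussed below.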

Perhaps Mapzen tiles are retrieve-once, recalc-many-times. But I've got a feeling that, with each LOD change, new tiles need to be requested from the server. Perhaps you can enlighten us about that.

WHEN do Google Maps and similar things retrieve new tiles during zoom in/out? Dunno. I would assume it happens whenever "better data" is available. Walkabout takes about 4 seconds to complete its redraw when a fast zoom-in/out happens. Perhaps add another 2-3 seconds to create a new BJS dynamicTexture from the imageData of a Walkabout screen. So, you might be looking at 7 seconds for each LOD re-draw of the sphere dynamicTexture. Perhaps not acceptable, but I'm no expert.

SO, in general, I have no decent advice at all.  Sorry.  If you simply want to map a "freeze-frame" of a world-sized mapzen-generated image... onto a sphere, using a BJS dynamicTexture, then that should be fairly easy to do, and we would be glad to ATTEMPT to show you how.  But if you want to zoom on the sphere, and use layers, in similar ways as we see in the Tangram demo, then that is one hell of a monster project.

As for learning HOW to convert tiles to images/sub-images (portions of a context2d image)... I think learning Tangram is the best first step.  Go touring through its source, and search for occurrences of... oh... 'context2d' and 'new Image()'... things like that.  Once you get the tiles into an image format, THEN it is ready for use in a BJS dynamicTexture (likely base64 format).

Take a look at this playground:  https://www.babylonjs-playground.com/#22FWI5#16

Line 77 has a base64 image, line 64 has a dynamicTexture, and line 65 gets the context2d for that dynamicTexture.  You'll see that 'ctx' used in other places, too.  The context2d is how you "mess with" a dynamicTexture's data.  With a context2d, you can paint, draw, move, and overwrite pixels, and "insert" little patches of image (tile images) into the BIG image being used for the dynamicTexture.  This playground shows one other cool thing at the top: how to include an external JS file, such as tangram.min.js, into the playground.
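For the "insert little patches into the BIG image" step, the arithmetic is just tile index times tile size. A hedged sketch: the helper below is pure math, and the commented-out calls sketch how it would be used with a BabylonJS DynamicTexture's context2d (assuming the usual 256px web-map tiles).

```javascript
// Where to blit a 256px tile into one big canvas used as the
// dynamicTexture's source image. tileX/tileY are slippy-map tile
// indices at the given zoom; worldSize is the full image edge length.
const TILE_SIZE = 256;

function tilePixelOffset(tileX, tileY, zoom) {
  const worldSize = (1 << zoom) * TILE_SIZE; // 2^zoom tiles per axis
  return { x: tileX * TILE_SIZE, y: tileY * TILE_SIZE, worldSize };
}

// In the browser, with a BabylonJS DynamicTexture, roughly:
//   const ctx = dynamicTexture.getContext();
//   const { x, y } = tilePixelOffset(tx, ty, z);
//   ctx.drawImage(tileImage, x, y);   // paste the tile patch
//   dynamicTexture.update();          // push pixels to the GPU
```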

But remember, you still don't have a system for the zooming/LOD stuff... one that measures camera-to-sphere distance and determines IF/WHEN it is time to retrieve fresher tiles and re-build the dynamicTexture's context2d image.  Likely, you'll need to determine WHEN/HOW the Tangram demos (basic and Walkabout) decide that new/re-parsed tiles are needed, and trigger THAT with the camera-to-sphere distance value.  Both types of zooming... BJS ArcRotateCamera AND Walkabout... are mousewheel-based, so it's all possible... but perhaps not so easy.

I hope this helps.  It probably isn't wonderful news for you, though.  Stay tuned, others may comment soon, and perhaps wiser.  I'll be nearby, too... feel free to tell more thoughts... we're listening.  Again, if you don't need LOD/details/layers to change... when zooming in-to/out-from the sphere, then the task is much easier.  Do tell us about that part, if you please.  thx.  Party on!


Hi Wingnut,

thank you for this already very helpful response! 

It's still the early phase of the project I'm attempting, and right now I'm trying to figure out which general options are on the table.

"Do you want that sphere/texture to be zoom-able, and have varying levels-of-detail (lod), depending upon viewing distances?" 

-> yes

"Do you want it to have layers/layer filters (choose to show ONLY roads or show ONLY buildings, etc)?"

-> maybe not; I guess it could be fixed which layers are shown at a specific zoom (e.g. only borders when far away, but roads etc. as well when looking closer)

Is it wrong to assume that if I used a texture on the sphere, I might as well just use the 'image map tiles' that the map-tile providers offer? The idea is to use the vectors to get rid of the images (because there is also a transparency aspect to my idea :))

Almost always, the map would be zoomed in so close that it wouldn't look like a sphere/globe, but I thought it would be good to have one as the underlying system, so that you can 'travel around the world'.

So the other approach I can think of is to use the coordinates I get in the JSON and create the lines for the roads from them myself. I would need to convert from lat/long to my sphere's coordinate system.
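(The lat/long-to-sphere step mentioned here is standard spherical math. A minimal sketch, assuming a y-up axis convention like BabylonJS's default and returning plain objects rather than Vector3s; the function name is illustrative, not a library API.)

```javascript
// Convert geographic lat/long (degrees) to a point on a sphere of the
// given radius. y-up: the north pole (lat 90) lands on the +y axis.
function latLonToXYZ(latDeg, lonDeg, radius = 1) {
  const lat = (latDeg * Math.PI) / 180;
  const lon = (lonDeg * Math.PI) / 180;
  return {
    x: radius * Math.cos(lat) * Math.cos(lon),
    y: radius * Math.sin(lat),
    z: radius * Math.cos(lat) * Math.sin(lon),
  };
}
```

Road/border vertices run through this one by one; in BabylonJS you would wrap each result in a `BABYLON.Vector3` and feed the arrays to a line mesh.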

Do you see other issues with that approach? (except performance probably)

Thank you for your welcoming attitude, it's very appreciated!

 


Hey einSelbst,

Having worked both with BabylonJS and geo libs such as OpenLayers, I thought I'd drop by to help.

What you describe is pretty much what Cesium does: https://www.cesium.com/open-source/

Unsurprisingly, rendering geographic data in multiple formats on a globe, with varying LOD etc., is a complex task. Vector tiles, for example, are a data format that holds much more than just geometry (they also describe all the geographic features inside them), and asking a webservice to stream down data at the appropriate LOD is also a complex task. Geo libs exist because of that.
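To make the "request the right data at the right LOD" part concrete: web tile services conventionally use the OSM "slippy map" z/x/y scheme, where the tile covering a given lon/lat at a zoom level comes from the standard Web Mercator formulas. A minimal sketch:

```javascript
// Standard OSM slippy-map tile indexing: which z/x/y tile covers a
// given longitude/latitude (degrees) at the given zoom level.
function lonLatToTile(lonDeg, latDeg, zoom) {
  const n = 2 ** zoom; // tiles per axis at this zoom
  const x = Math.floor(((lonDeg + 180) / 360) * n);
  const latRad = (latDeg * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y, zoom };
}
```

The resulting `{zoom}/{x}/{y}` triple is what gets substituted into a tile URL template; a geo lib handles this (plus caching and cancellation of in-flight requests) for you.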

Also you probably won't be able to "slap" a map renderer such as Leaflet on a 3D mesh as the map itself is already a WebGL scene with all kinds of events and stuff going on inside it.

To answer your second post: you could theoretically write a custom application that renders a globe with vector features on top of it, based on data received from services such as Mapzen. Most geo formats are well documented, and you could definitely parse them yourself. The variable-LOD thing will make things much more complex, IMO.

Be aware that you won't receive vector data in the form of a nice series of polygons or lines that you can just pass on to BabylonJS, though :) Geometries can be Polygons, MultiPolygons, LineStrings, MultiLineStrings, Points, MultiPoints, or GeometryCollections; polygons can have holes in them; you may have to reproject coordinates... but it is doable for sure.
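That geometry-type zoo can be tamed with a small dispatcher that normalizes every GeoJSON geometry (per the type names above) into a flat list of coordinate sequences before any rendering happens. A hedged sketch in plain JS, no library assumed; hole handling and reprojection are deliberately left out:

```javascript
// Flatten any GeoJSON geometry into an array of coordinate sequences
// (each sequence is an array of [lon, lat] positions) that a line or
// polygon renderer could consume one by one.
function flattenGeometry(geom) {
  switch (geom.type) {
    case "Point":            return [[geom.coordinates]];
    case "MultiPoint":
    case "LineString":       return [geom.coordinates];
    case "MultiLineString":
    case "Polygon":          return geom.coordinates; // rings, incl. holes
    case "MultiPolygon":     return geom.coordinates.flat();
    case "GeometryCollection":
      return geom.geometries.flatMap(flattenGeometry);
    default:                 return []; // unknown type: skip
  }
}
```

Each flattened sequence would then be reprojected and converted to sphere coordinates before being handed to BabylonJS.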

Hope that makes things clearer for you :) don't hesitate to ask if I was not clear enough.

