
same shader with different uniforms?


timetocode

Hello! Loving v5, it's great!

I've been trying to render an extremely large map of tiles in smaller chunks that load in as the player moves around. I've done this with sprites and a render texture per chunk and it works fine, but now to learn some webgl I'm porting the project to use pixi Mesh + Shader.

I've gotten to the point where I have the vertices and uv coords for a 16x16 chunk of tiles (skipping the code, nothing special). I then create a mesh per chunk of tiles and position them around. Code looks like this:

const shader = PIXI.Shader.from(vertexSrc, fragmentSrc, uniforms);
const chunk = new PIXI.Mesh(geometry, shader); // etc for many chunks

and then I just change the chunk.x and chunk.y.

Now what I'm trying to do next is to show a different texture for each tile within each chunk, for which I'm using a collection of tileTypes (either an array uniform or a texture uniform) holding the data for the 256 tiles that comprise the 16x16 chunk. I hope that makes sense.

In any case, because all of the chunks share the same geometry and the same shader, if I change `chunk.shader.uniforms.tileType` it changes all of the chunks at the same time. If I create a new shader for each chunk so that each has a unique uniforms object, it ends up being an expensive operation that creates a visual hitch. I could probably create a pool of meshes and shaders and reuse them so that I wouldn't have to create new shader objects at runtime as chunks load into the game, but before going down that path I wanted to know if there is an easier way. Can I somehow create meshes that share the geometry, vertexShader, and fragmentShader, but have a *unique* uniforms object per instance of the mesh?

 

Thanks :D


I was testing a little more, and while creating a new shader every time is a bit expensive, creating new geometry from the same vertices + uvs is cheap, and then I can put the data in an attribute on that geometry instead of in a uniform. I'm new to webgl so I'm not sure if that is the right way, but it seems promising.
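Here is a minimal sketch of that idea, assuming `positions`, `uvs`, and `indices` are the typed arrays already built for the chunk layout, and `tileTypes` is the per-chunk tile data (the attribute names are mine, matched to whatever the vertex shader declares):

const positionBuffer = new PIXI.Buffer(positions); // shared by every chunk
const uvBuffer = new PIXI.Buffer(uvs);             // shared by every chunk

function makeChunkGeometry(tileTypes) {
    // expand one value per tile to one value per vertex (4 verts per quad)
    const perVertex = new Float32Array(tileTypes.length * 4);
    for (let i = 0; i < tileTypes.length; i++) {
        perVertex.fill(tileTypes[i], i * 4, i * 4 + 4);
    }
    return new PIXI.Geometry()
        .addAttribute('aVertexPosition', positionBuffer, 2)       // shared
        .addAttribute('aUvs', uvBuffer, 2)                        // shared
        .addAttribute('aTileType', new PIXI.Buffer(perVertex), 1) // unique
        .addIndex(indices);
}

Each geometry then shares the two big buffers and only allocates the small per-tile one.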


Btw, programs are cached, so it's fine to create the same shader many times.
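For example (a sketch; the uniform name is just a placeholder), creating a shader per chunk only pays the program compile cost once:

const makeChunk = (tileTypes) => {
    // Shader.from reuses the cached program after the first call;
    // each call still gets its own uniforms object
    const shader = PIXI.Shader.from(vertexSrc, fragmentSrc, { tileTypes });
    return new PIXI.Mesh(geometry, shader);
};

const chunkA = makeChunk(tileTypesA);
const chunkB = makeChunk(tileTypesB);
// chunkA.shader.uniforms.tileTypes is now independent of chunkB's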

> Now what I'm trying to do next is to show a different texture for each tile within each chunk, for which I'm using a collection of tileTypes (either an array uniform or a texture uniform) holding the data for the 256 tiles that comprise the 16x16 chunk. I hope that makes sense.

OK, so, there are three ways:

1. A multi-texturing shader, with a textureId stored somewhere per tile. It's the https://github.com/pixijs/pixi-tilemap approach.

2. Put the types of tiles in uniforms and the tile instances in an attribute. Maybe use instancing there - it needs to be done in the vertex shader.

3. Draw one big 256x256 quad and work out the current pixel's tile in the fragment shader (sketched just below) - it works well on modern devices, but the speed is the same as pixi-tilemap's - you will have dependent texture reads.
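A rough sketch of approach 3's fragment shader, assuming an 8x8 chunk, a tile-id data texture, and an atlas laid out as a horizontal strip of 16 tiles (the names uMap, uAtlas, vUvs and the layout are all assumptions):

const fragmentSrc = `
    varying vec2 vUvs;        // 0..1 across the whole chunk quad
    uniform sampler2D uMap;   // 8x8 data texture, tile id in the red channel
    uniform sampler2D uAtlas; // atlas as a horizontal strip of 16 tiles
    void main() {
        vec2 tile  = floor(vUvs * 8.0);              // which tile we are in
        float id   = texture2D(uMap, (tile + 0.5) / 8.0).r * 255.0;
        vec2 local = fract(vUvs * 8.0);              // uv inside that tile
        gl_FragColor = texture2D(uAtlas,             // the dependent read
            vec2((id + local.x) / 16.0, local.y));
    }`;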

I just can't describe it all in a short post, and I'm very tired, so please make some kind of demo first that doesn't work, with a minimum number of lines - and I'll try to fix it later.


Okay so I only ended up making two fully functioning implementations, though I did go down a few paths just to benchmark varying techniques. I have not yet done any techniques with instancing (I've only been doing webgl for a few days and need to study some more). Hopefully someone finds this useful.

So a few details about the underlying map data before getting into the specific techniques.

The game is networked (nengi.js) and sends chunks to the game client as the player nears them (like minecraft more or less, but just 2D). Each chunk is an array of tiles; it happens to be a 1D array with some math to turn it back into 2D, but this detail probably doesn't matter. The map as a whole is a sparse collection of chunks, which also doesn't matter too much, but it means that the chunks can be generated one at a time -- the whole map doesn't have to be filled in and exist when the game starts. The clientside chunk graphics generator, for both techniques below, would only generate one chunk per frame and would queue the rest, so as to avoid ever generating too many graphics in a single frame and causing a noticeable hitch (sounds fancier than it is).
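A sketch of that queue, with assumed names (app is the PIXI.Application, buildChunkGraphics is whatever builds a chunk's display object):

const pendingChunks = [];

// the network handler only enqueues; no graphics work happens here
function onChunkReceived(chunkData) {
    pendingChunks.push(chunkData);
}

// build at most one chunk's graphics per rendered frame
app.ticker.add(() => {
    if (pendingChunks.length > 0) {
        buildChunkGraphics(pendingChunks.shift());
    }
});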

Let's say that the chunks are 8x8 tiles, and each tile is 16x16 pixels (I tested many variants). The network data for receiving a chunk then contains the chunk coordinates and 64 bytes. If not using a network, or using different dimensions, this would vary, but I'm going to stick with these numbers for the examples. The techniques were benchmarked on two computers which I will call the chromebook (acer spin 11, Intel HD 500) and the gaming rig (ryzen 1700, 1070 ti).

The first experiment uses render textures. It receives the 64 tiles, creates 64 sprites according to the tile types, and then bakes the whole thing into a single texture. That chunk sprite is then positioned as needed (probably at x = chunkX * chunkWidthInPixels, and similarly for y). On the gaming rig, many varieties of chunk and tile sizes and multiple layers of tiles could be baked without any hitches. The chromebook was eventually stable at 8x8 chunks with 3 layers of tiles, but anything bigger than that produced notable hitches while generating the chunk graphics.
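A minimal sketch of the bake step, assuming 8x8 tiles of 16x16 px and a tileTextures lookup from tile type to texture (both names hypothetical):

function bakeChunk(renderer, tiles) { // tiles = the 64 tile types
    const container = new PIXI.Container();
    for (let i = 0; i < 64; i++) {
        const sprite = new PIXI.Sprite(tileTextures[tiles[i]]);
        sprite.position.set((i % 8) * 16, Math.floor(i / 8) * 16);
        container.addChild(sprite);
    }
    const baked = PIXI.RenderTexture.create({ width: 128, height: 128 });
    renderer.render(container, baked); // 64 sprites collapse into 1 texture
    return new PIXI.Sprite(baked);     // position at chunkX * 128, etc.
}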

It is also worth mentioning that the above technique *minus baking the tiles* is probably what everyone makes first -- it's just rendering every single tile as a sprite without any optimization beyond what pixi does by default. On a fast computer this was actually fine as is! Where this one runs into trouble is the regular per-frame rendering cost on a chromebook-level device... it's simply too many sprites to keep scrolling around every frame.

The second experiment was to produce a single mesh per chunk with vertices and uvs for 64 tiles. The geometry is created after receiving the network data, and the tiles in the mesh are mapped to their textures in a texture atlas. As far as webgl options go, this one felt relatively simple. The performance was roughly 4-6x faster than the render texture scenario (which already worked on a chromebook, barely), so altogether I was happy with this option. I had read that there would be issues with the wrapping mode and a need to create gutters in the texture atlas to remove artifacts from the edges of the tiles as they rendered, and I'm not sure if I will need to address these later. My technique for now was to make sure the pixi container's coordinates were always integers (Math.floor), and this removed the artifacts (mostly stripes) that appeared at the edges of the tiles due to their texture sampling.
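A sketch of that geometry build, assuming the same 8x8 chunk of 16px tiles and an atlas that is a horizontal strip of square tiles (the names and the atlas layout are assumptions):

function buildChunkGeometry(tiles, atlasTileCount) {
    const verts = [], uvs = [], indices = [];
    for (let i = 0; i < 64; i++) {
        const x = (i % 8) * 16, y = Math.floor(i / 8) * 16;
        const u0 = tiles[i] / atlasTileCount;       // atlas uv range
        const u1 = (tiles[i] + 1) / atlasTileCount; // for this tile type
        const v = verts.length / 2; // first vertex index of this quad
        verts.push(x, y, x + 16, y, x + 16, y + 16, x, y + 16);
        uvs.push(u0, 0, u1, 0, u1, 1, u0, 1);
        indices.push(v, v + 1, v + 2, v, v + 2, v + 3);
    }
    return new PIXI.Geometry()
        .addAttribute('aVertexPosition', verts, 2)
        .addAttribute('aUvs', uvs, 2)
        .addIndex(indices);
}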

That's all I've tried so far, but I'm pretty satisfied with both the render texture technique and the webgl technique. I'll probably stick to the mesh+webgl version as I'm trying to use webgl more in general.


> The game is networked (nengi.js) and sends chunks to the game client as the player nears them (like minecraft more or less, but just 2D).

Hello, minecraft modder and creator of gameofbombs.com here :) canvas2d at 1920x1080 @ 60 FPS, back in 2013:
https://www.youtube.com/watch?v=az5S9oQKXIQ

> Let's say that the chunks are 8x8 tiles, and each tile is 16x16 pixels (I tested many variants).

I usually go for 256x256 per render texture.

> The chromebook was eventually stable at 8x8 chunks with 3 layers of tiles, but anything bigger than that produced notable hitches while generating the chunk graphics.

Three layers? Oh right, you want characters between them. I used 3 layers in the first versions too, then switched to one.

> The second experiment was to produce a single mesh per chunk with vertices and uvs for 64 tiles.

Yep, pixi-tilemap is good for that. It doesn't have rotations yet, but I'm about to add them this week. It also takes care of edges, however that costs a 1.5x penalty in the fragment shader - dependent reads (modifying the uv's inside the fragment shader). Yes, Math.floor can take care of that.

> That's all I've tried so far, but I'm pretty satisfied with both the render texture technique and the webgl technique. I'll probably stick to the mesh+webgl version as I'm trying to use webgl more in general.

Same conclusion as mine. Extra canvases worked even in canvas2d back then. When I implemented webgl meshes in 2013 they showed the same performance; now I believe meshes are faster, but I haven't tried instanced attributes yet - those can shrink the attribute buffer significantly.

 
