Siberia

RenderTexture vs Filters performance


Hello !

I have a question about performance, concerning RenderTexture and/or filters, for a specific case.

The context:

Our canvas is a big container with a lot of layers. Here is the rendering order:
1 - Background image layer (a huge texture)
2 - A tile layer, a container that holds x sprites (furniture, etc.)
3 - A character layer, a container that holds x sprites controlled by players
4 - A lighting layer, a container that holds individual animated light sources AND vision sources for characters (PIXI meshes and custom shaders)
5 - A controls layer that holds x PIXI.Graphics objects

Some characters have nightvision; it wouldn't be a problem if their nightvision weren't grayscale.
To handle the grayscale, we need to render the background layer, the tile layer and the character layer in gray inside the field of vision.
The options we are considering:
    1. Create a RenderTexture from layers 1/2/3 (only when 1/2/3 have changed), and process the texture in layer 5 in a PIXI.Mesh with custom fragment and vertex shaders.
    2. Create a RenderTexture from layer 1 only (only when 1 has changed), and use filters on individual sprites in 2/3, only when necessary. Often it would be fewer than 15 sprites, but sometimes more. :)
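To make the caching idea behind option 1 concrete, here is a minimal plain-JS sketch (no PixiJS; the names `CachedLayerTexture` and `renderLayers` are illustrative, with `renderLayers` standing in for a real call like `renderer.render(container, { renderTexture })`):

```javascript
// Dirty-flag cache: the combined texture for layers 1/2/3 is regenerated
// only when one of them is flagged as changed.
class CachedLayerTexture {
  constructor(renderLayers) {
    this.renderLayers = renderLayers; // the expensive render, called lazily
    this.dirty = true;                // force a first render
    this.texture = null;
  }
  invalidate() { this.dirty = true; } // call when layer 1/2/3 changes
  get() {
    if (this.dirty) {
      this.texture = this.renderLayers();
      this.dirty = false;
    }
    return this.texture;
  }
}

// Usage: the expensive render runs once, then only after invalidate().
let renders = 0;
const cache = new CachedLayerTexture(() => ({ id: ++renders }));
cache.get(); cache.get(); cache.get(); // still only one render
cache.invalidate();
cache.get();                           // second render
```

The point is that the heavy layers are paid for only on change, while the grayscale mesh in layer 5 samples the cached texture every frame for free.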
    
Above all, we are looking for the best performance.
Option 1 has big advantages, but an acquaintance tells me the filter option would probably be more efficient. That "probably" is the problem.
It is also true that layers 1/2/3 can be particularly heavy, with huge textures and a lot of sprites.
Do you have any advice on which option to choose?
Thanks.


Probably option 1. Filters use temporary renderTextures internally to process stuff.

Also, that kind of setup is easy to build with pixi-layers: https://pixijs.io/examples/#/plugin-layers/lighting.js . It has a special "layer.getRenderTexture()" feature: just swap your container for a Layer (no need to set parentLayer yet) and use it.


Oh, thanks Ivan!

I could steal some code from pixi-layers; we just need the getRenderTexture method.
I need:
- LayerTextureCache (without double-buffer support)
- LayerTextureCache handling in our own PIXI.Container subclass, especially in the render method

Did I miss anything?


Hey, it's working fine! :)

I just need to calculate a matrix and pass it to the vertex shader to position the texture correctly. By the way, is there a method to calculate that matrix automatically based on the properties of the target container?

Another point: you have to make two render calls to display the layer on the screen (in addition to generating the cache). I was wondering if it would be better to use the rendered texture in a sprite rather than calling render twice?

And thanks again Ivan!

And thanks again Ivan!

Edited by Siberia


is there a method to calculate a matrix automatically based on the properties of the target container?

Need more details. I've done it many times, but I don't understand why you need it; for lighting layers it's always screen-sized.
 

I was wondering if it would be better to use the rendered texture in a sprite rather than calling render twice?

It's the same. The difference is that you have to be tricky to call the render() method inside itself; that's why my code in pixi-layers binds the RT and then restores the previous texture. Also, if you get the order in the tree wrong, like sprite first and layer second, you'll see the previous frame in the sprite :)
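A rough plain-JS sketch of that bind-and-restore trick, with a mock renderer standing in for the real PixiJS one (all class and method names here are illustrative, not the actual pixi-layers code):

```javascript
// MockRenderer tracks which render target is currently bound.
class MockRenderer {
  constructor() { this.currentTarget = 'screen'; this.log = []; }
  bind(target) { this.currentTarget = target; }
  draw(what) { this.log.push(`${what}->${this.currentTarget}`); }
}

// Layer.render saves the current target, redirects rendering into its own
// RenderTexture, then restores the previous target so the rest of the tree
// still goes to the screen.
class Layer {
  constructor() { this.renderTexture = 'layerRT'; }
  render(renderer) {
    const previous = renderer.currentTarget; // save whatever we render to
    renderer.bind(this.renderTexture);       // redirect into the cache RT
    renderer.draw('children');               // recursive render of children
    renderer.bind(previous);                 // restore the outer target
  }
}

const renderer = new MockRenderer();
new Layer().render(renderer);
renderer.draw('restOfTree');
```

Without the save/restore, everything after the layer would keep rendering into the layer's RenderTexture instead of the screen, which is exactly the wrong-order symptom described above.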

Edited by ivan.popelyshev

1 hour ago, ivan.popelyshev said:

Need more details. I've done it many times, but I don't understand why you need it; for lighting layers it's always screen-sized.
 

Here's an example.
The background is a container holding n containers, which all have a zIndex:
- The background: natural elements, objects, etc.
- n characters
- n lines of sight (PIXI meshes, bound to characters)

Basically, I need to get a render texture from the background and pass it to the meshes (n meshes).
The meshes are in fact quads, with specific shaders to render the line of sight.
I just need to pass a portion of the render texture to each mesh, where the texture will be rendered in grayscale.

[screenshot attached]


Oh, right, you need normalized coords. Add the screen width/height to the uniforms and divide by them in the vertex shader. Do not use the "gl_FragCoord" thingy, because you don't really know where you are rendering; there might be a filter on top of everything ;)
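In plain-JS terms, that normalization amounts to the following sketch (`transformPoint` mimics a mat3 transform with a PixiJS-style column-major matrix; `canvasDimensions` mirrors the uniform name used in this thread, and the sample values are made up for illustration):

```javascript
// Apply a 3x3 column-major 2D affine matrix to a point:
// m = [a, b, 0, c, d, 0, tx, ty, 1]
function transformPoint(m, x, y) {
  return [m[0] * x + m[3] * y + m[6], m[1] * x + m[4] * y + m[7]];
}

// Screen-space position divided by canvas size gives 0..1 sampler coords.
function samplerUv(translationMatrix, vertex, canvasDimensions) {
  const [sx, sy] = transformPoint(translationMatrix, vertex[0], vertex[1]);
  return [sx / canvasDimensions[0], sy / canvasDimensions[1]];
}

// A vertex at local (100, 50), translated by (200, 100), lands at screen
// (300, 150); on an 800x600 canvas that is uv (0.375, 0.25).
const translateBy200x100 = [1, 0, 0, 0, 1, 0, 200, 100, 1];
const uv = samplerUv(translateBy200x100, [100, 50], [800, 600]);
```

This is exactly what `(translationMatrix * vec3(aVertexPosition, 1.0)).xy / canvasDimensions` computes in the vertex shader.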

Edited by ivan.popelyshev


Ok, so, I think I have a brain lock. This is often the case when I'm working with matrices and projections 😄

Here's the vertex shader:

  precision mediump float;
  attribute vec2 aVertexPosition;
  attribute vec2 aUvs;
  uniform mat3 translationMatrix;
  uniform mat3 projectionMatrix;
  uniform vec2 canvasDimensions;
  varying vec2 vUvs;
  varying vec2 vSamplerUvs;
  
  void main() {
      vUvs = aUvs;
      vSamplerUvs = ((translationMatrix * vec3(aVertexPosition, 1.0)).xy - (mesh position?)) / (canvasDimensions?);
      gl_Position = vec4((projectionMatrix * translationMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
  }

 


It's me again with a little question. 😅

I'm trying to map my UV coords to the sampler coords in my fragment shader, but I have a problem when I move the mesh or zoom in/out.
I tried this:
- Passing meshDimensions as a uniform to the fragment shader
- Passing translationMatrix as a varying to the fragment shader

To map my UV coords to the sampler coords, I'm doing this in the fragment shader:

vec2 mappedCoord = (vec3(uv * meshDimensions, 1.0) * translationMatrix).xy / canvasDimensions;

But it doesn't work... I know how to do this in a custom filter shader, but not for a mesh.

 

Edited by Siberia

6 minutes ago, ivan.popelyshev said:

You overcomplicated things. I don't know why you need meshDimensions in the first place. Why didn't "translationMatrix * aVertexPosition / canvasDimensions" work for you?

At this point I have to ask you to make a minimal demo :)

Yeah, it works well 😅
but with some meshes I need to apply specific effects, like waving the sampler from the center of the mesh. So in this specific case, I just need to map the mesh UV coords to the sampler coords.
I'm preparing a demo 😀

