PIXI / WebGL Theory


blakjak44:

Hey Guys,

I'm still pretty new at this, but I have been playing with PIXI quite a bit this year (I want to thank the developers for all of your hard work; PIXI is amazing). While I have gotten pretty familiar with it, I am trying to dig deeper in an effort to optimize my application.

I have been going through the source code to try to wrap my head around how it all works under the hood, and it's starting to become clearer. My question is about how WebGL works under the hood and how PIXI manages textures. For example: I need to render around 50 1920x1920 16-bit images at a time (I posted on this before: https://www.html5gamedevs.com/topic/45138-what-does-this-sad-face-mean/?tab=comments#comment-250221). PIXI has done a pretty damn good job of this, but I'm sure I have a lot of room to optimize. I understand there are 16+ texture slots available, depending on the GPU. Does this mean the flow would be something like this: 16/50 textures are uploaded to the GPU --> 16 textures drawn --> textures purged from the GPU --> repeat until all textures are drawn? If that is the case, are the textures constantly being removed from the GPU and then re-uploaded for each frame? Or do the uploaded textures remain on the GPU, and PIXI just binds the necessary textures to the available slots before each draw call? How does GPU memory impact this process?

Hopefully that's clear, but I apologize if it isn't. Thanks!

ivan.popelyshev:

Short answer: no, it does not purge textures from GPU memory. 16 locations means we can use up to 16 textures in one draw call, and we can specify which of the already-uploaded textures they are.
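
In raw WebGL terms, the per-draw-call binding a renderer does looks roughly like this (a sketch only; `textures`, `program`, `indexCount`, and the `uSamplers` uniform name are placeholders, not PIXI internals):

// Bind each already-uploaded texture to one of the 16 texture units
// and point the shader's sampler uniforms at those units.
for (let i = 0; i < textures.length; i++) {
    gl.activeTexture(gl.TEXTURE0 + i);          // select texture unit i
    gl.bindTexture(gl.TEXTURE_2D, textures[i]); // attach an uploaded texture
    gl.uniform1i(gl.getUniformLocation(program, 'uSamplers[' + i + ']'), i);
}
gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);
// The textures themselves stay resident in GPU memory; only the
// unit-to-texture bindings change between draw calls.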

> 50 1920x1920 16-bit images at a time

ugh... which internal format do you want to use?

btw, you can capture one frame of WebGL commands using the SpectorJS extension (it doesn't work on the pixi-examples iframe)

blakjak44:

OK, that makes sense. And if I do end up exceeding GPU memory, does PIXI then start swapping textures as necessary?

So I've been able to get this working using the Mesh class with a custom fragment shader (actually a modified version of your demo: https://jsfiddle.net/flek/pct2qugr/175/). Since each texture actually represents a single channel of a multichannel image, I am using the fragment shader to generate pixels from multiple textures. Here is my approach:

let FS =
`#version 300 es

precision highp float;

// One single-channel (R16UI) texture per image channel.
uniform highp usampler2D uTexSampler0;
uniform highp usampler2D uTexSampler1;

// Per-channel display window (lows/highs) and tint color.
uniform float uHighs[2];
uniform float uLows[2];
uniform float uReds[2];
uniform float uGreens[2];
uniform float uBlues[2];
uniform float uAlpha;

in vec2 vTexCoord;
out vec4 outColor;

void main() {
    vec3 color = vec3(0.0, 0.0, 0.0);

    // Channel 0: clamp to the display window, normalize to [0, 1], tint,
    // and merge with max(). clamp() returns the clamped value, so it must
    // be assigned back.
    float R = float(texture(uTexSampler0, vTexCoord).r);
    R = clamp(R, uLows[0], uHighs[0]);
    R = (R - uLows[0]) / (uHighs[0] - uLows[0]);
    color = vec3(max(color[0], R * uReds[0]), max(color[1], R * uGreens[0]), max(color[2], R * uBlues[0]));

    // Channel 1: same treatment.
    R = float(texture(uTexSampler1, vTexCoord).r);
    R = clamp(R, uLows[1], uHighs[1]);
    R = (R - uLows[1]) / (uHighs[1] - uLows[1]);
    color = vec3(max(color[0], R * uReds[1]), max(color[1], R * uGreens[1]), max(color[2], R * uBlues[1]));

    outColor = vec4(color, 1.0);
}
`
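
For reference, a fragment shader like this can be wired to a mesh roughly as follows (a sketch assuming PIXI v5+ on a WebGL2 context, which `#version 300 es` and usampler2D require; `VS`, `geometry`, and the uniform values are placeholders):

// Sketch: attach the generated shader to a mesh. VS is a matching
// vertex shader that outputs vTexCoord; geometry holds the tile quad.
const shader = PIXI.Shader.from(VS, FS, {
    uTexSampler0: channel0Texture, // PIXI.Texture wrapping the R16UI data
    uTexSampler1: channel1Texture,
    uLows: [0, 0],                 // display window per channel
    uHighs: [65535, 65535],
    uReds: [1, 0],                 // channel 0 tinted red...
    uGreens: [0, 1],               // ...channel 1 tinted green
    uBlues: [0, 0],
    uAlpha: 1.0,
});
const mesh = new PIXI.Mesh(geometry, shader);
app.stage.addChild(mesh);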

The shader is generated dynamically depending on the number of channels. The format I'm using is this:

const resource = new SingleChannelBufferResource(data, {
    width: metadata.x_pixels,
    height: metadata.y_pixels,
    internalFormat: 'R16UI',
    format: 'RED_INTEGER',
    type: 'UNSIGNED_SHORT'
});
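
One caveat with integer formats like R16UI: WebGL2 cannot linearly filter integer textures, so NEAREST scaling is required. Whatever a custom resource like the one above does internally, the upload ultimately has to boil down to something like this (a raw WebGL2 sketch; `gl`, `width`, `height`, and `data` are placeholders, and the target texture is assumed to be already bound):

// Raw WebGL2 upload for one R16UI channel. Integer textures are not
// filterable, so min/mag filters must be NEAREST, and tightly packed
// 16-bit rows need a 2-byte unpack alignment.
gl.pixelStorei(gl.UNPACK_ALIGNMENT, 2);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texImage2D(
    gl.TEXTURE_2D, 0,
    gl.R16UI,           // internalFormat
    width, height, 0,
    gl.RED_INTEGER,     // format
    gl.UNSIGNED_SHORT,  // type
    data                // a Uint16Array with one channel's pixels
);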

I've attached an example image in which each tile has 2 channels. This has broken batching but I think I understand how to create a batch plugin for my use case if that will improve performance.

I saw your Spector.js recommendation from another post and gave it a try yesterday. It's really helpful!

The tiles are interactive and the point of this application is basically to just align the tiles. Framerate is actually pretty good when the canvas is small, but drops a lot if the canvas is large. The framerate doesn't seem dependent on interaction either, which I don't fully understand. Even if the user is not moving anything, framerate remains low.

Perhaps there are long calculations being run on my display objects every tick? Is there a way to optimize so that the larger canvas does not cause any issues? Do you think getting batching to work on my meshes would improve framerate?

Last question: I need to implement some custom blending on these tiles (I just need the max value for each pixel in overlapping regions). Do you think this is possible while still batching? I looked at your pixi-picture plugin but haven't gotten around to trying to implement it for my use case.

 

[Attachment: Screen Shot 2021-01-16 at 12.29.43 PM.png]

ivan.popelyshev:

The browser has a special "GPU process" that you can see in the Shift+Esc menu in Chrome. It holds all the uploaded textures, and if WebGL takes too much GPU memory, it somehow juggles the textures. That's how it's possible to have 3GB of textures on a video card that has only 2GB total.

PixiJS does not have mechanisms to count used memory. You can iterate through "renderer.texture.managedTextures" and count it yourself. The only solution I made for this was for a huge app in production, and it's not possible to share it.
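
A rough byte count over that list could look like this (a sketch only; it assumes uncompressed textures and ignores mipmaps and driver overhead, and `bytesPerPixel` is a per-format guess, e.g. 2 for R16UI, 4 for RGBA8):

// Rough estimate of GPU memory held by PixiJS-managed textures.
function estimateTextureBytes(renderer, bytesPerPixel = 4) {
    let total = 0;
    for (const baseTexture of renderer.texture.managedTextures) {
        total += baseTexture.realWidth * baseTexture.realHeight * bytesPerPixel;
    }
    return total;
}

console.log((estimateTextureBytes(app.renderer) / 1e6).toFixed(1) + ' MB (approx.)');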

PixiJS texture GC is primitive: if a texture was last used 3 or 4 minutes ago or something (I don't remember; look up "textureGC" in the sources), it asks WebGL to destroy it. We don't know whether the texture lives on the GPU or CPU side of WebGL, and we don't know the memory limit at which the whole WebGL context will crash; that's not possible to know in HTML5.

ivan.popelyshev:

> Perhaps there are long calculations being run on my display objects every tick?

Run the profiler for 10 seconds and look at the idle %. If it's high, then it's OK; if not, figure out what eats most of the time.

> Do you think getting batching to work on my meshes would improve framerate?

How many draw calls and shader switches do you have?

 

blakjak44:

OK, so it sounds like the specific memory management process is abstracted away. I guess that's one less thing for me to worry about. I think I may need to disable the GC in that case, because there will be situations where I need to toggle between tabs, and I wouldn't want the textures from one tab destroyed because of inactivity.
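
For what it's worth, the texture GC mentioned above can be switched off by putting it in manual mode (a one-line sketch using PIXI's GC_MODES constant):

// Stop the automatic texture GC so inactive tabs don't lose their textures;
// destroy textures explicitly once a tab's contents are gone for good.
app.renderer.textureGC.mode = PIXI.GC_MODES.MANUAL;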

I've run the profiler, and it looks like requestAnimationFrame is eating too much time. I'm using the built-in Application instance, and after looking into it more, the "Application.render" call on each tick is eating up all the time. So this must mean that rendering all of these tiles is too slow, correct?
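
One way to confirm that, and to avoid paying for frames where nothing changed, is to stop the shared ticker and render on demand (a sketch; the `dirty` flag and `markDirty` helper are hypothetical, to be called from your interaction handlers):

// Render-on-demand: stop the Application's automatic ticker and only
// re-render after something actually changed.
app.ticker.stop();

let dirty = true;
function markDirty() { dirty = true; } // call from pointer/drag handlers

function loop() {
    if (dirty) {
        dirty = false;
        app.renderer.render(app.stage);
    }
    requestAnimationFrame(loop);
}
requestAnimationFrame(loop);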

For this particular example I have 1 draw call for each tile, so 40 draw calls. I'm not sure what a shader switch is. Is it just a switch between shaders before each draw call if each mesh uses its own shader? If so, then there would also be 40 shader switches.

ivan.popelyshev:

... look in SpectorJS at how many times drawArrays/drawElements/bindShader are called. 40 is a small number.

> So this must mean that rendering all of these tiles is too slow, correct?

Yeah, but which part is it, pixel filling or what? Did you look at the bottom-up view? Did you look at whether it's the GPU process or JS that's eating the time?

blakjak44:

OK, so the number of draw calls itself isn't a concern. I won't worry about batching for now.

You know, maybe I'm misinterpreting the profiler. If I look at the summary, idle time is quite high. But what doesn't make sense to me is that idle time is so high, yet the framerate is pegged between 30 and 40 fps. I've attached a few screenshots of the profiler results.

Maybe there's a problem that jumps out at you? Or maybe there's nothing wrong and my laptop is just too shitty.

[Attachments: Screen Shot 2021-01-17 at 7.18.39 PM.png, Screen Shot 2021-01-17 at 7.23.36 PM.png, Screen Shot 2021-01-17 at 7.30.58 PM.png]

blakjak44:

OK good!

Just using a generic Application instance for now, created with nothing but width and height options of about 1500x1500. No antialiasing.

I'm using a 2012 MacBook Pro with integrated Intel HD 4000 graphics (most users will obviously be on much better hardware, but I'd like to optimize as much as possible for lower-end machines just in case).

ivan.popelyshev:

Yes, a modification of the v4 branch of pixi-picture should help; however, it works only if everything is drawn into a renderTexture, like a filter or something. I know how to make it work with the main framebuffer now, but I didn't add it to the v4 version of pixi-picture. Basically, it needs "transparent: true" in the renderer params, and the safeguard removed from the filterManager mixin.

 

What blend mode do you need?

(5 weeks later)

blakjak44:

@ivan.popelyshev

I actually found a solution to my blending issue using native WebGL blend modes. This is what I do:

const gl = app.renderer.gl

// Register a custom MAX blend mode. The 6-element form is
// [srcRGB, dstRGB, srcAlpha, dstAlpha, rgbEquation, alphaEquation];
// gl.MAX requires WebGL2 (on WebGL1 it comes from EXT_blend_minmax).
PIXI.BLEND_MODES.MAX = app.renderer.state.blendModes.push([gl.ONE, gl.ONE, gl.ONE, gl.ONE, gl.MAX, gl.MAX]) - 1

/* ... */

sprite.blendMode = PIXI.BLEND_MODES.MAX

Is there any reason this is not already in PIXI, or that I should not be doing this?
