blakjak44

Members
  • Content Count: 15

About blakjak44
  • Rank: Member


  1. No idea, haven't tried other frameworks! 😉 Well this is working fine for me. Just wanted to make sure there wasn't some reason this would break PIXI in some way.
  2. @ivan.popelyshev I actually found a solution to my blending issue using native WebGL blend modes. This is what I do:

     const gl = app.renderer.gl
     PIXI.BLEND_MODES.MAX = app.renderer.state.blendModes.push([gl.ONE, gl.ONE, gl.ONE, gl.ONE, gl.MAX, gl.MAX]) - 1
     /* ... */
     sprite.blendMode = PIXI.BLEND_MODES.MAX

     Is there any reason this is not already in PIXI, or any reason I should not be doing this?
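     One caveat, if I understand correctly: this relies on a WebGL2 context, since that's where gl.MAX is defined. Rough sketch of a WebGL1 fallback (untested on my end):

     // gl.MAX only exists on WebGL2; on WebGL1 the MAX blend equation
     // comes from the EXT_blend_minmax extension, when supported.
     const ext = gl.getExtension('EXT_blend_minmax');
     const MAX_EQUATION = (gl.MAX !== undefined) ? gl.MAX : (ext && ext.MAX_EXT);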
  3. Hmm, OK - are these modifications you plan on adding soon, or should I try it myself? For now my case is simple: I just need the max value of each channel, so I was planning on just adding that to the shader parts, roughly like the sketch below.
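     Something like this is what I had in mind - a per-channel max of source and backdrop (illustrative only; I haven't checked pixi-picture's actual shader hook names):

     const maxBlendGLSL = `
         // Per-channel MAX of the incoming fragment and the backdrop.
         vec4 blend(vec4 source, vec4 backdrop) {
             return max(source, backdrop);
         }
     `;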
  4. @ivan.popelyshev I actually have one more question. I need to implement custom blending for when tiles are overlapping. Will your pixi-picture plugin work for this? Are there any caveats I should be aware of, or is it as simple as just setting the blendMode on the sprite?
  5. Great. I think I just needed confirmation that I am not botching anything. Thanks!
  6. OK good! I'm just using a generic Application instance for now, created with nothing but width and height options of about 1500x1500, and no antialiasing. I'm on a 2012 MacBook Pro with integrated Intel HD 4000 graphics 😞 (most users will obviously be on much better hardware, but I'd like to optimize as much as possible for low-end hardware just in case).
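     For reference, the setup is essentially just this (sketch; the exact dimensions vary):

     const app = new PIXI.Application({
         width: 1500,    // approximate
         height: 1500,   // approximate
         antialias: false
     });
     document.body.appendChild(app.view);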
  7. OK, so the number of draw calls itself isn't a concern. I won't worry about batching for now. You know, maybe I'm misinterpreting the profiler: if I look at the summary, idle time is quite high. But what doesn't make sense to me is that idle time is so high while the framerate stays pegged between 30 and 40 fps. I've attached a few screenshots of the profiler results. Maybe there's a problem that jumps out at you? Or maybe there's nothing wrong and my laptop is just too shitty.
  8. OK, so it sounds like the specific memory management process is abstracted away. I guess that's one less thing for me to worry about. I think I may need to disable the GC in that case, because there will be situations where I need to toggle between tabs and I wouldn't want the textures from one tab destroyed because of inactivity.

     I've run the profiler and it looks like requestAnimationFrame is eating too much time. I'm using the built-in Application instance, and after looking into it more, the "Application.render" call on each tick is eating up all the time. So this must mean that rendering all of these tiles is too slow, correct? For this particular example I have 1 draw call for each tile, so 40 draw calls. I'm not sure what a shader switch is - is it just a switch between shaders before each draw call when each mesh uses its own shader? If so, then there would also be 40 shader switches.
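     If disabling the GC is the right call, I'm assuming it's something like this in v5 (haven't verified):

     // Assumption: v5's TextureGCSystem. MANUAL mode should mean textures
     // are never unloaded for inactivity, only when destroyed explicitly.
     PIXI.settings.GC_MODE = PIXI.GC_MODES.MANUAL;
     // or per renderer:
     app.renderer.textureGC.mode = PIXI.GC_MODES.MANUAL;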
  9. OK, that makes sense. And if I do end up exceeding GPU memory, does PIXI then start swapping textures in and out as necessary?

     So, I've been able to get this working using the Mesh class with custom fragment shaders (actually a modified version of your demo: https://jsfiddle.net/flek/pct2qugr/175/). Since each texture actually represents a single channel of a multichannel image, I am using the frag shader to generate the output pixels from multiple textures. Here is my approach:

     let FS = `#version 300 es
     precision highp float;

     uniform highp usampler2D uTexSampler0;
     uniform highp usampler2D uTexSampler1;
     uniform float uHighs[2];
     uniform float uLows[2];
     uniform float uReds[2];
     uniform float uGreens[2];
     uniform float uBlues[2];
     uniform float uAlpha;

     float R;
     vec3 color = vec3(0.0, 0.0, 0.0);

     in vec2 vTexCoord;
     out vec4 outColor;

     void main() {
         // Channel 0: clamp to the display window, rescale, tint, MAX-combine.
         R = float(texture(uTexSampler0, vTexCoord).r);
         R = clamp(R, uLows[0], uHighs[0]);
         R = (R - uLows[0]) / (uHighs[0] - uLows[0]);
         color = vec3(max(color[0], R * uReds[0]), max(color[1], R * uGreens[0]), max(color[2], R * uBlues[0]));

         // Channel 1: same windowing and combine.
         R = float(texture(uTexSampler1, vTexCoord).r);
         R = clamp(R, uLows[1], uHighs[1]);
         R = (R - uLows[1]) / (uHighs[1] - uLows[1]);
         color = vec3(max(color[0], R * uReds[1]), max(color[1], R * uGreens[1]), max(color[2], R * uBlues[1]));

         outColor = vec4(color, 1.0);
     }
     `

     The shader is generated dynamically depending on the number of channels. The texture format I'm using is this:

     const resource = new SingleChannelBufferResource(data, {
         width: metadata.x_pixels,
         height: metadata.y_pixels,
         internalFormat: 'R16UI',
         format: 'RED_INTEGER',
         type: 'UNSIGNED_SHORT'
     });

     I've attached an example image in which each tile has 2 channels. This breaks batching, but I think I understand how to create a batch plugin for my use case if that will improve performance. I saw your Spector.js recommendation in another post and gave it a try yesterday. It's really helpful!

     The tiles are interactive, and the point of this application is basically just to align the tiles. Framerate is actually pretty good when the canvas is small, but drops a lot when the canvas is large. The framerate doesn't seem to depend on interaction either, which I don't fully understand: even when the user is not moving anything, the framerate remains low. Perhaps there are long calculations being run on my display objects every tick? Is there a way to optimize so that a larger canvas doesn't cause issues? Do you think getting batching to work on my meshes would improve the framerate?

     Last question: I need to implement some custom blending on these tiles (just the max value of each pixel in overlapping regions). Do you think that's possible while still batching? I looked at your pixi-picture plugin but haven't gotten around to trying to implement it for my use case.
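     In case it clarifies what I mean by "generated dynamically": I'm just string-building the GLSL per channel, roughly like this (simplified, illustrative sketch, not my exact code):

     // Emit one sampler declaration and one windowing/combine step per
     // channel, then wrap everything in main().
     function buildFragShader(numChannels) {
         let samplers = '';
         let body = '';
         for (let i = 0; i < numChannels; i++) {
             samplers += `uniform highp usampler2D uTexSampler${i};\n`;
             body += `
                 R = float(texture(uTexSampler${i}, vTexCoord).r);
                 R = clamp(R, uLows[${i}], uHighs[${i}]);
                 R = (R - uLows[${i}]) / (uHighs[${i}] - uLows[${i}]);
                 color = vec3(max(color[0], R * uReds[${i}]),
                              max(color[1], R * uGreens[${i}]),
                              max(color[2], R * uBlues[${i}]));
             `;
         }
         return `#version 300 es
             precision highp float;
             ${samplers}
             uniform float uHighs[${numChannels}];
             uniform float uLows[${numChannels}];
             uniform float uReds[${numChannels}];
             uniform float uBlues[${numChannels}];
             uniform float uGreens[${numChannels}];
             in vec2 vTexCoord;
             out vec4 outColor;
             void main() {
                 float R;
                 vec3 color = vec3(0.0);
                 ${body}
                 outColor = vec4(color, 1.0);
             }`;
     }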
  10. Hey guys, I'm still pretty new at this, but I have been playing with PIXI quite a bit this year (for which I want to thank the developers for all of your hard work - PIXI is amazing) and, while I have gotten pretty familiar with it, I am trying to dig deeper in an effort to optimize my application. I have been going through the source code to try to wrap my head around how it all works under the hood, and it's starting to become clearer.

     My question is more or less about how WebGL works under the hood and how PIXI manages textures. For example: I need to render around 50 1920x1920 16-bit images at a time (I posted about this before: https://www.html5gamedevs.com/topic/45138-what-does-this-sad-face-mean/?tab=comments#comment-250221). PIXI has done a pretty damn good job of this, but I'm sure I have a lot of room to optimize.

     I understand there are 16+ texture slots available depending on the GPU. Does this mean the flow would be something like this: 16 of the 50 textures are uploaded to the GPU --> those 16 are drawn --> they are purged from the GPU --> repeat until all 50 are drawn? If that is the case, are the textures constantly being removed from the GPU and re-uploaded on every frame? Or do the uploaded textures remain on the GPU, with PIXI just binding the necessary textures to the available slots before each draw call? How does GPU memory impact this process?

     Hopefully that's clear, but I apologize if it isn't. Thanks!
  11. That did the trick. Thanks!
  12. I have an issue where I am trying to dynamically render multiple UI components in an Electron application. I am using a separate Application instance for each of these UI components, and eventually I hit Chromium's WebGL context limit. So it looks like the correct approach is to use canvas rendering for all of the components that don't require WebGL. The problem is that PIXI no longer supports forcing the canvas renderer. I read that I can use pixi.js-legacy for access to the canvas renderer, but when I use it in combination with standard pixi.js it creates numerous other issues throughout the application (e.g. the default interaction manager is never initialized). What should I do to fix this? Am I importing the libraries incorrectly and causing some conflict?
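     For what it's worth, here's how I currently understand the intended usage (v5 assumed; this may be exactly what I'm getting wrong):

     // pixi.js-legacy re-exports the whole PIXI namespace plus the canvas
     // renderer, so the idea seems to be to import it INSTEAD OF pixi.js,
     // not alongside it.
     import * as PIXI from 'pixi.js-legacy';

     // Canvas-rendered UI component:
     const uiApp = new PIXI.Application({ forceCanvas: true });

     // WebGL-rendered component from the same import:
     const glApp = new PIXI.Application();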
  13. @ivan.popelyshev @jonforum @Shukant Pal Sorry for the late reply guys. Didn't have a chance to revisit this issue for some time. So it turns out that I was stupidly creating too many WebGL contexts (as I was using multiple PIXI Application instances for various UI control elements). Eventually I must have hit the WebGL context limit and that caused it to crash. I fixed the issue by making all other PIXI Applications use the canvas renderer. Thanks for the help!
  14. Thanks for the fast response! I'm actually using Electron, so Chrome-based. And yes, I am rendering a lot of large textures - about 90 at 1920x1920. My application is for scientific purposes and I need to display 40-100 grayscale images at a time. However, each image contains 2 or more channels, so I render each channel as a separate mesh and just change the alpha. The raw images are 16-bit, but it doesn't look like PIXI supports sampling integer textures, so I convert to 32-bit before passing the data to the mesh. I'm sure I'm not doing this as efficiently as possible - perhaps you guys can offer some advice on how to optimize?

     One last thing: the meshes render fine initially. The issue happens after several cycles of clearing and loading new meshes, even though I thought I was correctly destroying the meshes when I'm done with them.
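     In case it's relevant, the conversion step is essentially this (sketch; `rawBuffer` stands in for wherever the 16-bit data actually comes from):

     // Widen 16-bit samples to normalized Float32 so the texture can be
     // sampled as a regular float sampler2D.
     const src = new Uint16Array(rawBuffer);
     const dst = new Float32Array(src.length);
     for (let i = 0; i < src.length; i++) {
         dst[i] = src[i] / 65535;   // normalize to [0, 1]
     }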
  15. I have seen this sad face pop up from time to time in my application and I can't figure out what causes it. Maybe the renderer is unhappy?
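     In case it helps narrow it down, I'm planning to hook the context-loss event to confirm whether that's what's happening (diagnostic sketch only; `app` is my Application instance):

     // If the sad face means the WebGL context was lost (my guess), this
     // listener should fire when it appears.
     app.view.addEventListener('webglcontextlost', (event) => {
         event.preventDefault();
         console.warn('WebGL context lost:', event);
     });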