  1. Great, thanks for the reply. I'll have a look at the compressed texture module.
  2. When an image loads in Pixi, I assume it loads into memory as an HTML object and is copied into a WebGL (or other) texture buffer, meaning it uses 2x the memory of the image itself? A 2k image texture for example is 2048 x 2048 x 32-bit = 16MB. In a WebGL app, would this be 32MB of memory plus overhead? If so, is there a way to get Pixi to flush the source images from memory and just keep the WebGL buffers or does it need to retain the source images, for example, to generate buffers in different WebGL contexts?
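A quick back-of-the-envelope check of the numbers above, plus a hedged sketch of freeing the source image (the `destroy(true)` call is from Pixi's texture API; whether the source can safely be dropped depends on whether the texture ever needs re-uploading, e.g. after a WebGL context loss):

```javascript
// A 2048 x 2048 RGBA texture at 4 bytes (32 bits) per pixel:
const bytes = 2048 * 2048 * 4;
console.log(bytes);                 // 16777216 bytes
console.log(bytes / (1024 * 1024)); // 16 MB on the GPU, plus the decoded
                                    // source image held on the CPU side

// Hypothetical cleanup once the texture is uploaded to WebGL: destroying
// the base texture with `true` also releases the source image, but the
// texture can then no longer be re-uploaded (e.g. after a context loss).
// sprite.texture.destroy(true);
```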
  3. If I render blend modes like add or multiply into a render texture, they render with black where the transparency would be if I rendered them against a background. I don't think I can get the blend mode to affect background layers when rendering into a render texture; I'd probably have to put each layer with the same blend mode in a separate render texture and apply the blend mode to the whole texture. Is there a way to have the blend-mode layers render transparent instead of black in the render texture, though?
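One workaround for the black areas, sketched under the assumption of a Pixi v4-style API (`PIXI.RenderTexture`, `renderer.render` with a clear flag): multiply/add blending into an empty render texture blends against transparent black, so rendering the background into the same render texture first gives the blend modes real destination pixels.

```javascript
// Sketch only: bake a multiply layer over its background in one render texture.
// `PIXI`, `renderer`, and the layer containers are assumed to exist.
function bakeBlendedLayer(renderer, backgroundLayer, multiplyLayer, width, height) {
  const rt = PIXI.RenderTexture.create(width, height);
  renderer.render(backgroundLayer, rt);       // clear and draw the background
  renderer.render(multiplyLayer, rt, false);  // composite on top without clearing
  return rt;
}
```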
  4. I'm only doing 4 masks, not 500, and PIXI drops 30 FPS with 4 masks on mobile. How much faster is SpriteMaskedRenderer than the PIXI one? If it's a lot faster, why doesn't PIXI use that rendering method?
  5. That's good to know for directly masking textures with rectangles. Unfortunately I need to do shapes e.g circular arc masks and it needs to work on containers with multiple sprites. Can you describe what the masking process is doing in WebGL? If you set a tint value, I assume it just adds a value to every pixel. When you set a mask, can't it just copy the pixels from a mask into the alpha buffer of the masked object very quickly? What is slowing it down so much?
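For intuition, texture-based alpha masking in a fragment shader really is as simple as the question suggests: sample both textures and scale the sprite's color by the mask's alpha. The GLSL below is an illustration, not Pixi's actual internals; Pixi's container masking goes through the stencil buffer with extra render passes, which is a large part of the cost.

```javascript
// Illustrative fragment shader source (not Pixi's internal implementation):
// output = sprite color scaled by the mask's alpha (premultiplied alpha).
const maskFragmentSrc = `
  varying vec2 vTextureCoord;
  uniform sampler2D uSampler; // the sprite being masked
  uniform sampler2D uMask;    // the mask texture
  void main(void) {
    vec4 color = texture2D(uSampler, vTextureCoord);
    float maskAlpha = texture2D(uMask, vTextureCoord).a;
    gl_FragColor = color * maskAlpha;
  }
`;
```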
  6. I made up a small demo scene. The problem is on mobile, if you drop it on a server somewhere and load it on any mobile device, you will see the performance issue. On desktop, it supports over 500 masked sprites no problem. On mobile, it drops 30FPS with just 4 sprites. The demo has the number set to 16 (maxSprites = 16) and the FPS drops to under 10FPS. When you turn masking off by commenting out gate.mask = gate_mask, mobile is smooth up to over 100 sprites. If there's a faster way to do masking, even if it's WebGL-only perhaps with a shader, then I could use that to maintain performance on mobile.
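The demo's core setup can be sketched like this (the names `gate`, `gate_mask`, and `maxSprites` follow the post; the stage and texture are assumed to already exist):

```javascript
// Sketch of the demo scene: maxSprites masked sprites on the stage.
function buildMaskedSprites(stage, texture, maxSprites) {
  for (let i = 0; i < maxSprites; i++) {
    const gate = new PIXI.Sprite(texture);
    const gate_mask = new PIXI.Graphics();
    gate_mask.beginFill(0xffffff);
    gate_mask.arc(32, 32, 32, 0, Math.PI); // circular-arc mask shape
    gate_mask.endFill();
    gate.mask = gate_mask; // comment this line out and mobile runs smoothly
    stage.addChild(gate, gate_mask);
  }
}
```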
  7. I managed to get a slower method working, but it's not usable for real-time animation as it takes about 0.2-0.5 seconds to generate:
     - draw the sprites into a render texture
     - draw the masks into a second render texture (same size as the first)
     - get the pixel buffer for each using renderTexture.getPixels()
     - iterate over the buffers and draw a new PIXI Graphic using beginFill/endFill per pixel (drawRect 1x1), taking the color from the sprite buffer and the alpha from the mask buffer
     - convert the Graphic to a texture using renderTexture.generateTexture(Graphic)
     This lets me manually create a static masked sprite in v3, and hundreds of such sprites can run at 60 FPS. The slowest part seems to be rendering the graphics into the buffer. For animation, I could render the whole mask into a larger buffer and then read just a portion of it when generating the masked graphic. That would probably get it down to around 0.05-0.1 seconds per frame, which is still too high; it needs to be around 10ms (0.01s) tops. I need a faster way of converting a buffer of pixel values into a texture. Is there a low-level way of doing this, like a WebGL call? Is Spine using direct WebGL calls for masking?
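The per-pixel step described above can be done directly on the RGBA byte buffers in one typed-array pass (copy RGB from the sprite, take alpha from the mask), which avoids the cost of drawing thousands of 1x1 rects; only the final buffer-to-texture upload then needs the renderer:

```javascript
// Combine a sprite pixel buffer and a mask pixel buffer (both RGBA bytes,
// as returned by getPixels-style calls) into one masked RGBA buffer.
function applyMask(spriteRGBA, maskRGBA) {
  const out = new Uint8Array(spriteRGBA.length);
  for (let i = 0; i < spriteRGBA.length; i += 4) {
    out[i]     = spriteRGBA[i];     // R from the sprite
    out[i + 1] = spriteRGBA[i + 1]; // G
    out[i + 2] = spriteRGBA[i + 2]; // B
    out[i + 3] = maskRGBA[i + 3];   // alpha from the mask
  }
  return out;
}

// An opaque red pixel masked by a half-transparent mask pixel:
const masked = applyMask(
  new Uint8Array([255, 0, 0, 255]),
  new Uint8Array([0, 0, 0, 128])
);
// masked is [255, 0, 0, 128]
```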
  8. These options sound viable. I have managed to extract the pixel values from the sprite and mask; how do I create a texture from a buffer of pixel values? There seems to be a BufferResource, but which Pixi version has the fromBuffer option? I can't find BufferResource to be able to create a texture from a buffer. I downloaded the source on the dev branch but can't find BufferResource there, and the "next" branches are all 404.
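For the record, `BufferResource` and `Texture.fromBuffer` are, as far as I know, Pixi v5 APIs and don't exist in earlier versions, which would explain why they're missing from older branches. Usage would look roughly like this:

```javascript
// Sketch, assuming Pixi v5: create a texture straight from RGBA pixel data.
// `pixels` is a Uint8Array (or Float32Array) of width * height * 4 values.
function textureFromPixels(pixels, width, height) {
  return PIXI.Texture.fromBuffer(pixels, width, height);
}
```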
  9. Pixi usually performs really well even with heavy scenes. I can put hundreds of objects, even thousands of particle sprites, and get 60 FPS on mobile. I typically only use a single masked object in a scene, but recently I needed more and was surprised at how badly Pixi performed with masked objects. For every masked sprite, I lost about 5 FPS on mobile (both Android and iOS, on old and new phones; newer phones actually performed worse in some cases). By the time I added 4 masked sprites, I was down to 40 FPS; adding 10-20 masked sprites dropped it to 20 FPS with really bad stuttering. As soon as I switch the mask off by removing the assignment, it goes from 20 FPS back to 60 FPS, even with 200 of the same sprites. It doesn't seem like masking should take so many resources given all the other effects that are possible. Tinting, for example, multiplies every pixel by a color and costs nothing. Masking is just checking a pixel in one object and setting the equivalent pixel in the other object. Is there a faster alternative to using object.mask = mask? Is there a graphics buffer I can use to set the pixel values myself, e.g. if I could create an array of pixel values and generate a texture buffer from that? JavaScript is pretty fast with arrays. The main thing that bothers me is that the low performance happens with static masks. I could understand the performance hit when the sprite is animating relative to the mask, but not when both are static. Why doesn't it buffer the masked sprite and reuse that like a normal sprite? I found a thread that describes the same issue; unfortunately I'm stuck on v3 for now. I also found a possible alternative using multiply blending; to isolate it, it suggests using a VoidFilter. Is this the fastest way to do masking in Pixi? If so, is there example code for this?
Say that I did at some point want to draw a texture pixel by pixel; one way would be to draw a tinted 1x1-pixel sprite into a render texture. Is there a better way than this, e.g. set values in a buffer and convert it to a texture? Is there a way to read a pixel's color/alpha value from a sprite or texture? There seems to be an extract function for WebGL and Canvas, but it looks like it extracts the viewport. It would be good to be able to render sprites into a render texture and read the pixel values of that texture using pixel coordinates.
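On reading pixels back: in Pixi v4+, the extract plugin accepts a specific render texture as its target rather than only the viewport, so a per-pixel read can be sketched like this (`renderer` is assumed to exist; the index math is the standard row-major RGBA layout):

```javascript
// Read one pixel (RGBA) out of a render texture via the extract plugin.
function readPixel(renderer, renderTexture, x, y, width) {
  // Returns a Uint8Array of RGBA bytes for the whole render texture.
  const pixels = renderer.plugins.extract.pixels(renderTexture);
  const i = (y * width + x) * 4;
  return { r: pixels[i], g: pixels[i + 1], b: pixels[i + 2], a: pixels[i + 3] };
}
```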