About Exca
  1. Add filter to your main container. Set the filterArea of the filter to renderer.screen if I remember right.
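A minimal sketch of that wiring, assuming a PIXI v5-style API where filterArea lives on the display object (plain objects stand in for the real renderer, container and filter here):

```javascript
// Sketch: limit a filtered container's working area to the visible screen
// instead of the container's (possibly much larger) bounds.
// `renderer` and `container` are stand-ins for real PIXI objects.
function limitFilterToScreen(container, renderer, filter) {
  container.filters = [filter];
  // Without this, the filter texture is sized to the container's bounds,
  // which can be far bigger than what is actually visible.
  container.filterArea = renderer.screen;
}

// Illustration with plain objects in place of PIXI instances:
const renderer = { screen: { x: 0, y: 0, width: 800, height: 600 } };
const container = {};
limitFilterToScreen(container, renderer, { name: 'blur' });
console.log(container.filterArea.width); // 800
```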
  2. You could do that by creating a shader with two texture inputs that blends between them. The actual code & math inside the shader is out of my scope in reasonable time. As a starting point I would probably try doing some kind of convolution with the previous frame as feedback, then moving towards the target image. This is the closest example for blending: in it, a Perlin noise image is used to determine the blending. In your case the shader would somehow morph the images instead. https://pixijs.io/examples/#/mesh-and-shaders/multipass-shader-generated-mesh.js
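For illustration, here is the per-pixel math such a two-texture blend shader could do, written as plain JavaScript instead of GLSL; the function name and the smoothstep-style falloff are just one possible choice, not taken from the linked example:

```javascript
// CPU-side sketch of what a noise-driven blend shader would compute per
// pixel, roughly gl_FragColor = mix(texA, texB, factor) in GLSL.
// `progress` is a uniform going 0 -> 1; `noise` is the per-pixel value of
// a noise texture deciding when each pixel flips toward the target image.
function blendPixel(a, b, noise, progress, softness = 0.1) {
  // Equivalent of GLSL smoothstep(noise - softness, noise + softness, progress):
  const t = Math.min(1, Math.max(0, (progress - (noise - softness)) / (2 * softness)));
  const s = t * t * (3 - 2 * t);
  return a + (b - a) * s; // mix(a, b, s)
}

console.log(blendPixel(0, 255, 0.5, 0)); // 0   (fully source image)
console.log(blendPixel(0, 255, 0.5, 1)); // 255 (fully target image)
```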
  3. Unlimited FPS

    If you find one, I would also be interested in that, though the other way around (forcing the browser to render slower). Tried many flags but couldn't get the rAF interval to change.
  4. Best way is to make the textures in multiple sizes beforehand and then load only the ones that match your target size. In theory you could also do the resizing and dynamic texture generation on the client, but quality would suffer compared to doing the resolution changes in software with better scaling algorithms. Anyway, if you want to avoid uploading large textures to the GPU while still starting from the original-sized images, here's a short way to do that:
     - Create a temp canvas that is your target resolution size.
     - Get a 2d context from it.
     - Draw your image to that canvas, scaled to the correct size.
     - Create a basetexture that has that canvas as its element.
     - Create your sprite using that basetexture.
     - Destroy your original image and basetexture, and remove them from the loader.
     I do not recommend using this method; rather, look into how you can have separate asset resolutions and load only the correct one.
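The canvas steps above can be sketched roughly like this, assuming a PIXI v5-style BaseTexture.from; the function name is illustrative and the PIXI calls are left as comments since they need a browser/WebGL context:

```javascript
// Sketch: downscale an already-loaded image on a temporary canvas, so only
// the small version ends up uploaded to the GPU.
// `doc` is a parameter only so the function is easy to test; in a browser
// you just call it without the last argument.
function makeScaledCanvas(image, targetWidth, targetHeight, doc = document) {
  const canvas = doc.createElement('canvas');
  canvas.width = targetWidth;
  canvas.height = targetHeight;
  const ctx = canvas.getContext('2d');
  // Draw the full-size image scaled down to the target resolution.
  ctx.drawImage(image, 0, 0, targetWidth, targetHeight);
  return canvas;
}

// Usage with PIXI (not executed here):
// const canvas = makeScaledCanvas(img, 512, 256);
// const baseTexture = PIXI.BaseTexture.from(canvas);
// const sprite = new PIXI.Sprite(new PIXI.Texture(baseTexture));
// ...then destroy the original image/basetexture and drop loader references.
```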
  5. Added a spinners pull request to the examples. You can check it from the PR, or wait until it gets accepted to see it in the examples.
  6. Sprite compression

    There's plenty you can do with compression. For png/jpg images the compressed file size doesn't change how much GPU memory the image takes, as those images get unpacked. But you could use compressed textures to skip the decompression and lower GPU memory usage: https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Compressed_texture_formats One thing I'm not really sure about is whether an 8-bit png becomes an 8-bit or a 32-bit image after decoding. When it comes to loading-related compression (targeting minimal download time, no compressed textures needed), my basic workflow is the following:
    - Use spritesheets to group similarly colored assets together (without breaking batching).
    - Group assets that don't need transparency together.
    - Export everything as unpacked png.
    - For images without transparency, update the json to use jpg and pack the png with Google Guetzli (https://github.com/google/guetzli).
    - Pack the png images to an 8-bit version and do a visual check in case quality drops too much. I use PNGoo for this (https://pngquant.org/).
    - Run all png assets through PNGGauntlet (https://pnggauntlet.com/). It's a tool that combines PNGOut, OptiPNG and DeflOpt and picks the smallest png.
    And do lots of profiling between the steps to see what actually has a positive impact.
  7. We're currently doing an "ie mode" by switching to 2d canvas and putting the game into a low-end rendering mode (limited effects, low-quality assets etc.). Before getting to play, the client gets a lot of warnings that you should change your browser, as MS no longer supports IE and it's a bad idea security-wise to use it. If some user has problems with IE, no debugging is done for those; just a generic message that official support for IE has ended and you should change browsers. Also not full ES6, as I'm mostly using Haxe. With TypeScript projects I use ES5/ES6/ESNext depending on the project and target audience.
  8. https://pixijs.io/examples/#/mesh-and-shaders/multipass-shader-generated-mesh.js That example has a texture that gets passed to the next shader. You should do similar, but instead of a shader-rendered texture you would render your heightmap into a texture and then pass that texture to your shader.
  9. The covering element has to have interaction enabled, as well as the other item. Only those that are marked interactive, and whose parents don't have interactiveChildren disabled, are taken into account when checking interactions.
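That rule can be sketched as a small eligibility check, with plain objects standing in for display objects (the real interaction manager does more, this only shows the interactive/interactiveChildren part):

```javascript
// Sketch of the rule above: an object can receive interaction only if it
// is marked interactive and no ancestor has interactiveChildren = false.
function canReceiveInteraction(obj) {
  if (!obj.interactive) return false;
  for (let p = obj.parent; p; p = p.parent) {
    if (p.interactiveChildren === false) return false;
  }
  return true;
}

const stage = { interactiveChildren: true };
const cover = { interactive: true, parent: stage };
const item = { interactive: false, parent: stage };
console.log(canReceiveInteraction(cover)); // true
console.log(canReceiveInteraction(item));  // false
```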
  10. They are using 2d canvas, not 3d. 2d canvas has no limitation on the amount, as it runs on the CPU; basically memory is the limiting factor there. As a warning: if you are going to use lots of canvases, make sure they are updated only when needed. Otherwise your site will be very CPU heavy.
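The update-only-when-needed idea is essentially a dirty flag per canvas; a minimal sketch (the class name and shape are just illustrative):

```javascript
// Sketch: each canvas widget redraws only when its state actually changed,
// instead of unconditionally on every animation frame.
class CanvasWidget {
  constructor(draw) {
    this.draw = draw;   // your actual 2D-context drawing routine
    this.dirty = true;  // needs an initial draw
    this.drawCount = 0; // for illustration only
  }
  invalidate() { this.dirty = true; }
  tick() {              // call once per rAF for every widget
    if (!this.dirty) return;
    this.draw();
    this.drawCount++;
    this.dirty = false;
  }
}

const w = new CanvasWidget(() => {});
w.tick(); w.tick(); w.tick(); // only the first tick draws
w.invalidate();               // something changed
w.tick();                     // draws again
console.log(w.drawCount); // 2
```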
  11. You can destroy the textures after you have used them and create new ones when needed. Or if you know that you are about to reuse them at some point, you could just reuse the old ones and let pixi's automatic memory management handle things. I have done one similar solution where I created a graph in which each node knows what background it has and what the next positions are that it can move to. Then load that node's bg and the ones next to it. If the distance grows beyond N (in my case 1), destroy that asset and reload it when coming near again. And if loading hasn't finished when movement to another stage occurs, show a loading bar.
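A rough sketch of that graph-based streaming, with plain objects standing in for nodes and the actual load/destroy calls reduced to a flag:

```javascript
// Sketch: keep backgrounds loaded for the current node and every node
// within graph distance N; unload the rest. BFS finds the nearby nodes.
function nodesWithin(start, maxDist) {
  const seen = new Map([[start, 0]]);
  const queue = [start];
  while (queue.length) {
    const node = queue.shift();
    const d = seen.get(node);
    if (d === maxDist) continue; // don't expand past the radius
    for (const next of node.neighbors) {
      if (!seen.has(next)) { seen.set(next, d + 1); queue.push(next); }
    }
  }
  return new Set(seen.keys());
}

function updateLoadedBackgrounds(allNodes, current, maxDist = 1) {
  const keep = nodesWithin(current, maxDist);
  for (const node of allNodes) {
    node.loaded = keep.has(node); // stand-in for load vs destroy+reload
  }
}

// Three rooms in a row: a - b - c; standing in a with N = 1 keeps a and b.
const a = { neighbors: [] }, b = { neighbors: [] }, c = { neighbors: [] };
a.neighbors.push(b); b.neighbors.push(a, c); c.neighbors.push(b);
updateLoadedBackgrounds([a, b, c], a, 1);
console.log(a.loaded, b.loaded, c.loaded); // true true false
```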
  12. Easiest way would be to render the rounded rectangle and then use a mask to show only parts of it. There's an example of that (with a circle instead of a rounded rectangle) here: I'll try to find time to make an example of spinners for the pixi examples this week.
  13. Do you need the shapes to update per frame? If not, you could draw them to a rendertexture and use that on the scene. Drawing it a bit larger than the viewport would allow movement, and then you would only need to redraw whenever enough movement has happened. If the points update on the fly, that solution won't help as you would still end up drawing them every frame. Another option would be to draw your polygons with meshes. Might be faster or slower depending on how things update. There's also one way to make everything really fast with webgl2 using a transform feedback shader. Though that requires plenty of knowledge of webgl/opengl, and that the point update can be determined mathematically. With that technique you could calculate positions for millions of points without an issue.
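The "redraw only after enough movement" decision from the first option can be sketched as a tiny helper (names and the margin value are illustrative):

```javascript
// Sketch: the shapes are pre-rendered into a texture larger than the
// viewport. Panning just moves the texture; an actual re-render of the
// shapes is triggered only once the camera drifts close to the edge of
// the pre-rendered margin.
function needsRedraw(cameraX, cameraY, lastDrawX, lastDrawY, margin) {
  return Math.abs(cameraX - lastDrawX) > margin ||
         Math.abs(cameraY - lastDrawY) > margin;
}

// With a 200 px margin around the viewport:
console.log(needsRedraw(150, 0, 0, 0, 200)); // false - still inside margin
console.log(needsRedraw(250, 0, 0, 0, 200)); // true  - re-render the texture
```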
  14. Correct way would be to create your own version with a flag that does the ignoring. Then make a pull request to make it part of pixi if others see that it adds value.