Exca

Members
  • Content Count

    357
  • Joined

  • Last visited

  • Days Won

    10

Exca last won the day on August 4, 2020

Exca had the most liked content!

2 Followers

About Exca

  • Rank
    Advanced Member

Recent Profile Visitors

5018 profile views
  1. An easy way to get better-looking downscaling is to find the point where your game starts to look bad, and then, instead of scaling down the game containers/elements, keep that good-looking resolution and scale down the canvas element itself. This way the downscaling algorithm is not impeded by webgl's limitations but can use the one the browser uses natively. A little hacky, but it's a workaround that works well and takes very little effort.
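
     A minimal sketch of the idea, assuming PixiJS v5+ and that 1920x1080 is the resolution where the game still looks good (both numbers are placeholders):

     const app = new PIXI.Application({ width: 1920, height: 1080 }); // keep the internal resolution
     document.body.appendChild(app.view);
     // Let the browser's native (higher-quality) scaler shrink the canvas element itself:
     app.view.style.width = '960px';
     app.view.style.height = '540px';
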
  2. You could make a filter that takes the world texture and the light mask as inputs, and then draws the world only where the light has a value at the same position, keeping everything else hidden. Something like this:

     vec4 world = texture2D(worldTex, uv);
     float light = texture2D(lightTex, uv).a; // using only one channel for light; this could also be light color + alpha for intensity
     gl_FragColor = mix(vec4(0.0, 0.0, 0.0, 1.0), world, light);
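
     Wrapped into a full filter it could look something like the sketch below, assuming PixiJS v5+. Here lightMaskTexture is a render texture you maintain yourself, and depending on your filter area the coordinate mapping between the two textures may need adjusting:

     const frag = `
     varying vec2 vTextureCoord;
     uniform sampler2D uSampler;  // the filtered content, i.e. the world
     uniform sampler2D lightTex;  // light mask; alpha = light intensity
     void main(void) {
         vec4 world = texture2D(uSampler, vTextureCoord);
         float light = texture2D(lightTex, vTextureCoord).a;
         gl_FragColor = mix(vec4(0.0, 0.0, 0.0, 1.0), world, light);
     }`;
     const lightFilter = new PIXI.Filter(undefined, frag, { lightTex: lightMaskTexture });
     worldContainer.filters = [lightFilter];
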
  3. Do you have only a canvas on the page? Then Lighthouse won't give you much detail, as it has no components for analyzing canvases, only the DOM side of things, load speeds and the like. Most likely the LCP is measured on the canvas element and for some reason fails to be analyzed; it should work similarly to images. Do you have an example of a site where it fails?
  4. How do you use it, and what sizes are your textures? How complex is the scene you render to the RT? There are no hidden costs in rendering to a rendertexture vs. rendering to the screen, other than the additional memory usage.
  5. Just listen for the wheel event on the window/document/element: https://developer.mozilla.org/en-US/docs/Web/API/Element/wheel_event
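
     For example:

     window.addEventListener('wheel', (event) => {
         // deltaY > 0 means scrolling down; use it to drive zooming, scrolling, etc.
         console.log(event.deltaX, event.deltaY);
     });
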
  6. You basically have two options:

     1. Render the expensive stuff into a separate rendertexture and use that as you would any other sprite. Rerender the RT when things change.
     2. Use two canvases and update the expensive canvas only when needed.

     [Edit] For option 1 you can use cacheAsBitmap = true to create an RT out of a container and use that instead of the whole render list. Though I'd suggest custom handling with your own RT, as debugging cacheAsBitmap can be a nightmare if something goes wrong. A sketch of option 1 follows below.
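
     A minimal sketch of option 1, assuming PixiJS v6; expensiveContainer is a placeholder for whatever holds the costly content:

     const rt = PIXI.RenderTexture.create({ width: 800, height: 600 });
     const proxy = new PIXI.Sprite(rt);
     app.stage.addChild(proxy);

     // Call this only when the expensive content actually changes:
     function refreshExpensive() {
         app.renderer.render(expensiveContainer, { renderTexture: rt });
     }
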
  7. I don't think the regular TilingSprite supports mirrored rendering out of the box. You could create a custom shader for it that detects whether it's on an odd or even tile and flips the uv-coordinates accordingly.
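
     The core of that idea in GLSL could look something like this (a sketch, not a drop-in TilingSprite patch; uv is assumed to be in tile space, so the integer part is the tile index and the fractional part is the position inside the tile):

     vec2 tileIndex = floor(uv);
     vec2 local = fract(uv);
     // Mirror the local coordinate on every other tile:
     if (mod(tileIndex.x, 2.0) >= 1.0) local.x = 1.0 - local.x;
     if (mod(tileIndex.y, 2.0) >= 1.0) local.y = 1.0 - local.y;
     vec4 color = texture2D(uSampler, local);
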
  8. Check the network tab to see if you are still getting the CORS error. To fix it you would need to either run the game on the same domain as the images, so CORS rules don't apply, or have the server send an Access-Control-Allow-Origin header that allows your domain (or a wildcard). The renderer's null error might just be due to the asset not being loaded because CORS blocks it.
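
     For the server-side fix, the image responses would need a header along these lines (the domain is a placeholder):

     Access-Control-Allow-Origin: https://yourgame.example.com

     or, to allow any origin:

     Access-Control-Allow-Origin: *
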
  9. How many sprites do you have? There's a batch size limit; when it is reached, the current batch is rendered and a new one is started. You can change that size with PIXI.settings.SPRITE_BATCH_SIZE. The default is 4096.
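
     For example, set it before creating the renderer:

     PIXI.settings.SPRITE_BATCH_SIZE = 8192; // default is 4096
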
  10. 2048x2048 is always a safe bet; it's supported by virtually all devices that can run webgl. There used to be a site that collected statistics on different devices' capabilities, but it seems to be gone from the internet. You can use gl.getParameter(gl.MAX_TEXTURE_SIZE) to get the largest texture dimension the device supports. The maximum I would go for is 4096x4096, and for devices that report less, either use a downscaled version or split the content into multiple textures.
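
     A quick way to query it with plain WebGL, outside of any Pixi app:

     const gl = document.createElement('canvas').getContext('webgl');
     const maxSize = gl.getParameter(gl.MAX_TEXTURE_SIZE); // e.g. 4096, 8192 or 16384
     console.log('Max texture dimension:', maxSize);
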
  11. The bottleneck when rendering squares is just the number of squares you would need to render if you used basic sprites. Rendering the squares to a rendertexture and then rendering that texture to the screen would make the frames where nothing changes faster, but when the rendertexture needs to be rerendered it would still take some time, and in webgl the whole frame gets repainted every time. If it's enough for each square to be a single pixel and you don't need anything fancy like borders or textures on the squares, you could use an additional 2D canvas, do the game-of-life operations on that, and then render that canvas inside pixi. That way the game of life only updates the canvas data and you still get smooth scrolling/zooming if needed.
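
     A minimal sketch of the 2D-canvas approach, with the life update itself left out; the 512x512 size is a placeholder:

     const lifeCanvas = document.createElement('canvas');
     lifeCanvas.width = 512;
     lifeCanvas.height = 512;
     const ctx = lifeCanvas.getContext('2d');

     const texture = PIXI.Texture.from(lifeCanvas);
     texture.baseTexture.scaleMode = PIXI.SCALE_MODES.NEAREST; // keep cells crisp when zoomed
     const view = new PIXI.Sprite(texture);
     app.stage.addChild(view);

     function step() {
         // ...run the game-of-life rules and draw the cells onto ctx...
         texture.update(); // tell Pixi the canvas contents changed
     }
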
  12. Very simple optimization: instead of graphics, use sprites with a single white rectangle as their basetexture, then apply a tint to color each sprite. That way the squares can be rendered as a batch (see the sketch after this rundown). That should be good enough for 150 x 200 squares (30k sprites), but for 1000 x 1000 (1M squares) you need to go deep into webgl rendering or find some other optimization strategy. Or would all those squares be visible at the same time? If not, it becomes doable by separating logic from rendering and only rendering a subsection of the whole area.

     And here's a little rundown of the different graphics objects:
     - Graphics: Dynamically drawn content. Use it when you need to draw lines, shapes etc. Be aware that updating a graphics object every frame can be costly depending on its complexity.
     - Sprites: Sprites basically just tell what texture to draw and where, with a given rotation, tint and scale. Sprites are among the cheapest objects in pixi.
     - Textures: A texture is a region of a baseTexture; it tells what part of the baseTexture should be drawn. When using spritesheets the difference between texture and baseTexture is very noticeable. When using regular images, the texture usually just points to the whole baseTexture.
     - BaseTexture: A baseTexture represents a single image in memory.
     - Mesh: Meshes are renderables with customizable vertices. You could think of a sprite as a mesh with 4 vertex points (topleft, topright, bottomright and bottomleft). With meshes you control how your polygons get formed. There are some premade mesh classes that provide useful shapes and abstract some of the complexity away: SimpleRope, SimpleMesh and SimplePlane.

     And when to use them:
     - Graphics: dynamically drawn content.
     - Sprites: images with basic affine transformations (scale, rotation, position) and basic color transformations (tint, alpha).
     - Textures & BaseTextures: pretty much always when you have images to use; very often these are handled automatically.
     - Mesh: when you need deformations.

     Also, here's a short primer on shaders. Modern graphics cards have a pipeline where you tell what program (vertex + fragment shader) you want to use, what vertices it gets as input and what uniforms (plus other stuff I won't go into at this point). For each vertex it runs the vertex shader program, which basically calculates where on the screen the point should be. Then, for the polygons formed by those vertices, it runs the fragment shader for each pixel inside the polygon; the fragment shader returns the color value for that pixel. The uniforms mentioned before are values that stay the same for both the vertex and fragment shader across all vertices and pixels; they are used to pass in values needed to calculate the outputs. In the sprite fragment shader, "tint" is a uniform that is multiplied with the texture value. So, simplified, your gpu renders webgl like this: list of points to draw -> vertex shader -> find out which pixels are affected -> fragment shader -> pixel to screen.
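
     A sketch of the tinted-sprite idea; PIXI.Texture.WHITE is a built-in plain white texture, and the cell size and color are placeholders:

     const board = new PIXI.Container();
     const size = 4; // cell size in pixels
     for (let y = 0; y < 150; y++) {
         for (let x = 0; x < 200; x++) {
             const cell = new PIXI.Sprite(PIXI.Texture.WHITE);
             cell.width = cell.height = size;
             cell.position.set(x * size, y * size);
             cell.tint = 0x33cc33; // color comes from the tint, so all cells share one texture and batch together
             board.addChild(cell);
         }
     }
     app.stage.addChild(board);
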
  13. Add a filter to your main container and set the container's filterArea to renderer.screen, if I remember right.
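
     Something along these lines (a sketch; the AlphaFilter is just a placeholder for whatever filter you actually use):

     const filter = new PIXI.filters.AlphaFilter();
     app.stage.filters = [filter];
     app.stage.filterArea = app.renderer.screen;
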
  14. You could do that by creating a shader with two texture inputs that blends between them. The actual code & math inside the shader is out of my scope in reasonable time; as a starting point I would probably try some kind of convolution with the previous frame as feedback, moving towards the target image. This is the closest example for blending: in it, a perlin noise image is used to determine the blending, whereas in your case the shader would somehow morph between the images. https://pixijs.io/examples/#/mesh-and-shaders/multipass-shader-generated-mesh.js
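
     As a plain crossfade starting point (a sketch assuming PixiJS v5+; targetTexture is your second image, and the actual morphing math would replace the simple mix):

     const frag = `
     varying vec2 vTextureCoord;
     uniform sampler2D uSampler; // first image (the filtered sprite)
     uniform sampler2D uTarget;  // second image
     uniform float uMix;         // 0.0 = first image, 1.0 = second
     void main(void) {
         gl_FragColor = mix(texture2D(uSampler, vTextureCoord), texture2D(uTarget, vTextureCoord), uMix);
     }`;
     const blend = new PIXI.Filter(undefined, frag, { uTarget: targetTexture, uMix: 0.0 });
     sprite.filters = [blend];
     // Animate blend.uniforms.uMix from 0 to 1 to fade between the two images.
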
  15. Unlimited FPS

     If you find one, I would also be interested in that, though the other way around (forcing the browser to render slower). I tried many flags but couldn't get the requestAnimationFrame interval to change.