Exca

Members
  • Content Count: 364
  • Joined
  • Last visited
  • Days Won: 12

Posts posted by Exca

  1. Calculate the angle between the mouse and the object and then set that as your object's rotation:
     

    const mouse = renderer.plugins.interaction.mouse.global; // global pointer position
    const dx = mouse.x - object.x;
    const dy = mouse.y - object.y;
    object.rotation = Math.atan2(dy, dx);

    Written from memory without testing, so not 100% sure everything is correct, but that's the basic idea. Also, depending on what your object's "forward" direction is, you might need to add an offset to the atan2 result.

  2. 12 hours ago, ikaruga said:

    Exactly. So should I be using a plugin? If so, which is the recommended one for this simple use case? As per the examples, I don't see 3D support out of the box

    I was thinking of doing that in 2D, but making it look like 3D. For example, if you animate 5 ropes with varying points you can get a semi-3D-looking set. Or if you create a mesh that you rotate and squash a bit and then move the vertices up/down, you get something that looks like orthographic 3D without doing any real 3D stuff.
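
    A rough sketch of the rope idea, assuming PixiJS v5, an existing app (PIXI.Application) and a placeholder texture called ropeTex:

    // Fake a 3D-ish surface by stacking a few ropes and waving their points.
    const ropeCount = 5;
    const pointsPerRope = 20;
    const ropes = [];

    for (let r = 0; r < ropeCount; r++) {
      const points = [];
      for (let i = 0; i < pointsPerRope; i++) {
        points.push(new PIXI.Point(i * 30, 0));
      }
      const rope = new PIXI.SimpleRope(ropeTex, points);
      rope.y = 100 + r * 40; // each rope is one "depth row"
      app.stage.addChild(rope);
      ropes.push(points);
    }

    app.ticker.add(() => {
      const t = performance.now() / 1000;
      ropes.forEach((points, r) => {
        points.forEach((p, i) => {
          // A phase offset per row makes the rows move like a tilted surface.
          p.y = Math.sin(t * 2 + i * 0.5 + r * 0.8) * 10;
        });
      });
    });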

  3. You could extract the frames from the canvas with toDataURL, or use the extract plugin to get the actual pixel data of a single frame. Then gather all of the frames and encode them into a video with some encoder; I don't know off-hand if one exists in the browser, but most likely someone has made one.

    You can do stacked rendering in multiple different ways.

    - Have 2 containers that move at different speeds (sketched below).
    - Use overlapping canvases, each with its own renderer, and move them.
    - Have each of your objects in the scene carry a depth value and move everything based on that.
    - Use render textures to create the stage, use those with a TilingSprite and offset them at different speeds.

    Different methods have different limitations & benefits.
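
    As a rough sketch of the first option (two containers scrolled at different speeds; app is assumed to be a PIXI.Application):

    // Two containers scrolled at different speeds give a simple stacked/parallax effect.
    const back = new PIXI.Container();
    const front = new PIXI.Container();
    app.stage.addChild(back, front);

    // ...fill both containers with sprites...

    app.ticker.add((delta) => {
      const cameraSpeed = 2 * delta;
      back.x -= cameraSpeed * 0.5; // background moves slower
      front.x -= cameraSpeed;      // foreground moves at full speed
    });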

  4. 13 hours ago, brawlflower said:

    the context.audioContext.currentTime is globally available from the PIXI.sound.Sound object, though it doesn't look like it starts at 0 when the song starts playing, so I'm not totally sure what this signifies and I haven't been able to find documentation on it unfortunately

    The currentTime is a value that starts increasing when you start the context. That's why the timing is done with current-start calculation. https://developer.mozilla.org/en-US/docs/Web/API/BaseAudioContext/currentTime

    80 ms sounds like it is not being calculated correctly. I haven't used pixi-sound myself, so I'm not really sure what might be wrong with the progress event. If the audio is played without WebAudio (which shouldn't happen on a modern browser unless explicitly told to do so), then those delays sound possible.

  5. Usually with WebAudio you can use the AudioContext's currentTime and store the starting time when the song is started. From that you can calculate position = context.currentTime - soundStartedAt. This should have no delay at all.

    Are you using pixi-sound? Checking that source code, the progress event looks like it should have the correct timing. Can you check whether it uses WebAudio internally or HTMLAudio? The latter does not have exact timing available.
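
    A minimal sketch of that timing approach with plain WebAudio (not pixi-sound specific; buffer is assumed to be an already decoded AudioBuffer):

    const ctx = new AudioContext();
    let soundStartedAt = 0;

    function play(buffer) {
      const source = ctx.createBufferSource();
      source.buffer = buffer;
      source.connect(ctx.destination);
      soundStartedAt = ctx.currentTime; // remember when playback began
      source.start();
    }

    function getPosition() {
      // Playback position in seconds since the sound was started.
      return ctx.currentTime - soundStartedAt;
    }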

  6. An easy way to get better-looking downscaling is to find the point where your game starts looking bad and then, instead of scaling down the game containers/elements, keep that good-looking resolution and scale down the canvas element. This way the downscaling is not limited by WebGL constraints and can use the algorithm the browser uses natively. It's a little bit hacky, but it's a well-working workaround that can be achieved with little effort.
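
    A small sketch of what I mean, assuming a fullscreen game and 1280x720 as the "good looking" resolution:

    // Keep the internal resolution at the good-looking size...
    renderer.resize(1280, 720);

    // ...and let the browser's native scaler shrink only the canvas element.
    renderer.view.style.width = window.innerWidth + 'px';
    renderer.view.style.height = window.innerHeight + 'px';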

  7. You could make a filter that takes the world texture and the light mask as inputs and then draws the world only where the light has a value at that position, keeping everything else hidden.

    Something like this:

    vec4 world = texture2D(worldTex, uv);
    float light = texture2D(lightTex, uv).a; // using only one channel for light; this could also be light color + alpha for intensity
    gl_FragColor = mix(vec4(0.0, 0.0, 0.0, 1.0), world, light);
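
    One way to wrap that into a filter (PixiJS v5 style; lightTexture is assumed to be a texture or render texture holding the light mask, and mapping the filter's coordinates to the light texture may need extra care):

    const fragment = `
      varying vec2 vTextureCoord;
      uniform sampler2D uSampler;  // the filtered content, i.e. the world
      uniform sampler2D lightTex;  // the light mask

      void main(void) {
        vec4 world = texture2D(uSampler, vTextureCoord);
        float light = texture2D(lightTex, vTextureCoord).a;
        gl_FragColor = mix(vec4(0.0, 0.0, 0.0, 1.0), world, light);
      }
    `;

    const lightFilter = new PIXI.Filter(undefined, fragment, { lightTex: lightTexture });
    worldContainer.filters = [lightFilter];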

  8. Do you have only a canvas on the page? Then using Lighthouse won't give very much detail, as it has no components to analyze canvases, only the DOM side of things, load speeds and stuff like that. Most likely the LCP is for the canvas element and for some reason it fails to be analyzed; it should be handled similarly to images. Do you have an example of a site where it fails?

  9. You have basically two options.

    1. Render the expensive stuff into a separate rendertexture and use that as you would any other sprite. Rerender the rt when things change.

    2. Use two canvases. Update the expensive canvas only when needed.

    [Edit] For option 1 you can use cacheAsBitmap = true to create an RT of a container and use that instead of the whole render list. Though I'd suggest custom handling with your own RT, as debugging cacheAsBitmap can be a nightmare if there are errors.
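
    A rough sketch of option 1 (PixiJS v5 signatures; expensiveContainer is a placeholder for whatever is costly to draw):

    // Render the expensive container into a RenderTexture once and show it via a sprite.
    const rt = PIXI.RenderTexture.create({ width: app.screen.width, height: app.screen.height });
    const cached = new PIXI.Sprite(rt);
    app.stage.addChild(cached);

    let dirty = true; // set this to true whenever the expensive content changes

    app.ticker.add(() => {
      if (dirty) {
        app.renderer.render(expensiveContainer, rt);
        dirty = false;
      }
    });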

  10. Check the network tab to see if you are still getting the CORS error. To fix that, you would need to either run the game on the same domain as the images (so the CORS rules don't apply) or have the server send an Access-Control-Allow-Origin header that covers your domain or is just a wildcard. The renderer's null error might just be due to the asset not being loaded because CORS blocks it.

  11. 2 hours ago, ShoemakerSteve said:

    So Sprites definitely helped, but I'm trying to push it a bit farther, and there seems to be way more render calls than there should be. Here's one of my better frames:

    [attached screenshot of the profiler frame]

    Looks like there's 9 calls to AbstractBatchRenderer.flush(), even though they're all using the same Texture/BaseTexture. The only thing changing in this frame is that drawCells() is changing the tint of a bunch of the Sprites. Does the default batch renderer not automatically know how to handle this or am I doing something wrong here?

    Here's my code for drawCells(): (I'm running the game logic with wasm in a web worker)

    
    const drawCells = async () => {
      const cells = await worker.tick();
    
      for (let row = 0; row < height; row++) {
        for (let col = 0; col < width; col++) {
          const idx = getIndex(row, col);
    
          if (cells[idx] === Cell.Alive) {
            cellSprites[idx].tint = ALIVE_COLOR;
          } else if (cellSprites[idx].tint !== DEAD_COLOR) {
            cellSprites[idx].tint = DEAD_COLOR;
          }
        }
      }
    };

    And my ticker is literally just this:

    
    app.ticker.add(async () => await drawCells());

     

    How many sprites do you have? There's a batch size limit; when it is reached, the current batch is rendered and a new one is started. You can change that size with PIXI.settings.SPRITE_BATCH_SIZE. The default is 4096.
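
    For example, to raise the limit (set it before the renderer is created; 16384 is just an illustrative value):

    PIXI.settings.SPRITE_BATCH_SIZE = 16384;
    const app = new PIXI.Application({ width: 800, height: 600 });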

  12. 2048x2048 is always a safe bet. It's supported by virtually all devices that can run WebGL. There used to be a site that collected statistics on different device data, but it seems to be gone from the internet. You could use gl.getParameter(gl.MAX_TEXTURE_SIZE) to get the largest dimension the device supports (small sketch below).

    The maximum I would go is 4096x4096, and for devices that report less, either use a downscaled version or split into multiple textures.
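
    A small sketch of querying that limit before deciding on a texture size:

    const gl = document.createElement('canvas').getContext('webgl');
    const maxSize = gl ? gl.getParameter(gl.MAX_TEXTURE_SIZE) : 2048; // fall back if webgl is unavailable
    const atlasSize = maxSize >= 4096 ? 4096 : 2048; // never go above 4096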

  13. The bottleneck when rendering squares is just the number of squares you would need to render if you used basic sprites.

    Rendering the squares to a render texture and then rendering that texture to the screen would make the frames where nothing changes faster. But when the render texture needs to be rerendered, it would still take some time.

    And in WebGL the whole frame gets repainted every time.

    If it's enough to have single pixels as the squares and you don't need anything fancy like borders or textures for them, you could use an additional 2D canvas, do the game of life operations on that, and then render that canvas inside pixi. That way the game of life updates only the canvas data, and you still get smooth scrolling/zooming if needed.
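
    A rough sketch of that canvas-inside-pixi idea (names are placeholders; updateCells would contain the actual game of life logic):

    const golCanvas = document.createElement('canvas');
    golCanvas.width = 1000;
    golCanvas.height = 1000;
    const ctx = golCanvas.getContext('2d');

    const golTexture = PIXI.Texture.from(golCanvas);
    const golSprite = new PIXI.Sprite(golTexture);
    app.stage.addChild(golSprite); // scroll/zoom by moving and scaling this sprite

    function updateCells() {
      // ...draw one pixel (or fillRect) per cell on ctx...
      golTexture.baseTexture.update(); // tell pixi the canvas content changed
    }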

  14. A very simple optimization: instead of graphics, use sprites with a single white rectangle as their base texture and apply a tint to color each sprite. That way the squares can be rendered as one batch. That should be good enough for 150*200 squares (30k sprites), but for 1000x1000 (1M squares) you need to go deep into WebGL rendering or have some other optimization strategy. Or would all of those squares be visible at the same time? If not, it would be doable by separating logic from rendering and only rendering a subsection of the whole area.
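
    A minimal sketch of the tinted white-texture approach for a 150*200 grid (cell size and colors are just placeholders):

    const cols = 200;
    const rows = 150;
    const cellSize = 4;
    const sprites = [];

    for (let i = 0; i < cols * rows; i++) {
      const s = new PIXI.Sprite(PIXI.Texture.WHITE); // shared white texture keeps everything in one batch
      s.width = cellSize;
      s.height = cellSize;
      s.x = (i % cols) * cellSize;
      s.y = Math.floor(i / cols) * cellSize;
      s.tint = 0x333333; // color each cell by changing the tint
      app.stage.addChild(s);
      sprites.push(s);
    }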

    And here's a little rundown about different graphic objects:

    - Graphics: Dynamically drawn content. Use when you need to draw lines, shapes, etc. Be aware that updating graphics every frame might be costly depending on the complexity.
    - Sprites: Sprites basically just tell what texture to draw and where, with a given rotation, tint and scale. Sprites are among the cheapest objects in pixi.
    - Textures: A texture is a region of a baseTexture. It tells what part of the baseTexture should be drawn. When using spritesheets the difference between texture and baseTexture is very noticeable. When using regular images, a texture usually points to the whole baseTexture and says "I want to render all of it".
    - BaseTexture: BaseTextures represent a single image in memory.
    - Mesh: Meshes are renderables that have customizable vertices. You could think of a sprite as a mesh with 4 vertex points (top left, top right, bottom right and bottom left). With meshes you control how your polygons are formed. There are some premade mesh classes that provide useful shapes: SimpleRope, SimpleMesh and SimplePlane. Those abstract some of the complexity away.

    And when to use them:
    Graphics: Dynamically drawn content.
    Sprites: Images with basic affine transformations (scale, rotation, position) and basic color transformation (tint, alpha).
    Textures & BaseTexture: Pretty much always if you have some images to use. Very often these get handled automatically.
    Mesh: When you need deformations.

    Also here's a short instruction on shaders:

    Modern graphics cards have a pipeline where you tell which program (vertex + fragment shader) you want to use, which vertices it gets as input and which uniforms (plus other stuff that I won't go into at this point). For each vertex it runs the vertex shader program; this basically calculates where on the screen the point should be. Then, for the polygons formed by these vertices, it runs the fragment shader for every pixel that is inside the polygon. The fragment shader returns the color value for that pixel. The uniforms mentioned before are values that stay the same for both the vertex and the fragment shader across all vertices and pixels; they are used to pass values needed to calculate the outputs. In the sprite fragment shader, "tint" is a uniform that is multiplied with the texture value.

    So basically your GPU renders WebGL like this (simplified): list of points to draw -> vertex shader -> find out which pixels are affected -> fragment shader -> pixel to screen.

  15. You could do that by creating a shader with two texture inputs that blends between them. The actual code & math inside the shader is out of my scope in a reasonable time.

    As a starting point I would probably try doing some kind of convolution with the previous frame as feedback, moving towards the target image.

    This is the closest example for blending. In it, a Perlin noise image is used to determine the blending; in your case the shader would somehow morph between the images instead.

    https://pixijs.io/examples/#/mesh-and-shaders/multipass-shader-generated-mesh.js

  16. The best way is to make the textures in multiple sizes beforehand and then load only the ones that match your wanted size.

    In theory you could also do the resizing and dynamic texture generation on the client, but you would suffer quality-wise compared to doing the resolution changes beforehand with software that has better scaling algorithms.

    Anyway, if you want to avoid uploading large textures to the GPU while still starting from the original-sized assets, here's a short way you could do that (sketched after the list):

    - Create a temp canvas that is your target resolution size.
    - Get a 2D context from it.
    - Draw your image to that canvas, scaled to the correct size.
    - Create a baseTexture that uses that canvas as its source.
    - Create your sprite using that baseTexture.
    - Destroy the original image and its baseTexture, and remove it from the loader.

    I do not recommend using this method; rather, look into how you can have separate asset resolutions and load only the correct one.
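
    A quick sketch of that list, using PIXI.Texture.from as a shortcut for the baseTexture step (loadedImage is assumed to be an already loaded HTMLImageElement):

    function makeScaledTexture(image, targetWidth, targetHeight) {
      const canvas = document.createElement('canvas');
      canvas.width = targetWidth;
      canvas.height = targetHeight;

      // The browser's 2D canvas scaler does the downscaling here.
      canvas.getContext('2d').drawImage(image, 0, 0, targetWidth, targetHeight);

      return PIXI.Texture.from(canvas);
    }

    const sprite = new PIXI.Sprite(makeScaledTexture(loadedImage, 512, 512));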

  17. There's plenty you can do with compression.

    For png/jpg images the size of the original file doesn't change how much GPU memory that image takes, as those images get unpacked. But you could use compressed textures to skip decompression and lower GPU memory usage. https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Compressed_texture_formats

    One thing I'm not really sure about is whether an 8-bit png becomes an 8-bit or a 32-bit image after decoding.

    When it comes to load-related compression (targeting minimal download time, with no compressed textures needed), my basic workflow is the following:

    - Use spritesheets to group similar colored assets together (without breaking batching).
    - Group assets that don't need transparency together.
    - Export all as non-packed png.
    - For images without transparency, update the json to use jpg and encode the jpg with Google Guetzli (https://github.com/google/guetzli).
    - Quantize the png images to 8-bit versions and do a visual check that the quality doesn't drop too much. I use PNGoo for this. (https://pngquant.org/)
    - Run all png assets through PNGGauntlet (https://pnggauntlet.com/). It's a tool that combines PNGOUT, OptiPNG and DeflOpt and picks the smallest png.

    And do lots of profiling between steps to see what actually has a positive impact.
