
Exca

Members
  • Content Count: 364
  • Joined
  • Last visited
  • Days Won: 12

Everything posted by Exca

  1. The problem with that is that the line gets really long as new points are added to it and old points are never removed. Do you need to be able to scroll back? If not, then you could keep track of all the points the line needs to draw, clear the line graphics every time a new point is added, and remove the points that have gone past the screen (a sketch of this is included after the list).
  2. Haxe is awesome for people who don't like TypeScript. Though with that there are no proper externs for v5 yet.
  3. Hmm, if the x & y scale are the same, then the aspect should be identical to the base texture. There seem to be two problems causing the issue in the example. First, you need to use texture.width instead of sprite.width, as sprite.width has the scale calculated into it. The second problem is with the filter: having it on makes the image stretched, taking it off makes the image look proper. Not really sure what goes wrong in the shader.
  4. I have a method where there are two different kinds of changes that can happen in the scene: tweens, and handlers that get run every frame. For tweens I check if the amount of active tweens is greater than 0; if so, something is changing. For onrender handlers I have a custom component that allows components to register themselves to get onrender events, and their return value tells if something has updated. If either a tween has updated or any of the components requires render time, then that frame is rendered. Otherwise no render occurs (see the render-on-demand sketch after this list). In some cases this can be improv…
  5. You could scale the sprite:

     // Calculate the scale needed to cover the whole screen area.
     // Use Math.min instead of Math.max if the whole image should stay visible rather than cover the screen.
     var scale = Math.max(window.innerWidth / sprite.texture.width, window.innerHeight / sprite.texture.height);
     sprite.scale.set(scale, scale);
     // Center the sprite.
     sprite.x = Math.round((window.innerWidth - sprite.width) / 2);
     sprite.y = Math.round((window.innerHeight - sprite.height) / 2);
  6. You could get really good looking fonts by using SDF fonts (signed distance field), though implementing one requires a lot of knowledge about shaders. Luckily there's a plugin https://github.com/PixelsCommander/pixi-sdf-text What I usually do, though, is have a basic text field with double the font size and then use it at 50% scale, and try to avoid going over 125% scaling, as then it starts to look pretty bad (see the sketch after this list). With the method of forcing the texture to power-of-two size, I think you also need to mention that the texture should use mipmapping.
  7. Would do that if I could. The image generation part is done elsewhere; the same thing saves everything the client sees on screen, including DOM elements. Or maybe I could create an offscreen canvas from the extract data and use that as a replacement when the image is being generated (see the extract sketch after this list). Need to look into it.
  8. From the WebGL specification: "The WebGLContextAttributes object is only used on the first call to getContext. No facility is provided to change the attributes of the drawing buffer after its creation." So can't do that.
  9. Is it possible to change preserveDrawingBuffer after renderer initialization in v4? I have a case where the player can save a screenshot from a button, and the data is rendered to a 2D canvas with extra info, which is then offered to the player. If I use preserveDrawingBuffer it works fine. If I toggle it off, a black screen is saved. Having preserve on, though, has a bad impact on performance on certain devices, so it would be better to use it only when needed. Any way to achieve this?
  10. The method was based on this: https://github.com/mattdesl/lwjgl-basics/wiki/2D-Pixel-Perfect-Shadows
  11. I have done a similar thingie with v3, though it was heavily optimized with assumptions about what kind of lights there were, what the world size was, etc. What I did, in short, was this:
      - Build a texture (2048x2048) with all the shadow casters in it.
      - For each light, do a raycast in 512 directions (on the GPU) and build a 1x512 texture with each pixel value telling how far the light can travel from that point.
      - Render the lights using that texture (calculate the angle, check if the distance is over the value in the texture; if it is, ignore the pixel, otherwise draw the light with the light settings).
      - Draw ligh…
  12. You can do that with a vertex shader by giving each vertex a color, and then the GPU interpolates between those points. Or you can do the calculation in a fragment shader to get a more refined look. Here's an example of how it could be done with a fragment shader https://www.shadertoy.com/view/tls3zS For info on how the color palette is done, read this article http://www.iquilezles.org/www/articles/palettes/palettes.htm
  13. You can also use Safari remote debugging https://www.lifewire.com/activate-the-debug-console-in-safari-445798 Though that only works on macOS.
  14. With iOS devices it's pretty hard to get error logs without macOS and a developer-unlocked phone. At least I haven't found a way to easily get errors out from iOS (where the page has a complete crash).
  15. Yep, Apple related. Haven't gotten the Mac mini to crash properly. Had it for one day only, so didn't have much time to test.
  16. I have had this exact same issue and haven't found any common factor except resource usage. At first I thought it was about the WebGL context being lost, but then I managed to get it to happen on a 2D-context-only page. Also reducing the amount of sounds used seemed to help in some cases, but not in all. Currently I'm pretty sure it has something to do with how much RAM the game uses, as I can get it to occur much more easily on an iPad mini 1 vs. iPad mini 2 vs. Mac mini.
  17. You can use touchstart and touchend to track if the user is holding down. Or pointerdown and pointerup (see the sketch after this list).
  18. If you have so many canvases that the WebGL context limit starts to break things, you could use some canvases with a 2D context. Also, one thing to keep in mind is to only rerender the contexts that are actually visible / need rerendering; that way you can keep the performance hit to a minimum. A custom application/render loop is something you need in any case, as pointed out already by Ivan.
  19. There are plenty of responsive design threads with more info, but here's a short list of how canvas applications are usually made responsive:
      - Have a fixed canvas resolution. Use CSS to scale it to the wanted size. Keep the aspect locked (see the sketch after this list).
      - Resize the canvas and scale the elements inside to fit the wanted area. The game aspect stays fixed.
      - Build some logic inside your game that handles resizing into different aspect ratios and different sizes.
      What I usually do is have some logic that positions UI elements depending on device resolution, then have the main game area fill one dimension with fix…
  20. AnimatedSprite requires a list of the textures you wish to use. It should be something like this:

      var textures = [
          PIXI.Texture.fromFrame("..."),
          ...
      ];
      this.symbol = new PIXI.AnimatedSprite(textures);
  21. This thread is an awesome idea. Could it even be made sticky? I would love to have an indication of what kind of examples people want.
  22. On the left is the simple method: sort based on the y-coordinate and offset by anchors / pivots. This is basically what you have, but your anchors are all at the default 0,0 (see the sorting sketch after this list). The image shows the classic issue with that method. One way to make it better would be to define a simple shape (yellow lines) that determines where the object's sorting point is at that position. After that you could do a simple line check algorithm where you cast a ray from the top and see which line is hit first. That is the object that should be at the back, the next one on top of that, and so on. Do this for all regions and each of you…
  23. Basically the problem comes from a case where you have objects that should be behind one layer and in front of another. One way to solve this would be to split the sprites into multiple textures that have minimal changes in sorting areas. Another would be to do the sorting so that instead of sorting the whole container as is, you would have multiple overlapping regions (with zIndex this is much easier now than before), each of which would calculate the sorting inside a single limited region. I have done one isometric project with the trivial kind of sorting (that you already have). The way you…
  24. You could create a simple test case where you have a canvas, draw the video there and then read a single pixel from it (see the sketch after this list). If it gives the same error, then the video causes the tainting. If not, then it's something else.
  25. Do you draw something other than the video to the canvas? If something taints a canvas, it stays tainted no matter what is rendered afterwards. Or it might be due to the stream becoming unavailable for a short duration at some point, and that could cause tainting (though the bug report I found on this should already be resolved; it was from 5 years ago). Pretty sure it's some kind of edge case in the security constraints which causes the canvas to become tainted (by something), which then causes a security error when pixels are read from it.
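
A minimal sketch of the point-pruning idea from post 1, assuming a PIXI.Graphics line that scrolls left by some speed each frame (the names points, addPoint and updateLine are placeholders for illustration):

    // Keep the points in a plain array and redraw the graphics from scratch when they change.
    var line = new PIXI.Graphics();
    var points = [];

    function addPoint(x, y) {
        points.push({ x: x, y: y });
    }

    function updateLine(scrollSpeed) {
        // Move every point left and drop the ones that have gone past the screen.
        for (var i = 0; i < points.length; i++) {
            points[i].x -= scrollSpeed;
        }
        while (points.length > 0 && points[0].x < -10) {
            points.shift();
        }
        // Clear and redraw only the points that are still visible.
        line.clear();
        if (points.length > 1) {
            line.lineStyle(2, 0xffffff);
            line.moveTo(points[0].x, points[0].y);
            for (var j = 1; j < points.length; j++) {
                line.lineTo(points[j].x, points[j].y);
            }
        }
    }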
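
A rough sketch of the render-on-demand loop from post 4, assuming a hypothetical tween system that exposes an activeTweenCount and components that register onrender handlers returning true when they changed something (tweens, renderer and stage are assumed to exist):

    var renderRequested = false;
    var onRenderHandlers = [];

    function registerOnRender(handler) {
        onRenderHandlers.push(handler);
    }

    function frame() {
        var needsRender = renderRequested;
        // Something is animating if there are active tweens.
        if (tweens.activeTweenCount > 0) needsRender = true;
        // Each handler returns true if it changed something this frame.
        for (var i = 0; i < onRenderHandlers.length; i++) {
            if (onRenderHandlers[i]()) needsRender = true;
        }
        // Render only when something actually changed.
        if (needsRender) {
            renderer.render(stage);
            renderRequested = false;
        }
        requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);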
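
The double-font-size trick from post 6 boils down to this sketch (the style values are arbitrary examples):

    // Render the text at twice the wanted size, then use it at 50% scale.
    // Scaling it up later to roughly 125% of the intended size still looks acceptable.
    var text = new PIXI.Text("Hello", { fontFamily: "Arial", fontSize: 48, fill: 0xffffff });
    text.scale.set(0.5);
    stage.addChild(text);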
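
For posts 7-9, a sketch of the extract idea: instead of relying on preserveDrawingBuffer, use the extract plugin, which renders the display object to a render texture and reads the pixels from there, so the cleared drawing buffer does not matter. This assumes a v4/v5 renderer where the plugin is available as renderer.plugins.extract:

    function takeScreenshot(renderer, stage) {
        // Extract renders the stage to a render texture and returns a canvas copy of it.
        var gameCanvas = renderer.plugins.extract.canvas(stage);
        // Compose the screenshot with the extra info on a separate 2D canvas.
        var out = document.createElement("canvas");
        out.width = gameCanvas.width;
        out.height = gameCanvas.height;
        var ctx = out.getContext("2d");
        ctx.drawImage(gameCanvas, 0, 0);
        // ...draw the extra info here...
        return out.toDataURL("image/png");
    }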
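
A sketch of the hold tracking from post 17 using Pixi's pointer events, which cover both mouse and touch (sprite is an assumed interactive display object):

    var holding = false;
    sprite.interactive = true;
    sprite.on("pointerdown", function () { holding = true; });
    sprite.on("pointerup", function () { holding = false; });
    sprite.on("pointerupoutside", function () { holding = false; });

    // Then e.g. in the game loop:
    // if (holding) { chargePower += delta; }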
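
A sketch of the first option in post 19: a fixed internal resolution, scaled to the window with CSS while keeping the aspect locked (1280x720 is just an example resolution, renderer.view is the Pixi canvas):

    var GAME_WIDTH = 1280;
    var GAME_HEIGHT = 720;

    function resize(canvas) {
        // Fit the fixed-resolution canvas inside the window without changing its aspect.
        var scale = Math.min(window.innerWidth / GAME_WIDTH, window.innerHeight / GAME_HEIGHT);
        canvas.style.width = Math.round(GAME_WIDTH * scale) + "px";
        canvas.style.height = Math.round(GAME_HEIGHT * scale) + "px";
    }

    window.addEventListener("resize", function () { resize(renderer.view); });
    resize(renderer.view);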
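
A sketch of the simple depth sort described in post 22: put each sprite's anchor at its "feet" so sprite.y is the ground contact point, then sort the container's children by y so objects lower on the screen draw on top:

    // Anchor at the bottom center so sprite.y marks where the object touches the ground.
    sprite.anchor.set(0.5, 1.0);

    // Sort children so objects with a larger y (lower on screen) render last, i.e. on top.
    function depthSort(container) {
        container.children.sort(function (a, b) {
            return a.y - b.y;
        });
    }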
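
A sketch of the taint test from post 24, assuming video is the HTMLVideoElement being drawn:

    var canvas = document.createElement("canvas");
    canvas.width = 16;
    canvas.height = 16;
    var ctx = canvas.getContext("2d");
    ctx.drawImage(video, 0, 0, 16, 16);
    try {
        // Reading pixels throws a SecurityError if the video tainted the canvas.
        ctx.getImageData(0, 0, 1, 1);
        console.log("Video did not taint the canvas.");
    } catch (e) {
        console.log("Video tainted the canvas:", e);
    }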