pixelburp

  1. Apologies if this has either been asked before, or I'm misunderstanding how to use RenderTextures.

     I have a RenderTexture that outputs the contents of a Phaser.Image(), which in turn contains an instance of Phaser.BitmapData(). In my update() loop I want to update the position of some geometry drawn in the BitmapData, but those changes are never seen in the RenderTexture. What would be the reason for this?

     Like I said, I could be misunderstanding how & why you might want to use a RenderTexture, but this portion of my code will be quite expensive & updated regularly, so it seemed like a good idea to pass this logic out to the GPU if possible, which is what a RenderTexture allows, right? I'm open to better ideas if I do indeed have the wrong idea!

     Below is the code: it's written in TypeScript and the class extends Phaser.Sprite(), so hopefully it makes sense even to those only familiar with JS. As you can see, in my update() I'm redrawing a bitmapData.circle() with a new x position, then rendering back to the RenderTexture again. However, the circle never moves from its original position of 0. If I console.log the x value, it's clearly updated per tick.

     ```typescript
     constructor(game: Phaser.Game, map: any) {
       const texture = new Phaser.RenderTexture(game, map.widthInPixels, map.heightInPixels, key);
       const bitmap = new Phaser.BitmapData(game, 'BITMAP', map.widthInPixels, map.heightInPixels);

       super(game, 0, 0, texture);

       this.bitmap = bitmap;
       this.image = new Phaser.Image(game, 0, 0, this.bitmap);
       this.texture = texture;

       // Other code to add this Sprite() to the stage.
     }

     private update() {
       const bitmap = this.bitmap;

       // Redraw the circle at the new x position, then render back to the texture.
       bitmap.clear();
       bitmap.circle(x, 100, 50, 'rgb(255, 255, 255)');
       this.texture.render(this.image);

       x += 10;
     }
     ```
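A likely culprit, assuming Phaser CE 2.x behaviour: the RenderTexture works from a cached GPU copy of the image's texture, and redrawing the underlying BitmapData canvas doesn't invalidate that cache unless the bitmap is flagged as dirty (e.g. setting `this.bitmap.dirty = true;` before the render call). That general pattern can be sketched framework-free; `CachedLayer` and `markDirty` below are hypothetical names for illustration, not Phaser API:

```typescript
// Minimal sketch of texture caching behind a dirty flag (hypothetical names,
// not Phaser API): the cached copy only refreshes when markDirty() is called,
// mirroring why a redrawn BitmapData may not show up in a RenderTexture.
class CachedLayer {
  private source: number[] = [];  // stands in for the BitmapData canvas
  private cache: number[] = [];   // stands in for the uploaded GPU texture
  private dirty = false;

  draw(pixels: number[]): void {
    this.source = pixels.slice(); // redraw the source; the cache is untouched
  }

  markDirty(): void {
    this.dirty = true;            // the equivalent of bitmap.dirty = true
  }

  render(): number[] {
    if (this.dirty) {             // re-upload only when flagged
      this.cache = this.source.slice();
      this.dirty = false;
    }
    return this.cache;           // otherwise serve the stale cached copy
  }
}
```

If this is the cause, a redraw without the flag keeps serving the stale copy, which would match the circle appearing stuck at x = 0 even though x itself advances every tick.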
  2. Great, that's interesting @feudalwars, thanks for the tips. Am I right in thinking your suggestion is a binary "you either see it or you don't" mechanic?

     I wonder then how to create the same vision 'blob', only with 3 states of visibility: black => half-opacity => visible, which is where the performance started to drag. I thought of keeping a cache of sprites' movements, so that I could set the 'last' location to half-opacity once you left the radius, but that became expensive even just to render those cached areas.

     The RenderTexture suggestion sounds interesting: would it be more performant than using Phaser.Image() <= Phaser.BitmapData()?
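The three-state idea can be separated from rendering entirely: given a tile's distance to the nearest unit and whether it has ever been seen, pick one of the three states. A minimal sketch, where `visibilityAlpha` and the 0.5 half-opacity value are illustrative choices, not anything from Phaser:

```typescript
// Sketch of a 3-state fog lookup: returns the alpha to paint over a tile.
// 1.0 = fully black (unexplored), 0.5 = explored but out of sight,
// 0 = currently visible. Hypothetical helper, not a Phaser API.
function visibilityAlpha(
  distToNearestUnit: number,
  sightRadius: number,
  everSeen: boolean
): number {
  if (distToNearestUnit <= sightRadius) return 0; // inside a unit's radius
  if (everSeen) return 0.5;                       // previously explored
  return 1;                                       // still hidden
}
```

Keeping the state decision in a pure function like this means the expensive part (repainting) only needs to run for tiles whose alpha actually changed since the last tick.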
  3. First time poster, long time lurker.

     Short version: is it possible to chain, or merge, the blendModes of multiple display objects? Say I have a Phaser.Image() with blendMode set to MULTIPLY, its contents varying rgb() values to either hide or partially hide what's beneath. Above it could be another display object, also set to MULTIPLY, but with rgb(255, 255, 255) - AND - its blendMode chains with the Phaser.Image(), so this object is transparent. It doesn't feel possible, but I'm not 100% sure, nor what a good alternative might be. I'm not even sure I'm explaining this correctly.

     Why? Well, I'm trying to build a functioning, dynamic 'Fog of War', as seen in RTS games. Ideally I want something like 'Starcraft', where areas on the map can be:

     • Hidden / unexplored
     • Already explored, but outside any 'unit' radius of visibility
     • Currently within the visibility radius of a player sprite

     Yes, in principle this is fairly easily done with a grid of square tiles, but I want to do something a little ... rounder and more dynamic: proper 'blobs' of fog rolling back and forth rather than arbitrary square tiles flicking ON/OFF. It seems like something you could do with WebGL stencil buffers, but ATM that feels beyond my capabilities.

     So my initial idea was to have a Phaser.Image(), blendMode set to MULTIPLY, containing a BitmapData() grid of circular 'tiles', their fill() set to different shades from white to black depending on hidden/explored/visible. It's a bit crude & performance isn't great once a lot of player sprites start moving on the map. I'm experimenting with ways to improve speed, and one idea was chaining blendModes so the bigger fog grid didn't have to make so many re-renders. Hopefully that makes sense?
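For intuition on why a chain may collapse to a single layer: MULTIPLY blending is, per channel, roughly out = top × bottom / 255, so white (255) is the identity (it leaves whatever is underneath unchanged) and black (0) always wins. Two stacked MULTIPLY layers therefore compose into one multiplication, which suggests a single BitmapData repainted with shades of grey can encode all the fog states at once. A sketch of the arithmetic in plain TypeScript (the function names are mine, not Phaser's):

```typescript
// Per-channel MULTIPLY blend: 255 (white) is the identity, 0 (black) wins.
function multiplyChannel(bottom: number, top: number): number {
  return Math.round((bottom * top) / 255);
}

// Composing several MULTIPLY layers is itself a multiply, so a stack of
// fog layers collapses into one layer holding the product of the shades.
function composeLayers(base: number, shades: number[]): number {
  return shades.reduce((acc, shade) => multiplyChannel(acc, shade), base);
}
```

Under that reading, the pure-white layer in the question is a no-op over the MULTIPLY fog beneath it, so the chained effect could be precomputed into the fog BitmapData itself rather than paid for per frame.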