
  1. Thanks @rich for clarifying those two things. I did presume V3 was probably still a fair bit in the future, but I was very curious / anxious about whether the new version would be a major revision to the framework's structure. Good to hear that eventually not too much will require refactoring :-D
  2. Just wondering as an aside (and apologies if this was already asked and answered), but what is the expected difficulty curve when updating code from Phaser 2 to Phaser 3? I appreciate v3 is a WIP, but is it likely to be a fairly simple transition, just with a few gotchas? Or will it be something similar to the Angular framework, where there was a wholesale top-to-bottom change between versions 1 and 2+? If not done already, it might be worth an article from the Phaser team themselves. I have a few WIP projects, and part of me thinks it might be worth waiting until Phaser 3 arrives, as I'm worried I may have to refactor large chunks of my code...
  3. I see the plugin's a bit dormant ATM, I think, but perhaps you guys have had enough experience to answer what _seems_ like a simple question: given I have a Phaser.Group() of Isometric.ISOSprite() acting as the 'floor' tile-map, is it possible to constrain the world's bounds based on this Group? The group will be dynamic, so the number of sprites will change. I had initially thought simply going game.world.setBounds(myGroup.x, myGroup.y, myGroup.width, myGroup.height); would work, but of course those coordinates don't accurately reflect the 'true' dimensions of the isometric grid, given the plugin projects the coordinates. I tried a couple of guesses/combinations, but I always got 'weird' bounds as a result. Any ideas? Hopefully the answer's simple!
  4. Apologies if this has either been asked before, or I'm misunderstanding how to use RenderTextures. I have a RenderTexture that's outputting the contents of a Phaser.Image(), which in turn contains an instance of Phaser.BitmapData(). In my update() loop, I want to update the position of some geometry drawn in the BitmapData, but those changes are never seen in the RenderTexture. What would be the reason for this? Like I said, I could be misunderstanding how and why you might want to use a RenderTexture, but this portion of my code will be quite expensive and updated regularly - so it seemed like a good idea to pass this logic out to the GPU if possible, which is what a RenderTexture allows, right? I'm open to better ideas if I do indeed have the wrong idea! Below is the code: it's written in TypeScript and the class extends Phaser.Sprite(), so hopefully it makes sense even to those only familiar with JS. As you can see, in my update() I'm redrawing a bitmapData.circle() with a new x position, then rendering back to the RenderTexture again. However, the circle never moves from its original position of 0. If I console.log out the x value, it's clearly updated per tick.

     ```typescript
     constructor(game: Phaser.Game, map: any) {
         const texture = new Phaser.RenderTexture(game, map.widthInPixels, map.heightInPixels, key);
         const bitmap = new Phaser.BitmapData(game, 'BITMAP', map.widthInPixels, map.heightInPixels);

         super(game, 0, 0, texture);

         this.bitmap = bitmap;
         this.image = new Phaser.Image(game, 0, 0, this.bitmap);
         this.texture = texture;

         // Other code to add this Sprite to the stage.
     }

     private update() {
         const bitmap = this.bitmap;

         bitmap.clear();
         bitmap.circle(x, 100, 50, 'rgb(255, 255, 255)');

         this.texture.render(this.image);

         x += 10;
     }
     ```
  5. Great, that's interesting @feudalwars, thanks for the tips. So am I right in thinking your suggestion is more of a binary "you either see it or don't" mechanic? I wonder then how to create the same vision 'blob', only with 3 states of visibility: black => half-opacity => visible, which is where the performance started to drag. I thought of keeping a cache of sprites' movements, so that I could set the 'last' location as half-opacity once you left the radius, but that became expensive even just to render those cached areas. The RenderTexture suggestion sounds interesting - would it be more performant than using Phaser.Image() <= Phaser.BitmapData()?
  6. First time poster, long time lurker. Short version: is it possible to chain, or merge, the blendModes of multiple display objects? Say:

     - I have a Phaser.Image() with blendMode set to MULTIPLY, its contents varying rgb() values to either hide or partially hide what's beneath.
     - Above it could be another display object, also set to MULTIPLY, but with rgb(255, 255, 255) - AND - the blendMode chains with the Phaser.Image() so this object is transparent.

     It doesn't feel possible, but I'm not 100% sure, nor what a good alternative might be. I'm not even sure I'm explaining this correctly.

     Why? Well, I'm trying to build a functioning, dynamic 'Fog of War', as seen in RTS games. Ideally I want something like in 'Starcraft', where areas on the map can be:

     - Hidden / unexplored
     - Already explored, but outside any 'unit' radius of visibility
     - Currently within the visibility of a player sprite

     Yes, in principle this is fairly easily done with a grid of squared tiles, but I want to do something a little... rounder and more dynamic, like proper 'blobs' of fog rolling back and forth rather than arbitrary square tiles flicking ON/OFF. It seems like something you could do with WebGL stencil buffers, but ATM that feels beyond my capabilities.

     So my initial idea was to have a Phaser.Image() with blendMode set to MULTIPLY and, within it, a Phaser.BitmapData() grid of circular 'tiles', each fill() a different shade of white-to-black depending on hidden/explored/visible. It's a bit crude and performance isn't great once a lot of player sprites start moving on the map; I'm experimenting with ways to improve speed, and one idea was that if it were possible to chain blendModes, the bigger fog grid wouldn't have to make so many re-renders. Hopefully that makes sense?
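As a follow-up to the world-bounds question in post 3: one way to constrain the world would be to compute the projected bounding box of the group's tiles manually, then feed that to game.world.setBounds(). Below is a minimal, Phaser-free sketch; it assumes the classic 2:1 isometric projection (screenX = isoX - isoY, screenY = (isoX + isoY) / 2), which may not match the plugin's actual projection constants, and the tile sizes are purely illustrative.

```typescript
// Sketch: compute the screen-space bounding box of a set of isometric
// tile positions. Assumes the classic 2:1 projection; swap project()
// for whatever projection the plugin actually uses.

interface IsoPoint { x: number; y: number; }
interface Bounds { x: number; y: number; width: number; height: number; }

// Project an isometric coordinate into screen space (assumed 2:1 formula).
function project(p: IsoPoint): { x: number; y: number } {
  return { x: p.x - p.y, y: (p.x + p.y) / 2 };
}

// Walk every tile, project it, and track the min/max extents,
// padding each projected point by the rendered tile size.
function isoBounds(tiles: IsoPoint[], tileWidth: number, tileHeight: number): Bounds {
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  for (const t of tiles) {
    const s = project(t);
    minX = Math.min(minX, s.x);
    minY = Math.min(minY, s.y);
    maxX = Math.max(maxX, s.x + tileWidth);
    maxY = Math.max(maxY, s.y + tileHeight);
  }
  return { x: minX, y: minY, width: maxX - minX, height: maxY - minY };
}
```

The resulting box could then be recomputed whenever the group changes and passed to game.world.setBounds(b.x, b.y, b.width, b.height).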
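The three fog-of-war states described in posts 5 and 6 can be modelled separately from whatever blend-mode trick ends up drawing them. Here is a minimal sketch (not from any Phaser API; the grid shape, state names, and alpha values are my own assumptions): each tick, tiles inside a unit's radius become 'visible' and are remembered as explored, while everything else is 'explored' if seen before, otherwise 'hidden'.

```typescript
// Sketch: three-state fog-of-war bookkeeping on a tile grid.
// 'hidden'   = never seen, 'explored' = seen before but outside every
// unit's radius this tick, 'visible' = inside some unit's radius now.

type FogState = 'hidden' | 'explored' | 'visible';

interface Unit { x: number; y: number; radius: number; } // in tile units

class FogGrid {
  private explored: boolean[][];

  constructor(private cols: number, private rows: number) {
    this.explored = Array.from({ length: rows }, () =>
      new Array<boolean>(cols).fill(false));
  }

  // Recompute every tile's state for the current unit positions.
  update(units: Unit[]): FogState[][] {
    const states: FogState[][] = [];
    for (let r = 0; r < this.rows; r++) {
      const row: FogState[] = [];
      for (let c = 0; c < this.cols; c++) {
        const visible = units.some(u => {
          const dx = u.x - c, dy = u.y - r;
          return dx * dx + dy * dy <= u.radius * u.radius;
        });
        if (visible) this.explored[r][c] = true;
        row.push(visible ? 'visible' : this.explored[r][c] ? 'explored' : 'hidden');
      }
      states.push(row);
    }
    return states;
  }
}

// Brightness each state would contribute under a MULTIPLY blend:
// 0 -> fully black, 0.5 -> half-dimmed, 1 -> fully visible.
const FOG_ALPHA: Record<FogState, number> = { hidden: 0, explored: 0.5, visible: 1 };
```

A MULTIPLY-blended overlay could then fill each tile with a grey level proportional to FOG_ALPHA[state], so only tiles whose state changed since the last tick need redrawing.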