pongstylin

  1. Actually, you did have the time to help me with my issue. I was already trying to position the top-level container using bounds, with poor results. But placing the top-level filtered and positioned container inside another vanilla container worked like a charm. (I wish I'd thought of that before posting.) And the filters are indeed applying the correct coloring, so I must've missed something before. I didn't end up using your advice on generateTexture, though; I'm just using extract directly. Perhaps your advice is the equivalent of positioning my filtered container using bounds.
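
     In case it helps anyone else, the wrapper trick boils down to something like this. A minimal sketch, assuming the full pixi.js v5 bundle (so the extract plugin is registered on the renderer); the variable names are my own:

       // The filtered/positioned container goes inside a vanilla container,
       // and the wrapper is what gets extracted.
       const wrapper = new PIXI.Container();
       const filtered = new PIXI.Container();
       filtered.filters = [new PIXI.filters.ColorMatrixFilter()];
       // ...add the piece sprites to `filtered` here...
       wrapper.addChild(filtered);

       // Extract directly; no generateTexture step needed.
       const canvas = renderer.plugins.extract.canvas(wrapper);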
  2. PIXI Extract

     I'm having a surprising amount of difficulty using the extract plugin to render a container object. Does it support filters? Any time I apply a filter to the top-level container or to any nested container, the contents shift down and to the right (truncating the resulting image) and the desired color effect appears to be missing. I'm using PIXI v5.2.1.
  3. Yes, earlier you said to cache it by piece type. But caching it by piece is more practical, since two pieces of the same type can't share the same cached display object. I had considered it, but didn't want to use the additional memory unless it is proven to be helpful. For proof, I just need to understand why it would be helpful. After all, when building these display objects, I'm using cached textures.
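
     To be clear about what I mean by caching per piece rather than per type, a sketch (pieceCache and buildPieceContainer are hypothetical names of mine, not real API):

       // Each piece id maps to its own pre-built display object; two pieces
       // of the same type can't share one, so the type isn't a usable key.
       const pieceCache = new Map();

       function getPieceDisplay(piece) {
         let display = pieceCache.get(piece.id);
         if (!display) {
           display = buildPieceContainer(piece); // builds the container/sprite tree
           pieceCache.set(piece.id, display);
         }
         return display;
       }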
  4. Remember that they are pre-compiled, so building them has no impact on executing an animation. Are you suggesting that display objects previously seen in stage rendering, but since removed from the stage, would render faster than fresh display objects when added back to the stage? If so, how does that work?
  5. First, a bit of context. Most of the animations in my app involve moving a piece on a game board from point A to point B, or showing a piece attacking another piece. Each frame for a piece is composed of multiple sprites. With a few exceptions, the sprites for a given piece type are all contained within a single spritesheet - one spritesheet per piece type. I made sure the spritesheet itself uses POT (power-of-two) dimensions, although each frame has arbitrary dimensions.

     Before running a move or attack sequence that involves one or more pieces on the board, I pre-compile the frames for the entire animation. Conceptually, this involves building an array of PIXI container objects for each piece. Each container in the array is the root of a tree of containers and sprites that provides a complete picture of the piece in a single frame. All tints, color-matrix filters, positions, and scaling are pre-applied. At that point, drawing a frame involves swapping out the root containers for each piece involved in the animation and then rendering the stage. The rest of the stage remains unchanged. Once the animation ends, all of the pre-compiled containers and sprites are destroyed and not reused, in whole or in part, in subsequent animations. If you have more questions regarding the context, let me know.

     So while I ensured the stage manipulation during animation execution is very efficient, I want to make sure stage rendering is efficient as well. As things stand right now, rendering the stage can take 10 milliseconds on my laptop even if nothing about the stage has changed. Even though the game runs at 12fps, allowing for 83ms of rendering time, I've suffered from intermittent skipped frames when playing the game on my phone. I haven't proven yet that the skipped frames on my phone are caused in part by long stage rendering times, but I figured I would see if there is an opportunity for improvement there. Any best practices you can think of, or shortcomings with the approach I'm taking?
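
     To make the pre-compiling concrete, here's a rough sketch of the swap step. Names like buildFrameContainer, boardLayer, and piece.currentFrame are placeholders of my own, and it assumes piece.currentFrame starts out as the piece's resting display object:

       // One array of ready-to-render containers per piece; all tints,
       // filters, positions, and scaling are baked in ahead of time.
       const frames = frameData.map(data => buildFrameContainer(data));

       let index = 0;
       const step = () => {
         // Drawing a frame is just a container swap; the rest of the
         // stage is left untouched.
         boardLayer.removeChild(piece.currentFrame);
         piece.currentFrame = frames[index];
         boardLayer.addChild(piece.currentFrame);
         renderer.render(stage);

         if (++index === frames.length) {
           ticker.remove(step);
           // Pre-compiled containers are destroyed, not reused.
           frames.forEach(f => f.destroy({ children: true }));
         }
       };
       ticker.add(step);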
  6. And since my canvas renderer was silently failing to render anything, I intuitively figured out three more modules I needed. I looked at "canvas-display" and was like: hmmmm, maybe I can swap out "display" with "sprite", "graphics", and "text" to make those things show up. Bingo - yay for pattern recognition. But running on intuition isn't very satisfying. There's got to be a guide on this, right?
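
     For anyone following along, the modules I guessed my way to look like this. Whether a bare side-effect import is enough for each one is my assumption; canvas-display at least works that way, per my next post:

       // Canvas counterparts to the core display modules. canvas-display
       // patches Container with renderCanvas; I'm assuming the others
       // patch/register their respective types similarly.
       import '@pixi/canvas-display';
       import '@pixi/canvas-sprite';
       import '@pixi/canvas-graphics';
       import '@pixi/canvas-text';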
  7. Yes indeed. I've never seen a web framework that I've liked. But this library and web component libraries have pleased me very much. Moving to the consumable model is just the next level of awesome. I see the canvas-display module doesn't have exports and decorates the existing Container class, so I added "import '@pixi/canvas-display'" to my code and it works just fine. But my documentation question went unanswered. Is there a list of such components someplace? Just seeing a list would allow me to intuitively find what I need, such as in this case. Tired or not, I'm not sure how I could've figured this out.

     I'm not actually trying to run PIXI in a server environment. It's just the fact that my modules can't statically import PIXI modules without crashing. The imported modules would go unused in a server context, but used in a client context. It is an unusual case where my modules are used in both a server (NodeJS) and client context. On the server, the modules serve as a state engine (e.g. move the piece on this square to another square, if allowed). On the client, they also serve as a "local" state engine, but call rendering methods to make the domain concepts visible (e.g. display the piece on this square). The game board and game pieces are two kinds of classes that are shared between the client and server and make use of PIXI classes only when their rendering methods are called. So, while I understand that I can't use PIXI in a server context, it is just unusual that simply importing it causes the server to blow up. If moving the logic to constructors, or at least skipping logic based on "document" existence checking, is not a priority, then that's fine. It's just a nice-to-have. Perhaps something to consider as architecture changes are made for other reasons.

     A new question. So, this "BatchRenderer" plugin, if omitted, does crash when using the WebGL renderer. It seems to crash for expected reasons - it assumes that a batch plugin is defined. But when I add the plugin to the CanvasRenderer, it crashes at the point I instantiate the renderer. Perhaps there is no point in using the BatchRenderer in a canvas context since there is no GPU to which textures need to be uploaded. Is there documentation anyplace that lists the various renderer plugins developed by the PIXI team and when I would or wouldn't use a particular plugin?
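
     In the meantime, the workaround I'm describing amounts to gating the load on the presence of a document. A sketch, assuming an environment that supports dynamic import() (Container here stands in for whichever PIXI classes the rendering methods need):

       // Only load PIXI when a DOM exists (client); a static import of
       // these modules crashes under NodeJS, where 'document' is undefined.
       if (typeof document !== 'undefined') {
         import('@pixi/display').then(({ Container }) => {
           window.PIXI = window.PIXI || {};
           window.PIXI.Container = Container;
         });
       }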
  8. So I'm refactoring my game to use PIXI v5 consumable components so that, with webpack tree shaking, the JS bundle never sees modules that I don't use and omits unused exports from modules that I do. Unfortunately, despite being comfortable with the PIXI documentation and looking through PIXI source code, it is proving difficult to navigate the plugins that I need to make basic things work. I've included a work-in-progress code snippet below where I pull in the various PIXI bits that I need. My immediate problem is that the Canvas Renderer fails to render the stage. The stage is a "Container" object. It crashes when it attempts to call the "renderCanvas" method on the Container object here. The method doesn't exist. So, I'm guessing I need to import something and/or register another or different plugin. So here's a summary with my feedback/questions:

     1) Is there a list of all of the @pixi modules somewhere, with their exports? I've tried to look at the normal PIXI documentation, but each class doesn't seem to call out the @pixi module that contains it. So, a list would help me figure out the right module to import.
     2) How do I get past the canvas rendering issue I'm experiencing? Notice my comment in the code snippet below.
     3) Is it really necessary to create canvas objects at the point when a module is imported? Perhaps move some of that logic to a constructor?

       import { Renderer, BatchRenderer, BaseTexture, Texture } from '@pixi/core';
       import { CanvasRenderer } from '@pixi/canvas-renderer';
       import { InteractionManager } from '@pixi/interaction';
       import { Container } from '@pixi/display';
       import { Sprite } from '@pixi/sprite';
       import { Graphics } from '@pixi/graphics';
       import { Text } from '@pixi/text';
       import { Rectangle, Polygon, Point } from '@pixi/math';

       Renderer.registerPlugin('batch', BatchRenderer);
       Renderer.registerPlugin('interaction', InteractionManager);
       CanvasRenderer.registerPlugin('batch', BatchRenderer);
       CanvasRenderer.registerPlugin('interaction', InteractionManager);

       /*
        * Some game classes are shared between the client and server. Some of them have
        * rendering methods that are only called on the client side, which use PIXI classes.
        * But the PIXI classes can not be statically imported since PIXI requires the 'document'
        * object to load modules successfully. So, provide a global access point which would
        * exist in the client context, but not the server context.
        */
       window.PIXI = {
         CanvasRenderer,
         BaseTexture, Texture,
         Container, Sprite, Graphics, Text,
         Rectangle, Polygon, Point,
       };
  9. Not complaining or pushing! If it gets done at all, it would make my day. I continue to be impressed by the evolution of the PIXI project. In the course of my own project, I have had to deal with a lot of maths of my own. The worst was implementing BOUNDED 2-finger panzoom up to my high standards of precision/quality, and smoothly animating panning and zooming into another point in 2d space and back again. (Those small phone screens really stretch the imagination on how to make a largish game board usable.) Of course, that is all 2d transformation stuff at which you guys are probably pros. I mention this to show my empathy for all the additional maths you guys have to deal with. I'll check out the resources to which you directed me. Thanks!
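
     For what it's worth, the heart of the zoom maths is keeping one focal point fixed while the scale changes. Roughly (a sketch; zoomAt, board, and focal are my own names, with focal given in screen space):

       // Zoom the board container about a focal point, so the point under
       // the fingers/cursor stays put while the scale changes.
       function zoomAt(board, focal, newScale) {
         // Where the focal point lands in the board's local space now...
         const localX = (focal.x - board.position.x) / board.scale.x;
         const localY = (focal.y - board.position.y) / board.scale.y;
         // ...then reposition so that same local point maps back to focal.
         board.scale.set(newScale);
         board.position.set(focal.x - localX * newScale,
                            focal.y - localY * newScale);
       }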
  10. So, I've been using PIXI.js since v3 and am fairly familiar with drawing gradients using 2d canvas and turning them into an image so that I can use them in a WebGL renderer. But now it's 2020 and we're on PIXI.js v5 with a mid-level API and consumable components. I'm just now making the migration to v5 and am hoping that there is a plugin or feature that I can use to do radial and linear gradients directly in WebGL.

     You see, I'm porting an old AS1 game over from Flash to HTML5, and the game makes extensive use of radial gradients and morph shapes to create interesting effects like semi-transparent bubble-shaped shields with a moving/rotating shimmer on them, as well as smoke and fire/explosions. To date, I've been converting all of this to spritesheets. But it kinda sucks to use all that extra space. It would also kinda suck to compile all of it using 2d canvas as a loading step in the game - especially as I consider increasing the framerate for these effects (the game typically runs at 12fps).

     Theoretically, it would take up a lot less space and time if I did it like it was done in Flash. That is, I define one or more morph shapes and can animate them using the fastest framerate I can, up to the screen refresh rate. It's a mobile game (PWA), so framerate can be inconsistent. Rendering the morph shape at the ratio derived from the ticker delta time could provide a smoother animation. That's my goal. So even if I handle the morph shape side of things myself, I'm hoping there's a fast/efficient solution for gradients outside of 2d canvas by now. Or, is there any plan to add such a feature, with an estimated timeframe for completion?
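
     For reference, the 2d-canvas fallback I'm describing looks like this (a sketch; the size and color stops are arbitrary):

       // Draw a radial gradient on an offscreen 2d canvas, then wrap it as
       // a texture the WebGL renderer can upload like any other image.
       const canvas = document.createElement('canvas');
       canvas.width = canvas.height = 128;
       const ctx = canvas.getContext('2d');

       const gradient = ctx.createRadialGradient(64, 64, 0, 64, 64, 64);
       gradient.addColorStop(0, 'rgba(255,255,255,0.8)');
       gradient.addColorStop(1, 'rgba(255,255,255,0)');
       ctx.fillStyle = gradient;
       ctx.fillRect(0, 0, 128, 128);

       const shimmer = new PIXI.Sprite(PIXI.Texture.from(canvas));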
  11. Seems off topic for this thread. I'm not sure how tolerant this forum is of such things, so I'm not sure whether I should respond here and permit the practice, or make you create a new thread.
  12. Hah! Sure enough. I tested my app to see if my cursor would change to a pointer after an animation completed. It did when I had the InteractionManager plugged into the ticker. It didn't when it wasn't plugged in - at least not until I moved the mouse one pixel. So let's say I'm fine with the pointer not showing up until I move the mouse. Is that the only reason to run the manager through the ticker?
  13. As far as I can tell from the code, the ticker triggers callbacks every time requestAnimationFrame triggers its callback. So if you're hoping that the ticker can be configured to trigger callbacks at 40FPS instead of 60FPS, you can't. But just because the callback is triggered doesn't mean you have to render. For example, the Interaction Manager adds itself to the shared ticker so that it can figure out if your mouse is hovering over something that can be interacted with and change your cursor to a pointer. But the Interaction Manager only performs this check at 6FPS. See how it does it below:

       // ticker callback
       InteractionManager.prototype.update = function (deltaTime) {
         this._deltaTime += deltaTime;

         // this.interactionFrequency is 10, by default
         if (this._deltaTime < this.interactionFrequency) {
           return;
         }

         this._deltaTime = 0;

         // manage some interaction stuff
       };

     So it sums up the deltaTime (number of frames to render since the last call) until it equals or exceeds 10 (6FPS when PIXI.CONST.TARGET_FPMS is set to 0.06). So even though the update function gets called at 60FPS, it only does its manager stuff at 6FPS. You could say that it is more flexible this way. You can run some animations at 30FPS and others at 60FPS, using the same ticker object. The only downside is if your entire app runs at 12FPS (this is what I use), then you've got a loop running faster than you will ever need. But this is the case if you use requestAnimationFrame manually as well.

     Yes, since the deltaTime is the number of frames to render since the last call, it should be easy to use in interpolation/extrapolation. I kind of already described the example, but here's some code to drive it home:

       var ticker, sprite, mover;

       ticker = new PIXI.ticker.Ticker();
       sprite = PIXI.Sprite.fromImage(imageUrl);

       mover = function (numFrames) {
         // Move the sprite to the right by 2 points per frame at 60FPS.
         // If this callback is called precisely on time, then numFrames will be 1 and we'll move the prescribed 2 points.
         // If this callback is called in double the time (30FPS), then numFrames will be 2 and we'll move 4 points.
         // If this callback is called in the middle (40FPS), then numFrames will be 1.5 and we'll move 3 points.
         sprite.position.x += numFrames * 2;

         // Stop the animation once we reach our destination.
         if (sprite.position.x >= 100) {
           sprite.position.x = 100;
           ticker.remove(mover);
         }
       };

       ticker.add(mover);

     I'm not sure if this is interpolation or extrapolation (I'm really not familiar with the theory, I just logically conclude that this might be a normal use-case), but I think it's in the ballpark of what you're talking about.
  14. The Ticker documentation is here: http://pixijs.github.io/docs/PIXI.ticker.Ticker.html And here (it makes reference to using PIXI.ticker.shared): http://pixijs.github.io/docs/PIXI.ticker.html Although, I tend to prefer to use the code as my documentation: http://pixijs.github.io/docs/core_ticker_Ticker.js.html

     Essentially, PIXI.ticker.shared is an instance of the PIXI.ticker.Ticker class. This instance is used by the Interaction Manager, although I'm trying to figure out why, since it seems unnecessary (nobody has responded to my post yet). Of course, you are free to use the same instance, but keep in mind that if you modify how it behaves, anything internal to PIXI that uses it may be affected.

     The PIXI Ticker is based on requestAnimationFrame, so in function, it is little different from running your render loop manually using requestAnimationFrame. But the object's event interface may be more attractive to use and look at since it involves less code. Beyond this basic function, it offers these features:

     1) Usability: You can start and stop it easily.
     2) Efficiency: It auto-starts (you can disable this) when there are listeners/callbacks, and auto-stops when there aren't any.
     3) Throttling: The requestAnimationFrame callback is passed the current time as an argument. But the ticker callback is passed the number of frames that should have been rendered since the last call, based on your target frames per second. By default, PIXI.CONST.TARGET_FPMS (target frames per millisecond) is set to 0.06, which means you are targeting 60 frames per second. In a perfect world, it would always say 1. But if the browser lagged really badly, it might say 60 if it has been a whole second since the last call. One possible use of this argument is to skip frames if it has a value >= 2. Or, let's say you have a sprite moving from A to B. It is supposed to move 2 points per frame, but the callback handed you 1.5 (which, if consistent, equals 40FPS). That means you can move the sprite 3 points this frame instead of 2 to maintain the desired speed.
     4) Throttling: The number of frames passed to the callback can be capped to a maximum value. So, to use the sprite movement example from #3, you can make sure the sprite doesn't jump too large a distance in a single frame - even if the browser is lagging badly.
     5) Throttling: The number of frames passed to the callback can be multiplied. So, to use the sprite movement example from #3, you can increase (i.e. multiply by 2) or decrease (i.e. multiply by 0.5) animation speed.
     6) Usability: You can add callbacks that should be called only once by the ticker, as opposed to a constant loop.
     7) Usability: You can add or remove multiple callbacks at will. All callbacks will be called in the same frame, so that multiple animations may be managed separately, but rendered in sync. requestAnimationFrame supports this also, but keeping track of those request IDs can be more painful.
     8) Usability: You can just stop the ticker and trigger callbacks manually. For example, you could allow a user to stop the animations and present a button that allows them to step through frame-by-frame. See the sketch after this list.

     In essence, you've got a bunch of little features here that cater to creative situations, but aren't useful in many scenarios. But the idea of wrapping requestAnimationFrame into an object that may be manipulated carries with it a lot of potential. Feel free to ignore it or use it, depending on your situation.
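
     To make a few of those features concrete, a quick sketch using the same v4-era API as the links above (the numbers refer to the feature list):

       var ticker = new PIXI.ticker.Ticker();

       // 6) a callback that runs once, then removes itself
       ticker.addOnce(function () { console.log('first frame only'); });

       // 7) multiple callbacks, all fired within the same frame
       ticker.add(function (numFrames) { /* animation A */ });
       ticker.add(function (numFrames) { /* animation B */ });

       // 5) multiply the frame count handed to callbacks (double speed)
       ticker.speed = 2;

       // 1)/8) start and stop at will; update() fires the callbacks
       // manually, e.g. for frame-by-frame stepping while stopped
       ticker.start();
       ticker.stop();
       ticker.update();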
  15. Flake, I'm pretty sure you can still get 'er done the way you like in WebGL. Reminder: I said that parent/child relationships are OPTIONAL. So if they make things awkward, you don't have to use them. For example, using your latest example:

       function render () {
         var graphics, sprite;

         // Remove all objects we rendered last time so that we can add a new set.
         stage.removeChildren();

         graphics = new PIXI.Graphics();
         // draw poly
         stage.addChild(graphics);

         graphics = new PIXI.Graphics();
         // draw rect
         stage.addChild(graphics);

         sprite = PIXI.Sprite.fromImage(source);
         stage.addChild(sprite);

         graphics = new PIXI.Graphics();
         // draw line
         stage.addChild(graphics);

         graphics = new PIXI.Graphics();
         // draw rect
         stage.addChild(graphics);

         sprite = PIXI.Sprite.fromImage(source2);
         stage.addChild(sprite);

         // about 300 more
       }

     Notice that I'm creating a new graphics object for every shape. I'm creating a new sprite object for every image. I add them to the stage in order so that later objects layer on top of earlier objects. Every object is positioned using absolute coordinates since we're not using parent/child relationships. This should more closely match what you're doing already for the canvas. Regardless of whether you are working with 2d canvas manually or using PIXI, you are drawing the final product layer-by-layer. So what you can do manually, PIXI can do for you on a 2d or webgl context.

     You see, you can't draw images on a graphics object. But you don't NEED to. You just gotta stop thinking of the graphics object as if it were your entire canvas. It's just a layer. You may draw more than one shape on a graphics object to be in that layer, but I fragmented the layers above just in case your code is similarly fragmented.