pongstylin

  1. And since my canvas renderer was silently failing to render anything, I intuitively figured out the 3 more modules I needed. I looked at "canvas-display" and thought: hmmm, maybe I can swap out "display" with "sprite", "graphics", and "text" to make those things show up. Bingo - yay for pattern recognition. But running on intuition isn't very satisfying. There's got to be a guide on this, right?
  2. Yes indeed. I've never seen a web framework that I've liked, but this library and web component libraries have pleased me very much. Moving to the consumable model is just the next level of awesome. I see the canvas-display module doesn't have exports and instead decorates the existing Container class, so I added "import '@pixi/canvas-display'" to my code and it works just fine.

But my documentation question went unanswered. Is there a list of such components someplace? Just seeing a list would let me intuitively find what I need, as in this case. Tired or not, I'm not sure how I could've figured this out.

I'm not actually trying to run PIXI in a server environment. It's just that my modules can't statically import PIXI modules without crashing. The imported modules would go unused in a server context, but used in a client context. Mine is an unusual case where the same modules are used in both a server (Node.js) and a client context. On the server, the modules serve as a state engine (e.g. move the piece on this square to another square, if allowed). On the client, they also serve as a "local" state engine, but call rendering methods to make the domain concepts visible (e.g. display the piece on this square). The game board and game pieces are two kinds of classes shared between the client and server, and they use PIXI classes only when their rendering methods are called. So, while I understand that I can't use PIXI in a server context, it is just surprising that simply importing it causes the server to blow up. If moving that logic into constructors, or at least skipping it based on a "document" existence check, is not a priority, then that's fine. It's just a nice-to-have, and perhaps something to consider as architecture changes are made for other reasons.

A new question: this "BatchRenderer" plugin, if omitted, does crash the WebGL renderer. It seems to crash for expected reasons - it assumes that a batch plugin is defined. But when I add the plugin to the CanvasRenderer, it crashes at the point I instantiate the renderer. Perhaps there is no point in using the BatchRenderer in a canvas context, since there is no GPU to which textures need to be uploaded. Is there documentation anyplace that lists the various renderer plugins developed by the PIXI team, and when I would or wouldn't use a particular plugin?
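The shared client/server setup described above can be sketched with an environment guard. This is a minimal, hypothetical illustration of the pattern (the `canRender` helper and `GamePiece` class are invented names, not PIXI or project APIs); PIXI is passed in at render time rather than statically imported, so the server never loads it:

```javascript
// A sketch of the workaround described above: gate rendering-only
// dependencies behind an environment check so shared modules can load
// in Node.js. canRender() and GamePiece are illustrative names.
function canRender() {
  // In a browser, `document` exists; in Node.js it does not.
  return typeof document !== 'undefined';
}

class GamePiece {
  constructor(square) {
    this.square = square; // pure state, safe on client and server
  }

  moveTo(square) {
    this.square = square; // state-engine logic, no PIXI involved
    return this;
  }

  draw(PIXI) {
    // Only called on the client; PIXI is passed in (e.g. from window.PIXI)
    // rather than statically imported, so the server never loads it.
    if (!canRender()) throw new Error('draw() is client-only');
    return new PIXI.Sprite();
  }
}
```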
  3. So I'm refactoring my game to use PIXI v5 consumable components so that, with webpack tree shaking, the JS bundle never sees modules that I don't use and omits unused exports from modules that I do use. Unfortunately, despite being comfortable with the PIXI documentation and having looked through PIXI source code, it is proving difficult to navigate the plugins that I need to make basic things work. I've included a work-in-progress code snippet below where I pull in the various PIXI bits that I need. My immediate problem is that the Canvas Renderer fails to render the stage. The stage is a "Container" object. It crashes when it attempts to call the "renderCanvas" method on the Container object - the method doesn't exist. So, I'm guessing I need to import something and/or register another or different plugin. So here's a summary of my feedback/questions:

Is there a list of all of the @pixi modules somewhere, with their exports? I've tried to look at the normal PIXI documentation, but each class doesn't seem to call out the @pixi module that contains it, so a list would help me figure out the right module to import.

How do I get past the canvas rendering issue I'm experiencing?

Notice my comment in the code snippet below. Is it really necessary to create canvas objects at the point when a module is imported? Perhaps move some of that logic to a constructor?
import { Renderer, BatchRenderer, BaseTexture, Texture } from '@pixi/core';
import { CanvasRenderer } from '@pixi/canvas-renderer';
import { InteractionManager } from '@pixi/interaction';
import { Container } from '@pixi/display';
import { Sprite } from '@pixi/sprite';
import { Graphics } from '@pixi/graphics';
import { Text } from '@pixi/text';
import { Rectangle, Polygon, Point } from '@pixi/math';

Renderer.registerPlugin('batch', BatchRenderer);
Renderer.registerPlugin('interaction', InteractionManager);
CanvasRenderer.registerPlugin('batch', BatchRenderer);
CanvasRenderer.registerPlugin('interaction', InteractionManager);

/*
 * Some game classes are shared between the client and server. Some of them have
 * rendering methods that are only called on the client side, which use PIXI classes.
 * But the PIXI classes can not be statically imported, since PIXI requires the 'document'
 * object to load modules successfully. So, provide a global access point which would
 * exist in the client context, but not the server context.
 */
window.PIXI = {
  CanvasRenderer,
  BaseTexture,
  Texture,
  Container,
  Sprite,
  Graphics,
  Text,
  Rectangle,
  Polygon,
  Point,
};
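Judging from items 1 and 2 above, the missing pieces appear to be the @pixi/canvas-* plugin modules, which add canvas rendering support for each display class. A hedged sketch only - the module names below follow the naming pattern discussed in those items, and the exact registration mechanism (side-effect import vs. exported plugin to register) should be verified against the PIXI packages:

```javascript
// Candidate imports inferred from the @pixi/canvas-* pattern described in
// items 1 and 2 above. Each adds canvas rendering support for the
// corresponding display class; verify names and mechanism against PIXI.
import '@pixi/canvas-display';   // Container rendering in a canvas context
import '@pixi/canvas-sprite';    // Sprite rendering in a canvas context
import '@pixi/canvas-graphics';  // Graphics rendering in a canvas context
import '@pixi/canvas-text';      // Text rendering in a canvas context
```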
  4. Not complaining or pushing! If it gets done at all, it will make my day. I continue to be impressed by the evolution of the PIXI project. In the course of my own project, I have had to deal with a lot of maths of my own. The worst was implementing BOUNDED 2-finger panzoom to my high standards of precision/quality, and smoothly animating panning and zooming into another point in 2d space and back again. (Those small phone screens really stretch the imagination on how to make a largish game board usable.) Of course, that is all 2d transformation stuff at which you guys are probably pros. I mention this to show my empathy for all the additional maths you guys have to deal with. I'll check out the resources to which you directed me. Thanks!
  5. So, I've been using PIXI.js since v3 and am fairly familiar with drawing gradients using a 2d canvas and turning them into an image that I can use in a WebGL renderer. But now it's 2020 and we're on PIXI.js v5, with a mid-level API and consumable components. I'm just now making the migration to v5 and am hoping that there is a plugin or feature that I can use to do radial and linear gradients directly in WebGL.

You see, I'm porting an old AS1 game from Flash to HTML5, and the game makes extensive use of radial gradients and morph shapes to create interesting effects, like semi-transparent, bubble-shaped shields with a moving/rotating shimmer on them, as well as smoke and fire/explosions. To date, I've been converting all of this to spritesheets. But it kinda sucks to use all that extra space. It would also kinda suck to compile all of it using a 2d canvas as a loading step in the game - especially as I consider increasing the framerate for these effects (the game typically runs at 12fps).

Theoretically, it would take a lot less space and time if I did it like it was done in Flash. That is, I define one or more morph shapes and animate them at the fastest framerate I can, up to the screen refresh rate. It's a mobile game (PWA), so the framerate can be inconsistent. Rendering the morph shape at the ratio derived from the ticker delta time could provide a smoother animation. That's my goal. So, even if I handle the morph-shape side of things myself, I'm hoping there's a fast/efficient solution for gradients outside of 2d canvas. Or is there any plan to add such a feature, with an estimated timeframe for completion?
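For reference, the 2d-canvas fallback being discussed boils down to sampling color stops - the same per-pixel math a fragment shader would do. A minimal sketch, where `sampleGradient` is a hypothetical helper and not a PIXI API:

```javascript
// Linearly interpolate between gradient color stops, the way a shader
// would per fragment. Stops are { t, color: [r, g, b, a] }, t in [0, 1].
// For a radial gradient, t is the normalized distance from the center.
// Illustrative math only - not part of PIXI's API.
function sampleGradient(stops, t) {
  if (t <= stops[0].t) return stops[0].color.slice();
  const last = stops[stops.length - 1];
  if (t >= last.t) return last.color.slice();

  // Find the pair of stops surrounding t.
  let i = 1;
  while (stops[i].t < t) i++;
  const a = stops[i - 1];
  const b = stops[i];
  const k = (t - a.t) / (b.t - a.t);

  // Component-wise linear interpolation between the two stops.
  return a.color.map((c, ch) => c + (b.color[ch] - c) * k);
}

// A shield-like gradient: opaque white center fading to transparent edge.
const stops = [
  { t: 0, color: [255, 255, 255, 1] },
  { t: 1, color: [255, 255, 255, 0] },
];
```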
  6. This seems off topic for this thread. I'm not sure how tolerant this forum is of such things, so I'm not sure whether I should respond here and permit the practice, or make you create a new thread.
  7. Hah! Sure enough. I tested my app to see if my cursor would change to a pointer after an animation completed. It did when I had the InteractionManager plugged into the ticker. When it wasn't, the cursor didn't change until I moved the mouse one pixel. So, let's say I'm fine with the pointer not showing up until I move the mouse. Is that the only reason to run the manager through the ticker?
  8. As far as I can tell from the code, the ticker triggers callbacks every time requestAnimationFrame triggers its callback. So if you're hoping that the ticker can be configured to trigger callbacks at 40FPS instead of 60FPS, you can't. But just because the callback is triggered doesn't mean you have to render. For example, the Interaction Manager adds itself to the shared ticker so that it can figure out if your mouse is hovering over something that can be interacted with and change your cursor to a pointer. But the Interaction Manager only performs this check at 6FPS. See how it does it below:

// ticker callback
InteractionManager.prototype.update = function (deltaTime) {
  this._deltaTime += deltaTime;

  // this.interactionFrequency is 10, by default
  if (this._deltaTime < this.interactionFrequency) {
    return;
  }

  this._deltaTime = 0;

  // manage some interaction stuff
};

So it sums up the deltaTime (the number of frames to render since the last call) until it equals or exceeds 10 (6FPS when PIXI.CONST.TARGET_FPMS is set to 0.06). So even though the update function gets called at 60FPS, it only does its manager stuff at 6FPS. You could say that it is more flexible this way: you can run some animations at 30FPS and others at 60FPS using the same ticker object. The only downside is that if your entire app runs at 12FPS (this is what I use), then you've got a loop running faster than you will ever need. But that is the case if you use requestAnimationFrame manually as well.

Yes, since the deltaTime is the number of frames to render since the last call, it should be easy to use in interpolation/extrapolation. I kind of already described the example, but here's some code to drive it home:

var ticker, sprite, mover;

ticker = new PIXI.ticker.Ticker();
sprite = PIXI.Sprite.fromImage(imageUrl);

mover = function (numFrames) {
  // Move the sprite to the right by 2 points per frame at 60FPS.
  // If this callback is called precisely on-time, then numFrames will be 1 and we'll move the prescribed 2 points.
  // If this callback is called in double the time (30FPS), then numFrames will be 2 and we'll move 4 points.
  // If this callback is called in the middle (40FPS), then numFrames will be 1.5 and we'll move 3 points.
  sprite.position.x += numFrames * 2;

  // Stop the animation once we reach our destination.
  if (sprite.position.x >= 100) {
    sprite.position.x = 100;
    ticker.remove(mover);
  }
};

ticker.add(mover);

I'm not sure if this is interpolation or extrapolation (I'm really not familiar with the theory; I just logically conclude that this might be a normal use-case), but I think it's in the ballpark of what you're talking about.
  9. The Ticker documentation is here: http://pixijs.github.io/docs/PIXI.ticker.Ticker.html
And here (it makes reference to using PIXI.ticker.shared): http://pixijs.github.io/docs/PIXI.ticker.html
Although, I tend to prefer to use the code as my documentation: http://pixijs.github.io/docs/core_ticker_Ticker.js.html

Essentially, PIXI.ticker.shared is an instance of the PIXI.ticker.Ticker class. This instance is used by the Interaction Manager, although I'm trying to figure out why, since it seems unnecessary (nobody has responded to my post yet). Of course, you are free to use the same instance, but keep in mind that if you modify how it behaves, anything internal to PIXI that uses it may be affected.

The PIXI Ticker is based on requestAnimationFrame, so in function it is little different from running your render loop manually using requestAnimationFrame. But the object's event interface may be more attractive to use and look at, since it involves less code. Beyond this basic function, it offers these features:

1) Usability: You can start and stop it easily.

2) Efficiency: It auto-starts (you can disable this) when there are listeners/callbacks, and auto-stops when there aren't any.

3) Throttling: The requestAnimationFrame callback is passed the current time as an argument, but the ticker callback is passed the number of frames that should have been rendered since the last call, depending on your target frames per second. By default, PIXI.CONST.TARGET_FPMS (target frames per millisecond) is set to 0.06, which means you are targeting 60 frames per second. In a perfect world, it would always say 1. But if the browser lagged really badly, it might say 60 if it has been a whole second since the last call. One possible use of this argument is to skip frames if it has a value >= 2. Or, let's say you have a sprite moving from A to B. It is supposed to move 2 points per frame, but the callback handed you 1.5 (which, if consistent, equals 40FPS). That means you can move the sprite 3 points this frame instead of 2 to maintain the desired speed.

4) Throttling: The number of frames passed to the callback can be capped to a maximum value. So, to use the sprite movement example from #3, you can make sure the sprite doesn't jump too large a distance in a single frame, even if the browser is lagging badly.

5) Throttling: The number of frames passed to the callback can be multiplied. So, to use the sprite movement example from #3, you can increase (i.e. multiply by 2) or decrease (i.e. multiply by 0.5) the animation speed.

6) Usability: You can add callbacks that should be called only once by the ticker, as opposed to in a constant loop.

7) Usability: You can add or remove multiple callbacks at will. All callbacks will be called in the same frame, so that multiple animations may be managed separately but rendered in sync. requestAnimationFrame supports this also, but keeping track of those request IDs can be more painful.

8) Usability: You could just stop the ticker and trigger callbacks manually. For example, you could allow a user to stop the animations and present a button that lets them step through frame-by-frame.

In essence, you've got a bunch of little features here that cater to creative situations but aren't useful in many scenarios. But the idea of wrapping requestAnimationFrame in an object that may be manipulated carries a lot of potential. Feel free to ignore it or use it, depending on your situation.
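The feature list above can be distilled into a minimal, dependency-free sketch of the ticker pattern. This is not PIXI's implementation - `MiniTicker` is an invented name - and it is driven by hand here (per feature 8), which also makes the delta math easy to verify:

```javascript
// A toy ticker illustrating the pattern described above: callbacks receive
// a frame delta, can be added/removed at will, and the loop can be driven
// manually (feature 8). Not PIXI's code - just the core idea.
class MiniTicker {
  constructor(speed = 1) {
    this.speed = speed;      // feature 5: multiply the delta
    this.maxDelta = 10;      // feature 4: cap the delta
    this.callbacks = [];
  }

  add(fn) {
    this.callbacks.push(fn); // feature 7: many callbacks, same frame
    return this;
  }

  remove(fn) {
    this.callbacks = this.callbacks.filter(cb => cb !== fn);
    return this;
  }

  // In the real thing, requestAnimationFrame would call this with the
  // elapsed frame count; here we call it by hand (feature 8).
  tick(rawDelta) {
    const delta = Math.min(rawDelta, this.maxDelta) * this.speed;
    for (const cb of this.callbacks.slice()) cb(delta);
  }
}

// Usage: the sprite-movement example from feature 3.
const ticker = new MiniTicker();
const sprite = { x: 0 };
ticker.add(delta => { sprite.x += delta * 2; }); // 2 points per frame
ticker.tick(1);    // on time: moves 2 points
ticker.tick(1.5);  // lagging (40FPS): moves 3 points, so sprite.x is now 5
```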
  10. Flake, I'm pretty sure you can still get 'er done the way you like in WebGL. Reminder: I said that parent/child relationships are OPTIONAL. So if they make things awkward, you don't have to use them. For example, using your latest example:

function render () {
  var graphics, sprite;

  // Remove all objects we rendered last time so that we can add a new set.
  stage.removeChildren();

  graphics = new PIXI.Graphics();
  // draw poly
  stage.addChild(graphics);

  graphics = new PIXI.Graphics();
  // draw rect
  stage.addChild(graphics);

  sprite = PIXI.Sprite.fromImage(source);
  stage.addChild(sprite);

  graphics = new PIXI.Graphics();
  // draw line
  stage.addChild(graphics);

  graphics = new PIXI.Graphics();
  // draw rect
  stage.addChild(graphics);

  sprite = PIXI.Sprite.fromImage(source2);
  stage.addChild(sprite);

  // about 300 more
}

Notice that I'm creating a new graphics object for every shape and a new sprite object for every image. I add them to the stage in order, so that later objects layer on top of earlier objects. Every object is positioned using absolute coordinates, since we're not using parent/child relationships. This should more closely match what you're doing already for the canvas. Regardless of whether you are working with the 2d canvas manually or using PIXI, you are drawing the final product layer-by-layer. So what you can do manually, PIXI can do for you in a 2d or webgl context. You see, you can't draw images on a graphics object, but you don't NEED to. You just gotta stop thinking of the graphics object as if it were your entire canvas. It's just a layer. You may draw more than one shape on a graphics object to be in that layer, but I fragmented the layers above just in case your code is similarly fragmented.
  11. Sounds like what you need is a sprite child for that graphics object. You see, technically, the entire canvas (whether 2d or webgl, doesn't matter) is one big image. PIXI just draws different parts of the image separately to create the whole. So when you render a container, you will draw each child of that container (whether sprites, graphics, or something else) and all of its descendants (children, children of children, etc.) in order from top (root) to bottom (leaves). The different parts may overlap, but at the end of the day the entire tree is reduced to an image. So, do you want an image loaded from file to be in the middle of your graphics object? Easy. Just position the image object (sprite) so that it is in the middle of your graphics object. The easiest way to do that is to make the sprite a child of the graphics object, so that its position (x and y coordinates) is relative to the top-left corner of the graphics object. Parent/child relationships are not required, mind you; they are just a convenient way to get relative positioning (and also guarantee that the child is drawn after the parent, possibly overlapping it). Does the whole paradigm of drawing position and order make more sense now?

With all that in mind, I'll answer each question specifically:

1) Is there a way of doing this, similar to the plain/vanilla canvas way?

// untested
// 2d or webgl, doesn't matter.
var renderer = PIXI.autoDetectRenderer(300, 300);
var canvas = renderer.view;
var stage = new PIXI.Container();
var sprite = PIXI.Sprite.fromImage(source);
var graphics = new PIXI.Graphics();

stage.addChild(graphics);
graphics.addChild(sprite);

// Assuming the graphics object is 100w x 100h:
// The graphics position is centered horizontally and vertically relative to the stage/canvas.
// The absolute position of the graphics object is 100 points right and 100 points down from the top-left corner of the container (parent).
graphics.position = new PIXI.Point(100, 100);

// Assuming the sprite object is 50w x 50h:
// The sprite position is centered horizontally and vertically relative to the graphics object.
// The absolute position of the sprite object is 125 points right and 125 points down from the top-left corner of the container (grandparent).
sprite.position = new PIXI.Point(25, 25);

// Draw some lines n stuff on the graphics object.

// The only reason I'm putting it in the ticker is just in case the image hasn't loaded yet.
// We'll render 60 frames per second until it does... and after it does!
PIXI.ticker.shared.add(function () {
  renderer.render(stage);
});

// Put the canvas somewhere in the DOM.

2) Is the context (2d) unique to the Canvas? Yes. 2d is a context; webgl is a context. PIXI calls the 2d context renderer "Canvas". Both contexts are built on a "canvas" DOM element. Confusing? Yes.

3) Would drawing imaged[sic] to the graphics object limit me to the CanvasRenderer... No. PIXI abstracts the WebGL nonsense (it's pretty arcane) away from your eyes so that you can create sprite objects that can be rendered magically using the webgl context or rendered boringly using the 2d context.
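The relative-to-absolute arithmetic in the comments above (25 + 100 = 125) can be checked with a tiny helper. This is illustrative only - `absolutePosition` is not a PIXI API, and real PIXI computes this with transform matrices that also handle scale and rotation:

```javascript
// Walk up a parent chain summing positions - the arithmetic behind the
// "125 points" comment above. Plain objects stand in for display objects.
function absolutePosition(node) {
  let x = 0, y = 0;
  for (let n = node; n; n = n.parent) {
    x += n.x;
    y += n.y;
  }
  return { x, y };
}

// Mirror the example: stage -> graphics at (100, 100) -> sprite at (25, 25).
const stage = { x: 0, y: 0, parent: null };
const graphics = { x: 100, y: 100, parent: stage };
const sprite = { x: 25, y: 25, parent: graphics };
```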
  12. Resize Event: DOM elements don't have resize events, and PIXI doesn't have any either. Typically, you have to catch resize events for the window and adjust the canvas accordingly.
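The "adjust the canvas accordingly" step usually means computing a uniform scale that fits the content into the window. A hedged sketch, where the listener in the comment assumes browser globals and PIXI-like `renderer`/`stage` objects; only the pure fit math is shown as code:

```javascript
// Compute a uniform scale that fits content of a given size into the
// available window while preserving aspect ratio. The window-resize wiring
// itself is browser-only and sketched here as a comment:
//
//   window.addEventListener('resize', () => {
//     const s = fitScale(boardW, boardH, window.innerWidth, window.innerHeight);
//     renderer.resize(boardW * s, boardH * s);
//     stage.scale.set(s);
//   });
function fitScale(contentW, contentH, availW, availH) {
  return Math.min(availW / contentW, availH / contentH);
}
```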
  13. TL;DR: Why does the interaction manager add itself to the ticker? Events seem to work fine without it - even mouse over/out.

First of all, this is some great work. I haven't looked at my pixi-based project since January, and now that I'm resuming work I'm updating to PIXI v3 and liking what I see in the code. What is especially awesome is disconnecting the event loop from the render loop using this fancy ticker object. This matters for my particular use case, where I am not running animations continuously. Animations are triggered by user interaction and may be stopped by the animation completing or by another user interaction. So, as long as nothing is happening visually, I want nothing to happen behind the scenes either.

In my obsessed quest for high performance and preserved battery life, in PIXI v2 I triggered a render every time I detected a mouse move on the canvas. If I didn't do this, then mouse over/out events did not trigger correctly. Otherwise, I only rendered as I changed things and called it a day. The only thing that sucked was using slow browsers (*ahem* Firefox) on poor GPUs. As my animations ran at a modest 12 FPS, I noticed a performance hit as I moved my cursor around, triggering rapid, unthrottled rendering - yuck! I never did get around to doing something about that...

So now I'm in the beautiful new world of PIXI v3. I see a fancy ticker object, and the interaction manager adding itself to it and throttling itself. Theoretically, I could just turn off the ticker and call the interaction manager update myself in my mouse move event handler - taking advantage of doing less work and throttling. But I might want to make use of the ticker. So I commented out the line where the interaction manager added itself to the ticker. But before I did the whole mouse move business, I did a little testing without it entirely and was surprised at what I found. Without the ticker running, and without the interaction manager update ever getting called, mouse over/out and all other mouse and touch events seem to work just fine! Amazing. So why does it add itself to the ticker? What am I missing?

Some information I gleaned from method call chains:

update() => processMouseOverOut()
onMouseMove() => processMouseMove() => processMouseOverOut()
onMouseOut() => processMouseOverOut()

All roads lead to processMouseOverOut(), so the update() seems redundant...
  14. Fortunately, I already knew how to use it. But msha pointed out an interesting fact: usually, the alpha column can be used as an offset, since a pixel is usually either fully opaque (1) or fully transparent (and irrelevant). So, unlike brightening, it can be used for whitening. Only semi-transparent pixels will (albeit uniformly) not whiten as much as they should. But it's close. Very close. I love the cleverness. Maybe this is why the 5th column was skipped in PIXI's implementation (the current ColorMatrixFilter shader should be faster than mine).
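The alpha-column trick can be sanity-checked numerically. A sketch of applying a 4x4 color matrix by hand (illustrative math only, not PIXI's shader; `applyColorMatrix` and `whiten` are invented names): because the input alpha is 1 for opaque pixels, the alpha column of each color row behaves exactly like the missing 5th offset column.

```javascript
// Apply a 4x4 color matrix to an RGBA pixel (rows produce r, g, b, a).
// With no dedicated offset column, the alpha column still acts as an
// offset whenever the input alpha is 1 - the trick described above.
function applyColorMatrix(m, [r, g, b, a]) {
  const out = [];
  for (let row = 0; row < 4; row++) {
    out.push(
      m[row * 4 + 0] * r +
      m[row * 4 + 1] * g +
      m[row * 4 + 2] * b +
      m[row * 4 + 3] * a
    );
  }
  return out;
}

// A "whitening" matrix: identity plus 0.5 in the alpha column of the
// r, g, and b rows, acting as a +0.5 offset for opaque pixels.
const whiten = [
  1, 0, 0, 0.5,
  0, 1, 0, 0.5,
  0, 0, 1, 0.5,
  0, 0, 0, 1,
];
```

Note how a semi-transparent pixel (alpha 0.5) only receives half the offset - the "close, but not exact" case described above.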
  15. Thanks! You've opened my eyes. Now I know what a shader is, and with a little research these vertex and fragment shaders make a bit more sense in the pixi code. So it seems like what I want to do is modify the ColorMatrixFilter class to automatically select the appropriate fragment shader based on the length of the matrix assigned. I'm still trying to figure out what's going on in the pixi code (they seem to have a fairly magical shader that supports any of 2x2, 3x3, or 4x4 matrices), but I think I'm armed with enough information to be dangerous. Any tips are welcome, though.