pongstylin

Members
  • Content Count: 23
  • Joined
  • Last visited
  • Days Won: 1

pongstylin last won the day on March 31

pongstylin had the most liked content!

About pongstylin
  • Rank: Member

Recent Profile Visitors
  552 profile views
  1. Seems off topic for this thread. I'm not sure how tolerant this forum is about such things, so I'm not sure whether I should respond here and permit the practice, or make you create a new thread.
  2. Hah! Sure enough. I tested my app to see if my cursor would change to a pointer after an animation completed. It did when I had the InteractionManager plugged into the ticker. When it wasn't plugged in, the cursor didn't change until I moved the mouse one pixel. So let's say I'm fine with the pointer not showing up until I move the mouse. Is that the only reason to run the manager through the ticker?
  3. As far as I can tell from the code, the ticker triggers callbacks every time requestAnimationFrame fires its callback. So if you're hoping that the ticker can be configured to trigger callbacks at 40FPS instead of 60FPS, you can't. But just because the callback is triggered doesn't mean you have to render. For example, the Interaction Manager adds itself to the shared ticker so that it can figure out whether your mouse is hovering over something that can be interacted with and change your cursor to a pointer. But the Interaction Manager only performs this check at 6FPS. See how it does it below:

        // ticker callback function
        InteractionManager.prototype.update = function (deltaTime)
        {
            this._deltaTime += deltaTime;

            // this.interactionFrequency is 10, by default
            if (this._deltaTime < this.interactionFrequency)
            {
                return;
            }

            this._deltaTime = 0;

            // manage some interaction stuff
        };

     So it sums up the deltaTime (number of frames to render since the last call) until it equals or exceeds 10 (6FPS when PIXI.CONST.TARGET_FPMS is set to 0.06). So even though the update function gets called 60 times per second, it only does its manager stuff at 6FPS. You could say that it is more flexible this way. You can run some animations at 30FPS and others at 60FPS, using the same ticker object. The only downside is that if your entire app runs at 12FPS (this is what I use), then you've got a loop running faster than you will ever need. But that would be the case if you used requestAnimationFrame manually as well.

     Yes, since the deltaTime is the number of frames to render since the last call, it should be easy to use in interpolation/extrapolation. I kind of already described the example, but here's some code to drive it home.

        var ticker, sprite, mover;

        ticker = new PIXI.ticker.Ticker();
        sprite = PIXI.Sprite.fromImage(imageUrl);

        mover = function (numFrames) {
            // Move the sprite to the right by 2 points per frame at 60FPS.
            // If this callback is called precisely on time, then numFrames will be 1 and we'll move the prescribed 2 points.
            // If this callback is called in double the time (30FPS), then numFrames will be 2 and we'll move 4 points.
            // If this callback is called in the middle (40FPS), then numFrames will be 1.5 and we'll move 3 points.
            sprite.position.x += numFrames * 2;

            // Stop the animation once we reach our destination.
            if (sprite.position.x >= 100) {
                sprite.position.x = 100;
                ticker.remove(mover);
            }
        };

        ticker.add(mover);

     I'm not sure if this is interpolation or extrapolation (I'm really not familiar with the theory, I just logically conclude that this might be a normal use-case), but I think it's in the ballpark of what you're talking about.
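     To apply the same trick to your own work - say, the 12FPS loop I mentioned - you can accumulate deltaTime and bail out until enough frames have piled up. Here's a minimal sketch of that pattern; the 12FPS numbers and the doTwelveFPSWork function are just illustrative, and only shared.add is PIXI's API:

        // A minimal sketch of the same accumulate-and-skip throttle, but at 12FPS.
        var framesPerUpdate = 5; // 60FPS target / 12FPS desired = 5 frames per update
        var accumulated = 0;

        // Illustrative placeholder for whatever you want to run at 12FPS.
        function doTwelveFPSWork(numFrames) {
            // your low-frequency logic/rendering here
        }

        PIXI.ticker.shared.add(function (deltaTime) {
            accumulated += deltaTime;

            if (accumulated < framesPerUpdate) {
                return; // not enough frames have piled up yet
            }

            doTwelveFPSWork(accumulated);
            accumulated = 0;
        });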
  4. The Ticker documentation is here: http://pixijs.github.io/docs/PIXI.ticker.Ticker.html

     And here (it makes reference to using PIXI.ticker.shared): http://pixijs.github.io/docs/PIXI.ticker.html

     Although I tend to prefer to use the code as my documentation: http://pixijs.github.io/docs/core_ticker_Ticker.js.html

     Essentially, PIXI.ticker.shared is an instance of the PIXI.ticker.Ticker class. This instance is used by the Interaction Manager, although I'm trying to figure out why, since it seems unnecessary (nobody has responded to my post yet). Of course, you are free to use the same instance, but keep in mind that if you modify how it behaves, anything internal to PIXI that uses it may be affected.

     The PIXI Ticker is based on requestAnimationFrame, so in function it is little different from running your render loop manually using requestAnimationFrame. But the object's event interface may be more attractive to use and look at, since it involves less code. Beyond this basic function, it offers these features (see the sketch after this list):

     1) Usability: You can start and stop it easily.

     2) Efficiency: It auto-starts (you can disable this) when there are listeners/callbacks, and auto-stops when there aren't any.

     3) Throttling: The requestAnimationFrame callback is passed the current time as an argument. But the ticker callback is passed the number of frames that should have been rendered since the last call, depending on your target frames per second. By default, PIXI.CONST.TARGET_FPMS (target frames per millisecond) is set to 0.06, which means you are targeting 60 frames per second. In a perfect world, it would always say 1. But if the browser lagged really badly, it might say 60 if it has been a whole second since the last call. One possible use of this argument is to skip frames if it has a value >= 2. Or, let's say you have a sprite moving from A to B. It is supposed to move 2 points per frame, but the callback handed you 1.5 (which, if consistent, equals 40FPS). That means you can move the sprite 3 points this frame instead of 2 to maintain the desired speed.

     4) Throttling: The number of frames passed to the callback can be capped to a maximum value. So, to use the sprite movement example from #3, you can make sure the sprite doesn't jump too large a distance in a single frame, even if the browser is lagging badly.

     5) Throttling: The number of frames passed to the callback can be multiplied. So, to use the sprite movement example from #3, you can increase (e.g. multiply by 2) or decrease (e.g. multiply by 0.5) animation speed.

     6) Usability: You can add callbacks that should be called only once by the ticker, as opposed to in a constant loop.

     7) Usability: You can add or remove multiple callbacks at will. All callbacks will be called in the same frame, so that multiple animations may be managed separately but rendered in sync. requestAnimationFrame supports this also, but keeping track of those request IDs can be more painful.

     8) Usability: You can just stop the ticker and trigger callbacks manually. For example, you could allow a user to stop the animations and present a button that lets them step through frame-by-frame.

     In essence, you've got a bunch of little features here that cater to creative situations, but aren't useful in many scenarios. But the idea of wrapping requestAnimationFrame in an object that can be manipulated carries a lot of potential. Feel free to ignore it or use it, depending on your situation.
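     Here's a minimal sketch of a few of those features in use. The property and method names (speed, minFPS, addOnce, add, start, stop) are how I read the v3 Ticker docs/code, so double-check them against your version:

        // A minimal sketch of features 1, 4, 5, and 6 above (PIXI v3 Ticker assumed).
        var myTicker = new PIXI.ticker.Ticker();

        // Feature 5: run everything on this ticker at half speed.
        myTicker.speed = 0.5;

        // Feature 4: cap the frame count passed to callbacks so a long stall
        // never looks like more than a tenth of a second's worth of frames.
        myTicker.minFPS = 10;

        // Feature 6: a callback that runs on the next frame only.
        myTicker.addOnce(function (numFrames) {
            console.log('Called once, after ' + numFrames + ' frame(s) worth of time.');
        });

        // A regular looping callback.
        myTicker.add(function (numFrames) {
            // animate something here using numFrames
        });

        // Feature 1: start and stop at will (auto-start would also kick in once
        // callbacks are added, unless you disable it).
        myTicker.start();
        // ... later: myTicker.stop();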
  5. Flake, I'm pretty sure you can still get 'er done the way you like in WebGL. Reminder: I said that parent/child relationships are OPTIONAL. So if they make things awkward, you don't have to use them. For example, using your latest example:

        function render ()
        {
            var graphics, sprite;

            // Remove all objects we rendered last time so that we can add a new set.
            stage.removeChildren();

            graphics = new PIXI.Graphics();
            // draw poly
            stage.addChild(graphics);

            graphics = new PIXI.Graphics();
            // draw rect
            stage.addChild(graphics);

            sprite = PIXI.Sprite.fromImage(source);
            stage.addChild(sprite);

            graphics = new PIXI.Graphics();
            // draw line
            stage.addChild(graphics);

            graphics = new PIXI.Graphics();
            // draw rect
            stage.addChild(graphics);

            sprite = PIXI.Sprite.fromImage(source2);
            stage.addChild(sprite);

            // about 300 more
        }

     Notice that I'm creating a new graphics object for every shape. I'm creating a new sprite object for every image. I add them to the stage in order so that later objects layer on top of earlier objects. Every object is positioned using absolute coordinates since we're not using parent/child relationships. This should more closely match what you're doing already for the canvas. Regardless of whether you are working with the 2d canvas manually, or you are using PIXI, you are drawing the final product layer-by-layer. So what you can do manually, PIXI can do for you on a 2d or webgl context.

     You see, you can't draw images on a graphics object. But you don't NEED to. You just gotta stop thinking of the graphics object as if it were your entire canvas. It's just a layer. You may draw more than one shape on a graphics object to be in that layer, but I fragmented the layers above just in case your code is similarly fragmented.
  6. Sounds like what you need is a sprite child for that graphics object. You see, technically, the entire canvas (whether 2d or webgl, doesn't matter) is one big image. PIXI just draws different parts of the image separately to create the whole. So when you render a container, you will draw each child of that container (whether sprites, graphics, or something else) and all of its descendants (children, children of children, etc.) in order from top (root) to bottom (leaves). The different parts may overlap, but at the end of the day the entire tree is reduced to an image.

     So, do you want an image loaded from a file to be in the middle of your graphics object? Easy. Just position the image object (sprite) so that it is in the middle of your graphics object. The easiest way to do that is to make the sprite a child of the graphics object, so that its position (x and y coordinates) is relative to the top-left corner of the graphics object. Parent/child relationships are not required, mind you; they are just a convenient way to get relative positioning (and also guarantee that the child is drawn after the parent, possibly overlapping). Does the whole paradigm of drawing position and order make more sense now?

     With all that in mind, I'll answer each question specifically:

     1) Is there a way of doing this, similar to the plain/vanilla canvas way?

        // untested
        // 2d or webgl, doesn't matter.
        var renderer = PIXI.autoDetectRenderer(300, 300);
        var canvas = renderer.view;
        var stage = new PIXI.Container();
        var sprite = PIXI.Sprite.fromImage(source);
        var graphics = new PIXI.Graphics();

        stage.addChild(graphics);
        graphics.addChild(sprite);

        // assuming graphics object is 100w x 100h
        // graphics position is centered horizontally and vertically relative to stage/canvas.
        // absolute position of the graphics object is 100 points right and 100 points down from the top-left corner of container (parent).
        graphics.position = new PIXI.Point(100, 100);

        // assuming sprite object is 50w x 50h
        // sprite position is centered horizontally and vertically relative to graphics object.
        // absolute position of the sprite object is 125 points right and 125 points down from the top-left corner of container (grandparent).
        sprite.position = new PIXI.Point(25, 25);

        // draw some lines n stuff on the graphics object.

        // Only reason I'm putting it in the ticker is just in case the image hasn't loaded yet.
        // We'll render 60 frames per second until it does... and after it does!
        PIXI.ticker.shared.add(function () {
            renderer.render(stage);
        });

        // put the canvas somewhere in the DOM.

     2) Is the context (2d) unique to the Canvas? Yes. 2d is a context. webgl is a context. PIXI calls the 2d context renderer "Canvas". Both contexts are built on a "canvas" DOM element. Confusing? Yes.

     3) Would drawing imaged [sic] to the graphics object limit me to the CanvasRenderer... No. PIXI abstracts the WebGL nonsense (it's pretty arcane) away from your eyes so that you can create sprite objects that can be rendered magically using the webgl context or rendered boringly using the 2d context.
  7. Resize Event

     DOM elements don't have resize events, and PIXI doesn't have any either. Typically, you have to catch resize events on the window and adjust the canvas accordingly.
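     For example, a minimal sketch of that window-level approach (it assumes a PIXI renderer created elsewhere; the sizes are just illustrative):

        // Resize the PIXI renderer whenever the window resizes.
        // Assumes `renderer` was created with PIXI.autoDetectRenderer elsewhere.
        window.addEventListener('resize', function () {
            renderer.resize(window.innerWidth, window.innerHeight);
        });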
  8. TL;DR: Why does the interaction manager add itself to the ticker? Events seem to work fine without it - even mouse over/out.

     First of all, this is some great work. I haven't looked at my pixi-based project since January, and now that I'm resuming work I'm updating to PIXI v3 and liking what I see in the code. What is especially awesome is disconnecting the event loop from the render loop using this fancy ticker object. That matters for my particular use case, where I am not running animations continuously. Animations are triggered by user interaction and may be stopped by the animation completing or by another user interaction. So as long as nothing is happening visually, I want nothing to happen behind the scenes either.

     So, in my obsessed quest for high performance and preserved battery life, in PIXI v2 I triggered a render every time I detected a mouse move on the canvas. If I didn't do this, then mouse over/out events did not trigger correctly. Otherwise, I only rendered as I changed things and called it a day. The only thing that sucked was using slow browsers (*ahem* Firefox) on poor GPUs. As my animations ran at a modest 12 FPS, I noticed a performance hit as I moved my cursor around, triggering rapid, unthrottled rendering - yuck! I never did get around to doing something about that...

     So now I'm in the beautiful new world of PIXI v3. I see a fancy ticker object, and the interaction manager adding itself to it and throttling itself. Theoretically, I could just turn off the ticker and call the interaction manager update myself in my mouse move event handler - taking advantage of doing less work and throttling. But I might want to make use of the ticker. So I commented out the line where the interaction manager added itself to the ticker. But before I did the whole mouse move business, I did a little testing without it entirely and was surprised at what I found. Without the ticker running and without the interaction manager update ever getting called, mouse over/out and all other mouse and touch events seem to work just fine! Amazing.

     So why does it add itself to the ticker? What am I missing? Some information I gleaned from the method call chains:

        update() => processMouseOverOut()
        onMouseMove() => processMouseMove() => processMouseOverOut()
        onMouseOut() => processMouseOverOut()

     All roads lead to processMouseOverOut(), so the update() seems redundant...
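     For what it's worth, here's a rough sketch of the "call the update myself from a mouse move handler" idea above. It assumes PIXI v3, where the renderer exposes the interaction manager at renderer.plugins.interaction and the manager registered its update on the shared ticker with itself as context - treat the remove() call in particular as an assumption to verify against your version:

        // Rough sketch (PIXI v3 assumed): stop the shared ticker from driving the
        // interaction manager and drive it from mouse moves instead.
        var interaction = renderer.plugins.interaction; // assumption: v3 exposes it here

        // Assumption: the manager added itself as (this.update, this), so this removes it.
        PIXI.ticker.shared.remove(interaction.update, interaction);

        renderer.view.addEventListener('mousemove', function () {
            // Pass a frame count large enough to beat the manager's own throttle.
            interaction.update(interaction.interactionFrequency);
        });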
  9. Fortunately, I already knew how to use it. But msha pointed out an interesting fact. Usually, the alpha column can be used as an offset, since a pixel is usually either fully opaque (1) or fully transparent (and irrelevant). So, unlike brightening, it can be used for whitening. Only semi-transparent pixels will whiten less than they should (though uniformly so). But it's close. Very close. I love the cleverness. Maybe this is why the 5th column was skipped in PIXI's implementation (the current ColorMatrixFilter shader should be faster than mine).
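     To make that concrete, here's what the trick looks like with the 4x4 matrix discussed in this thread (filter being a ColorMatrixFilter instance, and 0.5 just an illustrative amount): because alpha is 1 for opaque pixels, the alpha column effectively becomes an offset column and adds 0.5 to each of R, G, and B.

        // The alpha-column-as-offset trick: for fully opaque pixels (alpha = 1),
        // the 0.5 in the last column adds 0.5 to each of R, G, and B - whitening.
        filter.matrix = [
            1, 0, 0, 0.5,
            0, 1, 0, 0.5,
            0, 0, 1, 0.5,
            0, 0, 0, 1
        ];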
  10. Thanks! You've opened my eyes. Now I know what a shader is. And with a little research these vertex and fragment shaders make a bit more sense in the pixi code. So it seems like what I want to do is modify the ColorMatrixFilter class to automatically select the appropriate fragment shader based on the length of the matrix assigned. Still trying to figure out what's going on here in the pixi code (they seem to have a fairly magical shader that supports any of 2x2, 3x3, or 4x4 matrices). But I think I'm armed with enough information to be dangerous. Any tips are welcome, though.
  11. Fisheye Zoom

      Hey, I don't know much about the fisheye effect, but since you mention doing the distortion in javascript, I wanted to point out that it MIGHT be possible to do the distortion in a C-like shader language (GLSL) that WebGL compiles and runs on the GPU. Check out my amazing discovery in my "Sprite Whitening" thread. I point out some JS code that creates convolution and colormatrix filters using compiled shader code. Maybe your filter can be done using the same technique and be pretty quick about it.

      EDIT: Never mind, I think you were talking about this in your first bullet (putting the distortion in the vertex shader). That seems like a good option.
  12. I'm impatient and I figured out a new way to google for the information. And I found some amazing code: https://github.com/phoboslab/WebGLImageFilter/blob/master/webgl-image-filter.js

      If you look at line 282, you can see a "colorMatrix" implementation identical to Flash's ColorMatrixFilter. If you look at lines 301 and 315, you can see the C-like shader (GLSL) code that implements the pixel color calculations based on the matrix. This shader code is compiled by WebGL and runs very fast on the GPU. If you look at lines 330, 340, 351, 355, 367, 371, 387, and on and on and on... you can see how the matrix can be used to produce many different effects. This is crazy educational.

      So it appears that this IS supported by WebGL in a roll-your-own kind of way. I'm seriously excited. So my last question is: does pixi.js already support this in part or whole? Cause I'm thinking about opening an issue ticket... and, since I'm impatient, possibly hacking this in myself while I wait for them to get it implemented in a well-designed manner.
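      To illustrate what that shader computes (this is not code from that repository or from PIXI - just a plain-JS sketch of the math): each output channel is a weighted sum of the input RGBA channels plus the row's offset, with the 4x5 matrix flattened into 20 numbers.

        // Plain-JS sketch of what a 4x5 color matrix does to one pixel.
        // `m` is 20 numbers, row-major: one row per output channel (R, G, B, A),
        // each row being [rWeight, gWeight, bWeight, aWeight, offset].
        // Channel values are normalized (0..1).
        function applyColorMatrix(m, r, g, b, a) {
            return {
                r: m[0] * r + m[1] * g + m[2] * b + m[3] * a + m[4],
                g: m[5] * r + m[6] * g + m[7] * b + m[8] * a + m[9],
                b: m[10] * r + m[11] * g + m[12] * b + m[13] * a + m[14],
                a: m[15] * r + m[16] * g + m[17] * b + m[18] * a + m[19]
            };
        }

        // Example: the "whitening" matrix from this thread, adding 0.5 to R, G, and B.
        var whiten = [
            1, 0, 0, 0, 0.5,
            0, 1, 0, 0, 0.5,
            0, 0, 1, 0, 0.5,
            0, 0, 0, 1, 0
        ];
        console.log(applyColorMatrix(whiten, 0.2, 0.2, 0.2, 1)); // roughly { r: 0.7, g: 0.7, b: 0.7, a: 1 }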
  13. In pixi.js, we have a ColorMatrixFilter that allows us to manipulate the colors using a 4x4 matrix. But what I'm trying to do is increase the overall "white" of a sprite. Normally, you might say to increase the "brightness". The following illustration doubles each of RGB. The caveat is that black (zero) will not get brighter.

        filter.matrix = [
            2, 0, 0, 0,
            0, 2, 0, 0,
            0, 0, 2, 0,
            0, 0, 0, 1
        ];

      On the other hand, if it were possible to use a 4x5 matrix (like Flash's ColorMatrixFilter), we could use offsets to achieve true "whitening". The following illustration uses Flash's approach, where the 5th column uses non-normalized values to increase each of RGB by 128 (or 0.5 when normalized).

        filter.matrix = [
            1, 0, 0, 0, 128,
            0, 1, 0, 0, 128,
            0, 0, 1, 0, 128,
            0, 0, 0, 1, 0
        ];

      Visually, brightening and whitening can look very different, and I want the latter in my case. As far as I can tell, WebGL does not support 4x5 matrices (please tell me I'm wrong). So, how can I achieve this efficiently in Pixi or WebGL?
  14. Alright, don't worry about it guys. Thanks Agamemnus for offering to help me track it down on IRC. But I was mainly just hoping somebody had an idea of what was going on without too much effort. If effort is required, I'll just spend it myself. So I threw a lot of console logging into the pixi.dev.js to figure out what was different between running the canvas render before vs after the webgl render and figured out what was going on. I've submitted a pixi bug ticket: https://github.com/GoodBoyDigital/pixi.js/issues/1035
  15. Still broken. http://www.taorankings.com/pixi-bug/agamemnus.html