dmko


  1. sodium frp demo/playground

    It shocked me too! Programming this way is fun! Don't get me wrong - I don't think it could work for the PIXI core, since there are some performance sacrifices, and GC burps don't play nicely with all the immutable temp variables. But as an approach to using PIXI, it's kinda awesome imho.
  2. sodium frp demo/playground

    Here ya go! Source and notes @ https://github.com/dakom/sodium-typescript-playground Demo @ https://dakom.github.io/sodium-typescript-playground/ The explanation is in the readme, the notes, and some code comments... basically it's a demo of using sodium FRP with PIXI to do functional reactive programming. It's highly related to this post and this post. Feedback welcome!
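The core idea the playground builds on is sodium's notion of a cell: a container for a value over time that derived values and side effects can listen to. Here's a minimal self-contained sketch of that idea - note this `Cell` is a hypothetical stand-in written for illustration, not sodium-typescript's actual API:

```typescript
// Toy sodium-style Cell: holds a current value, notifies listeners on change.
// Hypothetical stand-in for illustration; sodium's real Cell/CellSink differ.
class Cell<A> {
  private listeners: Array<(a: A) => void> = [];
  constructor(private value: A) {}

  sample(): A {
    return this.value;
  }

  send(a: A): void {
    this.value = a;
    this.listeners.forEach(l => l(a));
  }

  listen(l: (a: A) => void): void {
    this.listeners.push(l);
  }

  // Derived cell that stays in sync with this one.
  map<B>(f: (a: A) => B): Cell<B> {
    const out = new Cell(f(this.value));
    this.listen(a => out.send(f(a)));
    return out;
  }
}

// Example: an alpha cell driving a derived visibility flag, the way one
// might wire a PIXI sprite property to FRP state.
const alpha = new Cell(0);
const visible = alpha.map(a => a > 0);
alpha.send(1);
console.log(visible.sample()); // true
```

The appeal is that `visible` can never drift out of sync with `alpha` - the dependency is declared once instead of being re-established in every event handler.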
  3. Clear Rect in Render Texture

    I forgot to follow up on your erase-mode update... thanks for doing that, btw! Is this the same answer? (will be native in v5 only)
  4. Managing interactions with streams?

    and that, looking at my old code, I already knew that?
  5. Managing interactions with streams?

    oh, it seems touch events can be intercepted globally - https://github.com/pixijs/pixi.js/pull/2658 ?
  6. Managing interactions with streams?

    v5?! woo! OK... so if making these changes, they need to be done manually by changing the prototype somehow?
  7. Managing interactions with streams?

    @ivan.popelyshev - is this something that's been added? i.e. to automatically wire certain PIXI primitives (e.g. Sprite) to call a top-level function (such as is done at the top of this thread, by changing the prototype)?
  8. FRP w/ PIXI?

    Just to add - instead of a whole virtual-pixi-graph thing, which is kinda crazy, maybe just push the commands down, e.g. "addSprite(texture, parent)" or a serialized version of that... So basically it'd be like this, where "viewInfo" is probably little more than the stage and some utility helper functions to test touch inputs etc.:

        inputStream
          .map(input => getCommands(viewInfo, input))
          .map(cmd => sendToPixi(viewInfo, cmd))
          .observe(newViewInfo => updateViewInfo(newViewInfo));
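That pipeline can be made concrete with a toy push stream - everything here (`Stream`, `getCommands`, `sendToPixi`, the command and view shapes) is hypothetical scaffolding standing in for a real stream library and real PIXI calls:

```typescript
// Toy push stream with just enough API for the pipeline sketch.
class Stream<A> {
  private subs: Array<(a: A) => void> = [];
  push(a: A): void {
    this.subs.forEach(s => s(a));
  }
  map<B>(f: (a: A) => B): Stream<B> {
    const out = new Stream<B>();
    this.subs.push(a => out.push(f(a)));
    return out;
  }
  observe(f: (a: A) => void): void {
    this.subs.push(f);
  }
}

// Hypothetical command/view shapes: logic emits commands, the view
// layer interprets them. Only the interpreter touches PIXI.
interface Command { kind: "addSprite"; texture: string; parent: string; }
interface ViewInfo { sprites: Command[]; }

function getCommands(view: ViewInfo, input: string): Command {
  // Pure: turn raw input into a description of what should happen.
  return { kind: "addSprite", texture: input, parent: "stage" };
}

function sendToPixi(view: ViewInfo, cmd: Command): ViewInfo {
  view.sprites.push(cmd); // stand-in for stage.addChild(new Sprite(...))
  return view;
}

const viewInfo: ViewInfo = { sprites: [] };
const inputStream = new Stream<string>();

inputStream
  .map(input => getCommands(viewInfo, input))
  .map(cmd => sendToPixi(viewInfo, cmd))
  .observe(newViewInfo => console.log(newViewInfo.sprites.length));

inputStream.push("bunny.png"); // one sprite command flows through
```

The key property is that `getCommands` stays pure and testable; all the imperative PIXI mutation is quarantined inside `sendToPixi`.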
  9. FRP w/ PIXI?

    This is kind of a continuation of the discussion in this thread, though it could branch off elsewhere (like not using streams, using sodium, whatever) - so I figured I'd open a new one.

    Basically - if one wanted to take the functional programming or functional reactive programming approach, how does PIXI fit in the picture? It seems to me like there are basically 2 options:

    1. Something like Cycle.js - where we hook the "output" of PIXI back into the inputs (also via observables/streams).
    2. Have a strict unidirectional flow from input -> logic/data -> view.

    The first one is sort of dealt with in that linked thread above - and I'm not so sure I like it... here I'm asking more about the second. How could the second approach be done cleanly? More specifically:

    1. Is it crazy to keep everything in some sort of structure that simply gets re-rendered on every frame? I mean, I guess that's what PIXI's doing under the hood... but in this case it wouldn't be to render, it would be to drive a single custom "drawGameObjects()" which would effectively do something like removeChildren/addChildren/renderTextures(). It could be a little smarter, of course, and maybe just update the diffs, which wouldn't be too difficult if the data structure is strict... but still, curious if that's just a really bad idea or not so bad at all.

    2. Similarly - with this idea we'd lose touch events... we'd need to sift the global touch events in the logic/data part of the code and wouldn't benefit from PIXI's automated on() event detection. At a glance that's awful - but if sticking to bounding boxes and the like, it's actually possible to be much cleaner that way... think, for example, of clicking one object changing the state of some other ones. All the logic/data needs is the layering, position, size, and rotation of each object - which it would need anyway to pass to the "renderer" - so, not so awful...

    Is there some other approach I'm missing? If this approach is taken, is it better to just drive WebGL directly? (My guess is no - PIXI is way more than just a pixel-pusher / touch listener! It handles context, an easy-to-use API for textures, batching, sprite sheets, etc...)

    FWIW, I found this code sample for RxJS to be super clean and informative... much easier to understand than the elm / haskell / youtube videos / etc.: https://github.com/Lorti/rxjs-breakout/blob/master/app.js (the author claims they're a beginner with this approach, but it looks fantastic to me!)
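The "sift global touch events with bounding boxes" idea from point 2 can be sketched in a few lines, using only the data the logic layer would pass to the renderer anyway. All names here (`GameObject`, `hitTest`) are hypothetical illustrations, not part of PIXI:

```typescript
// Sketch: resolve a global pointer event in the logic layer using only
// position, size, and layer - no PIXI event plumbing. All names hypothetical.
interface GameObject {
  id: string;
  x: number;      // top-left position
  y: number;
  width: number;
  height: number;
  layer: number;  // higher layer = drawn on top
}

function hitTest(objects: GameObject[], px: number, py: number): GameObject | undefined {
  return objects
    .filter(o =>
      px >= o.x && px < o.x + o.width &&
      py >= o.y && py < o.y + o.height)
    .sort((a, b) => b.layer - a.layer)[0]; // topmost hit wins
}

const objs: GameObject[] = [
  { id: "bg",  x: 0,  y: 0,  width: 100, height: 100, layer: 0 },
  { id: "btn", x: 10, y: 10, width: 20,  height: 20,  layer: 1 },
];
console.log(hitTest(objs, 15, 15)?.id); // the topmost hit: "btn"
```

Because hit resolution happens in plain data, a click on one object can trivially update the state of other objects in the same pure reducer step - which is exactly the "clicking one object should change the state of some other ones" case described above. (Rotation would need a point-in-oriented-rectangle test instead of the axis-aligned check shown here.)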