
FRP w/ PIXI?


dmko


This is kind of a continuation of the discussion in this thread

 

Though it could branch off elsewhere (like not using streams, using Sodium, whatever), I figured I'd open a new one :)

Basically - if one wanted to take the functional programming or functional reactive programming approach, how does PIXI fit in the picture?

It seems to me like there are basically two options:

1. Something like Cycle.js - where we hook the "output" of PIXI back into the inputs (also via observables/streams).

2. Have a strict unidirectional flow from input->logic/data->view.

The first one is sort of dealt with in that linked thread above, and I'm not so sure I like it... here I'm asking more about the second.

How could the second approach be done cleanly? More specifically:

1. Is it crazy to keep everything in some sort of data structure that simply gets re-rendered on every frame? I guess that's roughly what PIXI is doing under the hood anyway, but in this case it wouldn't drive the renderer directly; it would drive a single custom "drawGameObjects()", which would effectively do something like removeChildren/addChildren/renderTextures(). It could be a little smarter and only apply the diffs, which wouldn't be too difficult if the data structure is strict. Still, I'm curious whether that's a really bad idea or not so bad at all.

2. Similarly, with this idea we'd lose touch events: we'd need to sift the global touch events in the logic/data part of the code, and wouldn't benefit from PIXI's automatic on() event detection. At a glance that sounds awful, but if we stick to bounding boxes and the like, it can actually be much cleaner that way. Think, for example, of the case where clicking one object should change the state of some other ones: all the logic/data side needs is the layering, position, size, and rotation of each object, which it would need anyway to pass to the "renderer". So, not so awful...
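To make points 1 and 2 concrete, here's a rough sketch of the idea. All names and the object shape here are my own assumptions, not anything from PIXI: game objects live in a plain map of id -> { x, y, width, height, layer, ... } that both the "renderer" and the logic/data side read from.

```javascript
// Hypothetical sketch: diff two frames' worth of objects so a custom
// drawGameObjects() could add, remove, or update only what changed.
// (JSON comparison is crude, but fine for small, strict structures.)
function diffObjects(prev, next) {
  const added = [], removed = [], updated = [];
  for (const id of Object.keys(next)) {
    if (!(id in prev)) added.push(id);
    else if (JSON.stringify(prev[id]) !== JSON.stringify(next[id])) updated.push(id);
  }
  for (const id of Object.keys(prev)) {
    if (!(id in next)) removed.push(id);
  }
  return { added, removed, updated };
}

// Axis-aligned bounding-box hit test standing in for PIXI's on()
// handlers: the topmost (highest layer) object under the point wins.
// Rotation is ignored here for brevity.
function hitTest(objects, px, py) {
  let best = null;
  for (const [id, o] of Object.entries(objects)) {
    const inside = px >= o.x && px < o.x + o.width &&
                   py >= o.y && py < o.y + o.height;
    if (inside && (best === null || o.layer > objects[best].layer)) best = id;
  }
  return best; // id of the hit object, or null if nothing was hit
}
```

Both functions are pure, so they slot naturally into a stream pipeline and are trivial to test without touching PIXI at all.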

Is there some other approach I'm missing? If this approach is taken, is it better to just drive WebGL directly? (My guess is no: PIXI is way more than just a pixel-pusher / touch listener! It handles the context, provides an easy-to-use API for textures, batching, sprite sheets, etc...)

FWIW, I found this RxJS code sample to be super clean and informative... much easier to understand than the Elm / Haskell material, YouTube videos, etc.:

https://github.com/Lorti/rxjs-breakout/blob/master/app.js

(The author claims to be a beginner with this approach, but it looks fantastic to me!)

 


Just to add: instead of a full virtual-pixi-graph thing, which is kinda crazy, maybe just push the commands down, like "addSprite(texture, parent)" or a serialized version of that...

So basically it'd be like this, where "viewInfo" is probably little more than the stage and some utility helpers to test touch inputs, etc.:

inputStream
  .map(input => getCommands(viewInfo, input))        // pure: turn raw input into a command list
  .map(cmds => sendToPixi(viewInfo, cmds))           // effect: apply the commands to the stage
  .observe(newViewInfo => updateViewInfo(newViewInfo));
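A minimal sketch of what those serialized commands and their interpreter might look like. Everything here (makeAddSprite, applyCommand, stageApi) is my own assumed naming, not real PIXI or RxJS API; stageApi stands in for a thin wrapper that sendToPixi would use around the stage.

```javascript
// Hypothetical sketch: commands are plain serializable data, and a
// small interpreter applies them through a stage wrapper.
function makeAddSprite(textureName, parentId) {
  return { type: "addSprite", texture: textureName, parent: parentId };
}

function makeRemove(objectId) {
  return { type: "remove", id: objectId };
}

function applyCommand(stageApi, cmd) {
  switch (cmd.type) {
    case "addSprite":
      // stageApi would create a sprite from the texture name and add
      // it to the container registered under cmd.parent
      return stageApi.addSprite(cmd.texture, cmd.parent);
    case "remove":
      return stageApi.remove(cmd.id);
    default:
      throw new Error("unknown command: " + cmd.type);
  }
}
```

sendToPixi would then just be a reduce over the command list with applyCommand, keeping all the PIXI-specific effects in one place.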

 

