Advice needed regarding a specific app feature we are powering with Pixi


gordyr

We are using pixi.js for several aspects of our app, one of which is as the base framework for a WebGL photo editing application.

 

Although we are aware pixi was never intended for this, it has proven to be an excellent fit.

 

We have written lots of fragment shaders offering all kinds of interesting photo manipulation effects and have built them as extensions to Pixi's excellent filter class. (I have already contributed a convolution filter and intend to contribute the rest of our filters soon.)

 

Anyway, on to my question.  One of the benefits of using Pixi and harnessing its simple access to the WebGL API is that we can offer our customers previews of the various effects before they choose them.

 

For instance, imagine a user has selected a photograph that they wish to edit and, rather than use the advanced controls we offer, would prefer to simply choose one of our preset filters.

 

In this case, we display a list of images showing the user the various filters we offer applied to their image in real time. We currently have this working fine, although I am concerned that we are not making the best use of Pixi's API (or indeed WebGL, for that matter).

 

 

This is our current workflow:

 

  • User selects a photo to edit (a 2048x2048 texture); it is loaded and displayed in our editor as a sprite on our editor stage.
  • User chooses to add one of our preset filters.
  • We load a small version of the photo (256x256) and draw it to a 2D canvas.
  • We create a separate Pixi stage/renderer on which to perform the work, while still showing the original editor stage.
  • We add the thumbnail texture to a sprite and then add that sprite to the thumbnail stage.
  • We loop through an array of our presets, creating a 2D canvas for each one.

We are then prepped to do the processing. Finally, for each and every preset we do the following:

 

  1. Apply the relevant filters/shaders to the thumbnail sprite.
  2. Render the thumbnail stage.
  3. Grab the preset's 2D canvas and perform a drawImage call using the thumbnail renderer's WebGL canvas as the source.
  4. Attach the new 2D canvas, complete with a preview of a filter preset, to the DOM.

Between each of these passes we use a 20ms timeout to ensure the UI doesn't block and that the previews are rendered progressively. A rough sketch of this loop is shown below.
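
In code, each pass looks roughly like this (the names `presets`, `thumbSprite`, `thumbStage`, `thumbRenderer`, and `previewList` are simplified placeholders, and the calls assume the Pixi v2/v3-era API we are on):

```js
// Rough sketch of our current per-preset pass (placeholder names,
// Pixi v2/v3-era API). Each pass renders one filtered thumbnail on the
// small WebGL renderer, then copies it into that preset's own 2D canvas.
function renderNextPreset(index) {
  if (index >= presets.length) return;
  var preset = presets[index];

  thumbSprite.filters = preset.filters; // apply this preset's shader chain
  thumbRenderer.render(thumbStage);     // draw into the 256x256 WebGL canvas

  // Copy the WebGL canvas into the preset's 2D canvas and show it.
  preset.canvas2d.getContext('2d').drawImage(thumbRenderer.view, 0, 0);
  previewList.appendChild(preset.canvas2d);

  // The 20ms timeout keeps the UI responsive and paints previews progressively.
  setTimeout(function () { renderNextPreset(index + 1); }, 20);
}
renderNextPreset(0);
```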

 

Using this method we can show around 30 of our presets in about 1000ms total. (about 700ms if we leave out the timeout and block the UI)

 

While this is okay, we are clearly not making the most of WebGL, and likely not of Pixi either. Having profiled the actual time taken to render each preset, we can see that we are not even close to pushing the capabilities of even a poor integrated laptop GPU: each preset takes under 1ms to actually render.

 

So my question is to those with a better grasp of Pixi's internal workings and API...

 

Is there a faster or more effective way of achieving what we want? As a side note, we have tried creating one large stage and making a sprite for each filter, then cutting the relevant parts out for each 2D canvas by specifying the source coordinates in the drawImage call, but the end result took three times longer.

 

Any advice would be greatly appreciated.  :)


  • 1 year later...

Why use a separate canvas/stage/renderer for every thumbnail? Why not use another container/sprite in the exact same renderer?

Having a ton of renderers like that is going to cause texture swapping all over the place, killing perf.

To make a thumbnail: create a sprite, scale it, add the filter to it, done (see the sketch below). I feel like I'm missing something...
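
Something like this (a minimal sketch assuming a v3-style API; `presets`, `photoTexture`, and the grid layout are placeholders):

```js
// One renderer, one sprite per preset: lay the thumbnails out in a grid
// inside the existing stage and render them all in a single pass.
// (PIXI.Container is the v3 name; in v2 it was DisplayObjectContainer.)
var thumbContainer = new PIXI.Container();
stage.addChild(thumbContainer);

presets.forEach(function (preset, i) {
  var thumb = new PIXI.Sprite(photoTexture); // all share one source texture
  thumb.width = 256;
  thumb.height = 256;
  thumb.x = (i % 6) * 260;                   // simple 6-wide grid
  thumb.y = Math.floor(i / 6) * 260;
  thumb.filters = preset.filters;            // this preset's shader chain
  thumbContainer.addChild(thumb);
});

renderer.render(stage); // one draw, no per-thumbnail renderers or canvas copies
```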


There are just two renderers in my setup: the main one that is displayed to the user, and a smaller one that is the size of the thumbnails.

 

We have defined a suite of tools that are essentially lists of shaders and uniforms to apply to the loaded image (roughly the shape sketched below). The loaded image is a sprite that is the full size of the stage/renderer.
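
For context, each tool looks roughly like this (simplified; `OurCurveFilter` stands in for one of our custom fragment-shader filters, and built-in filter namespaces vary between Pixi versions):

```js
// Simplified shape of one tool: an ordered list of filters (shaders)
// plus the uniform values each one should use for this preset.
var sepiaTool = {
  name: 'Sepia',
  filters: [
    new PIXI.SepiaFilter(),  // built-in filter (namespace varies by version)
    new OurCurveFilter({     // hypothetical custom fragment shader
      strength: 0.8          // uniform value for this preset
    })
  ]
};
```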

 

When the user looks at the palette of tools, we generate a thumbnail preview for each tool by loading the image into a sprite on the thumbnail renderer, running the tool's list of shaders in order, calling canvas.toDataURL(), and assigning the result to the src of an Image that gets displayed to the user in the DOM.

 

I had a branch of this work that instantiated a RenderTexture for each thumbnail and did the work in there, but the performance wasn't good.

I also tried re-using the same RenderTexture, but I was getting corrupted textures where what should've been transparent was black.

 

How would you recommend I approach this problem?

 

Related question:

 

How can I get the texture of the sprite after the filters have been applied? It seems like sprite.texture returns the texture without any shaders applied.


canvas.toDataURL()

That is really expensive and slow, just a heads up. Why do you need to convert the Pixi-rendered images into DOM elements?

Shaders are not applied to a texture; the texture is an input to the shader, and the rendered output goes into a framebuffer (specifically the one tied to your renderer, if you see the output on screen). A RenderTexture is just a framebuffer that you can pass as input somewhere else if you want. If you need to have two renderers, just display the result. Render in WebGL; don't try to render, capture, and put it in the DOM.
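
For example (a minimal sketch assuming a v3-style API; the RenderTexture signatures differ across Pixi versions, and `preset`, `thumbSprite`, and `paletteContainer` are placeholders): rendering a filtered sprite into a RenderTexture gives you the post-filter pixels as a texture, and passing `clear = true` wipes the previous contents, which may be the cause of the black artifacts you saw when reusing one.

```js
// Minimal sketch (v3-style API; RenderTexture signatures vary by version).
var rt = new PIXI.RenderTexture(renderer, 256, 256);

thumbSprite.filters = preset.filters; // the shader chain to preview
// The third argument clears the framebuffer before drawing; skipping the
// clear when reusing one RenderTexture can leave stale/black pixels behind.
rt.render(thumbSprite, null, true);

// rt now holds the post-filter pixels and behaves like any other texture,
// which also answers the sprite.texture question above:
var preview = new PIXI.Sprite(rt);
paletteContainer.addChild(preview); // shown in WebGL, no toDataURL needed
```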


Yeah, toDataURL is terrible.

 

We are doing all our rendering in WebGL as you say. However, given the structure of our app, we need the result from the rendering embedded throughout the page in these thumbnails.

 

The UI for our app is DOM while the canvas/art area is a Pixi Stage/Canvas. I guess you're suggesting that we move all our chrome/UI elements into the Pixi Stage/Canvas and not have any DOM-based UI at all, right?



Not necessarily all, just enough to prevent you from doing all that toDataURL stuff. If an image is being rendered, it should be happening in the renderer. You can layer HTML on top for better interaction support and such, as it makes sense.
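
A minimal sketch of that layering, with hypothetical element ids and class names:

```js
// Layer DOM chrome over the WebGL canvas instead of copying pixels out.
// Assumes #editor-wrap has `position: relative` in its CSS.
var wrap = document.getElementById('editor-wrap');
wrap.appendChild(renderer.view);       // the Pixi WebGL canvas underneath

var toolbar = document.createElement('div');
toolbar.className = 'editor-toolbar';  // styled with `position: absolute`
wrap.appendChild(toolbar);             // DOM UI floats above the canvas
```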

