gordyr

Everything posted by gordyr

  1. The shader could easily be turned into a PIXI.AbstractFilter which could be applied to whole Containers/DisplayObjects in the standard .filters = [maskFilter] manner. You would then simply send the filter the mask coordinates and internally it would calculate the barycentric coordinates and send them to the filter's fragment shader, masking anything outside of the coordinates in an anti-aliased manner. I could of course be greatly misunderstanding how PIXI works, but while working on my PIXI.Photo class I was able to mask whole containers full of objects fine. (I developed the shader as a filter first, then went back and built a whole class/renderer around it.) All the shader does is discard pixels outside of the supplied coordinates and anti-alias those close to the edge of the given barycentric coordinates. I see no reason why it wouldn't work. The mask manager would simply switch the masking filter on or off.
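     For what it's worth, a minimal sketch of what such a filter's fragment shader could look like, assuming the mask arrives as four edge line equations (the uniform names and the A/B/C-per-edge encoding here are my own illustration, not an existing PIXI API):

     ```js
     // Hypothetical filter-style mask shader: each mask edge is passed in as a
     // line equation (A, B, C) normalised so that A*x + B*y + C is the signed
     // distance to that edge in pixels, positive on the inside of the mask.
     // Pixels outside any edge are discarded; the last ~1px is faded for AA.
     var maskFragmentSrc = [
       'precision mediump float;',
       'varying vec2 vTextureCoord;',
       'uniform sampler2D uSampler;',
       'uniform vec3 uEdges[4];',   // one line equation per mask edge
       'uniform vec2 uViewSize;',   // render target size in pixels

       'void main(void) {',
       '  vec2 p = vTextureCoord * uViewSize;',          // pixel position
       '  float d = 1.0;',
       '  for (int i = 0; i < 4; i++) {',
       '    d = min(d, dot(uEdges[i], vec3(p, 1.0)));',  // nearest edge distance
       '  }',
       '  if (d <= 0.0) discard;',                       // fully outside the mask
       '  vec4 color = texture2D(uSampler, vTextureCoord);',
       '  gl_FragColor = color * clamp(d, 0.0, 1.0);',   // fade the edge pixels
       '}'
     ].join('\n');
     ```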
  2. Just in case anyone is interested, I have a first draft version of my PIXI.Photo and PIXI.PhotoRenderer class completed and they're working wonderfully. As you can see from the example image, when rotated the edges are beautifully anti-aliased even when WebGL anti-aliasing and FXAA are turned off. When the image is not rotated the edges remain perfectly sharp at any zoom level/scale. Furthermore, I can now apply perfectly anti-aliased rectangular masks to the sprite simply by changing the _frame attribute. On mobile devices we were getting slow-downs using the standard PIXI.Mask system extensively. Now we are at a rock solid 60fps. I believe this is a method that the PIXI authors should consider using for masking by default. I have also implemented two replacement classes for PIXI.Rectangle and PIXI.Circle using variations of the method above, just with slightly different shaders; these also now render faster than the old stencil buffer method and with smooth anti-aliased edges at all times. I won't be making a PR for any of this as PIXI v4 is just around the corner, but it would be nice to see some of these techniques, or similar ones, make their way in soon. I am of the opinion that all primitives should be rendered via shaders in this manner rather than using the stencil buffer.
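     For the rectangular-mask case, the core of the idea is just a distance test against the frame in the fragment shader. A rough sketch follows; the uniform names and their meanings are assumptions on my part, not the actual PIXI.Photo internals:

     ```js
     // Anti-aliased rectangular mask: alpha ramps over roughly one pixel
     // around the frame edge instead of going through the stencil buffer.
     // uFrame is (x, y, width, height) in pixels; uResolution converts the
     // texture coordinate to pixels. Names are illustrative only.
     var rectMaskSrc = [
       'precision mediump float;',
       'varying vec2 vTextureCoord;',
       'uniform sampler2D uSampler;',
       'uniform vec4 uFrame;',
       'uniform vec2 uResolution;',

       'void main(void) {',
       '  vec2 p = vTextureCoord * uResolution;',
       '  vec2 dMin = p - uFrame.xy;',                // distance to left/top edges
       '  vec2 dMax = (uFrame.xy + uFrame.zw) - p;',  // distance to right/bottom edges
       '  float d = min(min(dMin.x, dMin.y), min(dMax.x, dMax.y));',
       '  gl_FragColor = texture2D(uSampler, vTextureCoord) * smoothstep(0.0, 1.0, d);',
       '}'
     ].join('\n');
     ```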
  3. Excellent, thanks Ivan. That's exactly what I'll do. I think I will first investigate calculating the expanded quad and barycentric coords in the vertex shader, however, just in case we can preserve batching using that method. Batching would still be broken in our case because of the other things we are doing to these sprites, but there are plenty of other uses I can think of where this would be preferable.
  4. Yes, we previously used the transparent border technique, but given that we have to constantly zoom in and out and resize sprites, it is not ideal, as it leaves the edges too blurry when sprites are not rotated. This method appears preferable for our use case. We are not worried about it breaking batching as there will never be more than 20 or so of these sprites on screen at any given time, which you can probably tell from the type of app in the screenshot above. It is also already broken in our case, as we are applying individual masks (using this new method will be far faster) and individual photo processing effects like brightness, color changes etc. to each sprite. We only apply this new method to photos when they are attached; all other components of our scene run through the normal PIXI sprite batch renderer. Once I get it all working correctly, I will probably just write a PIXI.Photo class that does it all for us. But regardless, I think this technique has some excellent potential uses within main PIXI itself. Perhaps we could calculate the barycentric coordinates of the expanded quad in the vertex shader itself? Surely that would then leave batching intact, as we are not passing in uniforms for each sprite? If anyone has any ideas how to go about doing that, I would be very interested.
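     As a sketch of how the per-sprite uniforms could be avoided, the expanded quad's edge distances could be baked into a per-vertex attribute and interpolated. The shader pair below is loosely modelled on the default sprite shader; the aEdgeDistances attribute and related names are made up:

     ```js
     // Each vertex of the expanded quad carries its distances to the four
     // quad edges (in pixels); the GPU interpolates them and the fragment
     // shader fades alpha where the smallest distance approaches zero.
     var edgeVertSrc = [
       'attribute vec2 aVertexPosition;',
       'attribute vec2 aTextureCoord;',
       'attribute vec4 aEdgeDistances;',   // per-vertex edge distances (px)
       'uniform mat3 projectionMatrix;',
       'varying vec2 vTextureCoord;',
       'varying vec4 vEdgeDistances;',

       'void main(void) {',
       '  gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);',
       '  vTextureCoord = aTextureCoord;',
       '  vEdgeDistances = aEdgeDistances;', // interpolated per fragment
       '}'
     ].join('\n');

     var edgeFragSrc = [
       'precision mediump float;',
       'varying vec2 vTextureCoord;',
       'varying vec4 vEdgeDistances;',
       'uniform sampler2D uSampler;',

       'void main(void) {',
       '  float d = min(min(vEdgeDistances.x, vEdgeDistances.y),',
       '                min(vEdgeDistances.z, vEdgeDistances.w));',
       '  gl_FragColor = texture2D(uSampler, vTextureCoord) * clamp(d, 0.0, 1.0);',
       '}'
     ].join('\n');
     ```

     Since the signed distance to a line is linear in position, interpolating it across the affinely transformed quad gives the exact per-pixel distance, so nothing per-sprite has to go into uniforms.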
  5. Yes it could, as can be seen in the example image above. It is essentially applying a mask right now, albeit transformed incorrectly and rotated in the opposite direction. You would ignore the current stencil buffer mask class in pixi.js altogether and simply pass the coordinates of your mask frame into the custom shader instead. All the shader is doing is making pixels outside of the given frame transparent, and smoothing the edges based on a given texture coordinate's distance to the edge of the frame uniform I am passing to the custom shader. It would also be far faster than the current stencil buffer method. EDIT: Sorry, I misunderstood; I thought you were referring to using it as a mask. It can still be adapted for primitives. It already makes perfectly anti-aliased rectangles and circles, and we can do triangles in the same manner. In this case all the primitives would be rendered by the custom shader rather than the stencil buffer, by simply passing in the coordinates and allowing the shader to draw the shape based on a pixel's distance to the shape's edge, as shown here: https://rubendv.be/blog/opengl/drawing-antialiased-circles-in-opengl/ and here: https://www.mapbox.com/blog/drawing-antialiased-lines/ These are just slight variations of the same technique; it would just require a different shader for each primitive.
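     For the circle primitive, the shader from the first link boils down to a distance-to-centre test; roughly (uniform names illustrative):

     ```js
     // Anti-aliased circle drawn entirely in the fragment shader: alpha
     // falls off over ~1px around the radius rather than using the stencil
     // buffer. uCenter/uRadius are in pixels; names are illustrative.
     var circleFragSrc = [
       'precision mediump float;',
       'varying vec2 vTextureCoord;',
       'uniform vec2 uResolution;',  // drawing area in pixels
       'uniform vec2 uCenter;',      // circle centre in pixels
       'uniform float uRadius;',
       'uniform vec4 uColor;',

       'void main(void) {',
       '  vec2 p = vTextureCoord * uResolution;',
       '  float d = uRadius - distance(p, uCenter);',  // >0 inside, <0 outside
       '  gl_FragColor = uColor * smoothstep(0.0, 1.0, d);',
       '}'
     ].join('\n');
     ```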
  6. Well, I've actually managed to nearly get there... Hopefully someone can give me the final push. What I've done is hook into the SpriteRenderer class and expose the four points of the quads, attaching them to the sprite object itself. Then in my application code I calculate the quad and line/edge coefficients as shown in the demo's source code above. I build the EdgeArray and then send it into a custom shader which is almost exactly the same as the one in the demo. This works and produces beautifully anti-aliased edges on my rotated sprites. However, currently the transformation appears to be inverted. When I rotate one way, the sprite moves the opposite way, and the x/y coordinates also appear to be opposite. Everything else is just wonderful. I can zoom in on the mipmapped sprite and the edges remain perfectly sharp and smooth. You can find a pic of where I am currently at, showing the smooth-edged rotated sprite, but of course with the inverted transform for some reason. Incidentally, if I manage to get this technique perfected it will enable us to do smooth-edged sprite masking too, and it could also be adapted to render PIXI.Graphics lines and other primitives, all with smooth edges. Any suggestions regarding flipping the transform back would be greatly appreciated. EDIT: I've just noticed that the sprite's transform itself remains correct. It is simply the transform/position of the passed-in anti-aliasing mask that is incorrect. In the image below the sprite is rotated correctly; its smooth-edge mask is opposite.
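     For reference, the edge coefficients themselves are just the implicit line equation through each pair of neighbouring corners, normalised so the value is a signed distance in pixels. A plain JS sketch, independent of PIXI and assuming a consistent corner winding:

     ```js
     // Build one (A, B, C) triple per quad edge, normalised so that
     // A*x + B*y + C gives the signed distance in pixels from (x, y) to
     // that edge. `corners` is the four transformed corners in screen
     // space, in a consistent winding order.
     function computeEdgeCoefficients(corners) {
       var edges = [];
       for (var i = 0; i < 4; i++) {
         var p0 = corners[i];
         var p1 = corners[(i + 1) % 4];
         var a = p0.y - p1.y;
         var b = p1.x - p0.x;
         var c = p0.x * p1.y - p1.x * p0.y;
         var len = Math.sqrt(a * a + b * b) || 1;
         edges.push([a / len, b / len, c / len]); // normalised line equation
       }
       return edges; // flatten and upload as the shader's edge uniform
     }
     ```

     One thing worth checking with an inverted-looking mask is the winding order of the corners: if they arrive in the opposite order, every sign flips and the mask ends up on the wrong side of each edge.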
  7. Sorry guys, I should have been more clear. The custom vertex/fragment shaders I am fine with. It's more about where and how, within PIXI's internals, I can hook in to get the vertices of the quad, manipulate them, and send them on to the shaders. I am quite happy to be hacking around inside PIXI; that is to be expected.
  8. Within our app we are dealing with sprites that use non-transparent, rectangular, power-of-two textures only. Each texture is essentially a photograph (stretched to ensure it is pow2 in order to enable mipmapping). Using WebGL anti-aliasing or FXAA is not an option for us for various reasons. Therefore, in order to ensure that these sprites look anti-aliased when rotated, we render the texture to a canvas first, leaving a few px wide transparent border around the edge so that the texture's bilinear filtering takes care of smoothing out the edges of these sprites. It works okay, but edges tend to get blurred when the sprites aren't rotated, and this is not ideal.
     Therefore I have been investigating other solutions and have come across what looks to be an absolutely perfect technique, which WebKit uses to provide fast edge AA on transformed elements. A write-up and demo can be found here: http://abandonedwig.info/blog/2013/02/24/edge-distance-anti-aliasing.html
     This is exactly what we need, and it seems fairly simple: expand the quad, calculate the coefficients of the quad edges, and finally pass them into a custom shader which manipulates the alpha of the edges depending on the distance from the edge. The source code for the demo looks reasonably simple and I have no problem implementing the custom shader. But the problem I am having is with dissecting and understanding PIXI's internals around the following:
     1. Where could I extract the required quad points (as can be seen in the demo code)?
     2. Where and how can I then pass them into the shader?
     Basically I am just looking for a few simple pointers for implementing what should be quite a simple technique. Any help, ideas or advice would be greatly appreciated. I've also found another write-up of what appears to be essentially the same process. They appear to be using smoothstep to adjust the alpha of the edges of each triangle, but it does add further insight into the process if anyone is interested: http://codeflow.org/entries/2012/aug/02/easy-wireframe-display-with-barycentric-coordinates/
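     As a rough illustration of the "expand the quad" step (nothing PIXI-specific, and pushing corners away from the centroid is just one simple approximation):

     ```js
     // Push each corner slightly outward so the alpha ramp has somewhere to
     // run outside the original sprite edge. Pushing away from the centroid
     // is a simple approximation that works fine for rectangles.
     function expandQuad(corners, amount) {
       var cx = 0, cy = 0;
       corners.forEach(function (p) { cx += p.x / 4; cy += p.y / 4; });
       return corners.map(function (p) {
         var dx = p.x - cx, dy = p.y - cy;
         var len = Math.sqrt(dx * dx + dy * dy) || 1;
         return { x: p.x + (dx / len) * amount, y: p.y + (dy / len) * amount };
       });
     }

     // e.g. var expanded = expandQuad(quadCorners, 1); // 1px of breathing room
     ```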
  9. I'm afraid I'm not allowed to show you the app itself at the moment, not until it's released anyway. I've literally just had an idea: when the user begins text editing we could blit the stage to a 2D canvas once and keep it in memory; then, as the caret moves, continually sample the pixel data of the area behind the caret on the 2D canvas with getImageData(). This should still be fast as we would only be sampling a few pixels each time. Then draw the caret itself pixel by pixel based on the inverse of the sampled pixel data, either in a custom fragment shader or by simply replacing the sprite's texture each time the caret moves (the shader would of course be faster). This still feels a bit hacky, but it should work. If anyone has any better ideas, I would love to hear them.
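     A small sketch of that idea, assuming `renderer` is the PIXI renderer and the copy is taken right after a render (or with preserveDrawingBuffer enabled); the caret-area arguments are illustrative:

     ```js
     // One-off copy of the WebGL stage onto a 2D canvas, then cheap
     // getImageData reads of the few pixels behind the caret each move.
     var snapshot = document.createElement('canvas');
     snapshot.width = renderer.width;
     snapshot.height = renderer.height;
     var snapCtx = snapshot.getContext('2d');
     snapCtx.drawImage(renderer.view, 0, 0); // take this straight after renderer.render()

     function caretColorAt(x, y, w, h) {
       var data = snapCtx.getImageData(x, y, w, h).data;
       var luma = 0;
       for (var i = 0; i < data.length; i += 4) {
         luma += 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
       }
       luma /= data.length / 4;
       return luma < 128 ? 0xFFFFFF : 0x000000; // light caret over dark areas, and vice versa
     }

     // e.g. caretSprite.tint = caretColorAt(caretX, caretY, caretWidth, caretHeight);
     ```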
  10. This is a strange title, I know; I will try to explain. We have built a complete page layout/drawing/rich text editing application powered by PIXI. Think of a web-based Microsoft Publisher. Obviously within this we have the need to render a flashing caret/cursor when the user is editing text. Currently this caret is a simple sprite with a tiny single-color texture rather than PIXI.Graphics (so that it gets naturally anti-aliased when the textbox is rotated). This sprite gets scaled vertically to match the size of a given font. Right now the texture is simply black. However, we would like to make the caret color dependent on the color behind it, e.g. if the caret is flashing over a dark area, that part of the caret becomes white or inverted, and when it is over a lighter area it becomes dark/black. In PIXI's WebGL renderer only four blend modes are implemented, so we cannot use these to achieve the desired effect. My current thoughts are along the lines of writing a simple custom shader and applying it to the whole stage. This shader would be fed the coordinates and size of the caret and then draw the caret based on each pixel's lightness/darkness. However, this feels extremely overkill and would be quite difficult to get right when the caret/textbox is rotated etc. There must be a simpler way. So the question is: what would be the best way to achieve this?
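      If the whole-stage filter route were taken, the fragment shader itself would be fairly small. A sketch, ignoring rotation and with made-up uniform names (uCaret is x, y, width, height in pixels):

      ```js
      // Pixels inside the caret rectangle get their colour inverted,
      // everything else passes through unchanged. Premultiplied alpha is
      // assumed, hence inverting against color.a rather than 1.0.
      var caretFragSrc = [
        'precision mediump float;',
        'varying vec2 vTextureCoord;',
        'uniform sampler2D uSampler;',
        'uniform vec4 uCaret;',       // caret rect: x, y, width, height (pixels)
        'uniform vec2 uResolution;',  // stage size in pixels

        'void main(void) {',
        '  vec4 color = texture2D(uSampler, vTextureCoord);',
        '  vec2 p = vTextureCoord * uResolution;',
        '  bool inside = p.x >= uCaret.x && p.x <= uCaret.x + uCaret.z &&',
        '                p.y >= uCaret.y && p.y <= uCaret.y + uCaret.w;',
        '  gl_FragColor = inside ? vec4(vec3(color.a) - color.rgb, color.a) : color;',
        '}'
      ].join('\n');
      ```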
  11. First it's best if I give some background... I have a single rectangular sprite. This sprite is a photograph; when loading this photograph as a texture we first draw it to a canvas, stretching it to the nearest power-of-two size. Then in Pixi we load the texture from the canvas and set the width/height properties of the sprite in order to return the sprite to the correct aspect ratio for the photograph. This photograph needs to be rotated around a dynamic origin point which is controlled by the user. In order to achieve this we first began manipulating the sprite.anchor properties, which of course ended up affecting the position of the sprite itself (which we do not want). Therefore we created a DisplayObjectContainer, set the sprite to be a child of it, and change the rotation of the container itself, meaning that we can correctly manipulate the container's pivot point while leaving the sprite itself in its correct position. This all works perfectly except that when the container is rotated, our sprite's height/width change. The attached photos show the issue. Now... we have been able to hack around the problem by changing the scale.x and scale.y of the DisplayObjectContainer to set the photo to its correct aspect ratio, while setting the sprite's scale factors to 1. Although this works, we are then left with a set of controls (which are not inside the container as they need to remain static) in a totally different transformation/coordinate space than the photo itself, which makes positioning and other calculations extremely difficult and has proved to be a huge headache for us. So... my question is: how can I keep the width/height properties on the sprite static while rotating its parent DisplayObjectContainer?
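      For anyone following along, the setup described above looks roughly like this (values are placeholders; this is the arrangement in question, not a fix for the width/height issue):

      ```js
      // Sprite kept at its natural position inside a container; the
      // container's pivot and position provide the user-controlled rotation
      // origin so the sprite's anchor never has to move. Numbers are placeholders.
      var container = new PIXI.DisplayObjectContainer();
      var photo = PIXI.Sprite.fromImage('photo-pow2.png'); // texture pre-stretched to a power of two
      photo.width = 800;   // restore the photo's real aspect ratio
      photo.height = 600;
      container.addChild(photo);

      var originX = 200, originY = 150;  // user-chosen rotation origin (local space)
      container.pivot.x = originX;
      container.pivot.y = originY;
      container.position.x = originX;    // keep the container visually in place
      container.position.y = originY;
      container.rotation = Math.PI / 8;
      ```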
  12. Ahhh, fantastic, thanks... I hadn't seen that.
  13. I've just read through the changelog for the recently released version 2.0 of Pixi.js and read with interest about the new ability to use custom shaders on sprites. Since it states that performance is faster than filters, I would like to convert some of my custom filters into this format (if it makes sense to do so). I have a few questions regarding this though: How do I use this feature / how do I attach my custom shaders to the sprites? Can I chain multiple shaders on the same sprite? Why is performance better than filters? Are they not simply different ways of attaching a fragment shader to a texture? Thanks guys!
  14. It's a very welcome addition in my opinion. Although it's not difficult to roll your own, it's nice to have this sort of stuff built in to Pixi.
  15. We are using pixi.js for several aspects of our app, one of which is as a base framework for a WebGL photo editing application. Although we are aware pixi was never intended for this, it has proven to be an excellent fit. We have written lots of fragment shaders offering all kinds of interesting photo manipulation effects and have built them as extensions to pixi's excellent filter class. (I have already contributed a convolution filter and intend to contribute the rest of our filters soon.)
      Anyway, on to my question. One of the benefits of using Pixi and harnessing its simple access to the WebGL API is that we can offer our customers previews of the various effects before they choose them. For instance, imagine a user has selected a photograph that they wish to edit and, rather than use the advanced controls that we offer, would prefer to simply choose one of our preset filters. In this case, we display a list of images showing the user the various filters we offer applied to their image in real time. We currently have this working fine, although I am concerned that we are not making the best use of Pixi's API (or indeed WebGL for that matter). This is our current workflow:
      1. The user selects a photo to edit (a 2048x2048 texture); it is loaded and displayed in our editor as a sprite on our editor stage.
      2. The user chooses to add one of our preset filters.
      3. We load a small version of the photo (256x256) and draw it to a 2D canvas.
      4. We create a separate pixi stage/renderer on which to perform the work, while still showing the original editor stage.
      5. We add the thumbnail texture to a sprite and then add that to the thumbnail stage.
      6. We loop through an array of our various presets, creating a 2D canvas for each one.
      We are then prepped to do the processing. So finally, for each and every preset we have, we do the following:
      1. Apply the relevant filters/shaders to the thumbnail sprite.
      2. Render the thumbnail stage.
      3. Grab the preset's 2D canvas and perform a drawImage call using the thumbnail renderer's WebGL canvas as the source.
      4. Attach the new 2D canvas, complete with a preview of a filter preset, to the DOM.
      Between each of these calls we use a 20ms timeout to ensure the UI doesn't block and that the previews are rendered progressively. Using this method we can show around 30 of our presets in about 1000ms total (about 700ms if we leave out the timeout and block the UI). While this is okay, we are clearly not making the most of WebGL, and likely pixi for that matter. Having profiled the actual time taken to render each preset, we can see that we are not even close to pushing the capabilities of even a poor integrated laptop GPU; each preset takes <1ms to actually render.
      So my question, to those with a better grasp of Pixi's internal workings and API: is there a faster or more effective way of achieving what we want? As a side note, we have tried creating one large stage and making a sprite for each filter, then cutting the relevant parts out for each 2D canvas by specifying the source coordinates in the drawImage call, but the end result took three times longer. Any advice would be greatly appreciated.
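      For context, the loop described above looks roughly like this (API names as in the Pixi versions of that era; 'photo-256.png', the presets array and the preset-list element are placeholders):

      ```js
      // One reusable hidden stage + renderer for the 256x256 thumbnail; each
      // preset applies its filter chain, renders once, and is copied onto
      // its own 2D canvas. The presets array and DOM target are placeholders.
      var thumbRenderer = PIXI.autoDetectRenderer(256, 256);
      var thumbStage = new PIXI.Stage(0x000000);
      var thumbSprite = PIXI.Sprite.fromImage('photo-256.png');
      thumbStage.addChild(thumbSprite);

      function renderPreset(preset) {
        thumbSprite.filters = preset.filters;  // this preset's shader chain
        thumbRenderer.render(thumbStage);      // the <1ms of actual GPU work

        var canvas = document.createElement('canvas');
        canvas.width = canvas.height = 256;
        canvas.getContext('2d').drawImage(thumbRenderer.view, 0, 0);
        document.getElementById('preset-list').appendChild(canvas);
      }

      // 20ms gaps so the previews appear progressively without blocking the UI
      presets.forEach(function (preset, i) {
        setTimeout(function () { renderPreset(preset); }, i * 20);
      });
      ```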