gordyr

Need help implementing quad-edge anti-aliasing


Within our app we are dealing with sprites that use non-transparent, rectangular, power-of-two textures only. Each texture is essentially a photograph (stretched to a power-of-two size to enable mipmapping).

Using WebGL anti-aliasing or FXAA is not an option for us for various reasons. Therefore, to ensure that these sprites look anti-aliased when rotated, we first render each texture to a canvas, leaving a few-pixel-wide transparent border around the edge so that the texture's bilinear filtering smooths out the sprite edges.

It works okay, but the edges tend to get blurred when the sprites aren't rotated, which is not ideal. So I have been investigating other solutions and have come across what looks to be a perfect technique, which WebKit uses to provide fast edge AA on transformed elements.

A write up and demo can be found here:

http://abandonedwig.info/blog/2013/02/24/edge-distance-anti-aliasing.html

This is exactly what we need, and it seems fairly simple: expand the quad, calculate the coefficients of the quad's edge lines, and pass them into a custom shader that adjusts each fragment's alpha based on its distance from the edge.
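For reference, the edge coefficients in that write-up are just the implicit line equation Ax + By + C = 0 for each quad edge, normalised so that evaluating it at a fragment gives a signed distance in pixels. A minimal CPU-side sketch (function names are mine, not from the demo):

```javascript
// Coefficients of the line through p1 and p2, normalised so that
// a*x + b*y + c equals the signed distance from (x, y) to the edge.
function edgeCoefficients(p1, p2) {
  const a = p2.y - p1.y;
  const b = p1.x - p2.x;
  const len = Math.hypot(a, b);
  return { a: a / len, b: b / len, c: -(a * p1.x + b * p1.y) / len };
}

// What the fragment shader evaluates per pixel (via uniforms); the shader
// then ramps alpha from 0 to 1 over roughly the first pixel of distance.
function edgeDistance(e, p) {
  return e.a * p.x + e.b * p.y + e.c;
}
```

The shader clamps this distance into [0, 1] and multiplies it into the fragment's alpha, which is what produces the one-pixel anti-aliased falloff.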

The demo's source code looks reasonably simple, and I have no problem implementing the custom shader. The problem I am having is dissecting and understanding PIXI's internals around the following:

1. Where can I extract the required quad points (as seen in the demo code)?

2. Where and how can I then pass them into the shader?

Basically I am just looking for a few pointers on implementing what should be quite a simple technique. Any help, ideas or advice would be greatly appreciated.

I've also found another write-up of what appears to be essentially the same process. They use smoothstep to adjust the alpha at the edges of each triangle instead, but it adds further insight into the process if anyone is interested:

http://codeflow.org/entries/2012/aug/02/easy-wireframe-display-with-barycentric-coordinates/
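The notable difference in that article is the use of smoothstep on the smallest barycentric coordinate rather than a linear alpha ramp. A CPU-side model of what the shader computes, for anyone unfamiliar with the GLSL built-in (plain JS mirroring the GLSL semantics):

```javascript
// JS model of GLSL's smoothstep(edge0, edge1, x): clamp, then apply a
// Hermite ease so the alpha falloff has no visible hard start or stop.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// Edge alpha as the article computes it: d is the distance-like smallest
// barycentric coordinate, width the desired falloff in the same units.
function edgeAlpha(d, width) {
  return smoothstep(0, width, d);
}
```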
 

1 hour ago, gordyr said:

1. Where can I extract the required quad points (as seen in the demo code)?

2. Where and how can I then pass them into the shader?

I'm pretty sure that Pixi doesn't let you do your own meshes/vertex shaders. You need to hack the WebGL renderer code or figure out some other way to do what you want.


Sorry guys, I should have been clearer. The custom vertex/fragment shaders I am fine with. It's more about where and how, in PIXI's internals, I can hook in to get the vertices of the quad, manipulate them, and send them on to the shaders. I am quite happy to hack around inside PIXI; that is to be expected.


Well I've actually managed to nearly get there... Hopefully someone can give me the final push.

What I've done is hook into the SpriteRenderer class and expose the four points of each quad, attaching them to the sprite object itself.

Then in my application code I calculate the quad and line/edge coefficients as shown in the demo's source code above. I build the edge array and send it into a custom shader which is almost exactly the same as the one in the demo.
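For anyone following along, the edge array is just the four normalised line equations packed flat, ready to upload with gl.uniform3fv. A sketch of how I build it (assuming screen-space corners in winding order; function and variable names are mine):

```javascript
// Build the flat edge-coefficient array for a quad's four corners.
// corners: [{x, y} × 4] in winding order; returns 4 × (a, b, c) where
// a*x + b*y + c is the signed distance from (x, y) to edge i.
function buildEdgeArray(corners) {
  const out = new Float32Array(12);
  for (let i = 0; i < 4; i++) {
    const p1 = corners[i];
    const p2 = corners[(i + 1) % 4];
    const a = p2.y - p1.y;
    const b = p1.x - p2.x;
    const len = Math.hypot(a, b) || 1;
    out[i * 3] = a / len;
    out[i * 3 + 1] = b / len;
    out[i * 3 + 2] = -(a * p1.x + b * p1.y) / len;
  }
  return out; // e.g. gl.uniform3fv(edgesLocation, out)
}
```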

This works and produces beautifully anti-aliased edges on my rotated sprites. However, the transformation currently appears to be inverted: when I rotate one way, the sprite moves the opposite way, and the x/y coordinates also appear to be flipped. Everything else is wonderful. I can zoom in on the mipmapped sprite and the edges remain perfectly sharp and smooth.

Below is a picture of where I am currently at, showing the smooth-edged rotated sprite, but with the inverted transform.

Incidentally, if I manage to perfect this technique it will enable us to do smooth-edged sprite masking as well, and it could also be adapted to render PIXI.Graphics lines and other primitives, all with smooth edges.

Any suggestions regarding flipping the transform back would be greatly appreciated.

EDIT: I've just noticed that the sprite's transform itself remains correct. It is only the transform/position of the passed-in anti-aliasing mask that is incorrect. In the image below, the sprite is rotated correctly; its smooth-edge mask is rotated the opposite way.

 

example.jpg


@gordyr

Did you try enlarging your textures by one pixel? TexturePacker can do that for you. At the edge of your objects, pixels will blend with transparent ones.

About edge-distance AA: that shader will break the batching optimization, because it requires uniforms that depend on the particular sprite.

Places to look:

src/core/sprites/webgl/SpriteRenderer

src/core/renderers/webgl/shaders/TextureShader

1 minute ago, gordyr said:

Incidentally, if I manage to perfect this technique it will enable us to do smooth-edged sprite masking as well, and it could also be adapted to render PIXI.Graphics lines and other primitives, all with smooth edges.

No, it can't... because they are rendered to the stencil buffer for compositing

16 minutes ago, chg said:

No, it can't... because they are rendered to the stencil buffer for compositing

Yes it could, as can be seen in the example image above. It is essentially applying a mask right now, albeit transformed incorrectly and rotated in the opposite direction. You would ignore the current stencil-buffer mask class in pixi.js altogether and simply pass the coordinates of your mask frame into the custom shader instead.

All the shader is doing is making pixels outside the given frame transparent, and smoothing the edges based on each texture coordinate's distance to the edge of the frame uniform I am passing to the custom shader.

It would also be far faster than the current stencil-buffer method.

EDIT: Sorry, I misunderstood; I thought you were referring to using it as a mask. It can still be adapted for primitives. It already makes perfectly anti-aliased rectangles and circles, and we can do triangles in the same manner. In this case all the primitives would be rendered by the custom shader rather than the stencil buffer, by simply passing in the coordinates and letting the shader draw the shape based on each pixel's distance to the shape's edge.

As shown here:

https://rubendv.be/blog/opengl/drawing-antialiased-circles-in-opengl/

And here: 

https://www.mapbox.com/blog/drawing-antialiased-lines/

These are just slight variations of the same technique; it would just require a different shader for each primitive.
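The circle case from the first of those links reduces to a signed distance from the radius, eased with smoothstep. A CPU-side model of that per-fragment computation (names and the feather parameter are my assumptions):

```javascript
// Alpha for a pixel at (x, y) against a circle with the given centre and
// radius, with a `feather`-pixel smoothed rim.
function circleAlpha(x, y, cx, cy, radius, feather) {
  const d = Math.hypot(x - cx, y - cy);
  // Hermite ramp (smoothstep): 1 well inside, 0 well outside the radius.
  const t = Math.min(Math.max((radius - d) / feather, 0), 1);
  return t * t * (3 - 2 * t);
}
```

A fragment shader version is identical in structure, with the centre and radius supplied as uniforms and smoothstep used directly.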

30 minutes ago, ivan.popelyshev said:

@gordyr

Did you try enlarging your textures by one pixel? TexturePacker can do that for you. At the edge of your objects, pixels will blend with transparent ones.

About edge-distance AA: that shader will break the batching optimization, because it requires uniforms that depend on the particular sprite.

Places to look:

src/core/sprites/webgl/SpriteRenderer

src/core/renderers/webgl/shaders/TextureShader

Yes, we previously used the transparent-border technique, but given that we constantly zoom in and out and resize sprites, it is not ideal; it leaves the edges too blurry when sprites are not rotated. This method appears preferable for our use case. We are not worried about it breaking batching, as there will never be more than 20 or so of these sprites on screen at any given time, as you can probably tell from the type of app in the screenshot above. Also, batching is already broken in our case, because we apply individual masks (using this new method will be far faster) and individual photo-processing effects like brightness and colour changes to each sprite. We only apply this new method to photos when they are attached; all other components of our scene run through the normal PIXI sprite-batch renderer.

Once I get it all working correctly, I will probably just write a PIXI.Photo class that does it all for us. Regardless, I think this technique has some excellent potential uses within PIXI itself.

Perhaps we could calculate the barycentric coordinates of the expanded quad in the vertex shader itself? Surely that would leave batching intact, since we would not be passing per-sprite uniforms. If anyone has any ideas on how to go about that, I would be very interested.
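One attribute-based route (my speculation, following the codeflow article's wireframe trick rather than anything in PIXI) would be to give each vertex of each triangle a fixed barycentric corner, uploaded once as a static vertex attribute. Interpolation then hands every fragment its edge distances for free, with no per-sprite uniforms:

```javascript
// Per-vertex barycentric attributes for `count` triangles: each triangle's
// three vertices get (1,0,0), (0,1,0), (0,0,1). Uploaded once as a static
// attribute buffer shared by every batched sprite.
function buildBarycentrics(count) {
  const out = new Float32Array(count * 9);
  for (let t = 0; t < count; t++) {
    out.set([1, 0, 0, 0, 1, 0, 0, 0, 1], t * 9);
  }
  return out;
}
```

The fragment shader would then compute alpha from the smallest interpolated component, scaled into pixel units.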


Ok, got it. 

1) Make an object that extends Sprite.

2) Override renderWebGL so it calls a different renderer.

3) Create a renderer plugin; look at SpriteRenderer as an example.

 

Your renderer will be much simpler than SpriteRenderer because you don't have to use batches.

Don't forget:

WebGLRenderer.registerPlugin('photo', PhotoRenderer);
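A minimal sketch of those three steps, written as a factory over the PIXI namespace so it is not tied to a particular build; Photo, PhotoRenderer and the plugin body are assumptions, not existing PIXI API:

```javascript
// Step 1: an object that extends Sprite. Step 2: override renderWebGL to
// route it to our own plugin. Step 3: register a renderer plugin modelled
// on SpriteRenderer; its body (omitted) would upload the quad plus
// edge-coefficient uniforms and draw with the edge-AA shader.
function definePhoto(PIXI) {
  class Photo extends PIXI.Sprite {
    renderWebGL(renderer) {
      // Flush whatever renderer is active and switch to ours.
      renderer.setObjectRenderer(renderer.plugins.photo);
      renderer.plugins.photo.render(this);
    }
  }

  class PhotoRenderer extends PIXI.ObjectRenderer {
    render(photo) {
      /* upload quad vertices + edge uniforms, issue the draw call */
    }
  }

  PIXI.WebGLRenderer.registerPlugin('photo', PhotoRenderer);
  return Photo;
}
```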

 

5 minutes ago, ivan.popelyshev said:

Ok, got it. 

1) Make an object that extends Sprite.

2) Override renderWebGL so it calls a different renderer.

3) Create a renderer plugin; look at SpriteRenderer as an example.

Your renderer will be much simpler than SpriteRenderer because you don't have to use batches.

Excellent, thanks Ivan. That's exactly what I'll do. I think I will first investigate calculating the expanded quad and barycentric coords in the vertex shader, though, in case we can preserve batching that way. It would still be broken in our case because of the other things we are doing to these sprites, but there are plenty of other uses I can think of where preserving it would be preferable.


Just in case anyone is interested: I have a first-draft version of my PIXI.Photo and PIXI.PhotoRenderer classes completed, and they're working wonderfully.

As you can see from the example image, when rotated the edges are beautifully anti-aliased even with WebGL anti-aliasing and FXAA turned off. When the image is not rotated, the edges remain perfectly sharp at any zoom level/scale.

Furthermore, I can now apply perfectly anti-aliased rectangular masks to the sprite simply by changing the _frame attribute. On mobile devices we were getting slowdowns using the standard PIXI mask system extensively; now we are at a rock-solid 60fps. I believe this is a method the PIXI authors should consider using for masking by default.

I have also implemented two replacement classes for PIXI.Rectangle and PIXI.Circle using variations of the method above, just with slightly different shaders. These also render faster than the old stencil-buffer method, with smooth anti-aliased edges at all times.

I won't be making a PR for any of this as PIXI v4 is just around the corner, but it would be nice to see some of these techniques, or similar ones, make their way in soon. I am of the opinion that all primitives should be rendered via shaders in this manner rather than using the stencil buffer.

 

example.jpg


@gordyr Make a PR anyway; how else do you think we'll add it to Pixi v4?

This method will work only for a single image/graphics object. For groups we'll have to use the stencil buffer for masking anyway, right?

UPD: I actually don't know how pixi masking works yet.

UPD2: I need this stuff for implementing blend modes too.


The shader could easily be turned into a PIXI.AbstractFilter, which could be applied to whole Containers/DisplayObjects in the standard .filters = [maskFilter] manner. You would then send the filter the mask coordinates; internally it would calculate the barycentric coordinates and send them to the filter's fragment shader, masking anything outside the coordinates in an anti-aliased manner.

I could of course be greatly misunderstanding how PIXI works, but while working on my PIXI.Photo class I was able to mask whole containers full of objects just fine. (I developed the shader as a filter first, then went back and built a whole class/renderer around it.) All the shader does is discard pixels outside the passed-in coordinates and anti-alias those close to the edge of the given barycentric coordinates. I see no reason why it wouldn't work.

The mask manager would then simply switch the masking filter on or off.
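To make the idea concrete, here is roughly what such a filter's fragment shader could look like. This is a sketch only: the uFrame/uSmoothing uniform names are my assumptions, and it uses a simple rectangle distance rather than true barycentric coordinates.

```javascript
// Hypothetical fragment shader source for an anti-aliased rectangular mask
// filter. uFrame is (x, y, width, height) in the same space as
// vTextureCoord; alpha fades to 0 over uSmoothing units outside the frame.
const maskFragmentSrc = `
precision mediump float;

varying vec2 vTextureCoord;
uniform sampler2D uSampler;
uniform vec4 uFrame;
uniform float uSmoothing;

void main(void) {
  vec2 p = vTextureCoord;
  // Signed distance to the nearest frame edge (positive inside).
  vec2 d2 = min(p - uFrame.xy, uFrame.xy + uFrame.zw - p);
  float d = min(d2.x, d2.y);
  float alpha = smoothstep(0.0, uSmoothing, d);
  gl_FragColor = texture2D(uSampler, vTextureCoord) * alpha;
}
`;
```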


The problem with these kinds of approaches is that the normals and extrusion direction have to be uploaded as part of the vertex data. It also means computing the normals on the CPU if they are not known (i.e. when drawing a triangle from arbitrary coordinates). It works fine for cases like this (a single quad), but trying to apply it to complex data (i.e. PIXI.Graphics) could add a lot of overhead.

The stencil buffer is useful for complex polygons with holes, where triangulation can become expensive. The downside is that you lose the backbuffer anti-aliasing and have to apply FXAA or similar.

Either way, the hope is that WebGL 2.0 will eliminate the need for these anti-aliasing workarounds thanks to multisampled renderbuffers (hopefully faster than these workarounds for complex shapes).

