renderable = false when objects are not in camera


ForgeableSum


Simple question. Is there a performance benefit to setting renderable = false on objects when they are not in the camera, or does Phaser perform this optimization automatically?

 

function onlyRenderInCamera() {
    gameObjects.forEach(function (object) {
        if (object.inCamera === false) {
            object.renderable = false;
            object.cacheAsBitmap = true;
        } else {
            object.renderable = true;
        }
    });
}

I'd think there would be performance detriments instead. If you do a bunch of checks trying to intersect a sprite with the camera, that'll drag things down vs. letting the graphics hardware throw away pixels that couldn't render.

 

The comments on the property "autoCull" bear this out:

A Game Object with autoCull set to true will check its bounds against the World Camera every frame. If it is not intersecting the Camera bounds at any point then it has its renderable property set to false. This keeps the Game Object alive and still processing updates, but forces it to skip the render step entirely.

This is a relatively expensive operation, especially if enabled on hundreds of Game Objects. So enable it only if you know it's required, or you have tested performance and find it acceptable.
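For reference, the per-frame bounds check that autoCull performs amounts to an axis-aligned rectangle intersection against the camera. Here is a minimal sketch in plain JavaScript; the intersects helper and the object/camera shapes are assumptions for illustration, not Phaser's actual internals:

```javascript
// Axis-aligned rectangle intersection, the kind of test autoCull
// runs against the World Camera bounds every frame.
function intersects(a, b) {
    return a.x < b.x + b.width &&
           a.x + a.width > b.x &&
           a.y < b.y + b.height &&
           a.y + a.height > b.y;
}

// Mark objects outside the camera as non-renderable; they stay
// alive and keep updating, but skip the render step.
function cull(camera, objects) {
    objects.forEach(function (obj) {
        obj.renderable = intersects(obj, camera);
    });
}

// Example: an 800x600 camera at the origin.
var camera = { x: 0, y: 0, width: 800, height: 600 };
var objects = [
    { x: 100, y: 100, width: 32, height: 32, renderable: true },  // on screen
    { x: 2000, y: 100, width: 32, height: 32, renderable: true }  // off screen
];
cull(camera, objects);
```

Note that this is four comparisons per object per frame, which is why the docs warn against enabling it on hundreds of objects without profiling first.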

 



Can this really be? What is the point of autoCull in this case?

I have a large side-scrolling world where the camera moves left to right. I have enemies positioned at launch positions that are 'asleep'. They self-activate when near the camera bounds. I'm worried I'm wasting a lot of power by 'rendering' these 100-200 objects offscreen. A slightly different question: does renderable = false really offer no performance benefit in this scenario?
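The wake-up pattern described above can be sketched in plain JavaScript. This is a hedged illustration only; the ACTIVATION_MARGIN value, the enemy shape, and the updateEnemies name are assumptions, not part of Phaser's API:

```javascript
// Wake enemies once the camera's right edge comes within a margin of
// their launch position; asleep enemies skip their update logic.
var ACTIVATION_MARGIN = 200; // pixels ahead of the camera (assumed value)

function updateEnemies(camera, enemies) {
    enemies.forEach(function (enemy) {
        if (enemy.asleep &&
            enemy.x < camera.x + camera.width + ACTIVATION_MARGIN) {
            enemy.asleep = false; // self-activate near the camera bounds
        }
        if (!enemy.asleep) {
            // run the normal enemy update here
        }
    });
}

var camera = { x: 0, width: 800 };
var enemies = [
    { x: 500, asleep: true },   // within range of the camera -> wakes
    { x: 5000, asleep: true }   // far ahead of the camera -> stays asleep
];
updateEnemies(camera, enemies);
```

With only a single comparison per sleeping enemy, 100-200 offscreen objects should cost very little per frame on the CPU side, whatever the renderer does with them.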


30 minutes ago, drhayes said:

I'll admit upfront that what happens in a GPU after all the JS runs is largely a mystery to me. That said, I bet you could profile this both ways and see which one performs better.

The vertex shader runs and offscreen tris are discarded (quickly), so there should be very little saving from not sending the verts for offscreen geometry to the GPU. For sprites at least, we can assume the vertex shader has little work to do to transform each vertex, and the CPU and memory don't have to do much for it either, whereas discarding per quad might cost quite a few CPU cycles, especially given this is JS. There are factors that may make not drawing the geometry faster (such as WebGL having to validate the index buffer, and the vertex data being transferred), but unless you can exclude a large number of sprites with relatively few tests, it isn't obvious that this will be worthwhile (imho).


Again, betraying my ignorance of all things GPU-related in Phaser and PIXI:

Why wouldn't a 2D game essentially draw two triangles the size of the viewing rectangle, with all the images/sprites/what-have-you mapped as textures onto those two triangles? Is that not what PIXI does? Does every display object get its own two triangles?

Interphase 2 cannot come soon enough.

