Hi all, I have been working on a game. It's unoptimized in many respects, but I noticed something truly strange about its performance:
On my Pixel 2 phone, performance is acceptable (30-60 fps), but on my HP Spectre x360 ultrabook it's a virtual slideshow, around 5 fps. Performance is better on my Nexus 5X, and on my powerful desktop PC it's fine as well. The slowness on the ultrabook occurs regardless of browser (though Chrome performs best).
The ultrabook has Intel HD Graphics 620. Other WebGL demos (Pixi, both 2D and 3D) seem to run fine on it, and in WebGL benchmarks it scores much higher than my Pixel 2.
About the game:
It is a roguelike-style RPG with tiled maps, with 1-2 sprites per tile. Because of this I initially suspected the number of draw calls was the culprit, so I set all tiles that are offscreen or not yet seen to `visible = false`. However, the ultrabook was still a slideshow.
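Roughly, the culling pass looks like this (a simplified sketch; the names `cullTiles`, `seen`, and the viewport object are illustrative, the real game uses a Pixi-style scene graph where sprites have a `visible` flag):

```javascript
// Hide any tile sprite that is either undiscovered or outside the camera.
// Names here are illustrative, not the game's actual identifiers.
function cullTiles(tiles, seen, viewport) {
  for (const tile of tiles) {
    const onScreen =
      tile.x + tile.width > viewport.x &&
      tile.x < viewport.x + viewport.width &&
      tile.y + tile.height > viewport.y &&
      tile.y < viewport.y + viewport.height;
    // A tile is rendered only if the player has seen it AND it's in view.
    tile.visible = seen.has(tile.id) && onScreen;
  }
}
```

In Pixi-style renderers, `visible = false` should skip the sprite entirely during the render pass, which is why I expected this to help.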
Next I installed the WebGL Insight extension. Under Resources -> Buffers -> Buffer2, there is a buffer holding an array of 24,000 elements. I don't know the inner workings of WebGL, so I can't say exactly what this array is for, but its length seemed suspicious.
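For what it's worth, if the batcher interleaves a handful of floats per vertex, that length works out to a round batch size. The per-vertex constants below are guesses on my part, not values I read out of the renderer:

```javascript
// Back-of-the-envelope check on the 24,000-element buffer.
// Assumed layout: a sprite batcher interleaving ~6 floats per vertex.
const floatsPerVertex = 6;    // e.g. x, y, u, v, color, texture id (a guess)
const verticesPerSprite = 4;  // one quad per sprite
const bufferLength = 24000;
const spritesPerBatch = bufferLength / (floatsPerVertex * verticesPerSprite);
console.log(spritesPerBatch); // → 1000 sprites per batch under these assumptions
```

So the buffer may simply be a preallocated batch of ~1000 sprites rather than something pathological, but I'd welcome confirmation from someone who knows the internals.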
Under Resources -> Textures, there are 205 textures.
In the Chrome profiler, `Function Call -> Animation Frame Fired` dominates (20% self / 87% total). Next come Composite Layers (6.6% self / 6.6% total), renderWebGL (5.3% self / 27.5% total), and updateTransform (5.2% self / 5.8% total).
Does anyone have any ideas? Is there a precedent for these kinds of platform inconsistencies?