Search the Community

Showing results for tags 'Performance'.



Found 250 results

  1. I'm rendering the content of some PNGs into a 4096 x 4096 RenderTexture to cram all of it into GPU memory for scrolling. Since each column should be 1024 pixels wide, one texture gives me a maximum of 16384 pixels of height to scroll through. I use 4096 as width and height because of http://webglstats.com/webgl/parameter/MAX_TEXTURE_SIZE But what do I do if I want to scroll through more than 16384 pixels in one go? It suddenly occurred to me: should I just use some extra 4096 x 4096 texture(s)? A maximum of 8 textures looks like a safe bet: http://webglstats.com/webgl/parameter/MAX_TEXTURE_IMAGE_UNITS Or is there a better approach?
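As a complement to the multi-texture idea above, the bookkeeping is just integer math: map a global scroll offset to a texture page, a column, and a local y. A minimal sketch under the assumptions in the post (1024-px columns, four per 4096 x 4096 texture); all names are illustrative:

```javascript
// Map a global scroll offset (in pixels) to a texture page, column, and
// local y inside a 4096x4096 atlas holding four 1024px-wide columns.
const TEX_SIZE = 4096;
const COLUMN_WIDTH = 1024;
const COLUMNS_PER_TEXTURE = TEX_SIZE / COLUMN_WIDTH;        // 4
const PIXELS_PER_TEXTURE = TEX_SIZE * COLUMNS_PER_TEXTURE;  // 16384

function locate(globalY) {
  const page = Math.floor(globalY / PIXELS_PER_TEXTURE);    // which texture
  const inPage = globalY % PIXELS_PER_TEXTURE;
  const column = Math.floor(inPage / TEX_SIZE);             // which 1024px column
  const localY = inPage % TEX_SIZE;                         // y inside that column
  return { page, column, localY };
}
```

With two or three such pages you can cover ~32-49K pixels of scroll while staying far under the 8-texture-unit floor, swapping the bound texture only when the page index changes.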
  2. Hello everyone. What is the cleanest, best-performing way to extend a PIXI class? For example, I need the native PIXI.Container class but with extra methods and properties. Here is how I proceed:

     PIXI.ContainerData = (function () {
       class ContainerData extends PIXI.Container {
         constructor() {
           super();
           this.Sprites = {}; // other custom properties for game objects
         }
         get d() { return this.Sprites.d; } // returns the diffuse sprite
         get n() { return this.Sprites.n; } // returns the normals sprite (TODO: spine normals are arrays)
       }
       ContainerData.prototype.createJson = function (dataValues) { };
       ContainerData.prototype.assignJson = function (dataValues) { };
       return ContainerData;
     })();

     I also know another way, extending via functional prototyping. Example:

     function ContainerData() {
       PIXI.Container.call(this);
       this.initialize();
     }
     ContainerData.prototype = Object.create(PIXI.Container.prototype);
     ContainerData.prototype.constructor = ContainerData;
     ContainerData.prototype.initialize = function () {
       this.Sprites = {};
     };

     I'm wondering which technique extends a PIXI class most cleanly and efficiently, especially in terms of speed, knowing that JS has to scan the prototype chain from the bottom up. The first technique worries me a little because in my console it looks as if all the prototypes are cloned by reference and duplicated! So I'm interested in different techniques and advice. I'm looking for the best balance between performance (speed) and readability for debugging. Thanks in advance.
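For what it's worth, both patterns above end up with a single shared prototype object per class; the console just renders the chain expanded, nothing is duplicated per instance. A minimal PIXI-free sketch (the `Container` base here is a stub standing in for PIXI.Container, not the real class):

```javascript
// Stand-in for PIXI.Container, just to demonstrate the prototype chain.
class Container {
  constructor() { this.children = []; }
}

class ContainerData extends Container {
  constructor() {
    super();
    this.Sprites = {}; // custom per-instance state
  }
  get d() { return this.Sprites.d; } // diffuse sprite
  get n() { return this.Sprites.n; } // normals sprite
}

// Every instance shares the same prototype object; lookups walk
// at most two links (ContainerData.prototype -> Container.prototype).
const a = new ContainerData();
const b = new ContainerData();
```

Since both the `class` syntax and `Object.create` produce the same chain, the speed difference is negligible in practice; readability and debuggability usually decide it.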
  3. Hi, I wanted to know if anyone has faced difficulties using Babylon.js to make Facebook Instant Games. When I run my game from a custom Python server, it runs well and is very fluid. However, when I start the game in the Messenger app using the same build (uploaded to the Facebook Developer Platform), it starts to lag and has continuous lag spikes. The issue only happens in the Messenger app, not in Messenger in Chrome or on Facebook. Does anyone have an idea? Best regards!
  4. phaselock

    GUI controls limits

    Hi again! So with the recent fix on executeOnAllControls, I wanted to push the boundaries but hit a wall almost immediately. I'm not entirely sure what the culprit is, but I repro'ed it in the PG: https://www.babylonjs-playground.com/#XCPP9Y#581 On my screen, perf is still fairly smooth with 100 TextBlocks. If I set the for loop to 1000, the PG slows down incredibly. Is this just the reality of HTML5? I'm curious where the bottleneck might be and how I can still get smooth redraws past 1000 GUI elements.
  5. I have written a simple demo that renders 8000 cubes in Babylon.js, but it turns out to be quite a bit slower than in Three.js. Demo in Babylon.js: https://jsfiddle.net/6ng7usmj/ Demo in Three.js: https://jsfiddle.net/pofq4827/ It doesn't make sense to me, because Babylon.js supports more performance-related features, like VAOs. Any help would be greatly appreciated.
  6. Let's say I make a game in Phaser that contains just a single scene; on that scene there is a giant tilemap with collisions set via setCollisionBetween() or similar, and there are multiple sprites scattered all across the map that collide with the player. How does Phaser handle the tilemap, the sprites, and collision in this case? Does it update the entire tilemap wherever the player is, or only the chunk currently within the camera bounds? Does it check collisions only against tiles and sprites within the camera bounds, or everywhere on the map even where the player isn't? Is all of this handled for me internally, or do I have to implement my own tile, sprite, and collision loading to avoid wasting performance?
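For reference, the broad-phase idea behind camera-bounds culling is plain rectangle overlap: only update and collide objects whose bounds intersect the camera view. This is a generic sketch of the technique, not Phaser's actual implementation:

```javascript
// Broad-phase culling: keep only objects whose AABB intersects the camera AABB.
function intersects(a, b) {
  return a.x < b.x + b.width && a.x + a.width > b.x &&
         a.y < b.y + b.height && a.y + a.height > b.y;
}

function visibleObjects(camera, objects) {
  return objects.filter(o => intersects(camera, o));
}
```

Even when an engine does this for you, it only skips rendering; physics bodies far from the camera may still be stepped unless you deactivate them yourself.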
  7. Hi, what is the most efficient way to display a lot of text (mobile/PC)? I've spent time on the forum and didn't find a clear answer (sometimes Phaser.Text, but more often BitmapText). Thanks
  8. I am working on a project where I am trying to create an animation of a watercolor effect to 'paint' in an image. The image is dynamic, so rather than using a simple video, I created a video to use as an alpha mask applied to the image via Pixi. The area around the visible image needs to be transparent, so I needed an alpha mask rather than just covering the image with white pixels. I have attached a full example file with all of the code I set up for this, but here's a quick summary of what I've done: create a texture from the video, create a sprite from the image, add the image sprite to a container, and set the container's 'mask' property to the video texture. This worked beautifully, but my one concern is performance, as there will be other things happening on the page at the same time. A performance audit while the animation is running shows the main thread busy with scripting for almost 100% of the animation's duration, which slightly degrades things such as scrolling on the page in the meantime. I'm certainly willing to accept that this is simply a very performance-intensive animation and that it doesn't get much better than what I've got. However, this is my first time using Pixi, so I wanted to ask whether I have done this the best way I can, or whether there is anything I can do to make it a bit more efficient. Thanks in advance for any help anyone can offer! If I need to provide more information, just let me know and I will do my best. watercolor-test.html
  9. How do we improve performance for collisions against a heightmap, a mesh, or just large numbers of floors and walls? I'm not looking for true physics (or at least, I don't think I am). I want to support a few hundred entities, all in contact with a large terrain mesh: somewhere in the realm of 30-120 players, plus 50-1000 NPCs or dropped items, would be the ballpark performance goal. Here is a demo that shows the desired behavior more or less: http://www.babylon.actifgames.com/moveCharacter/ (use mouse look and Z to walk around). Both this demo and my own attempts at something very similar use `moveWithCollisions`, which seems to eat quite a bit of CPU; one to three players is enough to max it out. I presume it performs a collision check between the moving meshes and every triangle of the terrain mesh (just guessing, though; I'm not sure how it actually works). Left to my own devices, I'd probably compare each entity's position against the heightmap value at its x,z and just make sure the y of its feet never slips under the heightmap. I imagine this optimization could scale to several hundreds or thousands of entities and wouldn't involve true mesh collision. It also wouldn't work for overhangs or caves, but I don't have any of those yet. As for collisions with walls, floors, or large numbers of other objects that aren't part of the heightmap, I'm not sure what I'd do. Maybe partition the world into large cubes and restrict the existing collision code so that objects are only checked against their nearer spatial neighbors (the other meshes occupying the same cube)? But before I go reinventing wheels, I figured I'd ask here.
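The heightmap comparison described above is cheap because it replaces per-triangle mesh collision with an array lookup plus interpolation. A rough, engine-agnostic sketch, assuming a square row-major height grid covering a known world size (all names are illustrative):

```javascript
// Sample a row-major height grid (size x size samples spanning worldSize units)
// with bilinear interpolation, then clamp an entity's feet to the surface.
function sampleHeight(grid, size, worldSize, x, z) {
  const gx = (x / worldSize) * (size - 1);
  const gz = (z / worldSize) * (size - 1);
  const x0 = Math.floor(gx), z0 = Math.floor(gz);
  const x1 = Math.min(x0 + 1, size - 1), z1 = Math.min(z0 + 1, size - 1);
  const fx = gx - x0, fz = gz - z0;
  const h00 = grid[z0 * size + x0], h10 = grid[z0 * size + x1];
  const h01 = grid[z1 * size + x0], h11 = grid[z1 * size + x1];
  const near = h00 * (1 - fx) + h10 * fx;   // interpolate along x at z0
  const far = h01 * (1 - fx) + h11 * fx;    // interpolate along x at z1
  return near * (1 - fz) + far * fz;        // then along z
}

function clampToTerrain(entity, grid, size, worldSize) {
  const floor = sampleHeight(grid, size, worldSize, entity.x, entity.z);
  if (entity.y < floor) entity.y = floor;   // never let feet sink below terrain
}
```

This is O(1) per entity per frame, so thousands of entities are feasible; the spatial-cube idea in the post is the matching broad phase for the non-terrain objects.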
  10. I have two questions. 1. On the first screenshot, why do I have such an enormous "idle" time? It seems to be even longer than the "render" function itself. How can I improve that? 2. On the second screenshot, PBRBaseMaterial seems to consume too much time. How can I reduce that? Is this time related to frequent shader program switches?
  11. I have a game with a lot of containers and sprites. I used GPU-Z for testing. After I replaced all containers and sprites with their 2D versions, GPU load decreased by 20%. What is the reason for that? What is the difference between Container/Sprite and Container2d/Sprite2d?
  12. NoxWings

    Possible performance issue?

    Going through the 101 tutorials in the docs, I opened the weighted-animations example, and I happened to open it on my phone: https://www.babylonjs-playground.com/#IQN716 I got really poor performance considering the simplicity of the scene, about ~36 fps on both Chrome and Firefox. I tried gradually removing elements from the scene (UI, weighted animations, etc.) until only the imported model, both lights, and a basic floor remained. Even then the scene runs at ~40 fps. Then I found a really similar scene in three.js, with the same lighting (1 hemi, 1 directional) and the same model (or at least it looks the same), but it runs at a butter-smooth 60 fps. https://www.babylonjs-playground.com/#IQN716#20 https://threejs.org/examples/webgl_loader_fbx.html Any ideas why this could be happening? Edit/PS: tested on a OnePlus 5T
  13. Outfire

    Spine performance

    I have a 1920 x 1080 canvas and 5 different Spine animations that use a lot of textures. I add 20 animations of each type all over the screen and test performance in two ways, using GPU-Z to watch the GPU load parameter: 1) every animation has its own atlas with images and a JSON file: GPU load 47%; 2) every animation has its own JSON file but all use the same atlas: GPU load 61%. I made the same tests with sprites as well (180 sprites in total, 10 of each type): 1) each sprite has its own download link: GPU load 90%; 2) all textures are in one spritesheet: GPU load 63%. Why do the Spine tests show the opposite result? PC config: Google Chrome v65, Windows 10, AMD Radeon HD 7450, Intel Xeon X5650
  14. Hi all, just wanted to share a workaround I made today while trying to increase the performance of tilemaps in my game. My game has maps that can be 2048 x 2048 px (64 x 64 tiles of 32 x 32 px), or maybe even bigger down the road, and they contain up to 8 layers for applying detail. The problem I ran into was that when scrolling the camera I was seeing huge performance drops, even on powerful computers. It turns out Phaser was redrawing the tilemap every frame the camera moved. My tilemap never really changes after being loaded, so I wanted an easy way to force Phaser to draw the entire tilemap once and use it as a static sprite after that. I tried https://photonstorm.github.io/phaser-ce/Phaser.TilemapLayer.html#cacheAsBitmap, but it didn't seem to work; not sure if I was missing something there. So my solution was to resize the layer before getting the texture:

      myLayer.resize(game.world.width, game.world.height);
      myLayerTexture = myLayer.generateTexture(1, PIXI.scaleModes.DEFAULT, game.renderer);
      myLayer.visible = false;
      game.add.sprite(0, 0, myLayerTexture);

      This way, instead of the tilemap being added as a dynamic thing that gets redrawn constantly, it functions as a static image. And because each layer is still treated as a separate image, rendering order is preserved. In the background, I keep the actual layers hidden to preserve collision data. Unfortunately, if the tilemap ever changed after initial creation, the texture would need to be regenerated, which would be expensive. But since I'm not planning on doing that, it seemed like a reasonable solution. Hope this helps someone else who has this problem.
  15. Hi! To be clear, my question is only about meshes. Should we prefer to load small objects (between 0 and 10 units, for example, with high precision like 5.554), or does size not impact performance at all, so that we can work with huge numbers as well (10k, 100k)? I'm talking about the width/height of a box, for example. I guess the more important factors are the number of vertices/edges/facets and how impostors are used (mesh vs. box), but regardless of that, I'd like to know whether the size of meshes matters. Same for textures: should we prefer a texture repeated 1000 times on a plane, or a bigger texture repeated 10 times, for example? Thank you
  16. I'm aware that the client's GPU might affect the game's performance (smoothness and freezing). But the game I'm creating is affected differently: the client's GPU literally affects their player's movement speed globally (even in other players' views, not only locally). If you check other .io games like agar.io and diep.io, even on a slow computer the player's movement speed is the same (at the same player level); it skips a few frames and isn't smooth at all, but the movement speed is the same. Every player in my game needs to have the same movement speed (that's one of the most important features of my game). I've also noticed that with a maximized window the game slows down, but at about half the browser screen it becomes fast again: https://gyazo.com/59b72ae5d9e2d3e9611a41e9ac8a3f39 That isn't supposed to happen. If you need further information from me, please let me know. Please help. Thanks in advance.
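One common cause of frame-rate-dependent movement speed is advancing the position by a fixed amount per frame instead of scaling by elapsed time. Whether or not that is what is happening here, scaling by delta time makes per-second speed identical on fast and slow clients. A minimal sketch (all names are illustrative):

```javascript
// Frame-rate-independent movement: distance = speed (units/sec) * dt (sec).
function updatePosition(pos, velocity, speed, dtMs) {
  const dt = dtMs / 1000;
  return {
    x: pos.x + velocity.x * speed * dt,
    y: pos.y + velocity.y * speed * dt,
  };
}

// Simulate one second of movement at 100 units/sec on two clients:
// a 60 fps client (60 steps of ~16.67 ms) and a 20 fps client (20 steps of 50 ms).
let p60 = { x: 0, y: 0 };
for (let i = 0; i < 60; i++) p60 = updatePosition(p60, { x: 1, y: 0 }, 100, 1000 / 60);
let p20 = { x: 0, y: 0 };
for (let i = 0; i < 20; i++) p20 = updatePosition(p20, { x: 1, y: 0 }, 100, 50);
// Both clients end up ~100 units along x despite very different frame rates.
```

For an .io-style game the authoritative fix is simulating movement on the server, but delta-time scaling is the client-side half of that solution.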
  17. Hi everyone, I'm stuck on game performance. Detail: the first picture shows the game at 100% screen zoom; the second shows it at 25%. At 100% zoom the performance is good, but below 50% zoom performance drops badly (the player moves very, very slowly). At 67-100% zoom: 60 fps. At 50-67% zoom: 45-50 fps. Below 50% zoom: 15-20 fps. In incognito mode: at 200-500% zoom: 55 fps; at 110-200%: 50 fps; at 67-100%: 45 fps; at 50-67%: 35 fps; below 50%: not running. Does anyone know what the problem is, and how to get 60 fps at every zoom level?
  18. Hi community, I want to propose some additions to assetManager and sceneOptimizer. If a solution already exists, let me know. assetManager: 1. Report the loading percentage based on streamed data rather than on x files loaded, where possible. > More precise, and it shows progress when loading big files. 2. Add an "abort" function and an "onAbort" callback to cancel running tasks and in-flight file loads. > Currently the "clean" function deletes tasks from the assetManager but doesn't abort current loads; if a big file is loading, it doesn't stop. sceneOptimizer: 1. Add a starting level to try when optimizing the render. > I tested the current sceneOptimizer, and the problem is that it tries the best render first. On an older device the page crashes and the browser reloads it before the sceneOptimizer can downgrade the render, so it loops without end. 2. Split optimization into two passes: > First, upgrading: the sceneOptimizer tries to reach x FPS at the starting level; if that works, it upgrades the render again until it can no longer reach x FPS. > Second, downgrading: if the last try (or the first try at the starting level) doesn't reach the target FPS, the sceneOptimizer downgrades until it does, then stops. > Of course we keep "trackerDuration", the time in milliseconds between passes. > You would still be able to restart the sceneOptimizer when you add or change something, as in the current version. What do you think about this? Have a nice day!
  19. Hey! Has anybody compared Phaser and Pixi for performance? We're going to make an isometric game; the bottom line is how many objects you can render per screen. What do you think? What should we choose: Phaser 2, PIXI, or maybe Phaser 3? We need the canvas renderer, of course, not WebGL. Update: I asked the same question in the Phaser Slack channel. It seems that PIXI would be better than Phaser 2 for an isometric game.
  20. I am working on a brand-new application. Not a game. I am pushing Babylon in ways that may be unique, and I could really use some advice on high mesh-count situations. Please! Here is the deal: I generate graphics about crime data in SVG. WebGL can look a lot better, but it is still kind of balky. The first two images are attempted murder in San Francisco for the past three years. What you cannot see in those images, but can see in the next two (aggravated assault), is that the whole city is tiled in hexagons: about 40K for San Francisco, more for Chicago. All results are delivered by presenting color, opacity (and now elevation) in hexes. Image 5 shows increases and decreases in theft from motor vehicles between two six-month periods. Right now, I create a scene as follows: ~ Generate individual meshes for all "visible" hexes ~ Pattern-match materials and merge the hexes down from 30K meshes to about 200 ~ Dispose of the unmerged meshes ~ Show the scene ~ Gradually dispose of the meshes These unused meshes consume a lot of time, especially since, except for a handful that are not merged, none of them are ever shown to the user! It would be conceptually far better to create the hexagonal geometries and group them directly rather than creating meshes. I lack the chops to do it but am reasonably sure it is possible. I welcome advice, snippets or . . . consulting offers?
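On grouping geometries directly: one engine-agnostic approach is to generate each hexagon's vertices straight into one shared position/index buffer per material group, so only a few meshes ever exist and nothing needs merging or disposing afterwards. A rough sketch of the buffer-building idea (illustrative, not the Babylon API):

```javascript
// Append a flat hexagon (1 center vertex + 6 rim vertices, 6 triangles)
// centered at (cx, cz) into shared position/index arrays, as one would
// feed a single combined mesh.
function appendHexagon(positions, indices, cx, cz, radius) {
  const base = positions.length / 3; // index of this hex's first vertex
  positions.push(cx, 0, cz);         // center vertex
  for (let i = 0; i < 6; i++) {
    const angle = (Math.PI / 3) * i;
    positions.push(cx + radius * Math.cos(angle), 0, cz + radius * Math.sin(angle));
  }
  for (let i = 0; i < 6; i++) {
    // fan: center, rim i, rim i+1 (wrapping)
    indices.push(base, base + 1 + i, base + 1 + ((i + 1) % 6));
  }
}
```

In Babylon the resulting arrays could then be handed to a single mesh per material group (e.g. via a VertexData-style object), which avoids ever allocating the 30K throwaway meshes described above.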
  21. Hi Folks, I have several questions regarding rendering performance. Question 1: acceleration structures. As we know, Babylon has an Octree, and it seems to be used for both frustum culling and picking. But I also saw an interface called IActiveMeshProvider. Is it meant for implementing other acceleration structures like BSP or BVH? I've seen rendering engines that use a BVH for frustum culling; what are the advantages of an Octree over a BVH? @Deltakosh told me that we have complete frustum culling internally. How does it work? Does it use the Octree? Question 2: sorting. Do we have a built-in pre-sorting pass so that useless shader bindings can be avoided? Question 3: mesh merging. I think I can use MergeMeshes to merge the meshes that share the same shader, but I need to keep the originals for picking. How can I distinguish the "visual tree" from the "selection tree"? Thanks guys
  22. FunFetched

    Watch out for gl.clear()!

    Hello, everyone! I just wanted to pass along some very important optimization information I discovered, which has been completely overlooked in the optimization articles I've read and can have a huge impact on your apps! I have a game that pushes a number of triangles and commands similar to the Sponza demo, but I wasn't enjoying anywhere near the same frame rate on slower systems. I fired up SpectorJS, cleaned up as much as I could in terms of draw calls and such, and wound up with a game that had fewer calls and fewer commands, yet STILL ran a good deal slower than Sponza. However, there was one more metric I had yet to look at closely: "clear()". I had 6 of them, while Sponza was using only 1. My game is a first-person shooter that uses an additional camera to draw UI components on a different layer, so game geometry doesn't interfere with it. The player's hands and weapons are also drawn in a different rendering group for the same reason. It turns out, however, that for every new layer and group, Babylon kicks off a gl.clear() (which makes sense), and on systems with poor fill rates this can absolutely wipe you out. In my case, the engine was clearing various buffers 6 times per frame, including multiple times per group/layer. Disabling Babylon's engine.prototype.clear() entirely made a world of difference in FPS, though I do need at least 1 or 2 gl.clear() calls to accommodate my layering requirements. 6 is excessive and unnecessary, however, and I'm currently investigating ways to minimize that. Hopefully I can do it all by simply modifying my code, but I suspect I might have to make some small engine modifications as well, so I have greater control over when gl.clear() gets called and how many times. Edit: just discovered scene.autoClear and scene.autoClearDepthAndStencil. Setting both to false takes care of... well... one of them!
  23. I hit a huge loading performance problem a couple of days ago. I have gotten past it now, but it seems like something you should know about. I was trying to load a *lot* of meshes, up to 20,000, and ran into a serious wall well below that: the time to first render completion was over a minute. I did the most practical thing and hit pause a few times to see what the system was doing. My system was spending all of its time in a FOR loop in Material.prototype._markAllSubMeshesAsDirty (attach 1). getScene().meshes is all of my meshes, so this was a long FOR loop. The plot thickens: it was a downstream side effect of another FOR loop, Scene._evaluateActiveMeshes, which also iterated over all of my meshes. So my first render was blocked by an n-squared algorithm evaluating dirtiness. Lots and lots of dirtiness. I got past this with a hack initially: I disabled Material.prototype._markAllSubMeshesAsDirty for the *first* *render* only, with no ill effects. Later, I started merging meshes. Let me tell you, mesh merging is the bomb! It changed everything for my application. Instances, yawn. Clone, snore. Merging killed it. I do 3-D graphs and there are a lot of similar meshes (everything is a hexagon, for starters). In my tests, I could reduce the independent mesh count by a factor of twenty. Now I am loading 150K hexes fast and with a good frame rate. I archived the original problem child at http://www.brianbutton.com/chart3d/carthagevirgin.html
  24. Ericky14

    GUI Performance

    Hello, I am trying to use the GUI TextBlock and Image components to display an icon with text underneath it (a number count) on top of a hexagon. In the scene there can be hundreds of hexagons at once. Without instantiating the GUI components, the scene runs without much problem at all, ~200 hexagons at 60 fps. With them, the fps drops considerably. Is there any GUI optimization advice you guys can give me? I would appreciate it.

    constructor(parentHex, text, sizeScale, icon) {
      if (global.Image) {
        const GUI = require('babylonjs-gui');
        const materialType = MATERIAL_TYPES.DYNAMIC_LABEL;
        const name = `hex_label_plane_${sizeScale}`;
        let plane = Main.scene.getMeshByName(name);
        if (!plane) {
          plane = BABYLON.MeshBuilder.CreatePlane(name, { size: 1.1 * sizeScale }, Main.scene);
          Util.setMaterial(plane, materialType);
          Util.setScalingToZero(plane);
        }
        plane = plane.clone('clone');
        plane.position = Util.getNewVector(parentHex.position);
        plane.position.y += (parentHex.extents.y * Config.HEX_Y_SCALE) + 0.01;
        plane.rotation.x = Math.PI / 2;
        Util.setScalingToOne(plane);
        plane.setParent(parentHex);
        plane.originalScaling = Util.getNewVector(plane.scaling);
        plane.setVisibility = (visible) => {
          visible ? Util.setScalingToZero(plane) : Util.resetScaling(plane);
        };
        const panel = new GUI.StackPanel();
        panel.verticalAlignment = GUI.Control.VERTICAL_ALIGNMENT_CENTER;
        this._panel = panel;
        if (icon) {
          const image = new GUI.Image('hexLabelIcon', icon);
          image.height = `${Config.HEX_LABEL_ICON_HEIGHT / sizeScale}px`;
          image.paddingBottom = `${Config.HEX_LABEL_ICON_PADDING_BOTTOM / sizeScale}px`;
          image.verticalAlignment = GUI.Control.VERTICAL_ALIGNMENT_BOTTOM;
          image.stretch = GUI.Image.STRETCH_UNIFORM;
          panel.addControl(image);
        }
        const textBlock = new GUI.TextBlock(`hex_label_${text}`, text);
        textBlock.height = `${Config.HEX_LABEL_FONT_SIZE / sizeScale}px`;
        textBlock.color = 'black';
        textBlock.fontSize = `${Config.HEX_LABEL_FONT_SIZE / sizeScale}px`;
        textBlock.textVerticalAlignment = GUI.Control.VERTICAL_ALIGNMENT_TOP;
        panel.addControl(textBlock);
        this.advancedTexture = GUI.AdvancedDynamicTexture.CreateForMesh(plane);
        this.advancedTexture.addControl(panel);
        return plane;
      }
    }
  25. Hi everyone, I read this about glTF: https://pissang.github.io/qtek-model-viewer/ and tested it with the "Adam head": https://sketchfab.com/features/gltf The rendering system handles post-processes the same way Sketchfab does. It's really cool: when there are no animations, the engine progressively computes a better-quality render, step by step. Post-processes like ambient occlusion look really great without dropping fps. This lets us run a scene at 60 fps with animation and get back a beautiful image once all meshes are static. All post-processes and shadows seem to be computed with rough noise at first and with more precision at the end (it looks like a trick to gain performance...). So I'm opening this post to discuss it and to see whether it would be a good idea to add this to Babylon. What do you think? Have a nice day! PS: I ask because I'm working on this: