Showing results for tags 'gpu'.

Found 9 results

  1. I'm looking for any tips on GPU performance, and composite layers specifically. ^.^ Performance is currently pretty good on the web (~55-60 fps) and it also looks good on mobile, but it uses ~70% of an iPad's GPU and only ~20-30% of its CPU. The code is PIXI 5.1.0, and I have two renderers driven by one main RAF loop that renders both of them. Before the RAF starts, I use PIXI.Loader.shared to load all the images, then run the prepare plugin on every PIXI container that holds sprites. Offscreen sprites are all set to .visible = false. My main animations are constant x-position decrements on the ~15-30 individual sprites on screen at a time, plus tinting of menu sprites that are always on screen and never move. (A sketch of this setup follows the list below.)
     Main question: looking into the causes of composite-layer paint complexity, I see that each of my PIXI renderers produced two layers, one of which says "n/a" for its compositing reason. Is it normal for each renderer to have two layers like this? n/a layer: https://imgur.com/a/IGjPmTs Compositing due to the element being a <canvas> element: https://imgur.com/a/o4zZ21R Chrome DevTools normal performance frame: https://imgur.com/a/iW3quHn
     I also have a lot of overlapping containers with sprites in the actual PIXI code, which could be inflating the estimate of the GPU's memory work. Is it worse to keep the sprites in separate containers when they overlap? Overlapping sprites: https://imgur.com/a/I61nBMo Thank you so much for any advice, you guys are the best! ^.^
  2. I'm rendering the content of some PNGs via a 4096 by 4096 RenderTexture, to cram all of it into GPU memory for scrolling. Since each column is 1024 pixels wide, one texture holds four columns, i.e. a maximum of 16384 pixels of height to scroll through. I use 4096 as width and height because of http://webglstats.com/webgl/parameter/MAX_TEXTURE_SIZE But what do I do if I want to scroll through more than 16384 pixels in one go? It suddenly occurred to me: should I just use some extra 4096 by 4096 texture(s)? It looks like a maximum of 8 textures is a safe bet: http://webglstats.com/webgl/parameter/MAX_TEXTURE_IMAGE_UNITS Or is there a better approach? (A paging sketch follows the list below.)
  3. I'm aware that the client's GPU can affect a game's performance (smoothness and freezing). But the game I'm creating is being affected differently: the client's GPU is literally changing his movement speed globally (on other players' views too, not only locally for him). If you check other .io games like agar.io and diep.io, even on a slow computer the player's movement speed stays the same (for the same player level). It skips frames and isn't smooth at all, but the speed is the same. Every player in my game needs to have the same movement speed; that's one of its most important features. I've also noticed that with a maximized window the game slows down, but at about half the browser screen it becomes fast again: https://gyazo.com/59b72ae5d9e2d3e9611a41e9ac8a3f39 That isn't supposed to happen. If you need further information from me, please let me know. Please help, thanks in advance. (A delta-time sketch follows the list below.)
  4. Hello, everyone. I've been playing around a lot with Pixi.js, trying to find the best approaches to memory optimization. I load my images with Pixi's loader. Some images are very large, and the first time I create them and add them to the stage, my game freezes for a moment. After reading around, I realized that freeze is Pixi uploading the texture to the GPU. My question: would it be a good idea to hook into Pixi's loader so that each texture is uploaded to the GPU right after it loads? That would eliminate the brief freeze. I've already used Pixi's built-in upload method and the freeze is gone. What would be the pros and cons of doing this for every loaded texture? Thank you! (See the upload sketch after this list.)
  5. Hi. I'm developing the MMORPG MadWorld. It runs on PC and mobile; you can see a video at the link below: https://twitter.com/jandisoft Mobile devices have little memory though, especially iOS devices from the iPhone 6 down (including the 6, not the 6s). Our game draws PNG and JPG images with WebGL, but that needs a lot of memory. Are there any tips to reduce memory use? I'm considering compressed textures like ETC1, PVRTC, etc., but they're not easy to manage across multiple platforms. If you have tips, please tell me. (A format-detection sketch follows the list below.)
  6. Hey. I was planning to do something big with Babylon, but I realized that Chrome has trouble using all of the computer's resources and can lag twice as much as a native application, so I wanted to know how to modify the render distance (if that's possible). Thanks. (See the render-distance sketch after this list.)
  7. Hi guys! I'm working on a realistic ocean simulation for a browser game at the moment. The best-known way to simulate ocean waves is Jerry Tessendorf's statistical-model method. I won't paste any formulas here, for simplicity; the core problem is this: the calculations are expensive, and I don't want to compute the water heightmap on the CPU in the browser, because the algorithm parallelizes very well and the GPU can compute the grid much faster. Is there any way to use GPU computing from Babylon.js? I'm thinking of using a shader with a render-target texture to generate the heightmap, then using the results for the physics simulation in JavaScript and passing them to the shader material that renders the water surface. Is that worthwhile? Can anyone suggest other methods? Thanks! (A procedural-texture sketch follows the list below.)
  8. Hi. Background: I have a Sandy Bridge based Windows 7 laptop with two GPUs: a dedicated nVidia GPU and an integrated Intel HD GPU. If I've understood correctly, Sandy Bridge is close to an SoC-style architecture, with the Intel GPU inside the same chip as the CPU. The laptop uses the Intel GPU for all tasks that aren't considered graphically intensive, and the nVidia kicks in only for gaming and the like. The idea is to save energy, and it does a brilliant job of it.
     The problem: by default, web browsers are considered non-graphically-intensive apps. That creates one massive problem with Phaser (and probably with other GPU-rendered web content as well): overheating. Performance isn't the issue; CPU load usually stays below 20% and fps holds easily at 60 on the Intel GPU. But, probably because of the architecture, the temperature of the whole chip, CPUs included, slowly climbs, and the system can't handle it. After some minutes at 80 degrees Celsius the system drops into some limp mode. I don't really know what happens, but I'm guessing the GPU clocks are dropped and/or rendering moves to the CPU, because CPU load jumps to 70-100% and fps falls below 10. You basically have to close the web page and continue after a while, and then of course it does the same thing again. I can reproduce this with the Phaser examples as well. A quick workaround is to force the system to use the nVidia GPU for the browser, but that's not a solution; there are zillions of Sandy Bridge computers out there with non-techy users.
     The question: are there ways to restrict GPU usage when it isn't necessary? E.g. on menu screens there's no need to keep drawing at 60 fps when nothing is moving. All other ideas are welcome too; this is really quite a big issue for us :/ Thanks a lot in advance! (See the throttled-rendering sketch after this list.)
  9. Hi! So here's something that has been bothering me for a while... Can we somehow "unload" textures/texture atlases/assets? I'm working on a game that has multiple levels. At the start of each level, I preload all of the assets the level requires using the AssetLoader. So at the start of the first level I have something like:

         loader = new PIXI.AssetLoader(["level1_assets.json"]);
         loader.onComplete = startLevel;
         loader.load();

     While at the start of the second level I have something like:

         loader = new PIXI.AssetLoader(["level2_assets.json"]);
         loader.onComplete = startLevel;
         loader.load();

     The point is, once the first level is over, I will never again need the texture atlas that stores its assets ("level1_assets.json"). So there's no need for it to linger in my precious GPU memory anymore! Can I somehow dispose of it? (See the unload sketch after this list.)
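
Code sketches for the posts above. Each is a minimal, hedged illustration rather than a definitive fix; any name not taken from the post itself is hypothetical.

Sketch for post 1: an outline of the described setup (two PIXI 5 renderers, one RAF loop, prepare upload before the loop starts). The container names are hypothetical; this only mirrors the pipeline in the post so the layer question has concrete context.

    // PIXI v5: two renderers, one RAF loop (stageA/stageB are hypothetical).
    const rendererA = new PIXI.Renderer({ width: 800, height: 600 });
    const rendererB = new PIXI.Renderer({ width: 800, height: 200 });
    document.body.appendChild(rendererA.view); // each <canvas> becomes its own
    document.body.appendChild(rendererB.view); // compositing layer in Chrome

    const stageA = new PIXI.Container();
    const stageB = new PIXI.Container();

    PIXI.Loader.shared.add('sheet', 'spritesheet.json').load(() => {
      // Upload textures to the GPU before the first frame renders,
      // so frame one doesn't stall on texture uploads.
      rendererA.plugins.prepare.upload(stageA, () => requestAnimationFrame(tick));
    });

    function tick() {
      rendererA.render(stageA);
      rendererB.render(stageB);
      requestAnimationFrame(tick);
    }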
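
Sketch for post 2: one way past the 16384-pixel ceiling is to treat several 4096 by 4096 render textures as pages and pick the page from the scroll offset. A minimal sketch assuming the PIXI v5 RenderTexture.create signature; the paging scheme is hypothetical.

    // Each 4096x4096 texture holds four 1024-wide columns,
    // i.e. 16384 px of scrollable height per page.
    const PAGE_HEIGHT = 16384;
    const pages = [];
    for (let i = 0; i < 3; i++) {   // 3 pages -> 49152 px in total
      pages.push(PIXI.RenderTexture.create({ width: 4096, height: 4096 }));
    }

    // Map a global scroll offset to (page texture, offset within that page).
    function locate(scrollY) {
      const page = Math.floor(scrollY / PAGE_HEIGHT);
      return { texture: pages[page], offset: scrollY % PAGE_HEIGHT };
    }

Since only the visible page (or two, at a page boundary) is sampled in any one frame, the 8-texture-unit minimum is not a practical limit here.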
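
Sketch for post 3: the symptom (speed varies with GPU load and window size) usually means position is advanced by a fixed amount per rendered frame, so fewer frames means slower movement. The standard fix is to scale movement by elapsed time; a framework-agnostic sketch (the player object is hypothetical).

    // Frame-rate-independent movement: SPEED is pixels per second, so a
    // client at 20 fps covers the same distance per second as one at 60 fps.
    const SPEED = 200;                // px/s, identical for every client
    let last = performance.now();

    function tick(now) {
      const dt = Math.min((now - last) / 1000, 0.1); // seconds, clamped
      last = now;
      player.x += SPEED * dt;         // scale by elapsed time, not per frame
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);

For an .io-style game, making the server authoritative over positions gives the same guarantee regardless of any client's frame rate.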
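
Sketch for post 4: wiring the GPU upload into the loader's completion callback, as the post suggests. A minimal sketch assuming PIXI v4/v5, where the prepare plugin accepts textures as well as display objects; the renderer, stage, and asset names are illustrative.

    // Upload a large texture to the GPU immediately after it loads,
    // so creating and adding the sprite later causes no hitch.
    PIXI.Loader.shared
      .add('bg', 'big_background.png')
      .load((loader, resources) => {
        renderer.plugins.prepare.upload(resources.bg.texture, () => {
          stage.addChild(new PIXI.Sprite(resources.bg.texture));
        });
      });

The trade-offs are roughly: the stall moves into the loading phase (usually fine behind a loading screen), and every uploaded texture occupies GPU memory whether or not it is used soon.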
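
Sketch for post 5: compressed formats differ per platform (PVRTC on iOS, ETC1 on most GLES2 Android devices, S3TC on desktop), so a common pattern is to probe the WebGL extensions at startup and load the matching asset set. The extension names are standard WebGL; the file-naming scheme is hypothetical.

    // Pick a compressed-texture format this device actually supports.
    const gl = document.createElement('canvas').getContext('webgl');

    function pickTextureFormat() {
      if (gl.getExtension('WEBGL_compressed_texture_pvrtc')) return 'pvr';  // iOS
      if (gl.getExtension('WEBGL_compressed_texture_etc1'))  return 'etc1'; // Android
      if (gl.getExtension('WEBGL_compressed_texture_s3tc'))  return 'dds';  // desktop
      return 'png';  // fallback: uncompressed
    }

    // Hypothetical naming scheme: one atlas file per format.
    const atlasUrl = 'atlas.' + pickTextureFormat();

Compressed textures stay compressed in GPU memory, unlike PNG/JPG, which decode to full RGBA; that difference is exactly what helps on older iPhones.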
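
Sketch for post 6: in Babylon.js the render distance is the camera's far clip plane, camera.maxZ; lowering the render resolution with engine.setHardwareScalingLevel also cuts GPU work. A minimal sketch (the canvas element is assumed to exist).

    const engine = new BABYLON.Engine(canvas, true);
    const scene = new BABYLON.Scene(engine);
    const camera = new BABYLON.FreeCamera('cam',
        new BABYLON.Vector3(0, 5, -10), scene);

    camera.minZ = 0.1;
    camera.maxZ = 500;   // far plane: nothing beyond 500 units is drawn

    // Optionally render at half resolution to reduce GPU load further.
    engine.setHardwareScalingLevel(2);

    engine.runRenderLoop(() => scene.render());

Scene fog (scene.fogMode) can hide the pop-in that a short far plane would otherwise cause.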
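
Sketch for post 7: the render-to-texture idea maps onto Babylon's ProceduralTexture, which runs a fragment shader into an offscreen texture each frame. A minimal sketch with a placeholder wave instead of the real Tessendorf math; the shader name "heightmap" is hypothetical.

    // Register the fragment shader under <name>PixelShader in the ShadersStore.
    BABYLON.Effect.ShadersStore['heightmapPixelShader'] = `
      precision highp float;
      varying vec2 vUV;
      uniform float time;
      void main(void) {
        // Placeholder; the statistical wave-spectrum evaluation goes here.
        float h = 0.5 + 0.5 * sin((vUV.x + time) * 20.0) * sin(vUV.y * 20.0);
        gl_FragColor = vec4(h, h, h, 1.0);
      }`;

    // The GPU fills this 256x256 heightmap every frame.
    const heightmap = new BABYLON.ProceduralTexture('height', 256, 'heightmap', scene);
    scene.registerBeforeRender(() => {
      heightmap.setFloat('time', performance.now() / 1000);
    });

Sampling the texture in the water material's shader is cheap; reading it back to JavaScript for physics forces a GPU-CPU sync, so it is usually better to run the CPU-side physics on a much coarser grid or an analytic approximation.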
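
Sketch for post 8: one way to restrict GPU usage on static screens is to redraw only when something changed (a dirty flag), or to cap the frame rate. A framework-agnostic sketch; with Phaser you would hook its own update loop instead, so treat this as the idea rather than a drop-in fix (render() stands for your existing draw call).

    // Dirty-flag rendering: menu screens draw once, not 60 times a second.
    let dirty = true;
    function invalidate() { dirty = true; }  // call on input or animation

    function tick() {
      if (dirty) {
        dirty = false;
        render();
      }
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);

Even capping static screens to ~10-20 fps (skipping RAF callbacks until enough time has passed) sharply reduces the sustained GPU load behind the thermal throttling described above.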
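
Sketch for post 9: PIXI can destroy textures and drop them from its cache once a level ends. The API names have shifted across versions (the AssetLoader era used PIXI.Texture.removeTextureFromCache); this sketch uses the modern names, and the frame names are hypothetical.

    // Free a level's textures once the level is over.
    function unloadLevelTextures(frameNames) {
      for (const name of frameNames) {
        const tex = PIXI.utils.TextureCache[name];
        if (tex) {
          PIXI.Texture.removeFromCache(name);
          tex.destroy(true);  // true: also destroy the base texture,
                              // releasing the atlas's GPU copy
        }
      }
    }

    // Hypothetical usage after level 1 ends:
    unloadLevelTextures(['player.png', 'enemy.png', 'tiles.png']);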