Search the Community

Showing results for tags 'gpu'.



Found 7 results

  1. I'm aware that the client's computer GPU might affect the game's performance (smoothness and freezing), but the game I'm creating now is affected differently: the client's GPU is literally changing his player's movement speed globally (on other players' views as well, not just locally for him). If you check other .io games like agar.io and diep.io, even on a slow computer the player movement speed is the same (for the same player level). It skips a few frames and isn't smooth at all, but the movement speed is the same. Every player in my game needs to have the same movement speed (that's one of the most important features of my game).

    I've also noticed that with a maximized window the game slows down, but if I use about half the browser screen it goes back to being fast: https://gyazo.com/59b72ae5d9e2d3e9611a41e9ac8a3f39 That isn't supposed to happen. If you need further information from me, please let me know. Please help. Thanks in advance. (A frame-rate-independent movement sketch appears after these results.)
  2. Hello, everyone. I've been playing around a lot with Pixi.js trying to find the best ways to optimize memory. Using Pixi's loader, I load my images. Some images are very large, and the first time I create them and add them to the stage, my game freezes for a moment. After reading around, I realized that the freeze is Pixi uploading the texture to the GPU. Now, my question is: would it be a good idea to add a step to Pixi's loader so that after a texture loads, it is uploaded to the GPU? That would stop the brief freeze. I have already used Pixi's built-in method to upload to the GPU and the freeze is gone. What would be the pros and cons of doing this for every texture loaded? Thank you! (A prepare-plugin sketch appears after these results.)
  3. GBear

    How to reduce GPU memory?

    Hi. I'm developing the MMORPG MadWorld. It runs on PC and mobile; you can see a video at this link: https://twitter.com/jandisoft. But mobile devices have little memory, especially iOS on the iPhone 6 and older (including the 6, not the 6s). Our game draws PNG and JPG images with WebGL, but that needs a lot of memory. Are there any tips to reduce memory use? I'm considering compressed textures like ETC1, PVR, etc., but they aren't easy to manage across multiple platforms. If you have any tips, please tell me. (A compressed-texture detection sketch appears after these results.)
  4. Hey. I was planning to do something big with Babylon, but realized that Chrome has some problems using all of the computer's resources and may lag twice as much as a normal application, so I wanted to know how to modify the render distance (if that's possible). Thanks. (A far-clipping-plane sketch appears after these results.)
  5. elessar.perm

    GPU Computing

    Hi guys! I'm working on a realistic ocean simulation for a browser game at the moment. The best-known way to simulate ocean waves is Jerry Tessendorf's method with a statistical model. I won't paste any formulas here for simplicity, so here is the core problem: the calculations are expensive and I don't want to compute the water heightmap on the CPU in the browser, because the algorithm parallelizes very well and the GPU can compute the grid much faster. Is there any way to use GPU computing from babylon.js? I'm thinking about using a shader with a texture renderTarget to generate the heightmap, then using the results in the physics simulation in JavaScript and passing them to the shader material for rendering the water surface. Is it worth doing or not? Can anyone suggest any other methods? Thanks! (A render-to-texture sketch appears after these results.)
  6. Hi.

    Background: I have a Sandy Bridge based Windows 7 laptop that has two GPUs: a dedicated nVidia GPU and an integrated Intel HD GPU. If I've understood correctly, Sandy Bridge is close to an SoC-style architecture and the Intel GPU sits on the same chip as the CPU. The laptop uses the Intel GPU for all tasks that are not considered graphically intensive, and the nVidia only kicks in when gaming etc. The idea behind this is to save energy, and it does a brilliant job.

    The problem: By default, web browsers are considered non-graphically-intensive apps. That creates one massive problem with Phaser (and probably with other GPU-rendered web content as well): overheating. Performance is not the issue; CPU load is usually below 20% and FPS stays easily at 60 with the Intel GPU, but probably because of the architecture the temperature of the whole chip, CPU included, slowly climbs, and the system cannot handle this very well. After some minutes at 80 degrees Celsius the system goes into some kind of limp mode. I don't really know what happens, but I'm guessing the GPU clocks are dropped and/or rendering is moved to the CPU, because CPU load jumps to 70-100% and FPS drops below 10. You basically have to close the web page and continue after a while, but of course it then does the same thing again. I can reproduce this with the Phaser examples as well. A quick workaround is to force the system to use the nVidia GPU for the browser, but that isn't a real solution; there are zillions of Sandy Bridge computers out there with non-techy users.

    The question: Are there ways to restrict GPU usage when it's not necessary? E.g. on menu screens and so on there's no need to keep drawing at 60fps when nothing is moving. All other ideas are welcome too; this is really quite a big issue for us :/ Thanks a lot in advance! (A render-on-demand sketch appears after these results.)
  7. BasomtiKombucha

    Can we dispose of an asset?

    Hi! So here's something that has been bothering me for a while... Can we somehow "unload" textures / texture atlases / assets? I'm working on a game that has multiple levels. At the start of each level, I preload all of the assets the level requires using the AssetLoader. So at the start of the first level I have something like:

    loader = new PIXI.AssetLoader(["level1_assets.json"]);
    loader.onComplete = startLevel;
    loader.load();

    While at the start of the second level I have something like:

    loader = new PIXI.AssetLoader(["level2_assets.json"]);
    loader.onComplete = startLevel;
    loader.load();

    The point is, once the first level is over, I will never again need the texture atlas used to store its assets ("level1_assets.json"), so there's no need for it to linger in my precious GPU memory anymore. Can I somehow dispose of it? (A texture-disposal sketch appears after these results.)
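
Regarding result 1 (movement speed tied to the client's GPU): the usual cause is updating positions by a fixed amount per rendered frame, so slower clients move slower. A common fix is to scale movement by the time elapsed between frames. A minimal sketch in plain JavaScript, where the player object and SPEED value are illustrative, not from the original post:

    // Move at SPEED pixels per second regardless of how often frames are drawn.
    var SPEED = 200;                      // pixels per second (illustrative value)
    var lastTime = performance.now();

    function update(now) {
      var dt = (now - lastTime) / 1000;   // seconds since the last frame
      lastTime = now;

      // Scaling by dt means a slow client (fewer frames) still covers the
      // same distance per second as a fast one.
      player.x += player.dirX * SPEED * dt;
      player.y += player.dirY * SPEED * dt;

      requestAnimationFrame(update);
    }
    requestAnimationFrame(update);

For a multiplayer game where equal speed matters, the more robust approach is to simulate movement on the server at a fixed tick rate and only interpolate on the client, so no client's frame rate can change anyone's speed.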
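
Regarding result 2 (uploading textures right after loading): pre-uploading during the load screen trades a slightly longer load for no in-game stall. A sketch of hooking this into the loader callback, assuming the Pixi v4 prepare plugin (the exact API differs between Pixi versions, and the asset names and startGame entry point are illustrative):

    var renderer = PIXI.autoDetectRenderer(800, 600);

    PIXI.loader
      .add("level1", "level1_assets.json")          // illustrative asset
      .load(function (loader, resources) {
        // Wrap the loaded textures in a throwaway container and hand it to the
        // prepare plugin, which uploads them to the GPU before gameplay starts.
        var warmup = new PIXI.Container();
        Object.keys(resources).forEach(function (key) {
          if (resources[key].texture) {
            warmup.addChild(new PIXI.Sprite(resources[key].texture));
          }
        });
        renderer.plugins.prepare.upload(warmup, function () {
          startGame();   // hypothetical entry point; textures are already resident
        });
      });

The main cost of doing this for every texture is a longer load screen and GPU memory held for textures that may never be drawn, so it is usually done per level or per scene rather than for the whole game at once.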
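
Regarding result 3 (GPU memory on mobile): compressed textures help precisely because they stay compressed in GPU memory, unlike PNG/JPG, which are decoded to full RGBA before upload. Since each platform supports different formats, the usual pattern is to ask WebGL which extensions are available and pick the matching asset. A minimal detection sketch (the format-to-file mapping is illustrative):

    // Ask the WebGL context which compressed texture formats this device can decode.
    var canvas = document.createElement("canvas");
    var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");

    function pickTextureFormat() {
      if (gl.getExtension("WEBGL_compressed_texture_pvrtc")) {
        return "pvr";    // iOS / PowerVR devices
      }
      if (gl.getExtension("WEBGL_compressed_texture_etc1")) {
        return "etc1";   // most Android devices
      }
      if (gl.getExtension("WEBGL_compressed_texture_s3tc")) {
        return "dds";    // desktop GPUs
      }
      return "png";      // fall back to uncompressed images
    }

    // e.g. request "atlas.pvr", "atlas.etc1.ktx" or "atlas.png" depending on the result.
    var format = pickTextureFormat();

On some older browsers these extensions are exposed with a vendor prefix (e.g. a WEBKIT_ prefix on older iOS Safari), so a production check would also test the prefixed names.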
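
Regarding result 4 (render distance in Babylon.js): the camera's far clipping plane controls how far the scene is drawn, so lowering it is the usual way to cut render distance, and rendering at a lower internal resolution also reduces GPU load. A minimal sketch with illustrative values:

    // Shorten the far clipping plane so distant meshes are no longer rendered.
    var camera = new BABYLON.FreeCamera("camera", new BABYLON.Vector3(0, 5, -10), scene);
    camera.minZ = 0.1;   // near plane
    camera.maxZ = 500;   // far plane: nothing beyond 500 units is drawn

    // Optionally render at a lower internal resolution as well
    // (1 = native, 2 = half resolution in each dimension).
    engine.setHardwareScalingLevel(2);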
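
Regarding result 5 (GPU computing for the ocean heightmap): yes, render-to-texture is the standard way to do this kind of computation in WebGL, and Babylon.js exposes it conveniently through procedural textures, whose fragment shader runs once per texel. A rough sketch assuming the CustomProceduralTexture / ShadersStore route; the shader body is a placeholder wave rather than a Tessendorf implementation, and waterMaterial is assumed to be a ShaderMaterial:

    // Fragment shader that writes one heightmap texel per fragment.
    // The real Tessendorf spectrum / FFT math would go here.
    BABYLON.Effect.ShadersStore["oceanHeightPixelShader"] =
      "precision highp float;" +
      "varying vec2 vUV;" +
      "uniform float time;" +
      "void main(void) {" +
      "  float h = sin((vUV.x + vUV.y) * 40.0 + time);" +   // placeholder wave
      "  gl_FragColor = vec4(h, 0.0, 0.0, 1.0);" +
      "}";

    // A 512x512 heightmap recomputed on the GPU.
    var heightmap = new BABYLON.CustomProceduralTexture("heightmap", "oceanHeight", 512, scene);

    scene.registerBeforeRender(function () {
      heightmap.setFloat("time", performance.now() / 1000);
    });

    // Sample the result in the water material...
    waterMaterial.setTexture("heightSampler", heightmap);

    // ...and read it back to JavaScript only if the physics really needs it,
    // because readPixels() stalls the GPU pipeline:
    // var data = heightmap.readPixels();

Whether it is worth it mostly depends on the read-back: sampling the heightmap in the water shader is cheap, but pulling the full grid back to the CPU every frame can easily cost more than it saves.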
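
Regarding result 6 (overheating on Sandy Bridge laptops): the usual mitigation is to stop requesting 60fps when nothing is changing, either by rendering only when something is marked dirty or by capping the frame rate on static screens. A framework-agnostic sketch in plain JavaScript (with Phaser the same idea would be applied through its own loop, e.g. by pausing the game on menus); the render() call and cap value are illustrative:

    var needsRedraw = true;           // set to true whenever something actually changes
    var MIN_FRAME_MS = 1000 / 30;     // cap static screens at ~30fps (illustrative)
    var lastDraw = 0;

    function loop(now) {
      requestAnimationFrame(loop);

      // Skip the frame if nothing changed and we drew recently:
      // the GPU stays idle, which keeps the chip temperature down.
      if (!needsRedraw && now - lastDraw < MIN_FRAME_MS) {
        return;
      }

      lastDraw = now;
      needsRedraw = false;
      render();                       // hypothetical draw call, e.g. renderer.render(stage)
    }
    requestAnimationFrame(loop);

    // Elsewhere: mark the scene dirty when input or an animation changes something.
    // needsRedraw = true;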
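
Regarding result 7 (disposing of level assets in Pixi): yes, textures can be destroyed once a level is over, which frees the GPU memory backing them. The exact names differ between Pixi versions (the AssetLoader in the post is from an older Pixi than today's loader), but in Pixi 3/4-style code it looks roughly like this, with level1Textures an illustrative list of frame names from level1_assets.json:

    // Frame names that only level 1 used (illustrative).
    var level1Textures = ["level1_atlas.png", "enemy1", "enemy2", "tileset1"];

    function unloadLevel1() {
      level1Textures.forEach(function (name) {
        var texture = PIXI.utils.TextureCache[name];
        if (texture) {
          // destroy(true) also destroys the base texture, which is what actually
          // frees the pixels on the GPU; destroy() alone keeps the base texture alive.
          texture.destroy(true);
          delete PIXI.utils.TextureCache[name];
        }
      });
    }

Make sure no sprite on the stage still references these textures first, and remember that all frames sharing one atlas base texture lose their pixels as soon as any of them is destroyed with destroy(true).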