I'd like to share some positive results I've gotten while trying to reduce the general overhead of instanced meshes. The main reason for posting now is that I probably won't be able to finish the work; hopefully this will come in handy for someone else.

Code changes: https://github.com/ncoder/Babylon.js/commit/c123fcec5ddccf82c406bc4c4d81c73aabcca0d6

The attached files show a reduction in memory usage of instanced meshes from 25 MB down to 6 MB. With these improvements I've been able to substantially increase the maximum number of instances, where previously I was simply running out of memory in the browser. (FYI: I went up to 200k instances.)

The key point to look at is how I moved all the members with "immutable" types (numbers, booleans, strings) in Node and AbstractMesh onto the prototypes, so that they don't have to be repeated in each instance unless they are actually changed. This is, IMHO, one of the genuinely interesting properties of JavaScript. The same trick cannot be done safely for mutable types, so I added a "lite" parameter, turned on only for instanced meshes, to reduce memory for those as well. A rough sketch of the idea is at the end of this post.

It's possible there is a good reason why things weren't done this way, which is why I'm opening a discussion here. I'm not suggesting this specific implementation for production; it is pretty much hacked together at the moment. But I do believe that the general philosophy of only paying for the features you use is a good one. It would be better if the responsibilities of the objects were more composable; we could make greater use of interfaces and mixins, for things like collisions, for example.

Then, of course, there is the matter of improving the performance of the mesh collection phase in general, as well as the setting of the position vertex attributes. For my application, the existing approach of checking visibility every frame is not workable. I'll also need more fundamental batch/buffer generation and maintenance that persists over multiple frames, instead of regenerating the position buffers every frame as is done here; there is a second sketch of that direction below.

Let me propose that all visibility-culling and buffer-generation results should persist across frames, since in most applications they will be similar from one frame to the next. That's an easy argument to make for buffer generation, but a harder one for visibility culling. Thoughts?
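To illustrate the prototype trick and the "lite" parameter: this is only a minimal sketch of the idea, with illustrative names rather than the actual Node/AbstractMesh members (the real change is in the commit linked above).

    // Defaults with immutable types (numbers, booleans, strings) live on the
    // prototype, so an instance stores no own copy until the value is changed.
    function LiteMesh(name, lite) {
        this.name = name;                       // always per-instance
        if (!lite) {
            // Mutable defaults can't be shared on the prototype, because
            // mutating them in place would affect every instance. In "lite"
            // mode (used for instanced meshes) we skip allocating them.
            this.localPosition = new Float32Array(3);
        }
    }

    // Shared immutable defaults: assigning to one of these on an instance
    // creates an own property and leaves the prototype value untouched.
    LiteMesh.prototype.visibility = 1.0;
    LiteMesh.prototype.isPickable = true;
    LiteMesh.prototype.billboardMode = 0;

    var a = new LiteMesh("a", true);
    var b = new LiteMesh("b", true);
    a.visibility = 0.5; // 'a' now owns visibility; 'b' still reads 1.0 from the prototype

Each instance that keeps the defaults costs only its own small object plus the properties that were actually assigned, which is where the bulk of the per-instance savings comes from.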
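And here is a minimal sketch of what I mean by culling and buffer-generation results that persist across frames. The names and the dirty-flag policy are illustrative assumptions on my part, not Babylon.js API.

    // A batch keeps last frame's visible set and world-matrix buffer, and only
    // rebuilds when something is known to have changed.
    class InstanceBatch {
        constructor(capacity) {
            this.matrices = new Float32Array(capacity * 16); // 16 floats per world matrix
            this.count = 0;
            this.dirty = true; // set when instances move or the camera crosses a coarse cell
        }

        rebuildIfNeeded(instances, isVisible) {
            if (!this.dirty) {
                return false;                    // reuse last frame's result
            }
            this.count = 0;
            for (const instance of instances) {
                if (!isVisible(instance)) {
                    continue;
                }
                // worldMatrix is assumed here to be a 16-element Float32Array
                this.matrices.set(instance.worldMatrix, this.count * 16);
                this.count++;
            }
            this.dirty = false;
            return true;                         // caller should re-upload the buffer
        }
    }

The render loop would call rebuildIfNeeded once per frame and only re-upload the matrices to the GPU when it returns true, instead of regenerating the buffer unconditionally every frame.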