Everything posted by timetocode

  1. If the tilemap isn't mutable (or if only a few layers are mutable) one can also get away with:
     • keeping the map data in some simple form, like an array per layer containing ints for tile type
     • generating megatiles by drawing sections of the map (e.g. 16x16 tiles) onto a render texture
     • adding the megatiles to the game instead of individual tiles
     • (maybe) hiding megatiles that are offscreen (if your player only sees a small subsection of a much larger world)
     Baking a large number of tiles into a megatile takes a little bit of time (milliseconds, usually), but baking an entire map, depending on its size, can take < 1s on a gaming rig and multiple seconds on a chromebook -- so just keep in mind that while this may help reach and maintain maximum frames per second, it does so at the expense of a potentially lengthy operation on slower machines as the game starts up. If you need to generate these megatiles on the fly as the player moves around (which would not be the case for 5000 sprites, but might be the case for a super big map) then it becomes prohibitive, as it makes the game choppy unless one can bake a megatile without freezing the game for a few frames. I've got a few games that work this way. Generally speaking they're able to get 60 fps on a low-end chromebook, and this approach added ~4-12 seconds of 'bake time' on low-end devices for a map that was ~150,000 sprites and resulted in 8096x8096 pixels worth of render texture (baked into smaller textures like 1024x1024).
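The bookkeeping behind the megatile approach is mostly index math. A minimal sketch, assuming 16x16-tile megatiles of 16px square tiles (the sizes, function names, and the axis-aligned viewport test are illustrative, not from the post):

```javascript
// Which megatile contains a given tile, and whether a megatile is on screen.
// Assumes 16x16-tile megatiles with 16px square tiles (256px per megatile).
const TILES_PER_MEGA = 16
const TILE_PX = 16
const MEGA_PX = TILES_PER_MEGA * TILE_PX // 256

// map a tile coordinate to the megatile that contains it
function megatileFor(tileX, tileY) {
    return {
        megaX: Math.floor(tileX / TILES_PER_MEGA),
        megaY: Math.floor(tileY / TILES_PER_MEGA)
    }
}

// axis-aligned overlap test: should this megatile be visible for a camera view?
function megatileOnScreen(megaX, megaY, viewX, viewY, viewW, viewH) {
    const x = megaX * MEGA_PX
    const y = megaY * MEGA_PX
    return x < viewX + viewW && x + MEGA_PX > viewX &&
           y < viewY + viewH && y + MEGA_PX > viewY
}
```

In a Pixi version, each visible megatile would presumably be one sprite whose texture was baked once with `renderer.render(tileContainer, renderTexture)`, and offscreen megatiles would get `visible = false`.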
  2. Nevermind! The bug was that indices are an Int32Array and I had turned them into Float32s.
  3. Thanks @bubamara and @ivan.popelyshev I've been able to mutate my mesh a bit now. I'm stuck trying to change the indices, any idea what I'm doing wrong?

```js
const vertBuffer = geometry.getBuffer('aVertexPosition')
vertBuffer.update(new Float32Array(vertices)) // works

const colorBuffer = geometry.getBuffer('aVertexColor')
colorBuffer.update(new Float32Array(colors)) // works

const indexBuffer = geometry.getIndex()
indexBuffer.update(new Float32Array(indices)) // makes my mesh disappear
```

Everything except changing the indices seems to work. If I change the indices my object vanishes. The `vertices`, `colors` and `indices` aren't actually changing -- I'm just creating the same circle over and over again and trying to update the buffers, so maybe I have a more basic issue that is making my shape disappear. Here is the shape that I'm drawing (black lines and green points added for debugging; the mesh is the whole lighting overlay of a radial gradient within the circle and shadows in the other triangles).
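As the "Nevermind!" post above explains, the disappearing mesh came down to the index type: index data must stay an integer typed array, never a Float32Array. A small helper (hypothetical, just to illustrate the point) that builds quad indices in an integer typed array:

```javascript
// Index buffers need an integer typed array (e.g. Uint16Array), not Float32Array.
// quadIndices is a hypothetical helper: 4 vertices per quad, two triangles,
// 6 indices per quad.
function quadIndices(quadCount) {
    const indices = new Uint16Array(quadCount * 6)
    for (let i = 0; i < quadCount; i++) {
        const v = i * 4 // first vertex of this quad
        const o = i * 6 // write offset into the index array
        indices[o + 0] = v
        indices[o + 1] = v + 1
        indices[o + 2] = v + 2
        indices[o + 3] = v
        indices[o + 4] = v + 2
        indices[o + 5] = v + 3
    }
    return indices
}
```

Updating would then presumably look like `geometry.getIndex().update(quadIndices(64))` rather than wrapping the indices in a Float32Array.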
  4. I'm trying to create a mesh that consists of a few dozen triangles that are going to change every frame (hopefully performs okay....) How do I actually do that though?

Create mesh:

```js
const shader = Shader.from(vertexSrc, fragSrc)
const geometry = new Geometry()
    .addAttribute('aVertexPosition', [initialVerts])
    .addAttribute('aVertexColor', [initialColors])
const mesh = new Mesh(geometry, shader, null, DRAW_MODES.TRIANGLES)
```

Attempt at changing aVertexPosition and aVertexColor each frame:

```js
mesh.geometry.addAttribute('aVertexPosition', newVertices)
mesh.geometry.addAttribute('aVertexColor', newColors)
```

Error: Cannot read property 'updateID' of undefined; originating from GeometrySystem.updateBuffers.
  5. Okay so I only ended up making two fully functioning implementations, though I did go down a few paths just to benchmark varying techniques. I have not yet done any techniques with instancing (I've only been doing webgl for a few days and need to study some more). Hopefully someone finds this useful.

A few details about the underlying map data before getting into the specific techniques. The game is networked (nengi.js) and sends chunks to the game client as the player nears them (like minecraft more or less, but just 2D). Each chunk is an array of tiles; it happens to be a 1D array with some math to turn it back into 2D, but this detail probably doesn't matter. The map as a whole is a sparse collection of chunks, which also doesn't matter too much but means that the chunks can be generated one at a time -- the whole map doesn't have to be filled in and exist when the game starts. The clientside chunk graphics generator, for both techniques below, would only generate one chunk per frame and would queue them so as to avoid ever generating too many graphics in a single frame and experiencing a noticeable hitch (sounds fancier than it is).

Let's say that the chunks are 8x8 tiles, and each tile is 16x16 pixels (I tested many variants). The network data for receiving a chunk then contains the chunk coordinates and 64 bytes. If not using a network or using different dimensions then this would vary, but I'm going to stick with these numbers for the examples. They were also benchmarked on two computers which I will call the chromebook (acer spin 11, Intel HD 500) and the gaming rig (ryzen 1700, 1070 ti).

The first experiment uses render textures. It receives the 64 tiles, creates 64 sprites according to the tile types, and then takes the whole thing and bakes it into a single texture. That chunk sprite is then positioned as needed (probably at x = chunkX * chunkWidthInPixels, etc. for y). On the gaming rig many varieties of chunk and tile sizes and multiple layers of tiles could be baked without any hitches. The chromebook was eventually stable at 8x8 chunks with 3 layers of tiles, but anything bigger than that was producing notable hitches while generating the chunk graphics. It is also worth mentioning that the above technique *minus baking the tiles* is probably what everyone makes first -- it's just rendering every single tile as a sprite without any optimization beyond what pixi does by default. On a fast computer this was actually fine as is! Where this one runs into trouble is just the regular rendering time on the chromebook-level device... it's simply too many sprites to keep scrolling around every frame.

The second experiment was to produce a single mesh per chunk with vertices and uvs for 64 tiles. The geometry is created after receiving the network data, and the tiles in the mesh are mapped to their textures in a texture atlas. I feel like, as far as webgl options go, this one was relatively simple. The performance on this was roughly 4-6x faster than the render texture scenario (which already worked on a chromebook, barely), so altogether I was happy with this option. I was reading that there would be issues with the wrapping mode and a need to create gutters in the texture atlas to remove artifacts from the edges of the tiles as they rendered, and I'm not sure if I will need to address these later. My technique for now was to make sure the pixi container's coordinates were always integers (Math.floor), and this removed the artifacts (stripes mostly) that appeared at the edge of the tiles due to their texture sampling.

That's all I've tried so far, but I'm pretty satisfied with both the render texture technique and the webgl technique. I'll probably stick to the mesh+webgl version as I'm trying to use webgl more in general.
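The second experiment's per-chunk geometry boils down to emitting positions and uvs per tile. A sketch under stated assumptions: an 8x8 chunk of 16px tiles, a horizontal-strip atlas where tile type n occupies pixels [n*16, n*16+16) in x, and two triangles per tile; all names and the atlas layout are illustrative, not from the post:

```javascript
// Build flat vertex and uv arrays for one 8x8 chunk of 16px tiles.
// Assumes a 256x16 horizontal strip atlas indexed by tile type.
const CHUNK = 8, TILE = 16, ATLAS_W = 256

function chunkGeometryData(tiles /* 1D array of 64 tile type ints */) {
    const verts = []
    const uvs = []
    for (let i = 0; i < tiles.length; i++) {
        const tx = (i % CHUNK) * TILE              // tile origin in chunk pixels
        const ty = Math.floor(i / CHUNK) * TILE
        const u0 = (tiles[i] * TILE) / ATLAS_W     // atlas column for this type
        const u1 = (tiles[i] * TILE + TILE) / ATLAS_W
        // two triangles per tile, 6 vertices
        verts.push(tx, ty, tx + TILE, ty, tx + TILE, ty + TILE,
                   tx, ty, tx + TILE, ty + TILE, tx, ty + TILE)
        uvs.push(u0, 0, u1, 0, u1, 1,
                 u0, 0, u1, 1, u0, 1)
    }
    return { verts: new Float32Array(verts), uvs: new Float32Array(uvs) }
}
```

The arrays would then feed something like `new PIXI.Geometry().addAttribute('aVertexPosition', verts).addAttribute('aUvs', uvs)`, with the chunk mesh positioned at chunkX * chunkWidthInPixels as described above.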
  6. I think I'll try making one of each of those for the webgl practice. Thanks Ivan!
  7. I was testing a little more, and while creating a new shader every time is a bit expensive, creating new geometry from the same vertices + uvs is cheap, and then I can put the data in an attribute on that geometry instead of in a uniform. I'm new to webgl so I'm not sure if that is the right way, but it seems promising.
  8. Hello! Loving v5, it's great! I've been trying to render an extremely large map of tiles in smaller chunks that load in as the player moves around. I've done this with sprites and a render texture per chunk and it works fine, but now to learn some webgl I'm porting the project to use pixi Mesh + Shader. I've gotten to the point where I have the vertices and uv coords for a 16x16 chunk of tiles (skipping the code, nothing special). I then create a mesh per chunk of tiles and position them around. Code looks like this:

```js
const shader = PIXI.Shader.from(vertexSrc, fragmentSrc, uniforms);
const chunk = new PIXI.Mesh(geometry, shader);
// etc for many chunks, and then I just change chunk.x and chunk.y
```

Now what I'm trying to do next is actually show different textures for each tile within each chunk, for which I'm using a collection of tileTypes which is either going to be an array or texture uniform with the data for the 256 tiles that comprise the 16x16 chunk. I hope that makes sense. In any case, because all of the chunks have the same geometry and the same shader, if I change `chunk.shader.uniforms.tileType` it changes all of the chunks at the same time. If I create a new shader for each chunk so they each have a unique uniforms object, it ends up being an expensive operation which creates a visual hitch. I could probably create a pool of meshes and shaders and reuse them such that I wouldn't have to actually create new shader objects at run time as chunks load into the game, but before going down that path I wanted to know if there was an easier way. Can I somehow create meshes that share the geometry, vertexShader, and fragmentShader, but have a *unique* uniform per instance of the mesh? Thanks
  9. GC in javascript is frequently undergoing changes, but for the last few years (if this hasn't just changed...) one of the main problems is that it isn't occurring "in the background when the program is idle" like most people assume. In truth the biggest GC hit occurs while creating a new object -- I know that sounds unbelievably bad, but unless it has recently changed that's what is happening. The reason GC is invoked while creating a new object is that the trigger for GC is based on trying to allocate new memory but discovering too much memory is already used. So right in the middle of important game code it'll periodically perform a round of garbage collection, and then finish creating that new object. This is how one can end up with relatively low CPU usage but still have GC-related hitches... the GC is happening at basically the worst time, every time.

That detail of GC may change in the future, but if experiencing GC problems here are the two tricks I use most frequently. First, as @ivan.popelyshev noted, a local variable can prevent the GC. We can take a loop that used to create a new Point() (or several) as part of its internal logic; if that object is declared outside of the loop and has its values changed within the loop, then it'll only be created once in total. The second is pooling... but it is important to really understand pooling's benefit, else we simply re-invent the GC lag spikes. If a pool dynamically grows and shrinks much, then the GC issue will be very similar. Also, pools in the past were used to speed up object creation, which I'm just going to say is rarely a benefit nowadays. The best pools for mitigating GC are pools that never (or very rarely) free anything at all. For the things that actually benefit from pooling, such as projectiles in a bullet-hell game, or particles, this is a somewhat reasonable feature. Just create 10,000 particles (or w/e) and recycle them as needed.
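The "never frees anything" pool described above can be sketched in a few lines (the ParticlePool name and factory shape are illustrative, not from the post):

```javascript
// A pool that allocates everything up front and never deallocates, per the
// "best pools rarely free anything" point above. When exhausted it returns
// null rather than growing, so the GC never sees churn.
class ParticlePool {
    constructor(size, factory) {
        this.free = []
        for (let i = 0; i < size; i++) this.free.push(factory())
    }
    acquire() {
        // caller must handle null (pool exhausted) instead of allocating more
        return this.free.length ? this.free.pop() : null
    }
    release(p) {
        this.free.push(p)
    }
}
```

Usage would be something like `const pool = new ParticlePool(10000, () => new Particle())`, acquiring on spawn and releasing when the particle expires.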
  10. Some good stuff, and yea the canvas-punch-out style lighting is nearly identical to what I have. I'll try and learn enough about meshes to make the same thing out of a pixi mesh instead of out of canvas + compositing triangles. I already have all of the triangles, and a mesh is just indices and uvs...how hard could it be right??? Famous last words.. That shadertoy lighting is sooo nice looking lol. I'm not sure if I could legitimately go down that path as the triangles for me are generated in javascript and used within the game logic (not just for rendering). Though that looks so nice it is tempting to try to figure out an alternative.
  11. On a visual level pixi-shadows is very similar; the main issue is the performance. I'm definitely doing a form of raycasting, but with heavy optimizations, as I'm targeting integrated gpus and mid-level chromebook cpus. I've already got a huge speed increase (maybe 1000x) via an algo that generates triangles by casting rays from the center of the player to the known vertices of obstacles in a spatially-culled sub-section of the whole game world. Here's what the triangles look like:

The challenge is using these triangles to get the right visual result in a way that is also fast. The triangles unfortunately are the visible area as opposed to being the shaded area, which in the past meant that instead of drawing the triangles directly onto the game, I ended up subtracting the triangles from a darkened overlay to create this effect: Here it is a little fancier with a dithered look, and also some shadows along the edge of walls generated a different way:

Everything above was made as a hybrid of canvas compositing operations generating textures that were then combined with pixi, and it ran quickly but created too much GC pressure due to generating textures every frame. Maybe I should try a mesh...? Or manual addition of triangles via webgl? I need to draw 10-50 triangles per frame, dynamically, and they are potentially entirely different from one frame to the next. I also need a sane way to place these triangles in the scene such that they line up with the obstacles in a game with a giant world and a moving camera. I have no idea how to line up the vertices of a mesh in gpu coordinates with the x,y coordinates of the stuff in my pixi containers. It would also be ideal if the triangles could be used for masking or reverse-masking (e.g. an object that is partially covered by the shadow is partially cropped by the shadow), but I could live without that as I do this manually at the moment. I'm not shy about math or learning a bit of webgl, so I am open to suggestions.
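On lining up mesh vertices with container coordinates: if the mesh is a child of the same container as the obstacles, vertex positions can simply be world x,y and the container's transform handles the camera; otherwise the world points get offset into the container's local space. A sketch of the flattening step (names and the origin convention are assumptions, not from the post):

```javascript
// Flatten world-space triangles into a Float32Array of vertex positions
// relative to a container origin. Each triangle is three {x, y} points.
// If the mesh lives inside the world container, pass origin (0, 0) so
// local coordinates equal world coordinates.
function trianglesToVertices(triangles, originX = 0, originY = 0) {
    const verts = new Float32Array(triangles.length * 6)
    triangles.forEach((tri, i) => {
        tri.forEach((p, j) => {
            verts[i * 6 + j * 2] = p.x - originX
            verts[i * 6 + j * 2 + 1] = p.y - originY
        })
    })
    return verts
}
```

Each frame the result could be pushed into the mesh with something like `mesh.geometry.getBuffer('aVertexPosition').update(trianglesToVertices(tris))`, so the 10-50 triangles change without allocating new geometry.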
  12. Congrats on v5 you awesome people! So a long time ago I was working on a top down 2D game that had line of sight, but I ran into performance problems that sound like they can be more easily solved in v5. For some reason the forum is not letting me upload images so here are some links. Screenshots: https://timetocode.tumblr.com/post/164071887541/much-progress-screenshots-show-in-game-terrain Video of line of sight: https://timetocode.tumblr.com/post/163227386513/11-second-video-tree-blocking-line-of-sight It's just one of those pseudo-roguelike line of sight effects where you can't see around corners (real-time though!). It involved lots of triangles, and the algos were based on Amit Patel's https://www.redblobgames.com/articles/visibility/ To achieve this effect I created a canvas separate from pixi with the shadow color filling the screen, and then I would mask/subtract the triangles generated by the visibility algo. I would then take that resulting image with a little bit of blur and put it in a container that was rendered underneath the wall sprites. That was the fast version; I also made another version based on pixi graphics that had other issues. Even the fast version had all sorts of garbage collection problems, probably because it was creating a texture the size of the whole screen every frame and then discarding it. It sounds like v5 can probably handle this feature without generating a texture every frame. Could someone suggest a way to accomplish this in v5? I don't have to put the shadows under the wall sprites if that is particularly hard.. they could be on top of everything instead -- just whatever stands a chance of hitting 60 fps on a chromebook. TY TY
  13. Link: JigsawPuzzles.io - multiplayer cooperative jigsaw puzzles There are always a few ongoing public games on the homepage. Anyone can join these. If you want a more controlled experience, signing in allows the creation of private games. You can invite friends into your private games. Tech stuff: The game is made in a hybrid of canvas and Pixi v4. The multiplayer is node.js + websockets via nengi.js. The game is only online multiplayer, even when playing alone. I'm primarily a network programmer and the whole project began as a strange experiment of programming a casual puzzle game as if it were a shooter (movement prediction, lag compensation, reconciliation). I usually work on shooters. The hope was that the controls would come out feeling like the game was single player at up to 500 ms of latency and 5% packet loss. The project has changed a little since then (the servers are no longer 30 tick) but the essence remains. It uses 2K and 4K images depending on the puzzle size, which is kinda fun because of how much it is possible to zoom in/out. The big puzzles are primarily aimed at desktop users, but are decent on touch tablets. Mobile isn't supported yet... but it does kinda work on some phones just by chance. We've got a few crazy modes that we're considering for the future. For now the only mode is a cooperative recreation of the table top experience. Thank you for checking out the game! Here's a 1148 piece puzzle: Here's a smaller puzzle: What the catalog/setup page looks like (gotta sign in first, so that we can save progress): Social accounts for the game: https://www.facebook.com/JigsawPuzzles.io/ https://www.instagram.com/jigsawpuzzles.io/ https://twitter.com/JigsawPuzzlesIO https://www.reddit.com/r/jigsawpuzzlesio/ https://discordapp.com/invite/axT9bRw
  14. I do something just like this for a rewind + collision check feature. The trick is to use mesh.computeWorldMatrix(true) on the meshes that are being checked for collisions after being moved. This is the internal operation being performed by scene render that makes the collisions work (among other things, such as rendering). I think this is needed whenever moving and positioning objects in-between frames, esp if the object is being moved multiple times and checked for collisions before the next render. In the case of my rewind collision check, I never actually wanted to render the object in the rewound position -- I just needed the object to be in the correct state so that babylon could do the math for me. For me that meant copying its original position, moving it to a new position, invoking computeWorldMatrix, doing the intersection test, and then restoring it to the original position. While not identical, this does come out somewhat similar to the notion of moving an object (or a copy of that object) until it collides, uncolliding it, and then having it in the uncollided position all before actually rendering anything. Here's the PG w/o scene render in the collision checks: https://playground.babylonjs.com/#1UK40Z#8 If I understand what you have made, then this in theory will never turn the torus white.
  15. It looks awesome!! I took your advice and tried to make sure that babylon was the same version everywhere that I was using it. I'm not sure what I really did, but somewhere along the process of trying to change out 3.3 for 4, and then undoing it, little things started working. In the end I made a new custom deploy of babylon 4.0, and then I copied and pasted *just* the water material from that directly above my own code. This gave the water reflections, but no animation. I couldn't really figure out a way to get 4.0 running NullEngine (I'll just wait until it is released on npm, and then untangle things). As for the water animation, that part does make sense to me now, I was missing this: engine.runRenderLoop(() => {}) ^ my previous code only called scene.render(), which might explain why some pg code doesn't work the same for me. I'm not sure what invoking runRenderLoop with an empty function does, but I'm guessing it triggers some important core-level update logic..? In this case passing variables to the water shader, maybe. I also learned that I *can* setGlobalVolume if I do it inside of setTimeout -- I'm not sure what mess I've made, but I'm either accidentally overwriting the audioEngine or things aren't initialized quite when I think they are.
  16. Everything seems to work in the PG. Multiple times I've taken some seemingly encapsulated function out of a PG, pasted it into my game, and not had it do quite the same thing. I'm not sure how to reproduce it. I'm starting to think I may have a more fundamental issue, like perhaps I accidentally have multiple instances of BABYLON's global scope, engine, or scene. Maybe this is a hint: BABYLON.Engine.audioEngine.setGlobalVolume(0.1) does not change the volume of sounds in my game. I have to invoke setVolume on every sound individually to change volume. Does that point at any general type of issue? I'm using const BABYLON = require('babylonjs') in most files instead of an html script tag, as most of the code runs in node... I wonder if that is related.
  17. I must be doing something incorrect with the water, as it is rendering black: I pasted the code from this playground, and included the files for the waterbump.png and skybox: https://www.babylonjs-playground.com/#1SLLOJ#17 Here's my code, although it's very similar to the PG. My guess is the error isn't in this code, though I'm not sure what else would affect the water. It is as if it has nothing in its render list -- but I can definitely see the sky, and the sky is added to the list.

```js
this.canvasEle = document.getElementById('main-canvas')
this.engine = new BABYLON.Engine(this.canvasEle, true)
this.engine.enableOfflineSupport = false
this.engine.setHardwareScalingLevel(1)

this.scene = new BABYLON.Scene(this.engine)
//this.scene.freezeActiveMeshes()
//this.scene.collisionsEnabled = true
//this.scene.clearColor = new BABYLON.Color4(0.3, 0.5, 0.75, 1.0)
this.scene.fogMode = BABYLON.Scene.FOGMODE_EXP
this.scene.fogDensity = 0.0005

this.camera = new BABYLON.TargetCamera('camera', new BABYLON.Vector3(0, 0.5, 0), this.scene)
this.camera.fov = 0.8
this.camera.minZ = 0.1
//this.scene.autoClear = false
//this.scene.autoClearDepthAndStencil = false

var light = new BABYLON.DirectionalLight("dir01", new BABYLON.Vector3(0.66, -0.75, 1.25), this.scene)
light.position = new BABYLON.Vector3(1, 40, 10)
light.intensity = 0.5

var light3 = new BABYLON.DirectionalLight("dir01", new BABYLON.Vector3(-0.5, 0.75, -1.25), this.scene)
light3.position = new BABYLON.Vector3(1, 40, 1)
light3.intensity = 0.5

var light2 = new BABYLON.HemisphericLight('h', new BABYLON.Vector3(0, 1, 0), this.scene)
light2.intensity = 0.6

var skybox = BABYLON.Mesh.CreateBox("skyBox", 5000.0, this.scene)
var skyboxMaterial = new BABYLON.StandardMaterial("skyBox", this.scene)
skyboxMaterial.backFaceCulling = false
skyboxMaterial.reflectionTexture = new BABYLON.CubeTexture("images/TropicalSunnyDay", this.scene)
skyboxMaterial.reflectionTexture.coordinatesMode = BABYLON.Texture.SKYBOX_MODE
skyboxMaterial.diffuseColor = new BABYLON.Color3(0, 0, 0)
skyboxMaterial.specularColor = new BABYLON.Color3(0, 0, 0)
skyboxMaterial.disableLighting = true
skybox.material = skyboxMaterial

var waterMesh = BABYLON.Mesh.CreateGround("waterMesh", 2048, 2048, 16, this.scene, false)
var water = new BABYLON.WaterMaterial("water", this.scene, new BABYLON.Vector2(512, 512))
water.backFaceCulling = true
water.bumpTexture = new BABYLON.Texture("images/waterbump.png", this.scene)
water.windForce = -10
water.waveHeight = 1.7
water.bumpHeight = 0.1
water.windDirection = new BABYLON.Vector2(1, 1)
water.waterColor = new BABYLON.Color3(0, 0, 221 / 255)
water.colorBlendFactor = 0.0
water.addToRenderList(skybox)
waterMesh.material = water
```

Any ideas?
  18. Now that I have this reproduced, I think I'm realizing that this idea was never going to work due to the number of sounds. I did a little bit more testing with a 500 KB wav file vs a 15 KB mp3, and the degradation seems more related to the number of sounds than their size. In my game each weapon has like 5-20 sounds, and a player can hold a few weapons. The walk/run cycle has like 16 sounds per material (concrete, wood, grass, etc). It can add up to about ~100 possible sounds per player, though 99% of them aren't playing at any given point. BJS seems totally fine with that many sounds, but it looks like attachToMesh is not designed for this. I read the source code, and it looks like it *might* be viable if it would check whether the sound is playing before rebuilding the matrices. Currently it does some fairly expensive work, even for non-playing sounds (setPosition and computeWorldMatrix fill the profiler when stress tested with cloned sounds on moving meshes). I'm going to test positioning the sounds manually at the time that they are played, and not having them move along with the mesh. If that doesn't work I guess I'll pool them in addition.
Edit: definitely need a pool.
Edit #2: at 21 players firing the same automatic rifle, the pool brought the active number of sound instances from 210 down to 59.
Here's an ultra simple auto-expanding pool if anyone wants. Usage is just to use 'get' for short-lived sounds and it will handle allocation and releasing on its own. It never deallocs.
```js
class SoundPool {
    constructor() {
        this.scene = null
        this.sounds = {}
    }
    init(scene) {
        this.scene = scene
    }
    allocate(name) {
        const sound = BABYLON.Sound.FromAtlas(name, name, this.scene)
        sound.onEndedObservable.add(() => {
            this.release(sound)
        })
        this.sounds[name].push(sound)
    }
    get(name) {
        if (!this.sounds[name]) {
            this.sounds[name] = []
        }
        if (this.sounds[name].length === 0) {
            this.allocate(name)
        }
        return this.sounds[name].pop()
    }
    release(obj) {
        this.sounds[obj.name].push(obj)
    }
}

const singleton = new SoundPool()
module.exports = singleton
```

BABYLON.Sound.FromAtlas is a little wrapper that clones sounds without making additional xhrs (e.g. takes 'sounds/foo.mp3', clones it, and applies new options).
  19. I was finally able to reproduce it in a PG. https://www.babylonjs-playground.com/#8B9YRN#1 This is based off of the playground where violin music is playing inside of spheres. As the camera enters the sphere, the music becomes audible. In this playground I have one purple dome with music playing on loop, and then 100 purple domes that have a clone of the music linked to their mesh, but the music is intentionally not being played. The purple dome that is playing music is separate from all of the others. It is the only one playing music. If we keep increasing the number of silent domes (see the for loop around line ~45) eventually the sound will degrade, even though we're only playing a single wav on loop. I left the PG on 100 silent domes b/c I don't want to crash people's computers. Personally I have to increase this number to 1200(!!) before I get the sound problems that are occurring in my game. So to reproduce, change the count on line 45 and then walk into the isolated sphere. That 1200 is a big number but this problem happens in my game at around 300 sounds (even though 298 of the sounds aren't playing). These are just sounds that are attached to meshes and aren't playing, but they can get triggered later.
  20. This is probably a browser thing and not a babylon thing... but I'm getting this thing where the game sound effects degrade the more players I add to a game. I have a player character and I attach ~20 sounds to its mesh. It is a first person shooter and each gun has about that many sounds between gunshots, reloading, and a handful of variants of each. In the development version these are all wav files, which are far larger than whatever I'll convert them to later. As I load more players, the gun of the first player starts to sound worse and worse. The weird part is none of the other players are making any sound.. they're all holding still and not firing their guns. At first it just gets a little tinny or crunchy, but as the player count goes up the sound crunches so much that it eventually goes almost silent. It sounds like a game lagging very badly, but the FPS remains 120+. Any idea why this is happening? All that is occurring on the sound level is that I'm cloning more and more of the gun sounds; I'm not actually playing anything more than one player's worth of sounds. I tried to reproduce this on the babylon playground but even cloning the violin music 3000 times didn't change a thing -- it sounds the same no matter how many clones are around. Maybe I'm causing some memory problem in my audio hardware with the wavs...? The biggest is about 475 KB.
  21. Thanks Raggar, that looks good! I wonder if anyone would be interested in having this behavior added to the api. Or maybe just tacked on by including a file. The following is all pseudo code, but I will probably implement it for real on Monday.

Usage:

```js
BABYLON.SoundAtlas.add('filename.ext') // etc for all sounds
BABYLON.SoundAtlas.load(scene, callback) // starts loading

// using a sound from the atlas, while mimicking the original BJS api
const gunshot = BABYLON.Sound.FromAtlas('gunshot', 'filename.ext', scene, { volume: 1 })
```

Possible extension to babylon Sound:

```js
BABYLON.Sound.FromAtlas = (name, filepath, scene, options) => {
    const cachedSound = BABYLON.SoundAtlas.get(filepath)
    if (!cachedSound) {
        throw new Error('Sound not found in SoundAtlas')
    }
    const sound = cachedSound.clone()
    sound.name = name
    sound.updateOptions(options) // autoplay won't work without some minor modification
    return sound
}
```

Presumably this would not work with multiple scenes. Everything I've made is all single scene with programmatically spawned meshes, so I'm not sure what would have to change.
Sound atlas:

```js
/* Sound Atlas */
const BABYLON = require('babylonjs')

const basePath = './sounds/' // should be modifiable instead
const sounds = new Map()
const soundFilenames = []

const queueSounds = (assetsManager, scene) => {
    soundFilenames.forEach(soundFilename => {
        const url = `${basePath}${soundFilename}`
        const binaryTask = assetsManager.addBinaryFileTask(soundFilename, url)
        binaryTask.onSuccess = (task) => {
            const sound = new BABYLON.Sound(task.name, task.data, scene, null, {})
            sounds.set(task.name, sound)
        }
    })
}

const add = (soundFilename) => {
    soundFilenames.push(soundFilename)
}

const load = (scene, cb) => {
    const assetsManager = new BABYLON.AssetsManager(scene)
    queueSounds(assetsManager, scene)
    assetsManager.onFinish = () => {
        cb()
    }
    assetsManager.onTaskError = (task) => {
        console.log('error loading task', task)
    }
    assetsManager.load()
}

module.exports.add = add
module.exports.load = load
module.exports.get = (name) => sounds.get(name)
```

Haven't tried it yet, but that's the gist of it.
  22. Hello there. What is the intended method of playing the same sound multiple times *concurrently* without re-downloading the sound file from the server?

```js
const gunshot = new BABYLON.Sound('foo', "sounds/gun_semi_etc.wav", scene, null, { volume: 1 })
```

If I have multiple players or bots in my game, any of them can produce gunfire. As each of them shoots, I end up with a separate web request per entity: I'm accustomed to howler, where the first time a sound is created, like 'foo.wav', it will load it from the server. Any subsequent sound objects created that play 'foo.wav' will build themselves from the same sound data without an xhr. How do I accomplish the equivalent? TY TY
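The howler behavior described here is essentially a load-once cache keyed by url. A library-agnostic sketch of the idea, where `loadFn` and `cloneFn` are stand-ins for whatever the audio layer provides (e.g. an xhr for the file, and something like Babylon's `Sound#clone`):

```javascript
// Cache sound data by url: the first request loads it, later requests reuse
// the shared data and only build a cheap playable instance from it.
// loadFn(url) -> shared data; cloneFn(data) -> playable instance.
function createSoundCache(loadFn, cloneFn) {
    const cache = new Map()
    return function getSound(url) {
        if (!cache.has(url)) {
            cache.set(url, loadFn(url)) // the only place a request happens
        }
        return cloneFn(cache.get(url))
    }
}
```

With this shape, firing the same gunshot from twenty entities triggers one load and twenty clones instead of twenty requests.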
  23. Social logins work for identifying a player. The APIs typically provide a special id to store in your own database, and this id will be provided via the api each time the player comes back.

A single player type of game played in a browser means that the entire game's state is not in control of the developer. It is running on the player's computer, and they can alter the code and do whatever they want. Scores submitted by a game like this cannot ever be trusted. That doesn't stop developers from making plenty of games like this, but inherently there is no way to know that any submitted score is valid without doing something vastly more sophisticated. Usually people just periodically clean out their leaderboards of all the fake scores.

An example of something more sophisticated would be a game that submitted all of its player input. This is uncommon (maybe non-existent) in single player games, but for example a game like Tetris could be made where the game records every move the player makes. At the end of the round the game client could submit this long list of moves to the server, and the server could re-simulate the same game and generate the player's score. The player could still be a hacker submitting fraudulent moves, so the server would still have to validate some things... but this provides a basis for knowing whether submitted information is potentially valid, whereas receiving a json message like { playerId: 219, score: 9999999 } leaves very little to validate. All we can really do with that data is perhaps guess that they are a cheater. Meanwhile, to hack a game that submits a move list that then gets simulated on the server is very difficult -- it requires essentially writing a bot that can play the game like a human (but better). I'm not saying anyone should make a system like this, but this "deterministic simulation built from player input" is the element that prevents cheating in real time strategy games. A similar concept reduces the cheating in server-authoritative first person shooters (and is why aim bots exist).
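The re-simulation idea above can be sketched with a toy ruleset (the "game" here is a stand-in, not Tetris: a move is only legal if it is 1, 2, or 3, and the score is the sum of the moves):

```javascript
// Server-side validation by re-simulation: the client submits its move list,
// and the server replays the same deterministic rules to derive the score
// itself rather than trusting a claimed score.
function replayScore(moves) {
    let score = 0
    for (const move of moves) {
        if (move !== 1 && move !== 2 && move !== 3) {
            return null // illegal move: reject the whole submission
        }
        score += move
    }
    return score
}

// the server only accepts a submission whose recomputed score matches
function validateSubmission(moves, claimedScore) {
    const score = replayScore(moves)
    return score !== null && score === claimedScore
}
```

A real game would replay its actual rules (piece placement, gravity, line clears, etc.), but the trust boundary is the same: the score is an output of the server's simulation, never an input from the client.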
  24. Found mesh.flipFaces(false) which seems to fix all of the meshes I've had problems with so far. I also tried the full scene-wide flip (scene.useRightHandSystem = true) which worked as well and allowed me to not have any negative scales. I'm going back to left hand though, not sure if it is more intuitive or if I just have too much code that relies on it, but flipping a voxel collision system and maps was not so easy.
  25. On some objs that I import, I'm having to do foo.scaling = new BABYLON.Vector3(-1, 1, 1). For example this zombie needed the torso scaling set to (-1, 1, 1) before the left and right hands appeared on the correct sides of the body, among other details. The arms are parented to the torso, so I think that is how the scale is factoring into which side they appear on. The head is actually flipped incorrectly at the moment as well, but that doesn't really matter. I'm fine with scaling (-1, 1, 1) as a solution, but for some reason the gun now has its mesh inside out or something. So my question is two parts: 1) why is the gun inside out? The parent order (child to parent) is weapon -> leftHand -> lowerLeftArm -> upperLeftArm -> torso. In theory everything is inheriting its scale from the torso. Each body part is a separate mesh made the same way, yet only the weapon is inside out, not the torso or any of the arm parts (maybe I just have an error in my code, unless there is some reason that this would naturally be the case). 2) is there a babylon function to fix only an individual mesh? I searched the forum and found many pgs dating back up to 3 years, and some mention of including righthand vs lefthand modes in bjs... but I wasn't sure what the current-day method would be to fix just the weapon mesh below. It may be a little hard to see that it is in fact inside-out, so here's another picture of what the weapon is supposed to look like: I cannot use blender+babylon-exporter to make the full model at this point (some complex issues with networking bone animations exported from blender to babylon, a topic for another time). I am still using blender to pose my character as a visual reference before converting the rotations of each body part over to a custom animation system -- so this is perhaps where the conflicting notion of left/right arms is coming from originally. Thanks
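The inside-out look from a (-1, 1, 1) scale comes down to triangle winding: mirroring one axis reverses the vertex order of every face, so backface culling starts rejecting the faces that should be visible, which is what flipFaces compensates for. A tiny 2D illustration of the effect (the functions are illustrative, not Babylon API):

```javascript
// Signed area of a 2D triangle: positive means counter-clockwise winding.
// Mirroring one axis negates the signed area, i.e. reverses the winding,
// which is why a negatively scaled mesh renders "inside out" until its
// faces are flipped.
function signedArea(ax, ay, bx, by, cx, cy) {
    return 0.5 * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay))
}

// mirror a list of [x, y] points across the y axis (scale x by s)
function scaleX(points, s) {
    return points.map(([x, y]) => [x * s, y])
}
```

The same thing happens in 3D whenever the product of the inherited scale factors is negative, which may be why only the weapon (at the end of the longest parent chain) misbehaves if one link in that chain carries an extra flip.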