timetocode

Members
  • Content Count

    114
  • Joined

  • Last visited

  • Days Won

    2

timetocode last won the day on November 11

timetocode had the most liked content!

2 Followers

About timetocode

  • Rank
    Advanced Member

Contact Methods

  • Website URL
    http://timetocode.tumblr.com/
  • Twitter
    bennettor

Profile Information

  • Gender
    Not Telling
  • Location
    Seattle, WA
  • Interests
    game development, pixel art, procedural content generation, game music

  1. Okay, so I only ended up making two fully functioning implementations, though I did go down a few paths just to benchmark varying techniques. I have not yet done any techniques with instancing (I've only been doing webgl for a few days and need to study some more). Hopefully someone finds this useful.

     A few details about the underlying map data before getting into the specific techniques. The game is networked (nengi.js) and sends chunks to the game client as the player nears them (like minecraft more or less, but just 2D). Each chunk is an array of tiles; it happens to be a 1D array with some math to turn it back into 2D, but this detail probably doesn't matter. The map as a whole is a sparse collection of chunks, which also doesn't matter too much but means that the chunks can be generated one at a time -- the whole map doesn't have to be filled in and exist when the game starts. The clientside chunk graphics generator, for both techniques below, would only generate one chunk per frame and would queue the rest, so as to avoid ever generating too many graphics in a single frame and producing a noticeable hitch (sounds fancier than it is).

     Let's say that the chunks are 8x8 tiles, and each tile is 16x16 pixels (I tested many variants). The network data for receiving a chunk then contains the chunk coordinates and 64 bytes. If not using a network, or using different dimensions, this would vary, but I'm going to stick with these numbers for the examples. Everything was benchmarked on two computers, which I will call the chromebook (acer spin 11, Intel HD 500) and the gaming rig (ryzen 1700, 1070 ti).

     The first experiment uses render textures. It receives the 64 tiles, creates 64 sprites according to the tile types, and then bakes the whole thing into a single texture. That chunk sprite is then positioned as needed (probably at x = chunkX * chunkWidthInPixels, etc. for y). On the gaming rig many varieties of chunk and tile sizes and multiple layers of tiles could be baked without any hitches. The chromebook was eventually stable at 8x8 chunks with 3 layers of tiles, but anything bigger than that produced noticeable hitches while generating the chunk graphics. It is also worth mentioning that the above technique *minus baking the tiles* is probably what everyone makes first -- it's just rendering every single tile as a sprite without any optimization beyond what pixi does by default. On a fast computer this was actually fine as is! Where it runs into trouble is the regular per-frame rendering cost on a chromebook-level device... it's simply too many sprites to keep scrolling around every frame.

     The second experiment was to produce a single mesh per chunk, with vertices and uvs for 64 tiles. The geometry is created after receiving the network data, and the tiles in the mesh are mapped to their textures in a texture atlas. As far as webgl options go, this one was relatively simple. The performance was roughly 4-6x faster than the render texture scenario (which already worked on a chromebook, barely), so altogether I was happy with this option. I was reading that there would be issues with the wrapping mode and a need to create gutters in the texture atlas to remove artifacts from the edges of the tiles as they rendered, and I'm not sure if I will need to address these later. My technique for now was to make sure the pixi container's coordinates were always integers (Math.floor), and this removed the artifacts (stripes mostly) that appeared at the edges of the tiles due to texture sampling.

     That's all I've tried so far, but I'm pretty satisfied with both the render texture technique and the webgl technique. I'll probably stick to the mesh+webgl version as I'm trying to use webgl more in general.
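A sketch of the mesh-per-chunk idea, using the post's numbers (8x8 tiles, 16x16 px) and a hypothetical single-row texture atlas. This is not the author's actual code, just the 1D-to-2D index math and per-tile quad generation:

```javascript
// Build vertex, uv, and index arrays for one chunk mesh.
// Assumptions: 8x8 chunk, 16px tiles, atlas is a horizontal strip of
// ATLAS_TILES tile types (the real atlas layout may differ).
const CHUNK_SIZE = 8
const TILE_PX = 16
const ATLAS_TILES = 4

function buildChunkGeometry(tiles /* 64 tile types, 1D array */) {
  const verts = []
  const uvs = []
  const indices = []
  for (let i = 0; i < tiles.length; i++) {
    const tx = i % CHUNK_SIZE                 // 1D index back to 2D
    const ty = Math.floor(i / CHUNK_SIZE)
    const x = tx * TILE_PX
    const y = ty * TILE_PX
    const v0 = verts.length / 2               // first vertex of this quad
    verts.push(x, y, x + TILE_PX, y, x + TILE_PX, y + TILE_PX, x, y + TILE_PX)
    const u0 = tiles[i] / ATLAS_TILES         // tile's slot in the atlas strip
    const u1 = (tiles[i] + 1) / ATLAS_TILES
    uvs.push(u0, 0, u1, 0, u1, 1, u0, 1)
    indices.push(v0, v0 + 1, v0 + 2, v0, v0 + 2, v0 + 3) // two triangles
  }
  return { verts, uvs, indices }
}
```

In pixi v5 the returned arrays could feed something like new PIXI.Geometry().addAttribute('aVertexPosition', verts, 2).addAttribute('aUvs', uvs, 2).addIndex(indices), though the attribute names just have to match whatever the shader declares.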
  2. I think I'll try making one of each of those for the webgl practice. Thanks Ivan!
  3. I was testing a little more, and while creating a new shader every time is a bit expensive, creating new geometry from the same vertices + uvs is cheap, and then I can put the data in an attribute on that geometry instead of in a uniform. I'm new to webgl so I'm not sure if that is the right way, but it seems promising.
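For what the attribute approach might look like: since the shared geometry has four vertices per tile quad, per-tile data has to be repeated once per vertex before it can live in a vertex attribute. A minimal sketch (the attribute name 'aTileType' is hypothetical, it just has to match the shader):

```javascript
// Expand one value per tile into one value per vertex (4 verts per quad),
// suitable for something like geometry.addAttribute('aTileType', data, 1).
function tileTypesToAttribute(tileTypes) {
  const data = new Float32Array(tileTypes.length * 4)
  for (let i = 0; i < tileTypes.length; i++) {
    data[i * 4 + 0] = tileTypes[i]
    data[i * 4 + 1] = tileTypes[i]
    data[i * 4 + 2] = tileTypes[i]
    data[i * 4 + 3] = tileTypes[i]
  }
  return data
}
```

The vertex shader would pass the value through as a varying so the fragment shader can pick the right atlas cell.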
  4. Hello! Loving v5, it's great! I've been trying to render an extremely large map of tiles in smaller chunks that load in as the player moves around. I've done this with sprites and a render texture per chunk and it works fine, but now, to learn some webgl, I'm porting the project to use pixi Mesh + Shader. I've gotten to the point where I have the vertices and uv coords for a 16x16 chunk of tiles (skipping the code, nothing special). I then create a mesh per chunk of tiles and position them around. Code looks like this:

     const shader = PIXI.Shader.from(vertexSrc, fragmentSrc, uniforms)
     const chunk = new PIXI.Mesh(geometry, shader)
     // etc for many chunks, and then I just change chunk.x and chunk.y

     Now what I'm trying to do next is actually show different textures for each tile within each chunk, for which I'm using a collection of tileTypes, which is either going to be an array or texture uniform with the data for the 256 tiles that comprise the 16x16 chunk. I hope that makes sense. In any case, because all of the chunks have the same geometry and the same shader, if I change `chunk.shader.uniforms.tileType` it changes all of the chunks at the same time. If I create a new shader for each chunk so they each have a unique uniforms object, it ends up being an expensive operation which creates a visual hitch. I could probably create a pool of meshes and shaders and reuse them such that I wouldn't have to actually create new shader objects at run time as chunks load into the game, but before going down that path I wanted to know if there is an easier way. Can I somehow create meshes that share the geometry, vertexShader, and fragmentShader, but have a *unique* uniform per instance of the mesh? Thanks
  5. GC in javascript is frequently undergoing changes, but for the last few years (if this hasn't just changed...) one of the main problems is that it isn't occurring "in the background when the program is idle" like most people assume. In truth the biggest GC hit occurs while creating a new object -- I know that sounds unbelievably bad, but unless it has recently changed, that's what is happening. The reason GC is invoked while creating a new object is that the trigger for GC is based on trying to allocate new memory but discovering that too much memory is already in use. So right in the middle of important game code it'll periodically perform a round of garbage collection, and then finish creating that new object. This is how one can end up with relatively low CPU usage but still have GC-related hitches... the GC is happening at basically the worst time, every time.

     That detail of GC may change in the future, but if experiencing GC problems, here are the two tricks I use most frequently. First, as @ivan.popelyshev noted, a local variable can prevent the GC. We can take a loop that used to create a new Point() (or several) as part of its internal logic; if that object is declared outside of the loop and has its values changed within the loop, then it'll only be created once in total.

     The second is pooling... but it is important to really understand pooling's benefit, or else we simply re-invent the GC lag spikes. If a pool dynamically grows and shrinks much, the GC issue will be very similar. Also, pools in the past were used to speed up object creation, which I'm just going to say is rarely a benefit nowadays. The best pools for mitigating GC are pools that never (or very rarely) free anything at all. For the things that actually benefit from pooling, such as projectiles in a bullet-hell game, or particles, this is a somewhat reasonable feature. Just create 10,000 particles (or w/e) and recycle them as needed.
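The two tricks in miniature (assumed shapes, not code from the post):

```javascript
// Trick 1: hoist the temporary out of the loop so it is allocated once,
// instead of allocating a new object every iteration.
const tmp = { x: 0, y: 0 }
function sumOffsets(points) {
  let total = 0
  for (let i = 0; i < points.length; i++) {
    tmp.x = points[i].x + 1     // reuse tmp instead of creating a new Point
    tmp.y = points[i].y + 1
    total += tmp.x + tmp.y
  }
  return total
}

// Trick 2: a pool that preallocates everything up front and never frees,
// so the GC never has anything of ours to collect.
class ParticlePool {
  constructor(size) {
    this.items = []
    for (let i = 0; i < size; i++) this.items.push({ x: 0, y: 0, alive: false })
  }
  acquire() {
    const p = this.items.find(it => !it.alive)
    if (p) p.alive = true
    return p || null            // null when exhausted: no growth, no GC pressure
  }
  release(p) { p.alive = false }
}
```

The linear find is fine at particle-pool sizes; a free-list would be the next step if acquire ever showed up in a profiler.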
  6. Some good stuff, and yea the canvas-punch-out style lighting is nearly identical to what I have. I'll try and learn enough about meshes to make the same thing out of a pixi mesh instead of out of canvas + compositing triangles. I already have all of the triangles, and a mesh is just indices and uvs...how hard could it be right??? Famous last words.. That shadertoy lighting is sooo nice looking lol. I'm not sure if I could legitimately go down that path as the triangles for me are generated in javascript and used within the game logic (not just for rendering). Though that looks so nice it is tempting to try to figure out an alternative.
  7. On a visual level pixi-shadows is very similar; the main issue is the performance. I'm definitely doing a form of raycasting, but with heavy optimizations, as I'm targeting integrated gpus and mid-level chromebook cpus. I've already got a huge speed increase (maybe 1000x) via an algo that generates triangles by casting rays from the center of the player to the known vertices of obstacles in a spatially-culled sub-section of the whole game world. Here's what the triangles look like:

     The challenge is using these triangles to get the right visual result in a way that is also fast. The triangles unfortunately are the visible area as opposed to being the shaded area, which in the past meant that instead of drawing the triangles directly onto the game, I ended up subtracting the triangles from a darkened overlay to create this effect: Here it is a little fancier with a dithered look, and also some shadows along the edges of walls generated a different way:

     Everything above was made as a hybrid of canvas compositing operations generating textures that were then combined with pixi. It ran quickly but created too much GC pressure due to generating textures every frame. Maybe I should try a mesh...? Or manual addition of triangles via webgl? I need to draw 10-50 triangles per frame, dynamically, and they are potentially entirely different from one frame to the next. I also need a sane way to place these triangles in the scene such that they line up with the obstacles in a game with a giant world and a moving camera. I have no idea how to line up the vertices of a mesh in gpu coordinates with the x,y coordinates of the stuff in my pixi containers. It would also be ideal if the triangles could be used for masking or reverse-masking (e.g. an object that is partially covered by the shadow is partially cropped by the shadow), but I could live without that as I do this manually at the moment. I'm not shy about math or learning a bit of webgl, so I am open to suggestions.
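The "line up the mesh with the world" part is mostly coordinate bookkeeping: subtract the camera's top-left world position so the triangle vertices become container-local pixel coordinates. A sketch of just that math (the surrounding pixi setup would vary):

```javascript
// Convert a flat array of world-space triangle vertices [x0,y0, x1,y1, ...]
// into screen/container-local coords for a mesh positioned at (0,0).
// Flooring mirrors the integer-coordinate trick mentioned elsewhere in the
// thread, which avoids sampling stripes at edges.
function worldTrianglesToScreen(triangles, cameraX, cameraY) {
  const out = new Float32Array(triangles.length)
  for (let i = 0; i < triangles.length; i += 2) {
    out[i] = Math.floor(triangles[i] - cameraX)
    out[i + 1] = Math.floor(triangles[i + 1] - cameraY)
  }
  return out
}
```

In pixi v5 this per-frame array could then be written into the mesh's existing vertex buffer (something along the lines of geometry.getBuffer('aVertexPosition').update(out)) rather than rebuilding the geometry, which keeps the per-frame allocations near zero.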
  8. Congrats on v5 you awesome people! So a long time ago I was working on a top down 2D game that had line of sight, but I ran into performance problems that sound like they can be more easily solved in v5. For some reason the forum is not letting me upload images, so here are some links. Screenshots: https://timetocode.tumblr.com/post/164071887541/much-progress-screenshots-show-in-game-terrain Video of line of sight: https://timetocode.tumblr.com/post/163227386513/11-second-video-tree-blocking-line-of-sight

     It's just one of those pseudo-roguelike line of sight effects where you can't see around corners (real-time though!). It involved lots of triangles, and the algos were based on Amit Patel's https://www.redblobgames.com/articles/visibility/ To achieve this effect I created a canvas separate from pixi with the shadow color filling the screen, and then I would mask/subtract the triangles generated by the visibility algo. I would then take that resulting image with a little bit of blur and put it in a container that was rendered underneath the wall sprites. That was the fast version; I also made another version based on pixi graphics that had other issues. Even the fast version had all sorts of garbage collection problems, probably because it was creating a texture the size of the whole screen every frame and then discarding it.

     It sounds like v5 can probably handle this feature without generating a texture every frame. Could someone suggest a way to accomplish this in v5? I don't have to put the shadows under the wall sprites if that is particularly hard... they could be on top of everything instead -- just whatever stands a chance of hitting 60 fps on a chromebook. TY TY
  9. Link: JigsawPuzzles.io - multiplayer cooperative jigsaw puzzles

     There are always a few ongoing public games on the homepage. Anyone can join these. If you want a more controlled experience, signing in allows the creation of private games. You can invite friends into your private games.

     Tech stuff: The game is made in a hybrid of canvas and Pixi v4. The multiplayer is node.js + websockets via nengi.js. The game is only online multiplayer, even when playing alone. I'm primarily a network programmer, and the whole project began as a strange experiment of programming a casual puzzle game as if it were a shooter (movement prediction, lag compensation, reconciliation). I usually work on shooters. The hope was that the controls would come out feeling like the game was single player at up to 500 ms of latency and 5% packet loss. The project has changed a little since then (the servers are no longer 30 tick) but the essence remains. It uses 2K and 4K images depending on the puzzle size, which is kinda fun because of how much it is possible to zoom in/out. The big puzzles are primarily aimed at desktop users, but are decent on touch tablets. Mobile isn't supported yet... but it does kinda work on some phones just by chance. We've got a few crazy modes that we're considering for the future. For now the only mode is a cooperative recreation of the table top experience. Thank you for checking out the game

     Here's a 1148 piece puzzle: Here's a smaller puzzle: What the catalog/setup page looks like (gotta sign in first, so that we can save progress):

     Social accounts for the game: https://www.facebook.com/JigsawPuzzles.io/ https://www.instagram.com/jigsawpuzzles.io/ https://twitter.com/JigsawPuzzlesIO https://www.reddit.com/r/jigsawpuzzlesio/ https://discordapp.com/invite/axT9bRw
  10. I do something just like this for a rewind + collision check feature. The trick is to use mesh.computeWorldMatrix(true) on the meshes that are being checked for collisions after being moved. This is the internal operation being performed by scene render that makes the collisions work (among other things, such as rendering). I think this is needed whenever moving and positioning objects in-between frames, esp if the object is being moved multiple times and checked for collisions before the next render. In the case of my rewind collision check, I never actually wanted to render the object in the rewound position -- I just needed the object to be in the correct state so that babylon could do the math for me. For me that meant copying its original position, moving it to a new position, invoking computeWorldMatrix, doing the intersection test, and then restoring it to the original position. While not identical, this does come out somewhat similar to the notion of moving an object (or a copy of that object) until it collides, uncolliding it, and then having it in the uncollided position all before actually rendering anything. Here's the PG w/o scene render in the collision checks: https://playground.babylonjs.com/#1UK40Z#8 If I understand what you have made, then this in theory will never turn the torus white.
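The save/move/check/restore pattern described above, shown in isolation. Plain objects stand in for babylon meshes here so the shape of the pattern is clear; in real babylon code the checkCollision callback would be something like m => m.intersectsMesh(other), and computeWorldMatrix(true) is what forces the matrix refresh between moves:

```javascript
// Temporarily move a mesh to a rewound position, run a collision check
// there, then restore it -- without ever rendering the rewound state.
function checkAtRewoundPosition(mesh, rewound, checkCollision) {
  const ox = mesh.position.x, oy = mesh.position.y, oz = mesh.position.z
  mesh.position.x = rewound.x
  mesh.position.y = rewound.y
  mesh.position.z = rewound.z
  if (mesh.computeWorldMatrix) mesh.computeWorldMatrix(true) // refresh matrices
  const hit = checkCollision(mesh)
  mesh.position.x = ox   // restore the real position before the next render
  mesh.position.y = oy
  mesh.position.z = oz
  if (mesh.computeWorldMatrix) mesh.computeWorldMatrix(true)
  return hit
}
```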
  11. It looks awesome!! I took your advice and tried to make sure that babylon was the same version everywhere I was using it. I'm not sure what I really did, but somewhere along the process of trying to change out 3.3 for 4, and then undoing it, little things started working. In the end I made a new custom deploy of babylon 4.0, and then I copied and pasted *just* the water material from that directly above my own code. This gave the water reflections, but no animation. I couldn't really figure out a way to get 4.0 running with NullEngine (I'll just wait until it is released on npm, and then untangle things).

      As for the water animation, that part does make sense to me now; I was missing this: engine.runRenderLoop(() => {}) -- my previous code only called scene.render(), which might explain why some pg code doesn't work the same for me. I'm not sure what invoking runRenderLoop with an empty function does, but I'm guessing it triggers some important core-level update logic..? In this case, passing variables to the water shader, maybe. I also learned that I *can* setGlobalVolume if I do it inside of setTimeout -- I'm not sure what mess I've made, but I'm either accidentally overwriting the audioEngine or things aren't initialized quite when I think they are.
  12. Everything seems to work in the PG. Multiple times I've taken some seemingly encapsulated function out of a PG, pasted it into my game, and not had it do quite the same thing. I'm not sure how to reproduce it. I'm starting to think I may have a more fundamental issue, like perhaps I accidentally have multiple instances of BABYLON's global scope, engine, or scene. Maybe this is a hint: BABYLON.Engine.audioEngine.setGlobalVolume(0.1) does not change the volume of sounds in my game. I have to invoke setVolume on every sound individually to change the volume. Does that point at any general type of issue? I'm using const BABYLON = require('babylonjs') in most files instead of an html script tag, as most of the code runs in node... I wonder if that is related.
  13. I must be doing something incorrect with the water, as it is rendering black. I pasted the code from this playground, and included the files for the waterbump.png and skybox: https://www.babylonjs-playground.com/#1SLLOJ#17 Here's my code, although it's very similar to the PG. My guess is the error isn't in this code, though I'm not sure what else would affect the water. It is as if it has nothing in its render list -- but I can definitely see the sky, and the sky is added to the list.

      this.canvasEle = document.getElementById('main-canvas')
      this.engine = new BABYLON.Engine(this.canvasEle, true)
      this.engine.enableOfflineSupport = false
      this.engine.setHardwareScalingLevel(1)
      this.scene = new BABYLON.Scene(this.engine)
      //this.scene.freezeActiveMeshes()
      //this.scene.collisionsEnabled = true
      //this.scene.clearColor = new BABYLON.Color4(0.3, 0.5, 0.75, 1.0)
      this.scene.fogMode = BABYLON.Scene.FOGMODE_EXP
      this.scene.fogDensity = 0.0005
      this.camera = new BABYLON.TargetCamera('camera', new BABYLON.Vector3(0, 0.5, 0), this.scene)
      this.camera.fov = 0.8
      this.camera.minZ = 0.1
      //this.scene.autoClear = false
      //this.scene.autoClearDepthAndStencil = false
      var light = new BABYLON.DirectionalLight("dir01", new BABYLON.Vector3(0.66, -0.75, 1.25), this.scene)
      light.position = new BABYLON.Vector3(1, 40, 10)
      light.intensity = 0.5
      var light3 = new BABYLON.DirectionalLight("dir01", new BABYLON.Vector3(-0.5, 0.75, -1.25), this.scene)
      light3.position = new BABYLON.Vector3(1, 40, 1)
      light3.intensity = 0.5
      var light2 = new BABYLON.HemisphericLight('h', new BABYLON.Vector3(0, 1, 0), this.scene)
      light2.intensity = 0.6
      var skybox = BABYLON.Mesh.CreateBox("skyBox", 5000.0, this.scene)
      var skyboxMaterial = new BABYLON.StandardMaterial("skyBox", this.scene)
      skyboxMaterial.backFaceCulling = false
      skyboxMaterial.reflectionTexture = new BABYLON.CubeTexture("images/TropicalSunnyDay", this.scene)
      skyboxMaterial.reflectionTexture.coordinatesMode = BABYLON.Texture.SKYBOX_MODE
      skyboxMaterial.diffuseColor = new BABYLON.Color3(0, 0, 0)
      skyboxMaterial.specularColor = new BABYLON.Color3(0, 0, 0)
      skyboxMaterial.disableLighting = true
      skybox.material = skyboxMaterial
      var waterMesh = BABYLON.Mesh.CreateGround("waterMesh", 2048, 2048, 16, this.scene, false)
      var water = new BABYLON.WaterMaterial("water", this.scene, new BABYLON.Vector2(512, 512))
      water.backFaceCulling = true
      water.bumpTexture = new BABYLON.Texture("images/waterbump.png", this.scene)
      water.windForce = -10
      water.waveHeight = 1.7
      water.bumpHeight = 0.1
      water.windDirection = new BABYLON.Vector2(1, 1)
      water.waterColor = new BABYLON.Color3(0, 0, 221 / 255)
      water.colorBlendFactor = 0.0
      water.addToRenderList(skybox)
      waterMesh.material = water

      Any ideas?
  14. Now that I have this reproduced, I think I'm realizing that this idea was never going to work due to the number of sounds. I did a little bit more testing with a 500 KB wav file vs a 15 KB mp3, and the degradation seems more related to the number of sounds than their size. In my game each weapon has like 5-20 sounds, and a player can hold a few weapons. The walk/run cycle has like 16 sounds per material (concrete, wood, grass, etc). It can add up to about ~100 possible sounds per player, though 99% of them aren't playing at any given point. BJS seems totally fine with that many sounds, but it looks like attachToMesh is not designed for this. I read the source code, and it looks like it *might* be viable if it would check whether the sound is playing before rebuilding the matrices. Currently it does some fairly expensive work, even for non-playing sounds (setPosition and computeWorldMatrix fill the profiler when stress tested with cloned sounds on moving meshes). I'm going to test positioning the sounds manually at the time that they are played, and not having them move along with the mesh. If that doesn't work I guess I'll pool them in addition.

      Edit: definitely need a pool

      Edit #2: at 21 players firing the same automatic rifle, the pool brought the active number of sound instances from 210 down to 59

      Here's an ultra simple auto-expanding pool if anyone wants. Usage is just to use 'get' for short-lived sounds and it will handle allocation and releasing on its own. It never deallocs.
      class SoundPool {
          constructor() {
              this.scene = null
              this.sounds = {}
          }
          init(scene) {
              this.scene = scene
          }
          allocate(name) {
              const sound = BABYLON.Sound.FromAtlas(name, name, this.scene)
              sound.onEndedObservable.add(() => {
                  this.release(sound)
              })
              this.sounds[name].push(sound)
          }
          get(name) {
              if (!this.sounds[name]) {
                  this.sounds[name] = []
              }
              if (this.sounds[name].length === 0) {
                  this.allocate(name)
              }
              return this.sounds[name].pop()
          }
          release(obj) {
              this.sounds[obj.name].push(obj)
          }
      }
      const singleton = new SoundPool()
      module.exports = singleton

      BABYLON.Sound.FromAtlas is a little wrapper that clones sounds without making additional xhrs (e.g. given 'sounds/foo.mp3', it clones the loaded sound and applies new options)
  15. I was finally able to reproduce it in a PG: https://www.babylonjs-playground.com/#8B9YRN#1

      This is based off of the playground where violin music is playing inside of spheres. As the camera enters a sphere, the music becomes audible. In this playground I have one purple dome with music playing on loop, and then 100 purple domes that have a clone of the music linked to their mesh, but the music is intentionally not being played. The purple dome that is playing music is separate from all of the others; it is the only one playing music. If we keep increasing the number of silent domes (see the for loop around line ~45) eventually the sound will degrade, even though we're only playing a single wav on loop. I left the PG at 100 silent domes b/c I don't want to crash people's computers. Personally I have to increase this number to 1200(!!) before I get the sound problems that are occurring in my game.

      So to reproduce, change the count on line 45 and then walk into the isolated sphere. That 1200 is a big number, but this problem happens in my game at around 300 sounds (even though 298 of the sounds aren't playing). These are just sounds that are attached to meshes and aren't playing, but they can get triggered later.