timetocode

Members
  • Content Count

    118
  • Days Won

    2

timetocode last won the day on November 11, 2019

timetocode had the most liked content!

2 Followers

About timetocode

  • Rank
    Advanced Member

Contact Methods

  • Website URL
    http://timetocode.tumblr.com/
  • Twitter
    bennettor

Profile Information

  • Gender
    Not Telling
  • Location
    Seattle, WA
  • Interests
    game development, pixel art, procedural content generation, game music

  1. If the tilemap isn't mutable (or if only a few layers are mutable), one can also get away with:
       • keeping the map data in some simple form, like an array per layer containing ints for tile type
       • generating megatiles by drawing sections of the map (e.g. 16x16 tiles) onto a render texture
       • adding the megatiles to the game instead of individual tiles
       • (maybe) hiding megatiles that are offscreen (if your player only sees a small subsection of a much larger world)
     Baking a large number of tiles into a megatile takes a little bit of time (milliseconds, usually), but baking an entire map can, depending on its size, take under a second on a gaming rig and multiple seconds on a chromebook. So keep in mind that while this may help reach and maintain maximum frames per second, it does so at the expense of a potentially lengthy operation on slower machines as the game starts up. If you need to generate these megatiles on the fly as the player moves around (which would not be the case for 5000 sprites, but might be for a very big map), it becomes prohibitive: the game turns choppy unless you can bake a megatile without freezing the game for a few frames. I've got a few games that work this way. Generally speaking they're able to get 60 fps on a low-end chromebook, and this approach added ~4-12 seconds of 'bake time' on low-end devices for a map that was ~150,000 sprites and resulted in 8096x8096 pixels worth of render texture (baked into smaller textures like 1024x1024).
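     Roughly, the bake step looks like this (a minimal sketch, assuming PIXI v5; `tileTextures`, a lookup from tile-type int to texture, and `mapLayer` are illustrative names):

         const TILE_SIZE = 16  // pixels per tile
         const MEGA_SIZE = 16  // tiles per megatile edge

         // bake one 16x16-tile section of a layer into a single sprite
         function bakeMegatile(renderer, mapLayer, mapWidth, startX, startY) {
             const container = new PIXI.Container()
             for (let y = 0; y < MEGA_SIZE; y++) {
                 for (let x = 0; x < MEGA_SIZE; x++) {
                     const tileType = mapLayer[(startY + y) * mapWidth + (startX + x)]
                     const tile = new PIXI.Sprite(tileTextures[tileType])
                     tile.position.set(x * TILE_SIZE, y * TILE_SIZE)
                     container.addChild(tile)
                 }
             }
             const renderTexture = PIXI.RenderTexture.create({
                 width: MEGA_SIZE * TILE_SIZE,
                 height: MEGA_SIZE * TILE_SIZE
             })
             renderer.render(container, renderTexture) // the expensive part
             container.destroy({ children: true })     // the 256 individual sprites are no longer needed
             return new PIXI.Sprite(renderTexture)
         }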
  2. Never mind! The bug was that the indices are an Int32Array and I had turned them into Float32s.
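     For anyone hitting the same thing, the fix was just keeping the index data integer-typed when updating:

         const indexBuffer = geometry.getIndex()
         // match the integer type the buffer was created with; a Float32Array here makes the mesh vanish
         indexBuffer.update(new Int32Array(indices))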
  3. Thanks @bubamara and @ivan.popelyshev, I've been able to mutate my mesh a bit now. I'm stuck trying to change the indices; any idea what I'm doing wrong?

         const vertBuffer = geometry.getBuffer('aVertexPosition')
         vertBuffer.update(new Float32Array(vertices)) // works
         const colorBuffer = geometry.getBuffer('aVertexColor')
         colorBuffer.update(new Float32Array(colors)) // works
         const indexBuffer = geometry.getIndex()
         indexBuffer.update(new Float32Array(indices)) // makes my mesh disappear

     Everything except changing the indices seems to work. If I change the indices my object vanishes. The `vertices`, `colors`, and `indices` aren't actually changing -- I'm just creating the same circle over and over again and trying to update the buffers, so maybe I have a more basic issue that is making my shape disappear. Here is the shape I'm drawing (black lines and green points added for debugging; the mesh is the whole lighting overlay: a radial gradient within the circle and shadows in the other triangles).
  4. I'm trying to create a mesh consisting of a few dozen triangles that are going to change every frame (hopefully it performs okay...). How do I actually do that, though? Creating the mesh:

         const shader = Shader.from(vertexSrc, fragSrc)
         const geometry = new Geometry()
             .addAttribute('aVertexPosition', [initialVerts])
             .addAttribute('aVertexColor', [initialColors])
         const mesh = new Mesh(geometry, shader, null, DRAW_MODES.TRIANGLES)

     Attempt at changing aVertexPosition and aVertexColor each frame:

         mesh.geometry.addAttribute('aVertexPosition', newVertices)
         mesh.geometry.addAttribute('aVertexColor', newColors)

     Error: Cannot read property 'updateID' of undefined, originating from GeometrySystem.updateBuffers.
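     (As the later posts above work out, the answer is to add the attributes once and then write into the existing buffers each frame, rather than calling addAttribute again; a minimal sketch:)

         // create the attributes once...
         const geometry = new Geometry()
             .addAttribute('aVertexPosition', initialVerts, 2)
             .addAttribute('aVertexColor', initialColors, 4)

         // ...then each frame, update the underlying buffers in place
         mesh.geometry.getBuffer('aVertexPosition').update(new Float32Array(newVertices))
         mesh.geometry.getBuffer('aVertexColor').update(new Float32Array(newColors))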
  5. Okay, so I only ended up making two fully functioning implementations, though I did go down a few paths just to benchmark varying techniques. I have not yet tried any techniques with instancing (I've only been doing webgl for a few days and need to study some more). Hopefully someone finds this useful.

     A few details about the underlying map data before getting into the specific techniques. The game is networked (nengi.js) and sends chunks to the game client as the player nears them (like minecraft, more or less, but 2D). Each chunk is an array of tiles; it happens to be a 1D array with some math to turn it back into 2D, but this detail probably doesn't matter. The map as a whole is a sparse collection of chunks, which also doesn't matter too much but means that the chunks can be generated one at a time -- the whole map doesn't have to be filled in and exist when the game starts. The clientside chunk graphics generator, for both techniques below, would only generate one chunk per frame and would queue them, so as to avoid generating too many graphics in a single frame and causing a noticeable hitch (sounds fancier than it is).

     Let's say that the chunks are 8x8 tiles and each tile is 16x16 pixels (I tested many variants). The network data for receiving a chunk then contains the chunk coordinates and 64 bytes. If not using a network, or using different dimensions, this would vary, but I'm going to stick with these numbers for the examples. They were also benchmarked on two computers, which I will call the chromebook (acer spin 11, Intel HD 500) and the gaming rig (ryzen 1700, 1070 ti).

     The first experiment uses render textures. It receives the 64 tiles, creates 64 sprites according to the tile types, and then takes the whole thing and bakes it into a single texture. That chunk sprite is then positioned as needed (probably at x = chunkX * chunkWidthInPixels, etc. for y). On the gaming rig many varieties of chunk and tile sizes and multiple layers of tiles could be baked without any hitches. The chromebook was eventually stable at 8x8 chunks with 3 layers of tiles, but anything bigger than that produced notable hitches while generating the chunk graphics. It is also worth mentioning that the above technique *minus baking the tiles* is probably what everyone makes first -- it's just rendering every single tile as a sprite, without any optimization beyond what pixi does by default. On a fast computer this was actually fine as is! Where it runs into trouble is the regular per-frame rendering time on a chromebook-level device... it's simply too many sprites to keep scrolling around every frame.

     The second experiment was to produce a single mesh per chunk, with vertices and uvs for 64 tiles. The geometry is created after receiving the network data, and the tiles in the mesh are mapped to their textures in a texture atlas. As far as webgl options go, this one was relatively simple. The performance was roughly 4-6x faster than the render texture scenario (which already worked on a chromebook, barely), so altogether I was happy with this option. I read that there could be issues with the wrapping mode and a need for gutters in the texture atlas to remove artifacts at the edges of the tiles, and I'm not sure if I will need to address these later. My technique for now was to make sure the pixi container's coordinates were always integers (Math.floor), and this removed the artifacts (stripes, mostly) that appeared at the edges of the tiles due to texture sampling.

     That's all I've tried so far, but I'm pretty satisfied with both the render texture technique and the webgl technique. I'll probably stick with the mesh+webgl version, as I'm trying to use webgl more in general.
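     A rough sketch of the per-chunk mesh construction (assuming PIXI v5; `uvsForTileType`, which returns the 8 atlas uv floats for a tile's quad corners, is an illustrative helper):

         const CHUNK = 8, TILE = 16 // 8x8 tiles, 16px each

         function buildChunkGeometry(tiles, uvsForTileType) {
             const verts = [], uvs = [], indices = []
             for (let ty = 0; ty < CHUNK; ty++) {
                 for (let tx = 0; tx < CHUNK; tx++) {
                     const i = verts.length / 2
                     const x = tx * TILE, y = ty * TILE
                     // quad corners, two triangles per tile
                     verts.push(x, y, x + TILE, y, x + TILE, y + TILE, x, y + TILE)
                     uvs.push(...uvsForTileType(tiles[ty * CHUNK + tx]))
                     indices.push(i, i + 1, i + 2, i, i + 2, i + 3)
                 }
             }
             return new PIXI.Geometry()
                 .addAttribute('aVertexPosition', verts, 2)
                 .addAttribute('aTextureCoord', uvs, 2)
                 .addIndex(indices)
         }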
  6. I think I'll try making one of each of those for the webgl practice. Thanks Ivan!
  7. I was testing a little more, and while creating a new shader every time is a bit expensive, creating new geometry from the same vertices + uvs is cheap, and then I can put the data in an attribute on that geometry instead of in a uniform. I'm new to webgl so I'm not sure if that is the right way, but it seems promising.
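     Roughly what that looks like (a sketch; the attribute name `aTileType` is illustrative):

         // one shader shared by every chunk; per-chunk tile data rides along
         // in each chunk's geometry as an attribute instead of a uniform
         const shader = PIXI.Shader.from(vertexSrc, fragmentSrc)

         function makeChunkMesh(vertices, uvs, tileTypes) {
             const geometry = new PIXI.Geometry()
                 .addAttribute('aVertexPosition', vertices, 2)
                 .addAttribute('aTextureCoord', uvs, 2)
                 .addAttribute('aTileType', tileTypes, 1) // per-vertex copy of the tile's type
             return new PIXI.Mesh(geometry, shader)
         }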
  8. Hello! Loving v5, it's great! I've been trying to render an extremely large map of tiles in smaller chunks that load in as the player moves around. I've done this with sprites and a render texture per chunk and it works fine, but now, to learn some webgl, I'm porting the project to use pixi Mesh + Shader. I've gotten to the point where I have the vertices and uv coords for a 16x16 chunk of tiles (skipping the code, nothing special). I then create a mesh per chunk of tiles and position them around. The code looks like this:

         const shader = PIXI.Shader.from(vertexSrc, fragmentSrc, uniforms);
         const chunk = new PIXI.Mesh(geometry, shader);
         // etc for many chunks, and then I just change chunk.x and chunk.y

     Now what I'm trying to do next is actually show different textures for each tile within each chunk, for which I'm using a collection of tileTypes -- either an array or a texture uniform with the data for the 256 tiles that comprise the 16x16 chunk. I hope that makes sense. In any case, because all of the chunks have the same geometry and the same shader, if I change `chunk.shader.uniforms.tileType` it changes all of the chunks at the same time. If I create a new shader for each chunk so that each has a unique uniforms object, it ends up being an expensive operation that creates a visual hitch. I could probably create a pool of meshes and shaders and reuse them, so that I wouldn't have to create new shader objects at runtime as chunks load into the game, but before going down that path I wanted to know if there is an easier way. Can I somehow create meshes that share the geometry, vertexShader, and fragmentShader, but have a *unique* uniform per instance of the mesh? Thanks
  9. GC in javascript is frequently undergoing changes, but for the last few years (if this hasn't just changed...) one of the main problems is that it doesn't occur "in the background when the program is idle" like most people assume. In truth the biggest GC hit occurs while creating a new object -- I know that sounds unbelievably bad, but unless it has recently changed, that's what happens. The reason GC is invoked while creating a new object is that the trigger for GC is an attempt to allocate new memory that discovers too much memory is already in use. So right in the middle of important game code it will periodically perform a round of garbage collection, and then finish creating that new object. This is how one can end up with relatively low CPU usage but still have GC-related hitches... the GC is happening at basically the worst time, every time.

     That detail of GC may change in the future, but if experiencing GC problems, here are the two tricks I use most frequently. First, as @ivan.popelyshev noted, a local variable can prevent the GC. Take a loop that used to create a new Point() (or several) as part of its internal logic: if that object is declared outside of the loop and has its values changed within the loop, then it is only created once in total.

     The second is pooling... but it is important to really understand pooling's benefit, or else we simply re-invent the GC lag spikes. If a pool dynamically grows and shrinks much, the GC issue will be very similar. Also, pools in the past were used to speed up object creation, which I'm just going to say is rarely a benefit nowadays. The best pools for mitigating GC are pools that never (or very rarely) free anything at all. For the things that actually benefit from pooling, such as projectiles in a bullet-hell game, or particles, this is a somewhat reasonable feature. Just create 10,000 particles (or whatever) and recycle them as needed.
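     A minimal sketch of that kind of pool (sized up front, never freeing):

         class Pool {
             constructor(size, factory) {
                 // allocate everything up front; nothing is ever handed back to the GC
                 this.free = []
                 for (let i = 0; i < size; i++) {
                     this.free.push(factory())
                 }
             }
             acquire() {
                 return this.free.pop() || null // null when exhausted, rather than growing
             }
             release(item) {
                 this.free.push(item)
             }
         }

         // e.g. 10,000 recycled particle objects
         const particlePool = new Pool(10000, () => ({ x: 0, y: 0, vx: 0, vy: 0, alive: false }))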
  10. Some good stuff, and yeah, the canvas-punch-out style lighting is nearly identical to what I have. I'll try to learn enough about meshes to make the same thing out of a pixi mesh instead of out of canvas + compositing triangles. I already have all of the triangles, and a mesh is just indices and uvs... how hard could it be, right??? Famous last words. That shadertoy lighting is sooo nice looking, lol. I'm not sure I could legitimately go down that path, as my triangles are generated in javascript and used within the game logic (not just for rendering). Though it looks so nice it is tempting to try to figure out an alternative.
  11. On a visual level pixi-shadows is very similar; the main issue is the performance. I'm definitely doing a form of raycasting, but with heavy optimizations -- I'm targeting integrated gpus and mid-level chromebook cpus. I've already got a huge speed increase (maybe 1000x) via an algo that generates triangles by casting rays from the center of the player to the known vertices of obstacles in a spatially-culled sub-section of the whole game world. Here's what the triangles look like:

     The challenge is using these triangles to get the right visual result in a way that is also fast. The triangles unfortunately are the visible area, as opposed to the shaded area, which in the past meant that instead of drawing the triangles directly onto the game, I ended up subtracting the triangles from a darkened overlay to create this effect:

     Here it is a little fancier, with a dithered look, and also some shadows along the edges of walls generated a different way:

     Everything above was made as a hybrid of canvas compositing operations generating textures that were then combined with pixi. It ran quickly but created too much GC pressure, due to generating textures every frame. Maybe I should try a mesh...? Or manual addition of triangles via webgl? I need to draw 10-50 triangles per frame, dynamically, and they are potentially entirely different from one frame to the next. I also need a sane way to place these triangles in the scene such that they line up with the obstacles in a game with a giant world and a moving camera -- I have no idea how to line up the vertices of a mesh in gpu coordinates with the x,y coordinates of the stuff in my pixi containers. It would also be ideal if the triangles could be used for masking or reverse-masking (e.g. an object that is partially covered by the shadow is partially cropped by the shadow), but I could live without that, as I do it manually at the moment. I'm not shy about math or learning a bit of webgl, so I am open to suggestions.
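     (For what it's worth, the coordinate line-up is simpler than it sounds: a PIXI.Mesh's vertices are interpreted in the local pixel space of the container it is added to, with pixi's standard translationMatrix/projectionMatrix uniforms applying the camera transform. A sketch, with illustrative names:)

         const MAX_TRIANGLES = 64 // room for the 10-50 triangles per frame
         const shader = PIXI.Shader.from(vertexSrc, fragSrc) // assumed to use pixi's standard matrices

         const geometry = new PIXI.Geometry()
             .addAttribute('aVertexPosition', new Float32Array(MAX_TRIANGLES * 6), 2)
         const mesh = new PIXI.Mesh(geometry, shader, null, PIXI.DRAW_MODES.TRIANGLES)
         worldContainer.addChild(mesh) // same container as the obstacles, so the camera transform matches

         // each frame: feed in the visibility triangles as flat world-space coords [x0,y0, x1,y1, ...]
         function updateShadowMesh(triangleVerts) {
             geometry.getBuffer('aVertexPosition').update(new Float32Array(triangleVerts))
         }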
  12. Congrats on v5, you awesome people! So, a long time ago I was working on a top down 2D game that had line of sight, but I ran into performance problems that sound like they can be more easily solved in v5. For some reason the forum is not letting me upload images, so here are some links.

     Screenshots: https://timetocode.tumblr.com/post/164071887541/much-progress-screenshots-show-in-game-terrain
     Video of line of sight: https://timetocode.tumblr.com/post/163227386513/11-second-video-tree-blocking-line-of-sight

     It's just one of those pseudo-roguelike line of sight effects where you can't see around corners (real-time though!). It involved lots of triangles, and the algos were based on Amit Patel's https://www.redblobgames.com/articles/visibility/

     To achieve this effect I created a canvas separate from pixi with the shadow color filling the screen, and then I would mask/subtract the triangles generated by the visibility algo. I would then take the resulting image, with a little bit of blur, and put it in a container that was rendered underneath the wall sprites. That was the fast version; I also made another version based on pixi graphics that had other issues. Even the fast version had all sorts of garbage collection problems, probably because it was creating a texture the size of the whole screen every frame and then discarding it.

     It sounds like v5 can probably handle this feature without generating a texture every frame. Could someone suggest a way to accomplish this in v5? I don't have to put the shadows under the wall sprites if that is particularly hard... they could be on top of everything instead -- just whatever stands a chance of hitting 60 fps on a chromebook. TY TY
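     (One v5-era possibility, sketched with illustrative names: rebuild a full-screen darkness Graphics each frame and punch the visibility polygon out of it as a hole, so no per-frame texture is allocated:)

         const darkness = new PIXI.Graphics()

         // visibilityPolygon: flat [x, y, ...] outline from the visibility algo, in screen space
         function drawShadows(visibilityPolygon, screenWidth, screenHeight) {
             darkness.clear()
             darkness.beginFill(0x000000, 0.7)
             darkness.drawRect(0, 0, screenWidth, screenHeight)
             darkness.beginHole()
             darkness.drawPolygon(visibilityPolygon)
             darkness.endHole()
             darkness.endFill()
         }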
  13. Link: JigsawPuzzles.io - multiplayer cooperative jigsaw puzzles

     There are always a few ongoing public games on the homepage. Anyone can join these. If you want a more controlled experience, signing in allows the creation of private games. You can invite friends into your private games.

     Tech stuff: The game is made in a hybrid of canvas and Pixi v4. The multiplayer is node.js + websockets via nengi.js. The game is online multiplayer only, even when playing alone. I'm primarily a network programmer, and the whole project began as a strange experiment: programming a casual puzzle game as if it were a shooter (movement prediction, lag compensation, reconciliation). I usually work on shooters. The hope was that the controls would come out feeling like the game was single player at up to 500 ms of latency and 5% packet loss. The project has changed a little since then (the servers are no longer 30 tick) but the essence remains. It uses 2K and 4K images depending on the puzzle size, which is kinda fun because of how much it is possible to zoom in/out. The big puzzles are primarily aimed at desktop users, but are decent on touch tablets. Mobile isn't supported yet... but it does kinda work on some phones just by chance.

     We've got a few crazy modes that we're considering for the future. For now the only mode is a cooperative recreation of the tabletop experience. Thank you for checking out the game!

     Here's a 1148 piece puzzle:
     Here's a smaller puzzle:
     What the catalog/setup page looks like (gotta sign in first, so that we can save progress):

     Social accounts for the game:
     https://www.facebook.com/JigsawPuzzles.io/
     https://www.instagram.com/jigsawpuzzles.io/
     https://twitter.com/JigsawPuzzlesIO
     https://www.reddit.com/r/jigsawpuzzlesio/
     https://discordapp.com/invite/axT9bRw
  14. I do something just like this for a rewind + collision check feature. The trick is to use mesh.computeWorldMatrix(true) on the meshes that are being checked for collisions after being moved. This is the internal operation performed by scene.render() that makes the collisions work (among other things, such as rendering). I think it is needed whenever moving and positioning objects in between frames, especially if the object is being moved multiple times and checked for collisions before the next render.

     In the case of my rewind collision check, I never actually wanted to render the object in the rewound position -- I just needed the object to be in the correct state so that babylon could do the math for me. For me that meant copying its original position, moving it to a new position, invoking computeWorldMatrix, doing the intersection test, and then restoring it to the original position. While not identical, this comes out somewhat similar to the notion of moving an object (or a copy of that object) until it collides, uncolliding it, and then having it in the uncollided position, all before actually rendering anything.

     Here's the PG without scene.render() in the collision checks: https://playground.babylonjs.com/#1UK40Z#8

     If I understand what you have made, then this in theory will never turn the torus white.
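     In code form, the check looks roughly like this (a sketch; the names are illustrative):

         // test a mesh for intersection at a rewound position without ever rendering it there
         function intersectsAtRewoundPosition(mesh, rewoundPosition, otherMesh) {
             const original = mesh.position.clone()
             mesh.position.copyFrom(rewoundPosition)
             mesh.computeWorldMatrix(true)                    // refresh the world matrix by hand
             const hit = mesh.intersectsMesh(otherMesh, true) // precise intersection test
             mesh.position.copyFrom(original)                 // restore; the rewound state is never rendered
             mesh.computeWorldMatrix(true)
             return hit
         }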
  15. It looks awesome!! I took your advice and tried to make sure that babylon was the same version everywhere I was using it. I'm not sure what I really did, but somewhere along the process of trying to swap 3.3 for 4.0, and then undoing it, little things started working. In the end I made a new custom deploy of babylon 4.0, and then I copied and pasted *just* the water material from that, directly above my own code. This gave the water reflections, but no animation. I couldn't really figure out a way to get 4.0 running with NullEngine (I'll just wait until it is released on npm, and then untangle things).

     As for the water animation, that part does make sense to me now. I was missing this:

         engine.runRenderLoop(() => {})

     My previous code only called scene.render(), which might explain why some pg code doesn't work the same for me... I'm not sure what invoking runRenderLoop with an empty function does, but I'm guessing it triggers some important core-level update logic..? In this case, passing variables to the water shader, maybe.

     I also learned that I *can* setGlobalVolume if I do it inside of setTimeout -- I'm not sure what mess I've made, but I'm either accidentally overwriting the audioEngine or things aren't initialized quite when I think they are.
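     (For reference, the usual form drives the render from inside the loop; a guess consistent with the above is that the loop is where the engine measures its per-frame delta time, which time-based materials like the water shader read:)

         // standard Babylon frame loop
         engine.runRenderLoop(() => {
             scene.render()
         })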