timetocode

Members
  • Content Count
    63
1 Follower

About timetocode

  • Rank
    Advanced Member

Contact Methods

  • Website URL
    http://timetocode.tumblr.com/
  • Twitter
    bennettor

Profile Information

  • Gender
    Not Telling
  • Location
    Seattle, WA
  • Interests
    game development, pixel art, procedural content generation, game music

  1. In a Blender armature modifier there is a setting called "Preserve Volume" which basically stops meshes from collapsing on themselves in some bone-related animations. I'm a newbie animator, but I'm aware that there is more to joints + deformations than just checking the Preserve Volume box and hoping things look good. This is particularly relevant given that some game engines, such as our beloved BabylonJS (also Unity), do not have this feature. So the question here is really: what techniques do people recommend for keeping joints rotating/bending and looking nice in Babylon?

    An extreme example of volume being preserved versus not preserved occurs when twisting a model comprised of two cubes and two bones. With volume preserved we get what appears to be the twisting of a square column. Without volume preserved, we get twisting that collapses in the vicinity of the joint, creating something like an hourglass. If the twisting motion continues, the central joint gets squished completely flat.

    Something similar occurs in other twists such as turning a head, in certain shoulder rotations, in internal/external leg rotation at the hip, and probably a bunch of other scenarios. A milder deformation occurs in movements that are mostly single-axis rotations, such as a knee or elbow bend.
  2. Thanks, that fixed everything! It looks like the mesh and rig itself had been made in a way where a scale transform on the bones had not been applied, and applying it scrambled the whole thing. So I've now made a few custom meshes (essentially just chains of bones and cubes) with transforms applied, and I can certainly play the test animations by name. I have a follow-up question regarding rotations that preserve volume, but I'll make a new topic for that. Edit: new thread
  3. Looks like it has something to do with applying transforms. I went into object mode and hit Ctrl + A and then "applied" transforms for location, rotation, and scale. I have no idea what the full implications are... or rather I know that it resets location and rotation to 0 and scale to 1, but I don't know what that means for the rig. The end result for me is pretty much scrambled (even the original is a mess now). HOWEVER, I think the mesh now looks the same in Blender and Babylon.js, so I think this is part of the equation for how to get these things working. Presumably there was a way to do this that didn't scramble the model. The mesh + rig I was testing is a freely available "Minecraft Simple Rig" that comes from https://sites.google.com/view/boxscape-studios . Perhaps someone could shed some light on how to make a rig + model from scratch that plays nicely with Babylon, or how to convert an existing mesh + rig + animation. (Maybe I just applied one transform too many..) I'm going to be making these things from scratch anyways, so even if the answer is a lot of work, I'd like to know.
  4. Thanks, that got things moving, which is a big improvement. My animation, however, looks very different. Any idea what settings I'm missing? In Blender vs. in Babylon: (screenshots)
  5. How do I play animations created in the Blender action editor? I'm trying to organize walk, run, idle, etc. animations and start/stop them by name.

    Also, is there a particularly good way to import Blender objects that don't have any world position when the game begins? I noticed there are many different ways to load things. Objects in this game are spawned in after receiving a network message, so there's no pre-existing concept of a scene (or my brain just isn't used to thinking about scenes). Even the terrain itself is chosen by the server. Which loading strategy should I employ?

    Blender scene / action editor from under the dopesheet: (screenshots)

    Loading and positioning the character (exported with the Blender/Babylon exporter 5.6.4):

    ```js
    BABYLON.SceneLoader.LoadAssetContainer("./", "blocky.babylon", this.scene, function (container) {
        console.log('CONTAINER', container)
        var meshes = container.meshes
        var materials = container.materials
        // manually position the object somewhere that I can see it when the game starts up
        let whatever = meshes[0]
        whatever.position.y = 21
        whatever.position.z = 4
        container.addAllToScene() // can I just get rid of this somehow?
        // would prefer something like scene.add(new BlockyEntity()), executed later
    })
    ```

    When inspecting the loaded `container` object, none of the animations arrays are populated. There does appear to be a `container.skeletons[0]._ranges` that has properties matching the names of my animations (crazy, walk, walkdss, etc.). I'm not sure how to play them. Here is the character (it appears to be hovering on the last frame of one of the animations, or maybe this is just the pose it's in in Blender, not sure). Thanks
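    For reference, a minimal sketch of how ranges like those on `container.skeletons[0]` can be started by name, assuming the exporter stored each Blender action as a skeleton animation range (the 'walk' name and 'blocky.babylon' are just the examples above):

    ```js
    BABYLON.SceneLoader.LoadAssetContainer("./", "blocky.babylon", scene, function (container) {
        container.addAllToScene()
        var skeleton = container.skeletons[0]
        // each Blender action should appear as a named animation range on the skeleton
        console.log(skeleton.getAnimationRanges())
        // play the 'walk' range, looping, at normal speed
        skeleton.beginAnimation('walk', true, 1.0)
        // or look the range up and drive it through the scene:
        // var range = skeleton.getAnimationRange('walk')
        // scene.beginAnimation(skeleton, range.from, range.to, true)
    })
    ```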
  6. timetocode

    Pixel perfect click

    I think I'm going to stick with the extract.pixels technique; it is working nicely. With the broadphase and the caching of pixel data, the performance is extremely fast.
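    A minimal sketch of that caching, assuming pixi v4 and that `renderer` and the `pieces` array already exist (stashing the data on a `pixels` property is just this example's convention):

    ```js
    // extract the RGBA bytes once per sprite and keep them on the piece,
    // so each click only costs array lookups instead of a GPU readback
    pieces.forEach(piece => {
        if (!piece.pixels) {
            piece.pixels = renderer.plugins.extract.pixels(piece)
        }
    })
    ```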
  7. timetocode

    Pixel perfect click

    I tried both ways. I couldn't get the bezier curves to work fully... it was easy enough to draw them around the pieces by translating the context, but that then requires translating the context again for the collision checks, which became a bit of a mess. Also, Path2Ds aren't real javascript objects with clear-cut properties.. they seem to have automagic properties that are part of the dom or the context or something else that is not conventionally accessible.

    Here's the pixel approach via pixi extract (v4):

    ```js
    let pixels = renderer.plugins.extract.pixels(someSprite)
    ```

    That gets the pixels from a sprite, which I cached on every sprite. Then here is the pixel perfect collision check, including a broadphase:

    ```js
    // x, y are the mouse coordinates converted to world space
    pieces.forEach(piece => {
        // broad phase check, where the collider is an SAT Box that fully contains the sprite
        if (SAT.pointInPolygon(new SAT.Vector(x, y), piece.collider)) {
            // how far from the top left corner of the sprite we clicked
            let px = Math.floor(x - piece.x)
            let py = Math.floor(y - piece.y)
            // use the sprite.width here
            let width = piece.width
            // convert the 2D x,y to a 1D index
            let index = px + py * width
            // get the alpha channel of the pixel (*4 because RGBA; alpha is the 4th byte)
            let alpha = piece.pixels[index * 4 + 3]
            // if not fully transparent, then this object is under the mouse
            if (alpha !== 0) {
                piecesAtPosition.push(piece)
            }
        }
    })
    // skipped: logic to choose which of the pieces is closest to the mouse if
    // multiple are under the pointer
    ```

    Haven't tried rotation yet, as it's not part of the launch feature set. Presumably rotating the sprite, creating a render texture, and then overwriting the pixels array with the pixels from that render texture would enable pixel perfect collisions again.

    I'd like to note that while I'm using SAT for collisions (this is a multiplayer game and I have other collisions that are server-only), most people would do fine with just pixi collisions for the broad phase. It would be somewhat easy to add this to all of pixi in general.. though it's really just for pixel-perfect picking, not for pixel-perfect collisions. Not sure if people would be interested or not. Rotation would be the hard part.

    Zooming/panning turned out to be fine with this method so long as the mouse coordinates are converted to world coordinates. For me this turned out to be a lot of math b/c the zoom is implemented by scaling the stage, pan by moving the stage, and then there is a "low graphics" mode that locks the canvas to a low resolution and then fills the screen via css. All of this needed to be applied to the regular mouse x,y to convert it to a position in the game world.

    Attached is a gif showing clicking through the gaps in one jigsaw piece and grabbing the piece behind it, and also clicking in the hole of one of the pieces and missing it.
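    For reference, a minimal sketch of the screen-to-world conversion, assuming the zoom and pan are implemented as scale and position on a pixi `stage` container (the css-scaled "low graphics" mode would still need its own correction on top of this):

    ```js
    // pixi v4: the interaction plugin tracks the pointer in renderer (screen) coordinates
    const screenPos = renderer.plugins.interaction.mouse.global

    // toLocal applies the inverse of the stage's world transform,
    // undoing the zoom (scale) and pan (position) in one step
    const worldPos = stage.toLocal(screenPos)

    // worldPos.x / worldPos.y can now feed the broadphase + pixel check above
    ```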
  8. timetocode

    Pixel perfect click

    Anyone have recommendations for getting pixel perfect hit (click) detection for some oddly shaped jigsaw puzzle pieces? I've already got non-pixi collision detection via SAT.js rectangles, but as jigsaw pieces are pretty irregular, I'd like to use the SAT Box as a broadphase and then do a pixel perfect check. I have a few ideas, but I'm not sure if they are overkill.

    One idea, since the jigsaw pieces are cut with bezier curves, is to export the bezier data along with the jigsaw puzzle spritesheets and then recreate the paths in a regular html canvas. Canvas has an isPointInPath function that sounds like it can tell if the mouse is in the bounds of the puzzle piece's bezier path... tho I've never used this on a complex multi-curve object, so I'm not certain. I also have quite a bit of pixi-level scaling and zooming going on, so there would be a lot of math. There's no rotation yet (and there might not ever be), but I guess these paths would hypothetically be reasonable to rotate. It also sounds like the path would need to be remade frequently, because in this game it is possible to pan/zoom, and while the pixi sprites will be fine, these canvas-bound paths won't be coming along for the ride... they'll pretty much need to be re-pathed near where the player clicks if the view has changed at all.

    A more pixi-centric idea is to use the SAT Box as a broadphase, and then convert the relative position of the mouse over the jigsaw piece to the coordinate of an exact pixel within the pixi sprite. So for example if the mouse was at 50,50, and the sprite was at 45,45 and was 20x20 pixels big, then some amount of math can deal with the offsets, convert the 2D coordinate to a 1D index, and tell which pixel of the sprite is under the mouse.... and then check the alpha...? I'm guessing I should cache the pixel data per sprite. How do I do this in a way that works both with the webgl and canvas renderers?

    Are there other good ways? Just having a non-canvas bezier isPointInPath would fix everything too, if anyone happens to know a lib that does that. Thanks
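    To make that offset math concrete with the numbers above: the mouse at (50,50) over a 20x20 sprite at (45,45) lands on local pixel (50-45, 50-45) = (5,5), the 1D index is 5 + 5*20 = 105, and in an RGBA byte array the alpha of that pixel sits at index 105*4 + 3 = 423.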
  9. timetocode

    Streaming models for large environment

    An environment can be broken into tiles or chunks. I've only done this in two dimensions, but given that a realistic map exists almost entirely along the horizontal plane, I think the same logic from a 2D game can apply.

    In 2D games it is common to have tiles, and then we can convert the player's x,y coordinates (x,z if in 3D) to find the tile coordinates. The tiles are just in a 1D or 2D array. Usually it's a 1D array, because javascript arrays are naturally 1D and some simple math can turn x,y into an index within the array. Some more math can then load the tiles that are +/- 100 units away from the player (or whatever, depending on the view).

    This approach can work for fairly large maps, but at some point the maps are so big that keeping the whole map in a single array in memory won't work. When maps get truly massive we need something conceptually similar to pagination, where we only work with a finite section of data that comes from something much larger or even infinite. In games this is sometimes called chunking (Minecraft popularized the term). A very large tile map (just an example) can be divided up into chunks that are 32x32 tiles. Converting the player's coordinates to chunk coordinates is just a matter of dividing their x,y by the size of a chunk (e.g. 32 tiles x 16 pixels, or 0.25 km). After we have the chunk coordinates, we can load or generate the chunk the player is in, as well as any neighboring chunks, until we feel we have enough map to satisfy the view distance (a sketch of this conversion follows below).

    In 3D this is going to be a similar process, but perhaps there are some cooler things at our disposal such as LoD. I don't know enough about babylon to guess at chunk sizes or what the general constraints would be.
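    A minimal sketch of that conversion, assuming square chunks measured in world units (CHUNK_SIZE, loadChunk, and the one-chunk radius are illustrative, not fixed numbers):

    ```js
    const CHUNK_SIZE = 32 * 16 // e.g. 32 tiles that are each 16 units wide

    // convert a world position to the coordinates of the chunk containing it
    function worldToChunk(x, z) {
        return {
            cx: Math.floor(x / CHUNK_SIZE),
            cz: Math.floor(z / CHUNK_SIZE)
        }
    }

    // load (or generate) the player's chunk plus its 8 neighbors
    function loadAround(player, loadChunk) {
        const { cx, cz } = worldToChunk(player.x, player.z)
        for (let dx = -1; dx <= 1; dx++) {
            for (let dz = -1; dz <= 1; dz++) {
                loadChunk(cx + dx, cz + dz)
            }
        }
    }
    ```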
  10. timetocode

    Keep child at original scale

    I'm not sure if this applies for what you're doing, but I've done:

    ```js
    player.scale.set(3, 3)
    let item = PIXI.Sprite.fromFrame('sword.png')
    player.addChild(item)
    player.itemInHand = item
    player.itemInHand.scale.set(1 / 3, 1 / 3)
    ```

    This example is pretending that I've scaled the player artwork up 3x and then added something called `itemInHand` to the player which I didn't want scaled up, perhaps because I had already drawn the object large enough. So the parent is scaled up, but one of its children is scaled down, just to undo the scaling. The general pattern is parent.scale.x = n; child.scale.x = 1/n; not that I've ever applied such a thing generally, only ever explicitly in corner cases. I'm also using a centered anchor (obj.anchor.set(0.5, 0.5)).
  11. To bundle sprites I use the free version of TexturePacker. The primary feature for me is that I can just put all of the sprites, as individual frames, into a folder and point TexturePacker at it. TexturePacker then automagically produces two files: a spritesheet.png where all the images exist together, and a spritesheet.json that basically describes the locations (x, y, width, height) of each image. Pixi can load this and then exposes the frames via `fromFrame('/goblin_north_attack0.png')` etc. So PIXI.Sprite.fromFrame('filename') and PIXI.Texture.fromFrame('filename') are two ways to use it. On a technical level these are no longer the files that they're named after; they're just rectangular selections from the generated spritesheet.

    The folders become part of the sprite names, so one way to organize, instead of goblin_north_attack0, is to make /goblin/north/attack/0.png, and then that whole string `/goblin/north/attack/0.png` is the name of the frame. I'm not super keen on naming something 0.png, but this can be nice if the source artwork has a lot of sprites... and really it can be named anything; all we truly need from it is that it has a number somewhere that we plan on incrementing to move through the animation.

    Multiple spritesheets are still an option, even doing the above. One just has to be sure not to name a sprite the same thing in both sheets (collisions are easily avoided via the folder trick, as the folder ends up being a prefix to everything in it). I usually end up with a spritesheet for all the characters, items, creatures, etc., and then another spritesheet for the game world if I'm using TiledMapEditor, which has its own format. I've written a barebones exporter from TiledMapEditor to pixi if anyone wants it.

    I'd say that this solves many problems, and it does, but it isn't without some tedious work b/c usually whatever drawing program I use or whatever art I purchase has its own ideas about the frame format. For example, most purchased sprites are sold as images containing frames in rows or grids. Most of the drawing programs I've used export animations as a row of sprites all right next to each other, or as a folder of individual frames that have been named automatically. They need to be cut up into individual frames and renamed to be put into a single sheet. If the volume of the work for cutting the images out and naming them is going to exceed an hour, I usually go fire up some ImageMagick tutorials. ImageMagick is a command line tool that can do things like split a grid-aligned spritesheet into multiple images. I end up doing this pretty much any time I purchase artwork... usually after a fair amount of opening the spritesheet in drawing programs and displaying grids to figure out the frame width and height, and sometimes some manual removal of areas that I don't plan on feeding into the scripts.

    I'd also like to note that the code I pasted earlier is very pseudo, and the frame-count checks are the kind of thing that's easy to get off by one. There's also usually some section about facing left or facing right, and then taking the set of animations (e.g. goblin_right_attack) and flipping them by setting the sprite's scale.x to -1 (only applies if left/right are just going to be mirrors of each other). Good luck!
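    A minimal sketch of loading such a sheet in pixi v4 and grabbing a frame by name (the file names, the frame path, and the `stage` variable are placeholders):

    ```js
    // load the packed sheet; the json references spritesheet.png internally
    PIXI.loader
        .add('sheet', 'spritesheet.json')
        .load(() => {
            // once loaded, every packed frame is addressable by its original name
            const goblin = PIXI.Sprite.fromFrame('goblin/north/attack/0.png')
            goblin.position.set(100, 100)
            stage.addChild(goblin)
        })
    ```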
  12. I'm not sure if this is improper, but I've always just skipped the whole animatedSprite thing and made animations manually. I also just put every single sprite together in one spritesheet and name them stuff like goblin_left_run0, goblin_up_attack6, etc. I put the whole game in a requestAnimationFrame loop, invoke update on every entity, and then it's up to that entity's own animation code to see if its graphics should change.

    Here's some pseudo code for the core loop that changes the frames, as well as idle, run, and attack. This would hypothetically be inside of a class, but I've written it just plain.

    ```js
    // defaulting to the 4 frame idle animation
    let animationName = 'idle'
    let frameNumber = 0
    let acc = 0 // a variable that accumulates time
    let maxFrame = 4
    let frameDelay = 0.5

    update(delta) {
        // accumulate time
        acc += delta
        // is it time for the next frame?
        if (acc > frameDelay) {
            acc -= frameDelay
            // next frame
            frameNumber++
            if (frameNumber >= maxFrame) {
                // loop back to the start of the animation
                frameNumber = 0
            }
            // change the graphics
            sprite.texture = PIXI.Texture.fromFrame(animationName + '_' + frameNumber + '.png')
        }
    }

    // change to the run animation
    run() {
        animationName = 'run'
        acc = 0
        frameNumber = 0
        maxFrame = 8
        frameDelay = 0.250
    }

    // change to the attack animation
    attack() {
        animationName = 'attack'
        acc = 0
        frameNumber = 0
        maxFrame = 12
        frameDelay = 0.180
    }

    // etc
    ```

    Maybe in the end it's not too different from the animatedSprite... except for very explicit control over the timings/loops, and no specific arrays of frames (though they're implied by the names of the frames in the spritesheet). Performance will be fine, as in the end all any of this does is display a small subsection of a texture that is already loaded -- one can easily have hundreds on a low end machine, and sometimes many thousands.
  13. What's the right way to load multiple models from blender (or anything), and then spawn them many times into the game world? Let's say I have a tree.obj and a house.obj... I want to load them without rendering anything yet, and then I'm going to programmatically generate terrain and add 200 trees and 20 houses.

    I've tried LoadAssetContainer, but I couldn't seem to invoke scene.createMesh or createMaterial on the data... all I could get to work was container.addAllToScene(). Also, is there a friendly name for the loaded model? I was able to position it by doing container.meshes[0].position.y = 30, but I assume there's another way to interact with it.

    So the questions are:
    • which file format for the blender objects (obj, babylon.. then one model per file..?)
    • which loader to use
    • how to load without spawning the object into the world
    • how to spawn multiple of the object into the world (clone..? then position+rotate?)
    • how to remove individual objects without unloading the source mesh

    Thanks for the help
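    For reference, a minimal sketch of one way the clone-and-spawn part can work, assuming a model has been loaded into an AssetContainer as above (the file name, path, and random positions are placeholders; createInstance is a cheaper alternative when every copy shares the same material):

    ```js
    BABYLON.SceneLoader.LoadAssetContainer("./models/", "tree.babylon", scene, function (container) {
        container.addAllToScene()

        // keep the loaded mesh around as a hidden template
        var source = container.meshes[0]
        source.setEnabled(false)

        // spawn independent copies wherever the terrain generator wants them
        for (var i = 0; i < 200; i++) {
            var tree = source.clone("tree" + i)
            tree.setEnabled(true)
            tree.position = new BABYLON.Vector3(Math.random() * 100, 0, Math.random() * 100)
        }
    })

    // later, disposing one copy does not unload the source mesh:
    // someTree.dispose()
    ```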
  14. timetocode

    Loading models in NullEngine

    Looks like the answer is that './' and '/' are not valid paths for either babylon or xhr (not sure which). This syntax will work in the client, but not on the server. I was able to get it working with the full path:

    ```js
    BABYLON.SceneLoader.LoadAssetContainer('http://localhost:8080/', 'cubio.obj', scene, (container) => {
        // etc...
    ```
  15. How does one load a *.obj in NullEngine?

    ```js
    const BABYLON = require('babylonjs')
    require('babylonjs-loaders') // mutates something globally
    global.XMLHttpRequest = require('xhr2').XMLHttpRequest
    ...
    BABYLON.SceneLoader.LoadAssetContainer("./", "cubio.obj", scene, function (container) {
        console.log('CONTAINER', container)
        container.addAllToScene()
    })
    ```

    Error:

    ```
    ...\node_modules\xhr2\lib\xhr2.js:206
    throw new NetworkError("Unsupported protocol " + this._url.protocol);
    ```

    Do I need to do something to xhr2 to teach it about obj?