fenomas

  1. Hey, it looks like your two questions never really got answered:

     1. You can prevent physics bodies from rotating with:

            player.physicsImpostor.physicsBody.fixedRotation = true

        That's usually what you want for things like player characters.

     2. Manually translating (moving) things in a physics simulation usually breaks the simulation. The best two options for moving characters around are:

        • Applying impulses or forces to the player body
        • Attaching the player body to another (invisible, non-colliding) body with a joint (like a spring or a distance joint), and then moving the invisible body around manually (by changing its position values)

        They each give somewhat different results; it depends what you want.

     I don't know what's going on with your other issues, as most of the links in the thread seem not to work, but anything that you can reproduce in a playground link should be easy to address.
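     A minimal sketch of those two options (assuming the Cannon.js plugin; the impulse vector here is illustrative):

         // 1. Lock rotation on the player's physics body (Cannon.js):
         player.physicsImpostor.physicsBody.fixedRotation = true;
         player.physicsImpostor.physicsBody.updateMassProperties(); // Cannon needs this after changing fixedRotation

         // 2. Move the player by applying an impulse at its center:
         player.physicsImpostor.applyImpulse(
             new BABYLON.Vector3(1, 0, 0),    // direction and strength
             player.getAbsolutePosition()     // application point
         );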
  2. Yeah. The key to understanding physics engines - the thing the tutorials don't mention - is that real physics engines are about 10% physics and 90% constraint solving. That is, it's really easy to move a thing around and solve collisions between it and static objects - that's why "mesh.moveWithCollisions" can be done in a few lines of code. But once you have two things moving around, solving one collision can create another - at which point the problem stops being physics and starts being constraint solving.

     And this is why moving things around is best done with joints. When you move something around with a joint, you're giving the constraint solver a constraint, which of course it knows how to handle. When you manually change a body's position or velocity, you generally violate constraints that the solver thinks it has already solved, which is something it's not well-equipped to handle.
  3. Last one: if you do this, you'll effectively be rolling your own ... ahem, implementing your own joint constraint. (Which you can do, of course, but the scene will be more stable if you leave constraints to the engine - it can solve them more accurately because it knows about all the constraints in the scene, whereas your thrust-management logic would just be trying to solve one constraint in isolation.)
  4. Let me try to de-mystify this. Imagine you're driving a car that only moves when you close your eyes. The only way to drive such a car would be in small steps - first you'd close your eyes and let the car move forward a bit, then you'd open them, pausing the car, to correct your steering, then close them again, and so on. The implications of driving a car like this would be:

     • Opening your eyes more often makes it easier to ensure you stay on the road
     • But the more often you open your eyes, the more the car is stopped, so it takes longer to get anywhere
     • On a straight road you can get away with closing your eyes longer, but on a curvy road you'll need smaller steps

     Stepping a physics engine works exactly the same way - first the engine blindly moves the simulation forward a bit, then it pauses, opens its eyes, and tries to correct anything that's gone off track. And as in the analogy:

     • Smaller timesteps always make the simulation more accurate
     • But smaller timesteps mean you must step the engine more often (assuming you want it to run at real-time speed)
     • Simpler simulations can get away with longer timesteps; more complex scenes will need smaller ones

     Hope that helps take the magic out of things! (Note though - all that theory is just for explanation. As a practical matter, since Babylon steps the physics once per render, almost everyone should almost always use 1/60 as their timestep, or else they'll effectively be running their physics in slow motion or fast-forward. So when advising beginners, I'd encourage you to just tell people to set the timestep to 1/60 (if it's not set that way by default) and never ever touch it.)
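     For reference, pinning the timestep looks something like this (a sketch; assumes physics was enabled with the Cannon.js plugin):

         // Enable physics, then fix the timestep at 1/60 so each render
         // advances the simulation by one frame's worth of time:
         scene.enablePhysics(new BABYLON.Vector3(0, -9.81, 0), new BABYLON.CannonJSPlugin());
         scene.getPhysicsEngine().setTimeStep(1 / 60);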
  5. I think you might want to steer clear of doing that - overwriting the mesh's rotation just makes the mesh render un-rotated, regardless of whether the simulation underneath thinks it's rotated. So if the simulation goes wonky later, that line may make it harder to figure out why. (Of course if the character in question is a sphere or something you may not care if the model rotates, but...)
  6. Whoa, a shoutout! But yeah, if you need to control a simulation (other than by applying forces and impulses), you almost always want to use joints rather than manually changing a body's position or velocity. Like, if you want to drag a body around with the mouse, you should make the mouse move around a static anchor that doesn't collide with anything, and then attach the anchor to the body with a joint.

     Incidentally Jack, if you want to keep a character from rotating then you probably want:

         player.physicsImpostor.physicsBody.fixedRotation = true

     (or whatever the current syntax to access a physics body is).
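     Roughly, that mouse-drag setup could look like the sketch below (assumes the Cannon.js plugin and that `target` is the mesh being dragged; the anchor shape, joint type, and distances are illustrative):

         // Invisible static anchor for the mouse to move around:
         var anchor = BABYLON.MeshBuilder.CreateSphere("anchor", { diameter: 0.1 }, scene);
         anchor.isVisible = false;
         anchor.physicsImpostor = new BABYLON.PhysicsImpostor(
             anchor, BABYLON.PhysicsImpostor.SphereImpostor, { mass: 0 }, scene);
         anchor.physicsImpostor.physicsBody.collisionFilterGroup = 0; // Cannon.js: collides with nothing

         // Attach the draggable body to the anchor with a joint:
         var joint = new BABYLON.DistanceJoint({ maxDistance: 0.5 });
         anchor.physicsImpostor.addJoint(target.physicsImpostor, joint);

         // On pointer move, reposition the anchor; the solver drags the target along:
         scene.onPointerObservable.add(function (info) {
             if (info.type !== BABYLON.PointerEventTypes.POINTERMOVE) return;
             var pick = scene.pick(scene.pointerX, scene.pointerY);
             if (pick.hit) {
                 var p = pick.pickedPoint;
                 anchor.physicsImpostor.physicsBody.position.set(p.x, p.y, p.z);
             }
         });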
  7. @Temechon Gah. I think I know what's going on, but I don't have a typescript environment set up to confirm, and it's really hairy to track down. I *think* what's happening is: __extends, and some other helper functions, are defined in boilerplate that's used by typescript projects, and when people use webpack to bundle several typescript projects together, they use some kind of configuration that prevents the boilerplate from being defined more than once. In your case, when you build a project that requires in the inspector bundle as a dependency, your root project is probably typescript, so the boilerplate gets included at the beginning somewhere. In my case, my base project is plain Javascript, and I pull in both Babylon and the Inspector bundle as dependencies. The Babylon bundle includes the __extends boilerplate, but the root project and the inspector bundle don't - so the inspector bundle throws. If I open up my Babylon bundle and hack in the line:

         window.__extends = __extends

     somewhere appropriate, then the inspector bundle finds the helper function and the bug goes away.

     I don't know what the ideal fix is here - I don't know how typescript is meant to be bundled, and the whole thing is making my head hurt. It might be simplest to just change the inspector's bundle config to include the boilerplate - it's not that much file size, and production code shouldn't need to load the inspector anyway.

     With all that said, now that the code runs, trying to use popup mode still throws:

         scene.debugLayer.show({ popup: true })

     Exception: TypeError: Cannot read property 'document' of null
         at t.openPopup (http://www.babylonjs.com/babylon.inspector.bundle.js:408:4811)
         at new t (http://www.babylonjs.com/babylon.inspector.bundle.js:408:315)
         at DebugLayer._createInspector (webpack:///./src/external/babylon.js?:61456:35)
         at HTMLScriptElement.script.onload (webpack:///./src/external/babylon.js?:6478:21)

     I don't know if that's related to the packing setup or not. All this effort to get an FPS meter
  8. When I invoke "scene.debugLayer.show()" using the current nightly BJS builds, as soon as the inspector bundle loads and runs I get an error like:

     Exception: ReferenceError: __extends is not defined
         at http://www.babylonjs.com/babylon.inspector.bundle.js:408:12893
         at INSPECTOR (http://www.babylonjs.com/babylon.inspector.bundle.js:408:13544)
         at Object.<anonymous> (http://www.babylonjs.com/babylon.inspector.bundle.js:408:13575)
         at __webpack_require__ (http://www.babylonjs.com/babylon.inspector.bundle.js:21:30)
         at Object.<anonymous> (http://www.babylonjs.com/babylon.inspector.bundle.js:49:19)
         at __webpack_require__ (http://www.babylonjs.com/babylon.inspector.bundle.js:21:30)
         at http://www.babylonjs.com/babylon.inspector.bundle.js:41:18
         at http://www.babylonjs.com/babylon.inspector.bundle.js:44:10

     If I grab the inspector bundle file out of the github "/dist" folder and src it in my html locally, I get the same error. Am I missing something?

     Edit: tagging @Temechon
  9. Neat demo! Note though, I don't mean accessing the vertex position values - I mean accessing the indices, or some other data that distinguishes one polygon from another within the same mesh. What I'd like to do is dynamically construct a mesh and have different parts of it textured with different tiles from a texture atlas. In other words, something like this: http://www.babylonjs-playground.com/#E4RBTW#1 -- if you imagine that model is one single mesh, and that the red and green textures are two different tiles coming from the same texture atlas, then that's what I'm going for. But I assume that to do this dynamically, I need to somehow set a parameter that tells the shader which polygons use which texture. (Like you did here, except in the general case.) Does that make sense? I have no idea how this would be done in practice.
  10. Ah I see, cool! Quick question - is it possible to access the vertex indices inside a shader? For my use case, it would be ideal to have one geometry use several different tiles out of the texture atlas. I suspect the most straightforward way to do this would be to set UV values and vertex ranges as parameters - for example, vertices (or triangles) 1-100 use one set of UV values, and vertices (or polys) 101-200 use a different set, etc. Is something like that possible to do on the shader side?
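      One way to express "these polygons use that tile" without touching vertex indices is a custom per-vertex attribute - an untested sketch, where the attribute name, shader path, and uniforms are all illustrative assumptions:

          // Tag each vertex with the atlas tile its polygon should sample:
          var tileIndices = new Float32Array(mesh.getTotalVertices());
          // ... fill tileIndices (e.g. vertices of "red" polys get 0, "green" polys get 1) ...
          mesh.setVerticesData("tileIndex", tileIndices, false, 1); // custom attribute, stride 1

          // Declare the attribute on a ShaderMaterial; the vertex shader passes it
          // through as a varying, and the fragment shader offsets its UV lookup by
          // tileIndex to select the right tile from the atlas:
          var mat = new BABYLON.ShaderMaterial("atlasMat", scene, "./atlas", {
              attributes: ["position", "uv", "tileIndex"],
              uniforms: ["worldViewProjection", "tileSize"]
          });
          mesh.material = mat;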
  11. @NasimiAsl Hey, thanks, and that looks very cool! To clarify a little though: when I say "for real content" I mean that in real content one would need a generalized shader, and would pass in UV offsets by setting parameters on a material or something, right? As opposed to hard-coding the offsets into the shader, in which case one would be dynamically creating new shaders very frequently. Similarly, I imagine that to use this effect in real content, one would generally also need the standard material features - specularity and so forth. I guess this would be done by forking the standard material and writing your code somewhere inside it, to replace the standard texture lookup logic. Is that generally the idea?
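      In code form, that "generalized shader" idea might look like this (a sketch; the shader path and uniform names are assumptions):

          // One ShaderMaterial reused for every tile; the atlas offset is a uniform,
          // so changing tiles never compiles a new shader:
          var mat = new BABYLON.ShaderMaterial("tileMat", scene, "./tile", {
              attributes: ["position", "uv"],
              uniforms: ["worldViewProjection", "uvOffset", "uvScale"]
          });
          mat.setTexture("atlasTexture", new BABYLON.Texture("atlas.png", scene));
          mat.setVector2("uvOffset", new BABYLON.Vector2(0.5, 0.0));  // which tile to use
          mat.setVector2("uvScale", new BABYLON.Vector2(0.25, 0.25)); // tile size within the atlas
          // The fragment shader would then sample something like:
          //   texture2D(atlasTexture, uvOffset + fract(vUV) * uvScale)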
  12. Hey, I'm very interested in this (getting a texture from a texture map to wrap) as a user, but I don't know what ShaderBuilder is, or what its implications are. Am I correct in thinking that ShaderBuilder is some kind of tool to make experimenting with custom shaders easier (in the playground and so on), and if you wanted to do something like this for "real content" you'd want to make a ShaderMaterial, the way it's done in the tutorials? If so, is there a version of this effect that's built that way that I could try to hack on and figure out what it's doing? If Babylon just knew how to do this as a feature, it would be dreamy...
  13. That should work, but I don't think it's doing what you asked for (bundling the image without loading it at runtime). I believe webpack will just replace that require statement with a URL pointing to wherever it intends to put the image, and then the image will get loaded normally.
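      That is, something like this (illustrative; the emitted URL depends on your webpack config):

          // webpack's file-loader replaces the require() call with the emitted file's URL:
          var url = require("./background.png");     // becomes e.g. "/assets/9a8b7c.png" after bundling
          var tex = new BABYLON.Texture(url, scene); // still an ordinary HTTP load at runtime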
  14. It looks to me like wingy's API should work as long as you don't create an Image object around the target of your require statement. In other words:

          var tex = BABYLON.Texture.LoadFromDataString(
              "background.png",
              require('url-loader?limit=10000!./background.png'),
              scene)

      I assume this will blow up if the image is over the size limit, of course.
  15. I'm just using the packaged scripts from the github repo - in "/dist/preview release/babylon.js" and so forth. Are your changes not in those builds yet, or where should I be looking?