About HoratioHuffnagel


  1. Sorry, yes - I am looking at the Babylon FBX exporter features. I guess the JSON format would be fine, but I am investigating the glTF loader. OBJ just doesn't support the features I need (animations, for one).
  2. In the supported features under Maya: https://doc.babylonjs.com/resources/maya#exported-features You'll note that the 3ds Max version of the same doc says it supports Morph Targets: https://doc.babylonjs.com/resources/3dsmax#exported-features I checked the Maya docs for both "morph" and "blend" (given the terminology is slightly different) and could find nothing. Cheers, HF
  3. Hi, Thanks for getting back to me - yes, that does seem feasible. We should be able to take the one mesh and modify multiple attributes, adjusting the weights accordingly. I may have misunderstood the purpose of multiple blend shape deformers. If that feature is not required to allow multiple tunable attributes on the same mesh, then it's probably fine. If I can independently tune eyes, mouth, nose etc. with just the one deformer, it should be fine. I was just a bit suspicious that, looking at the documentation for the Babylon FBX loader, there is no mention whatsoever of blend shapes or morph targets. I guess it's just out of date? Regards, HH
  4. Hi, I am a Maya user, and I am looking for a 3D engine that will support morph targets / blend shapes. From what I can see, the Maya exporter for Babylon does support this - however, the code suggests it only supports one blend shape deformer per mesh. Now, I need to be able to create a character editor that supports modifying multiple attributes (e.g. mouth width, eye spacing, facial structure). I also need to be able to use skeletal animation exported from Maya (as a separate file). Any thoughts on how I could achieve this? Thanks! HF
  5. Hey, Just wondering if anyone has had any success with the Oculus Go or similarly low-powered VR headsets. I am deliberately avoiding the higher-end Rift and Vive devices that require a dedicated PC. In particular, I am looking to make a game using something like Babylon, Three.js etc. - presumably packaged with Cordova or Cocoon or something - in VR that will run "natively" on this kind of device. But I am having a terrible time finding information on performance: what kind of scene size, vertex count etc. I can expect. I would like to tackle a unique idea I have for an escape-the-room game - low-poly environments would be fine - but I would need to be able to use the full Oculus Go input system (with 3DOF). Any links / reading material would be appreciated! Cheers!
  6. I have to ask though - I work in a studio that uses both WebGL and Unity (Unity only for native at the moment), and if you have been watching the recent Unity press releases on 'unity for small devices', you'll see a massive shift in the way their engine works, making its prospects for high-performance WebGL export a reality. So once this is released, what value do engines like Babylon offer? Unity seems to be promising performance improvements, tiny export sizes and a studio-quality editor. So now you have a viable pipeline in Unity, with AMAZING tooling and improved performance... the reason to use standalone 3D engines becomes harder to justify from a commercial point of view. To me, this is the biggest hurdle for open source engines: how to compete with tooling and asset pipelines like those offered by Unity. Once performance and bundle size are no longer issues, the evaluation becomes performance vs. tooling. And most studios, unless they are really pushing the platform, will probably pick tooling. And if you are indie, time is valuable - so again, tooling would likely win out.
  7. Hi, I am currently working on a project using PlayCanvas - however, any answers or pointers can be more generally WebGL-specific. I need to create an effect like the WoW or LoL AoE textures (you know, those circles that appear on the ground indicating where an area-of-effect attack will hit). However, I am having a little trouble finding documentation on how to achieve this. In Unity, for instance, you'd just use a Projector component. In WebGL I believe we need to do some kind of frustum intersection test, but I am not really sure how to achieve it. Any pointers would be highly appreciated. Cheers!
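  For the mostly-flat-ground case described above, you can often skip the frustum test entirely and just map each ground point into the AoE circle's texture space. A minimal pure-math sketch - the point/center/radius values are hypothetical, a real implementation would do this per-fragment in a shader, and a true projective decal (what Unity's Projector does) would instead transform positions by the projector's view-projection matrix:

  ```javascript
  // Map a world-space ground point into the [0,1] UV space of an AoE disc
  // centered at `center` with the given radius. Points outside [0,1] in
  // either axis are not covered by the decal texture.
  function aoeUV(point, center, radius) {
    const u = (point.x - center.x) / (2 * radius) + 0.5;
    const v = (point.z - center.z) / (2 * radius) + 0.5;
    const covered = u >= 0 && u <= 1 && v >= 0 && v <= 1;
    return { u, v, covered };
  }
  ```

  On uneven terrain this flat projection will stretch; that is when the full projector-matrix approach (or a mesh decal clipped against the ground geometry) becomes worth the extra cost.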
  8. Yeah - definitely not redundant. I had a look at the facet data route - but oddly, in the event that came back it always looked like the facet id was 0. I may have been looking at it wrong - but it was part of the confusion. Anyway - solved, and thanks again. Will make sure I get a demo up later today.
  9. I think I have it - thanks for the advice everyone. I will do up a playground to demonstrate the result once I have had some sleep. But the basics are:
     - Use moveWithCollisions to do movement
     - Set up AABB boxes as children of the obstacles (use these to detect collisions)
     - Set up OnIntersectionEnterTrigger actions on the player object
     - When a collision is detected, do a ray cast (up, down, left, right) to determine where the collision was, using intersectsMeshes
     - The object that is returned will let me get the normal using .getNormal
     @hunts Your solution also looks completely valid - I might give that a go too to see which is more performant. Thanks again. Cheers!
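  The four-direction ray cast in the steps above can be sketched in pure math against axis-aligned walls. This is an illustrative stand-in (the box layout and distances are made up), not the Babylon picking API itself - in a scene you would use something like scene.pickWithRay and read the hit normal:

  ```javascript
  // 2D slab test: distance along ray (ox, oz) + t * (dx, dz) to an AABB,
  // or null if the box is missed or behind the origin.
  function rayVsAABB2D(ox, oz, dx, dz, min, max) {
    let tmin = -Infinity, tmax = Infinity;
    for (const [o, d, lo, hi] of [[ox, dx, min.x, max.x], [oz, dz, min.z, max.z]]) {
      if (d === 0) { if (o < lo || o > hi) return null; continue; }
      let t1 = (lo - o) / d, t2 = (hi - o) / d;
      if (t1 > t2) [t1, t2] = [t2, t1];
      tmin = Math.max(tmin, t1);
      tmax = Math.min(tmax, t2);
      if (tmin > tmax) return null;
    }
    return tmin >= 0 ? tmin : null;
  }

  // Probe along +x, -x, +z, -z from the player; for an axis-aligned wall the
  // struck face's normal is simply the opposite of the winning ray direction.
  function probeWallNormal(ox, oz, box, maxDist) {
    const dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]];
    let best = null;
    for (const [dx, dz] of dirs) {
      const t = rayVsAABB2D(ox, oz, dx, dz, box.min, box.max);
      if (t !== null && t <= maxDist && (best === null || t < best.t)) {
        best = { t, normal: { x: -dx, z: -dz } };
      }
    }
    return best;
  }
  ```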
  10. I've had a look at that code before - it doesn't need or consume the face normals. The reason I need normals is that the user can move with continuous velocity in a single direction. When they hit a wall, they must 'reflect' off it - basically like Pong. The only problem is that, unlike Pong, I don't know the orientation of the wall ahead of time. While many games won't require it, it's common enough to need to know (e.g. a projectile hits a wall - what direction should particles spray out?).
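  The Pong-style bounce described here is one line of vector math once the wall's unit normal is known. A minimal sketch - the vectors are plain {x, y, z} literals for illustration, not engine types:

  ```javascript
  // Reflect a velocity about a wall's unit normal: v' = v - 2 (v . n) n.
  // n must be normalized, or the bounce will gain or lose speed.
  function reflect(v, n) {
    const d = 2 * (v.x * n.x + v.y * n.y + v.z * n.z);
    return { x: v.x - d * n.x, y: v.y - d * n.y, z: v.z - d * n.z };
  }
  ```

  For a frictionless, speed-capped game this preserves the velocity's magnitude exactly, which is why obtaining the normal is the only hard part.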
  11. Yes, I know it is the normal of the wall - the question is how do I obtain it? If I have a set of walls in an environment, I have successfully coded a number of ways to determine if there is a collision, using OnIntersectionEnterTrigger or intersectsMesh - but each time I find myself stuck with either not enough, or too much, information. Now, I know that is user error more than anything - it's just that I am up against the clock and have to get something working soon. I should also mention this is an environment that will have walls in a maze-like layout - think Pacman. The player can navigate freely through the environment and collide with walls, rebounding off them - and there will be no friction (so they need to rebound constantly). So this is where I am at:
      - I am choosing to use moveWithCollisions (so I can just write my own movement routines)
      - I have settled on OnIntersectionEnterTrigger to get the collision event
      At this point I have an enormous data structure (returned in the callback for OnIntersectionEnter) and I can't tell what is actually useful for getting the normal of the face I hit. I will also likely need the collision point, as I will need to reposition the player outside the wall (they may continue to apply force in the direction of the collision). One way would be to use a ray test - but it seems the collision event SHOULD have the data in there somewhere? Thanks
  12. Okay - so the secret is to rant on a forum; then the answer will come to you immediately. My error was to pass a new SetValueAction instead of ExecuteCodeAction. Wow - I really should have seen that! So - thank you very much guys - it's been a couple of days of digging through documentation, but you both gave me some excellent resources to read! However, I do now have one last question - I have this collision event, but I can't seem to find what would constitute the normal of the collision. Any ideas?
  13. [EDIT: I have added a new post, because I realised the error was wetware - I really should have seen this problem myself.] Thanks guys - this is somewhat helpful (it has shown me a lot of promising possibilities for sure!), but every path I follow isn't quite there. All I want is a collision event and a normal. This is insanely difficult to do, apparently... grr.
      let mesh1 = player.mesh;
      let mesh2 = wall1.mesh;
      mesh1.actionManager = new BABYLON.ActionManager(renderer.scene);
      mesh1.actionManager.registerAction(new BABYLON.SetValueAction(
          { trigger: BABYLON.ActionManager.OnIntersectionEnterTrigger, parameter: mesh2 },
          "scaling",
          new BABYLON.Vector3(1.2, 1.2, 1.2)));
      // This is directly from the documentation but results in the following error:
      //   babylon.js:25 Uncaught TypeError: t.split is not a function
      // This is being thrown from the registerAction method, by the looks of it.
      Does anyone have any idea what this error may be caused by?
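  Per the follow-up post above ("My error was to pass a new SetValueAction instead of ExecuteCodeAction"), a minimal sketch of the corrected registration - assuming the same hypothetical player/wall1/renderer objects as in the snippet, and untested against a live scene:

  ```javascript
  // Register an ExecuteCodeAction so the intersection trigger runs a callback
  // instead of setting a property. `player`, `wall1` and `renderer.scene` are
  // the same assumed objects as in the original snippet.
  player.mesh.actionManager = new BABYLON.ActionManager(renderer.scene);
  player.mesh.actionManager.registerAction(new BABYLON.ExecuteCodeAction(
      { trigger: BABYLON.ActionManager.OnIntersectionEnterTrigger, parameter: wall1.mesh },
      (evt) => {
          // evt.source is the mesh that fired the trigger; react to the hit here.
          console.log("player intersected", evt.source.name);
      }));
  ```

  The callback receives an ActionEvent, which identifies the triggering mesh but does not carry a collision normal - hence the ray-cast approach settled on in the later posts.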
  14. Hi, I am currently working on a game where I want to avoid using heavy physics systems - mainly because I have size restrictions on downloads, and the game requires a movement style like Asteroids, but with no friction and a capped speed - something that gets a little tricky with physics engines. So Babylon's inbuilt collision system looks perfect: I can move the player with my own calculations, and just detect the collisions and react to them. The only problem is I'm not sure how to get the intersection point (I need to reflect the character's velocity when it hits a wall).
      let result = player.mesh.intersectsMesh(wall.mesh, true);
      So for instance, this is great... but it only returns a boolean. I COULD cast a ray along the direction of travel, but that won't help if I clip the corner of an object. Any thoughts? Cheers.
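  One way around the corner-clipping problem with a single forward ray is a closest-point overlap test against each wall's AABB: it yields a contact point and normal even when only a corner is grazed. A minimal 2D sketch with made-up shapes (an engine's bounding boxes could supply the min/max values):

  ```javascript
  // Circle (player) vs. axis-aligned box (wall) in the ground plane.
  // Returns the contact point on the box, the outward normal, and the
  // penetration depth, or null when there is no overlap. The degenerate
  // case of the center sitting inside the box is left unhandled here.
  function circleVsAABB(cx, cz, r, min, max) {
    // Closest point on the box to the circle's center.
    const px = Math.max(min.x, Math.min(cx, max.x));
    const pz = Math.max(min.z, Math.min(cz, max.z));
    const dx = cx - px, dz = cz - pz;
    const d2 = dx * dx + dz * dz;
    if (d2 > r * r || d2 === 0) return null;
    const d = Math.sqrt(d2);
    return {
      point: { x: px, z: pz },
      normal: { x: dx / d, z: dz / d },
      depth: r - d, // push the player out along `normal` by this much
    };
  }
  ```

  On a face the normal is axis-aligned; on a corner it points from the corner toward the player, which is exactly what a Pong-style reflection needs there.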
  15. Thanks guys - firstly, sorry for the late reply (things got very busy!). Secondly, your replies make a lot of sense.