  1. As far as your multiple animations go, I believe you may be able to live with what you can actually get out of Blender. I'm not sure how much you have used the NLA editor, but it allows you to manage and combine animation sequences. Specifically, in the context of armature animation, you can first use the Dope Sheet to define (record) different actions. The quick tutorial is as follows:
     1. Put your armature in Pose Mode.
     2. Set your main timeline keying set to LocRotScale, or LocRot, depending on the complexity of the armature rig.
     3. Split your screen (Dope Sheet editor on one side, 3D View on the other).
     4. Set the mode dropdown to Action in the Dope Sheet view.
     5. The next dropdown is where you store your different pose actions.
        a. I recommend your first one being your rest pose, so you always have a good starting point.
        b. But at least create a new action, even if you disagree with me.
     6. Turn on the properties panel in the 3D View if it is not on already (press N to toggle it).
     7. Begin posing each frame as you normally would, except instead of adding keyframes to the main timeline:
        a. In the properties panel, go through each of the Location (X, Y, Z) and Rotation (X, Y, Z) fields and hit the I key.
        b. Over in your Dope Sheet view, you should start seeing the keyframes pop up.
     8. After you are done, you can use the NLA editor to sequence these actions in the main timeline.
     9. Then you can do what these guys are saying about managing the timeline at different time ranges.
        a. This might not be the most desirable approach, but at least you can manage it by context this way.
           i. For example, you may want your first action to be "whistling" and your second action to be "walking".
           ii. Based on game properties, you can then set the time ranges such that frames 1-10 = whistling, 15-35 = walking, 40-60 = walking+whistling, etc.
           iii. Then you can almost use a property bit flag to get all possible scenarios and call the proper frame range.
        b. The cool part is that you can still manage all of this in Blender and get one big timeline in BabylonJS, without having to rewrite/repose the same actions each time.
     I'd like to share some blend files with you, but I can't seem to figure out how to attach them. Anyway, I hope this added value to your pursuit. Kind Regards
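The bit-flag bookkeeping described in step 9 can be sketched in plain JavaScript. This is a minimal sketch, not code from the thread: the flag constants are assumptions, and the frame ranges are the ones quoted above. In a real Babylon.js scene you would hand the chosen range to scene.beginAnimation (the skeleton and scene variables in the comment are hypothetical).

```javascript
// Sketch: map game-property bit flags to the frame ranges authored
// on Blender's single big timeline. Flag values are made up for illustration.
const WHISTLING = 1; // bit 0
const WALKING = 2;   // bit 1

// One entry per flag combination, matching the layout described above.
const frameRanges = {
  [WHISTLING]: { from: 1, to: 10 },            // whistling only
  [WALKING]: { from: 15, to: 35 },             // walking only
  [WHISTLING | WALKING]: { from: 40, to: 60 }, // walking + whistling
};

// Look up the frame range for the current combination of flags.
function rangeFor(flags) {
  return frameRanges[flags] || null;
}

// In Babylon.js you would then play the range, e.g.:
// const r = rangeFor(WHISTLING | WALKING);
// if (r) scene.beginAnimation(skeleton, r.from, r.to, true);
```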
  2. I figured this out. Where I had BABYLON.Condition, I replaced it with new BABYLON.Condition(donut.actionManager) and everything worked as expected. If anyone knows: from a performance perspective, which one is considered more taxing on the frame rate? From a newbie's perspective, these both seem like legitimate paths to the same end. The ExecuteCodeAction approach seemed a lot more instinctive (for me, anyway): just use an anonymous function to set/change the properties. The CombineAction approach seemed a lot more formal (explicitly/legally defined, so to speak). Thanks again for both contributions, you two.
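The structural difference between the two approaches can be modeled in plain JavaScript, independent of Babylon.js: ExecuteCodeAction is one trigger driving one callback, while CombineAction is one trigger driving a list of child actions. The classes below are stand-ins invented for this sketch, not the real Babylon.js implementations. Either way, the per-trigger work is a handful of property writes, so the frame-rate cost of choosing one over the other should be negligible next to the intersection test itself.

```javascript
// Stand-in models of the two action styles; not the real Babylon.js classes.
class ExecuteCodeActionLike {
  constructor(fn) { this.fn = fn; }
  execute(target) { this.fn(target); } // one trigger, one callback
}

class SetValueActionLike {
  constructor(prop, value) { this.prop = prop; this.value = value; }
  execute(target) { target[this.prop] = this.value; } // one property write
}

class CombineActionLike {
  constructor(children) { this.children = children; }
  // One trigger fires every child action in the same instant.
  execute(target) { this.children.forEach((c) => c.execute(target)); }
}

const donut = { scaling: 1.0, emissiveColor: "gray" };

// Callback style: one anonymous function mutates both properties.
new ExecuteCodeActionLike((t) => {
  t.scaling = 1.5;
  t.emissiveColor = "red";
}).execute(donut);

// Declarative style: the same effect as two SetValue children
// under one combined trigger.
new CombineActionLike([
  new SetValueActionLike("scaling", 1.2),
  new SetValueActionLike("emissiveColor", "red"),
]).execute(donut);
```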
  3. Based on the code from Vousk-prod., below is my interpretation of it, given the original example of intersection. I tried to run it, but it failed to allow the animation to continue; am I missing something here? Thank you for taking the time to help me understand.

     donut.actionManager = new BABYLON.ActionManager(scene);
     donut.actionManager.registerAction(
         new BABYLON.CombineAction(
             { trigger: BABYLON.ActionManager.OnIntersectionEnterTrigger, parameter: sphere },
             [
                 new BABYLON.SetValueAction(
                     BABYLON.ActionManager.NothingTrigger,
                     donut.material, "emissiveColor", BABYLON.Color3.Red()
                 ),
                 new BABYLON.SetValueAction(
                     BABYLON.ActionManager.NothingTrigger,
                     donut, "scaling", new BABYLON.Vector3(1.2, 1.2, 1.2)
                 )
             ],
             BABYLON.Condition
         )
     );
     donut.actionManager.registerAction(
         new BABYLON.CombineAction(
             { trigger: BABYLON.ActionManager.OnIntersectionExitTrigger, parameter: sphere },
             [
                 new BABYLON.SetValueAction(
                     BABYLON.ActionManager.NothingTrigger,
                     donut.material, "emissiveColor", BABYLON.Color3.Gray()
                 ),
                 new BABYLON.SetValueAction(
                     BABYLON.ActionManager.NothingTrigger,
                     donut, "scaling", new BABYLON.Vector3(1.0, 1.0, 1.0)
                 )
             ],
             BABYLON.Condition
         )
     );
  4. I'm posting amorgan's code reference here so I can see both methods side by side.

     // Intersections
     donut.actionManager = new BABYLON.ActionManager(scene);
     //donutMat.actionManager = new BABYLON.ActionManager(scene);
     donut.actionManager.registerAction(new BABYLON.ExecuteCodeAction(
         { trigger: BABYLON.ActionManager.OnIntersectionEnterTrigger, parameter: sphere },
         function () {
             donut.scaling = new BABYLON.Vector3(1.5, 1.5, 1.5);
             donutMat.emissiveColor = BABYLON.Color3.Red();
         }
     ));
     donut.actionManager.registerAction(new BABYLON.ExecuteCodeAction(
         { trigger: BABYLON.ActionManager.OnIntersectionExitTrigger, parameter: sphere },
         function () {
             donutMat.emissiveColor = new BABYLON.Color3(0.5, 0.5, 0.5);
             donut.scaling = new BABYLON.Vector3(1, 1, 1);
         }
     ));
  5. OK, that was amazing, and way more efficient from a coding perspective! Thank you for the help!
  6. OK, I was playing around a little more, and it seemed I was able to go one step further by registering half of the actions on the sphere.actionManager. So I ended up keeping the donut.actionManager scale as the example had it, and placing the donutMat.emissiveColor change on the sphere.actionManager (both for the same triggers: BABYLON.ActionManager.OnIntersectionEnterTrigger and BABYLON.ActionManager.OnIntersectionExitTrigger). You can see the implementation here: This worked! It seemed like a workaround, though. In this particular case it worked because I only had two actions to pull off and the OnIntersection(Enter/Exit)Trigger was not being used for the sphere. If it were, or if I had had any more actions, I don't think this would have worked without creating something very non-intuitive. I stumbled upon this: However, I personally could not extrapolate what the code should look like from it. I was hoping to get some direction/examples on the CombineAction class, as it sounds like the right thing to be using (ignorantly stated, assuming that this is what the class is for). Any help is greatly appreciated. Kind Regards
  7. I know you marked this as answered; however, I am very interested in the topic, and I was wondering whether a dynamicTexture along with drawText could be implemented alongside JavaScript event listeners that place the focus on hidden form items. These would not even need to be truly hidden (eg. = 'hidden';), but could instead have a z-index less than that of the canvas element, hiding them while keeping them usable. If I remember correctly (it's been years since I've messed with it), there seemed to be good PHP tooling via ImageMagick, where you could pick button styles, add text, change fonts, etc. Then all that really needed to happen was that the button was served up via a downloadable file, or rendered in the webpage after submit. Example: I'm guessing this could be served up more efficiently if you could actually use some AJAX methods in the hidden HTML elements, although great care would have to be taken based on the readiness of the calculated image stream. Anyway, this post really intrigued me, and I thought it was worth getting more people thinking about it, and about how to pull it off in creative ways, until the standards solidify to incorporate this kind of thing. Kind Regards
  8. The code below is a snippet from the following location: My question should clarify what I'm asking, even though the above code works (i.e., it doesn't throw an error). What I was trying to accomplish was taking both actions and causing them to fire at the same time. So far the above code sets both properties, but only by taking turns, alternating each time the intersection occurs. My question is: is there a way to change both "scaling" and "emissiveColor" during the same intersection instance? Thanks in advance for the help.