xtreemze

Generated audio tied to Babylon Audio Spatialization?


My question is about audio tied to a mesh, where volume and panning are handled by the BABYLON engine. This works great for wav and mp3 files, but lately I've wanted to use generators like tone.js to synthesize tones in JavaScript. BABYLON audio seems to require a URL to insert audio into the audio chain, but is there any way to use BABYLON's audio engine with tones generated by tone.js and the Web Audio API without first rendering them to wav or mp3?

On 6/11/2018 at 7:39 PM, JCPalmer said:

If tone.js can produce an AudioBuffer, or you can make one from all of the parts available, then you can supply this instead in the BABYLON.Sound constructor.  Things are not looking promising otherwise, though can you produce an Audio element?

Interesting that other values can be passed into the BABYLON.Sound constructor. In this case, though, if I'm not mistaken, the buffer would not be real-time sound, since it would be buffered/pre-rendered. My intention is to use variables that change every frame to drive specific synthesizer parameters, so recorded sound would not be an ideal solution. Still, there would be about 20 ms of latency to play with, so the sound might still be perceived as immediate.
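As a sketch of JCPalmer's AudioBuffer route: the sample generation below is plain JavaScript, while the `playToneOnMesh` wiring is a browser-only assumption based on the suggestion above that `BABYLON.Sound` can take an AudioBuffer, plus Babylon's `spatialSound` option and `attachToMesh`. Treat the names and options as a sketch, not a verified implementation.

```javascript
// Pre-render one second of a 440 Hz sine tone as raw samples. This is the
// "buffered / pre-rendered" route discussed above: fine for a fixed tone,
// not for parameters that change every frame.
function renderSine(frequency, durationSec, sampleRate) {
  const length = Math.floor(durationSec * sampleRate);
  const samples = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    samples[i] = Math.sin(2 * Math.PI * frequency * (i / sampleRate));
  }
  return samples;
}

// Browser-only wiring (hypothetical; assumes BABYLON is loaded and that the
// Sound constructor accepts an AudioBuffer, per the quote above).
function playToneOnMesh(scene, mesh) {
  const ctx = BABYLON.Engine.audioEngine.audioContext;
  const audioBuffer = ctx.createBuffer(1, 44100, 44100); // 1 channel, 1 s
  audioBuffer.copyToChannel(renderSine(440, 1, 44100), 0);
  const sound = new BABYLON.Sound("tone", audioBuffer, scene, null, {
    loop: true,
    spatialSound: true, // volume/panning handled by the engine
  });
  sound.attachToMesh(mesh);
  sound.play();
}
```

To change pitch you would have to re-render the buffer, which is exactly the limitation discussed above for per-frame parameter changes.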

12 hours ago, davrous said:

I'm going to have a look at tone.js to tell you, but at first glance it doesn't seem to be made to be easily integrated.

Thanks Davrous! I appreciate you taking the time to have a look.


You're right about the delay. Ideally, I would need to take their audio graph and connect it to the input node of the BABYLON.Sound object. I've seen in their samples and docs that I can provide my own audioContext, which is good, but I haven't seen how to get the output audio node to connect to my own audio graph. It seems we would need to modify the audio engines of both Babylon.js and tone.js to make this work.

Indeed, I don't have the option to customize the audio graph like that today, because I hadn't thought about such a use case.

For now, both libraries create an audioContext and connect to the audio destination (the speakers). To make what you'd like work, Babylon.js would need to take control of the audio destination and the panning node (for spatialization), with tone.js's procedural audio generation inserted in between.
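If both libraries exposed the needed hooks, the wiring davrous describes might look roughly like this. All of it is hypothetical: it assumes tone.js accepts an external context via `Tone.setContext` and that its nodes can be detached and re-connected into a Babylon-owned panner, which is exactly the part that was not possible at the time of this thread.

```javascript
// Hypothetical wiring for the setup described above: one shared AudioContext,
// Babylon owning the destination and panner, tone.js inserted in between.
// Returns null outside a browser page with both libraries loaded.
function wireToneIntoBabylonPanner() {
  if (typeof BABYLON === "undefined" || typeof Tone === "undefined") {
    return null; // not running in a page with both libraries
  }
  // 1. Share one AudioContext instead of each library creating its own.
  const ctx = BABYLON.Engine.audioEngine.audioContext;
  Tone.setContext(ctx); // assumption: tone.js accepts an external context

  // 2. Babylon's side: a panner feeding the speakers, so spatialization
  //    (volume/panning from mesh position) stays with the engine.
  const panner = ctx.createPanner();
  panner.connect(ctx.destination);

  // 3. tone.js's side: detach the synth from its default master output and
  //    route it into the Babylon-owned panner instead.
  const synth = new Tone.Synth();
  synth.disconnect();
  synth.connect(panner);
  return { panner, synth };
}
```

The key design point is step 2: spatialization only works if Babylon controls the last nodes before the speakers, so the procedural source must feed the panner rather than its own destination.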


Thanks, Davrous, for taking such a deep look into this. Since it's not feasible, I'll mark this as solved; I just wanted to see whether it could be done easily with the current APIs.
