
Shader to Produce Sound?


Are we able to do audio output from a shader, like:

Or do I need to rely on HTML5 Audio API methods to procedurally generate noise?

Here is how they do it:

Which makes me doubt we have a comparable method, but from what this describes I bet we could pull it off fairly easily.

Link to post
Share on other sites

Understood, but once you know how to send sound you are only a step away from generating it. We could make this a community project if you like.

I will use WebGL2 32-bit float textures; you can convert it to the WebGL1 equivalent later. Hope that is OK.

Do you have experience with HTML5 Audio? I have only limited knowledge. It would be a good start if someone showed me how to get a Float32Array out of an AudioContext. A 440 Hz sine would also be nice. Haha, I'm a noob.
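A minimal sketch of that 440 Hz sine, in case it helps: fill a Float32Array with plain Math.sin, then (in the browser) hand it to an AudioBuffer. The names here (sampleRate, frequency, samples) are just local choices for the example.

```javascript
// Fill a Float32Array with one second of a 440 Hz sine at 44.1 kHz.
const sampleRate = 44100;
const frequency = 440;
const samples = new Float32Array(sampleRate); // 1 second, mono
for (let i = 0; i < samples.length; i++) {
  samples[i] = Math.sin(2 * Math.PI * frequency * (i / sampleRate));
}

// In a browser you could play it back through an AudioBuffer:
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const buffer = ctx.createBuffer(1, samples.length, sampleRate);
  buffer.copyToChannel(samples, 0); // write the Float32Array into channel 0
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.start();
}
```

copyToChannel writes the Float32Array straight into the buffer's channel, so the same array could later come from a shader readback instead of Math.sin.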


I think there might be a bug in


  1. buffer:[Exception: TypeError: Method get %TypedArray%.prototype.buffer called on incompatible receiver [object Object] at TypedArray.get buffer [as buffer] (<anonymous>) at TypedArray.remoteFunction (<anonymous>:2:14)]
  2. byteLength:[Exception: TypeError: Method get TypedArray.prototype.byteLength called on incompatible receiver [object Object] at TypedArray.get byteLength [as byteLength] (<anonymous>) at TypedArray.remoteFunction (<anonymous>:2:14)]
  3. byteOffset:[Exception: TypeError: Method get TypedArray.prototype.byteOffset called on incompatible receiver [object Object] at TypedArray.get byteOffset [as byteOffset] (<anonymous>) at TypedArray.remoteFunction (<anonymous>:2:14)]

This is pretty odd. I'm pretty sure that if I can get this byte array to construct correctly, we should hear a horrifying noise being played.
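For what it's worth, that TypeError usually means the value being inspected is not a real typed array: the %TypedArray%.prototype accessors throw when their receiver is an ordinary object. A minimal reproduction of the same error:

```javascript
// The 'buffer' accessor lives on %TypedArray%.prototype,
// which is the prototype of Float32Array.prototype.
const getBuffer = Object.getOwnPropertyDescriptor(
  Object.getPrototypeOf(Float32Array.prototype),
  'buffer'
).get;

getBuffer.call(new Float32Array(4)); // fine: returns the ArrayBuffer
try {
  getBuffer.call({ length: 4 }); // plain object: incompatible receiver
} catch (e) {
  console.log(e.name); // "TypeError"
}
```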

Never mind, I'm dumb; I got this.

Getting closer with this:
At least I'm starting to get the correct bucket size now.


^_^ Awesome, I will check out what's going on when I get home and have speakers! I'll use your rtt method and combine it with a shader to output an image that gets converted into a byte array.

So the way that I was conceptualizing this is:
44100 sample rate
10 seconds of sample

That gives us 441,000 samples.
So in an image with our max width being 4k (4000px), we get a height of 110.25, which we ceil to 111.

So now we have a 4000 x 111px image. If we wanted to be fancy we could make it square by taking the square root of 441,000, which is about 664.08 and ceils to 665. So instead of 4000 x 111px, let's do a 665 x 665px image that becomes our rtt.
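That sizing arithmetic can be sketched as a small helper (numbers from above, with "4k" taken as a 4000px max width; soundTextureSize is a made-up name):

```javascript
// Compute rtt dimensions for a given sample count (sketch).
function soundTextureSize(sampleRate, seconds, maxWidth) {
  const samples = sampleRate * seconds;       // 44100 * 10 = 441000
  const side = Math.ceil(Math.sqrt(samples)); // square option: 665
  if (side <= maxWidth) return { width: side, height: side };
  // Fall back to a wide strip when a square would exceed maxWidth.
  return { width: maxWidth, height: Math.ceil(samples / maxWidth) };
}

console.log(soundTextureSize(44100, 10, 4000)); // { width: 665, height: 665 }
```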

Now in our sound-generation shader we pass the uniforms bufferSize as vec2(665.0) and duration as float 10.0. (We might need different ones later, but this is a prototype.)

Then we need to get our sampling space correct, which effectively means assigning each pixel its own time value.
so something like:
float pixelIndex = floor(vUV.x * bufferSize.x) + floor(vUV.y * bufferSize.y) * bufferSize.x;
float timeCoord = duration * (pixelIndex / (bufferSize.x * bufferSize.y));

(The y axis may need flipping depending on how the rtt is read back, but hopefully that gets the idea across.)
So from there each pixel effectively has a timeCoord, a unique float starting at 0 and ending just under 10; every pixel past the 10-second mark could be ignored unless you wanted an overflow gutter for the sound (useless most of the time).

Now from there we pass the timeCoords through various generators that change each pixel accordingly:
red => LeftChannel Min
green => LeftChannel Max
blue => RightChannel Min
alpha => RightChannel Max

Then, whabam, grab the rtt's internal texture when it's ready and pass that to the audio context to spin up the buffer.
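That last step could be sketched like this, assuming the rtt comes back as a Float32Array in RGBA order with the min/max layout above. decodePixels is a made-up helper that just averages min and max per channel; the playback half is browser-only:

```javascript
// Decode RGBA pixel data (r/g = left min/max, b/a = right min/max)
// into stereo sample arrays, averaging min and max per channel.
function decodePixels(pixels, sampleCount) {
  const left = new Float32Array(sampleCount);
  const right = new Float32Array(sampleCount);
  for (let i = 0; i < sampleCount; i++) {
    left[i]  = (pixels[i * 4]     + pixels[i * 4 + 1]) / 2;
    right[i] = (pixels[i * 4 + 2] + pixels[i * 4 + 3]) / 2;
  }
  return { left, right };
}

// In the browser, hand the channels to an AudioBuffer
// (`pixels` would be the Float32Array read back from the rtt):
function playDecoded(pixels, sampleCount, sampleRate) {
  const ctx = new AudioContext();
  const { left, right } = decodePixels(pixels, sampleCount);
  const buffer = ctx.createBuffer(2, sampleCount, sampleRate);
  buffer.copyToChannel(left, 0);
  buffer.copyToChannel(right, 1);
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.start();
}
```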

This is gonna be dope. If we do this right we could effectively generate whole songs on the GPU and create a whole niche for music/sound production.


Close, but it looks like you are doing a mono deployment.
Between what you made and what I have sitting at the house, I think we can make this a real thing.

There is no reason to add iTime either; that will just mess up the sound generation once we are using functions to make the sounds we want.




This is more or less the structure I was hoping for. I have not gotten it to play a sound yet, so maybe you can help with that part.

I'm trying to get it so the sound can be generated in its entirety in one pass, without a uniform being updated constantly. Basically, after one pass it's done and is now a usable asset.


OK, I will look into it.
Yes, stereo is on my to-do list. There is actually a smarter way to do it with Float32 textures, but I'm saving that one for a weekend.
In the meantime, this is the WebGL base (easy): https://jsfiddle.net/nabr/xam09g6c/
You'll need to adapt it into your project somehow; I'm not sure how it should work with render-target textures. What I'm trying to say is that I'll keep this thread in mind. Will be back ASAP.


Still trying to get a buffer to render out completely; I think we should leave this open for now.

The method we have that "works" right now generates constantly, which is not ideal for "precompiling" sounds procedurally and storing them in RAM for later use.

I have been doing a bunch of side reading on how the buffers are stored and decoded, and I think I'll have a more robust solution soon.

I just don't really get to look at this while at work, because we don't have any speakers there; I only get to check it out when I get home and have free time. So it's not a major priority.

On 7/25/2018 at 2:09 AM, Pryme8 said:

There is no reason to add iTime either; that will just mess up the sound generation once we are using functions to make the sounds we want.


iTime is maybe not a proper name in this context, but you need to find the location of a pixel in an array, so the formula
location = x + y * width; is just right. Here is a good tutorial: https://processing.org/tutorials/pixels/
I chose iTime because you usually want to do more stuff inside main().
- With a RenderTarget texture there is no way you can pull data (a buffer) out of it directly. You need something with a read option.
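For the "read option" part: in WebGL2 you can attach the texture to a framebuffer and gl.readPixels it back. A sketch, assuming a float-renderable texture (which requires the EXT_color_buffer_float extension):

```javascript
// Read a width*height RGBA float texture back to the CPU (WebGL2 sketch).
function readFloatTexture(gl, texture, width, height) {
  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, texture, 0);
  const pixels = new Float32Array(width * height * 4);
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.FLOAT, pixels);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.deleteFramebuffer(fb);
  return pixels;
}
```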

Ok. Good luck. Keep us posted.


I am already pulling the data buffers out of the rtts as a Uint8Array.
https://www.babylonjs-playground.com/#16HY5Y#3 <- you can see it in the console report.

Getting pixel data from rtts is super easy; you just need to convert it and then pass it to an audio node.
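Since that readback is a Uint8Array, each byte still has to be remapped from [0, 255] to the [-1, 1] range an AudioBuffer expects. A sketch (bytesToSamples is a made-up helper):

```javascript
// Convert 8-bit pixel values (0..255) to audio samples (-1..1).
function bytesToSamples(bytes) {
  const samples = new Float32Array(bytes.length);
  for (let i = 0; i < bytes.length; i++) {
    samples[i] = (bytes[i] / 255) * 2 - 1;
  }
  return samples;
}

console.log(bytesToSamples(new Uint8Array([0, 128, 255]))); // -1, ~0.004, 1
```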

I think I have it figured out; I just have not had a chance to do anything with it for a day or two.

