
Syncing Audio And Sprites


georgejfrick

Recommended Posts

Hello,

 

I'm converting a rather large set of Flash games (all educational). One of the problems I've run into is character mouth animations. In some of the school scenes, a character is talking and their mouth moves (somewhat) in time with the audio.

 

I have tried to substitute random mouth movement for this, and it looks horrible.

 

I'm looking for ideas and other solutions people have tried in this regard. There doesn't seem to be anything in the API to get, say, the current playback level of the sound. If I could sample the currently playing sound (something like this.gameSounds.getSoundLevel()) in my update loop, I could move the mouth based on high/low amplitude.
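Roughly the kind of helper I'm imagining (a sketch with made-up names, not a real Phaser API): take a chunk of time-domain samples, average the absolute amplitude, and threshold it into open/closed.

```javascript
// Hypothetical sketch: decide mouth state from a chunk of time-domain
// samples (e.g. what Web Audio's AnalyserNode.getFloatTimeDomainData fills).
// The threshold value would need to be tuned by ear.
function mouthStateFromSamples(samples, threshold) {
    var sum = 0;
    for (var i = 0; i < samples.length; i++) {
        sum += Math.abs(samples[i]);
    }
    return (sum / samples.length) > threshold ? 1 : 0; // 1 = open, 0 = closed
}
```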

 

Alternatively, I'd like to export the audio information to build some JSON data to use in place of sampling. So every X milliseconds I could move the mouth to the next open/closed position in the data array [0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1] (0 = closed, 1 = open frame).
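Driving the mouth from that array could then be a simple index lookup (a sketch; the function name and interval are hypothetical):

```javascript
// Hypothetical lookup: `frames` is the precomputed open/closed array,
// sampled every `intervalMs` milliseconds of audio.
function mouthFrameAt(frames, intervalMs, elapsedMs) {
    var index = Math.floor(elapsedMs / intervalMs);
    // Hold the mouth closed once the data runs out.
    return index < frames.length ? frames[index] : 0;
}

var frames = [0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1];
mouthFrameAt(frames, 100, 250); // → 1 (the frame 250 ms into playback)
```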

 

Any help would be soooooooo appreciated,

 

George

 


  • 3 weeks later...

I will answer my own question in case someone is searching.

 

I had to chop out a lot of code specific to my app/engine, so this may not be 100%. It's also 1 AM and I started work at 8 AM. But you get the general idea: open an audio context and take samples. For each sample, determine what your animation is. To make the animation better, increase the sample rate. It gets difficult if you have multiple animations; I only have open/close. You would need to update determineValue() to return additional possible values.

 

You would take the array produced by this code and drive an animation with it. My code is ugly because I hadn't 'fixed it up' yet. I'll do that Monday morning.

function buildAudioLipSyncArray(url, sampleLengthSeconds, done) {
    var audioContext = new AudioContext();

    /**
     * Simple Ajax request for audio (or anything).
     */
    function fetchAudio(url, callback) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.responseType = 'arraybuffer';
        xhr.onload = function () {
            callback(xhr.response);
        };
        xhr.send();
    }

    /**
     * Use the audio context to decode the audio bytes.
     */
    function decodeAudio(arrayBuffer, callback) {
        audioContext.decodeAudioData(arrayBuffer, function (audioBuffer) {
            callback(audioBuffer);
        });
    }

    /**
     * Slice the audio into chunks `sampleLengthSeconds` long.
     * For each chunk, determine whether it represents silence or not.
     * Returns an array of [silence(0), speaking(1), 1, 0, 0, 0, 1, ...].
     */
    function sliceAudio(audioBuffer, sampleLengthSeconds) {
        var channels = audioBuffer.numberOfChannels,
            samples = audioContext.sampleRate * sampleLengthSeconds,
            output = [],
            amplitude,
            values,
            i, j, k;

        // Loop over the buffer in chunks of `sampleLengthSeconds` seconds.
        for (i = 0; i < audioBuffer.length; i += samples) {
            values = [];
            // Loop through each sample in the chunk.
            for (j = 0; j < samples && j + i < audioBuffer.length; ++j) {
                amplitude = 0;
                // Sum the samples across all channels.
                for (k = 0; k < channels; ++k) {
                    amplitude += Math.abs(audioBuffer.getChannelData(k)[i + j]);
                }
                values.push(amplitude);
            }
            output.push(determineValue(values));
        }
        return output;
    }

    /**
     * Based on a buffer chunk, return whether it is silence or not. If you
     * had more animations, you would check ranges and pick an animation.
     * We are only doing 'silent' or 'speaking', so a single threshold works.
     */
    function determineValue(buffer) {
        var total = 0,
            bufferIndex = 0;
        while (bufferIndex < buffer.length) {
            total += buffer[bufferIndex];
            bufferIndex++;
        }
        return (total / buffer.length) > 0.05 ? 1 : 0;
    }

    // Run the program for the given url. Decoding is asynchronous, so the
    // result is delivered through the `done` callback rather than returned.
    fetchAudio(url, function (arrayBuffer) {
        decodeAudio(arrayBuffer, function (audioBuffer) {
            done(sliceAudio(audioBuffer, sampleLengthSeconds));
        });
    });
}
// Output looks like:
var speechTmp = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0,
                 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0];

// Usage looks like:
onGameSpeech: function (speech) {
    if (this.speechAnimation) {
        if (!this.timer) {
            this.timer = this.game.time.create(false);
            this.speechIndex = 0;
            this.timer.loop(100, this.incrementSpeech, this);
            this.timer.start();
        }
    }
},

incrementSpeech: function () {
    this.speechAnimation.frameName = this.animationFrames[speechTmp[this.speechIndex++]];
},

onGameSpeechStop: function (speech) {
    if (this.speechAnimation) {
        this.timer.stop();
        this.timer.destroy();
        this.timer = null;
    }
},
