I've been working in entertainment and interactive technologies for more than 20 years. I've taught myself several development languages, including C, C#, and Visual Basic, as well as scripting languages such as Python. As an animator, I've worked on video game titles such as The Matrix: Path of Neo and GTA IV. In broadcast production, I was in R&D for Nickelodeon and brought CatDog to life on live television. As for film, I was the Motion Capture and Facial Animation Supervisor for the Matrix trilogy, Motion Capture Supervisor and Motion Edit Supervisor for the Lord of the Rings films, Technical Consultant for Avatar, and animatronics systems developer for The Hobbit, and I've been very fortunate to travel the world developing technologies for many delivery platforms. One key role for me came when Richard Taylor and Peter Jackson, the founders of Weta, asked me to work with them to develop and build the pipeline for a new company, Weta Productions in NZ, which has been a force in television animation since 2006.
Prior to working in visual arts and interactive media - and in a different life - I was an audio engineer, with credits including Digital Recording Engineer for the album "Ray Charles Live at the Sundome." As a side note, I was a reasonably successful recording artist on the BMG record label, as well as a staff songwriter for Polygram Music in Nashville - under a different name, of course (meaning a pseudonym).
My work in both real-time and post animation is what I'm primarily known for these days, as I was fortunate to be one of the first people to use motion capture for video game animation in the early 1990s. I first applied mocap to feature film for the Mortal Kombat movies, when the process was still primarily post; but as a Supervisor on the Matrix films, I had to develop and place into production the very first systems for real-time motion capture to accommodate the high-speed martial arts performances. As a Supervisor at Weta in NZ, I built and managed much of the animation production for the Lord of the Rings films. Following LOTR, I was hired to develop new motion capture software and cameras (in association with MAS) for Jim Cameron for the film Avatar. If you're interested, there are lists of recent work on IMDB and other sites, but as these are posted by the film and television production companies themselves, the lists are generally incomplete.
These days I'm generally considered a technology geek, and I love the BabylonJS framework, which currently gives me the most freedom to expand the delivery of rich and entertaining content globally. I am a founding partner in 3rd Brain, a company committed to content delivery on the web. One of my business partners is Louis Gurtowski, who owns several utility patents for adaptive multimedia streaming. We are currently porting and adapting our streaming server technology to NodeJS to broaden our real-time adaptive multi-user streaming within HTML5, and we are using the BabylonJS framework and supplemental technologies to build and test our proprietary server and authoring applications for the compiled streaming of content to web browsers and mobile devices.
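For readers curious what multi-user real-time sync looks like at its simplest, the core pattern is a hub that relays each user's state updates to every other connected user. This is only an illustrative sketch - `SyncHub` and its method names are hypothetical and not part of our actual server or API:

```javascript
// Minimal in-memory relay hub: each client registers a callback, and any
// update published by one client is fanned out to all the others.
// (Illustrative only; a real server would do this over the network.)
class SyncHub {
  constructor() {
    this.clients = new Map(); // clientId -> callback receiving updates
  }

  // Register a client and the function that delivers updates to it.
  join(id, onUpdate) {
    this.clients.set(id, onUpdate);
  }

  // Remove a client so it no longer receives updates.
  leave(id) {
    this.clients.delete(id);
  }

  // Broadcast one client's update to every other client (sender excluded).
  publish(senderId, update) {
    for (const [id, onUpdate] of this.clients) {
      if (id !== senderId) onUpdate(update);
    }
  }
}
```

In a networked version, the callbacks would write to WebSocket connections instead of local functions, but the fan-out logic stays the same.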
New update - 09/01/2017 - With the help of @Pryme8, we have rewritten the client design app using HTML5, PHP, and babylon.js, and I'm FINALLY heading to NZ in a few weeks to release the beta of our real-time multi-user app. We'll post a demo when it's ready.

Friends such as Weta and Lockheed Martin have been very patient in allowing our very small team of two part-time developers (including myself) to work through all of the highly complex issues in syncing media between unlimited users in real time, as well as passing functions such as drawing in real time between multiple users. Even the task of building a single application that allows all users to draw together using the mouse and/or touch events alone was quite a challenge. When I posted about it on forums such as Stack Overflow, I was told flatly that there was no way to build a single app that could draw using any device and event. But I found a way: we won't need to support two apps, only one app that works virtually the same on all devices, all operating systems, and all browsers, without having to identify hardware, OS, or browser.

So we are realistically weeks away from delivering the first beta. This app was never meant to be more than a demo; it has taken so long to build because it is now a professional product that will be used by nearly every media design and production company, so it had to be functional enough to release as a product. And developing a product for release and revenue is a very different task from developing a demo simply to show the power of our real-time server technology. But it has been a necessary evil, as it has dramatically helped us develop a far better and more functional API for our server - one that lets virtually anyone quickly turn their single-user app or game into a multi-user real-time app, with virtually no server experience necessary.
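For anyone curious how a single app can draw with both mouse and touch without sniffing hardware or browsers, the trick is to normalize both event families into one point format and feed them to the same handler. A minimal sketch in plain JavaScript - the function names here are illustrative, not from our actual app:

```javascript
// Extract a single {x, y} point from either a mouse or a touch event.
// Touch events carry coordinates in a touches/changedTouches list, while
// mouse events expose clientX/clientY directly - so one small shim covers
// both, with no user-agent or device detection.
function getDrawPoint(evt) {
  const source =
    (evt.touches && evt.touches.length > 0) ? evt.touches[0] :
    (evt.changedTouches && evt.changedTouches.length > 0) ? evt.changedTouches[0] :
    evt;
  return { x: source.clientX, y: source.clientY };
}

// Wire the same handler to both event families. Calling preventDefault on
// touch events stops the browser from also firing synthetic mouse events,
// which would otherwise draw each stroke twice on touch screens.
function attachDrawHandlers(element, onDraw) {
  const handler = (evt) => {
    if (evt.type.startsWith("touch")) evt.preventDefault();
    onDraw(getDrawPoint(evt));
  };
  ["mousemove", "touchmove"].forEach((type) =>
    element.addEventListener(type, handler));
}
```

With this shape, the drawing code itself never needs to know which input device produced the point.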
I'm just happy to finally release our design app and focus wholly back on the development of our server technology.