Kinect and babylon.js


benoit1842

  • 5 weeks later...

Hi benoit1842!  Welcome to the forum, and sorry it has taken so long to get a reply.  I am not the person you seek, but I discovered this:

 

http://www.spritehand.com/2014/01/sharing-3d-scans-in-webgl-using.html

 

That was authored by Andy Beaulieu.  He's a superhero, so he might be pretty busy, but he is nearby.

 

Also, there is a guy REAL nearby (yet another superhero) named Davey Catuhe.  He is exceptionally busy, but he not only wrote most of the core of Babylon.js but also wrote a book about Kinect.

 

Perhaps if you were more precise about the type of input you want... about using the Kinect with Babylon.js... maybe we could get some experts to say hi.  Again, welcome to the forum.  Feel free to tell us about, and/or show us, your projects, if you please.


Hi benoit1842, just saw your question after Wingnut replied.

 

I assume you require the user's pose and gestures (as you stated you want to "...have some input of current user"). There is currently no driver that delivers the Kinect's information straight to the browser.

I have been experimenting quite a lot with both the Kinect SDK and OpenNI (a better framework, IMHO; sadly no longer maintained). The best way to achieve this is to program a native server that reads the Kinect's input, serializes it (in whatever form you wish to have it; I personally chose JSON), and makes it available to the browser (I used a bidirectional socket server).
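For example, a single pushed frame could look like this (the schema is entirely up to you; the joint names, units and fields here are just my own invention to make the idea concrete):

// One skeleton frame as the server might push it to the browser.
// Joint names, units and fields are an example schema, not part of any SDK.
var frame = {
    timestamp: Date.now(),                          // when the frame was read
    joints: {
        head:      { x: 0.02,  y: 0.45, z: 2.10 },  // metres, sensor space
        handLeft:  { x: -0.30, y: 0.10, z: 1.95 },
        handRight: { x: 0.28,  y: 0.12, z: 1.90 }
    }
};
var message = JSON.stringify(frame);                // what travels over the socket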

If you only require abstract information, such as the head's position, it shouldn't take long to implement. Sadly, I can't offer my implementation: it was written during my working hours, and my company decided not to open-source it at the moment.

I would be happy to offer a few hints, if this is the general direction of your question.


Hi guys,

 

If you're looking for a nice way of sending data to and from a server, I would like to point out:

http://www.asp.net/signalr

 

It's amazing; I'm already using it in combination with BJS and I love it. The bidirectional communication you guys are talking about is possible, and not only by polling the server: the server can push data too :)
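A minimal browser-side client could look like this (SignalR 2.x jQuery client; the hub name "kinectHub" and the method "updateSkeleton" are placeholders for whatever you define on your server):

// Assumes jquery.js and jquery.signalR-2.x.js are loaded.
// "kinectHub" and "updateSkeleton" are placeholder names, not SignalR builtins.
var connection = $.hubConnection("http://localhost:8080/signalr");
var kinectHub = connection.createHubProxy("kinectHub");

// Fired every time the server pushes a new skeleton frame.
kinectHub.on("updateSkeleton", function (frame) {
    console.log("head is at", frame.joints.head);
});

connection.start().done(function () {
    console.log("connected, waiting for Kinect frames...");
});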


Hi guys,

 

A (web)socket server will come in handy in your implementation. It offers, as FreeFrags said, the possibility to "push" information to the client instead of having the client poll constantly.

I have been using OpenNI more than the Kinect SDK and therefore used a Java socket server (I personally used https://tyrus.java.net/), since OpenNI has a JNI (Java-C) binding.

In general, what you do is read the information as you would in a standard application (think of a desktop application reading your gestures). But instead of outputting the needed information to the screen or to a local controller, you serialize it and send it to a connected client through the socket server.
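Since I can't share my Java code, here is the bare pattern sketched in JavaScript with Node's "ws" package; readSkeleton() is a made-up stand-in for the native OpenNI/Kinect binding that actually talks to the sensor:

// Push-style socket server (Node.js + the "ws" npm package).
// readSkeleton() is a hypothetical stub for the native sensor binding.
var WebSocketServer = require("ws").Server;
var wss = new WebSocketServer({ port: 8080 });

wss.on("connection", function (client) {
    // Push a frame roughly every 33 ms (the Kinect's ~30 fps).
    var timer = setInterval(function () {
        var frame = readSkeleton();              // native part, not shown here
        client.send(JSON.stringify(frame));      // serialize and push
    }, 33);
    client.on("close", function () {
        clearInterval(timer);
    });
});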

 

The same process works with the Kinect SDK, but since you would be using Microsoft's SDK you would need to work with .NET (I find C# to be a wonderful language). After a very quick search I found this link - http://msdn.microsoft.com/en-us/library/fx6588te(v=vs.110).aspx - but I believe FreeFrags offered a more complete solution for a socket server. In the link I gave, the "magic" would happen in the "Send" function: it shouldn't echo the client's request, but constantly push the Kinect data to the client.

 

The client should then act on the information sent (it really depends on what was sent: the location on the screen, real-world coordinates, etc.). A simple tutorial for WebSockets: https://developer.mozilla.org/en/docs/WebSockets/Writing_WebSocket_client_applications - there are many, many more.
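On the browser side the whole client can be as small as this (plain WebSocket API, as in the MDN tutorial above; onKinectFrame() stands for whatever scene-update function you write yourself):

// Browser-side client: receive the pushed frames and hand them to the scene.
var socket = new WebSocket("ws://localhost:8080");

socket.onmessage = function (event) {
    var frame = JSON.parse(event.data);   // the JSON the server pushed
    onKinectFrame(frame);                 // your own scene-update function
};

socket.onerror = function () {
    console.log("lost connection to the Kinect server");
};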

 

Again, sorry for being so abstract; I sadly can't show any code from my implementation. But I would be happy to answer any questions you guys have.

Maybe I will find the time at home to implement something quick in the next few days, but I wouldn't count on it :-)


If you send the correct information from the Kinect (in this case, probably the screen or real-world coordinates of the skeleton), you can achieve it without a problem.

The Kinect gives you the correct axes - x, y, and depth (z) - you just need to stream them constantly and convert them to your coordinate system in Babylon.
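For example, a simple mapping could look like this (the scale factor is arbitrary and depends on your scene size; whether you need to flip an axis depends on how your sensor and camera are set up):

// Map a Kinect joint (metres, z = distance from the sensor)
// into Babylon scene units. SCALE is arbitrary; tune it to your scene.
var SCALE = 3;

function kinectToBabylon(joint) {
    return new BABYLON.Vector3(
        joint.x * SCALE,   // left/right
        joint.y * SCALE,   // up/down
        joint.z * SCALE    // depth; negate if your camera looks the other way
    );
}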

 

Just a hint - the Kinect streams at most 30 fps of data. Execute the data processing event-based/asynchronously (and not in a "before render" callback), otherwise Babylon will also run at 30 fps (instead of the wonderful 60 fps you can get).
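In practice that means doing the work in the socket's message handler, so a frame only costs anything when one actually arrives (kinectToBabylon() is the mapping sketched above; headMesh is whatever mesh you want to drive):

// All processing happens in the socket event (~30 times per second),
// so the render loop is untouched and Babylon keeps rendering at 60 fps.
socket.onmessage = function (event) {
    var frame = JSON.parse(event.data);
    headMesh.position = kinectToBabylon(frame.joints.head);
};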

