JCPalmer

Members
  • Content count

    2,094
  • Joined

  • Last visited

  • Days Won

    7

JCPalmer last won the day on May 15

JCPalmer had the most liked content!

3 Followers

About JCPalmer

  • Rank
    Advanced Member

Contact Methods

  • Twitter
    whoopstopia

Profile Information

  • Gender
    Male
  • Location
    Rochester, NY
  • Interests
    Power Napping

Recent Profile Visitors

2,892 profile views
  1. Kinect V2 skeleton

    First @dbawel, I hope you are OK. Yesterday on the evening news, the small place you mentioned you live in was indicated as the location of one of the Ventura county fires. I remembered I had no idea how to pronounce it, which triggered my memory when I heard an odd name. My Kinect adapter unit arrived yesterday, but I resisted doing anything till today. My back room was a victim of being the last room to clean for a couple of visits this year (never got beyond just dumping stuff there). That, combined with being the entrance point for a large dog, made the amount of dirt even larger than I thought. After cleaning all morning, I was ready to play. After hook up, Kinect Studio did a successful capture. I have more, but will wait till things firm up. Having an actual system means there are things I can actually try.
  2. Kinect V2 skeleton

    That was not my picture. There is an "Xbox Kinect Adapter for Xbox One S and Windows 10 PC". I ordered it. I cloned https://github.com/peted70/kinectv2-webserver, but I do not think I can build it with Visual Studio Code. After getting nowhere, I decided to just start with a new / empty .Net Core 2.0 app & start adding stuff from the repo. Did not get very far. As soon as I added some using statements, I got an error adding 'using System.Windows.Media'. After searching the web, I found it needed a reference to 'PresentationCore', which I found in the repo. I added the section to my .csproj, but that got rejected. I don't think this is going to work for .Net Core, which is command line. "Why would you need graphics for a command line program?" was probably the thinking. .Net Core is the only thing that runs in Visual Studio Code, not ".Net regular", so unless I can find & remove the section of the program that needs this, I am going to have to go the Node.js route.
  3. First, I said the only way, not the easy way. But no, I would only create / export shapekeys at key points. In your walking example, maybe to & from leftForward & rightForward. If you are also going to use a skeleton, then just the "swish" needs to be isolated, since the matrix weights of the cloth mesh are already going to be modified by the skeleton. My morphing is always done on the CPU, so I know I can do both morphing & moving bones at the same time. I assume you can do both with BJS morph targets, but do not know.
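     A minimal sketch of doing both at once with stock BJS morph targets, assuming they play nice with a skinned mesh (mesh names are placeholders, and this uses MorphTargetManager rather than my QI cpu morphing):

        // the base cloth mesh is skinned to the skeleton as usual
        var cloth = scene.getMeshByName("cloth");

        // a second export of the mesh, captured at the "swish" key point, acts as the target
        var swishMesh = scene.getMeshByName("cloth_swish");
        swishMesh.setEnabled(false);

        var manager = new BABYLON.MorphTargetManager(scene);
        cloth.morphTargetManager = manager;

        var swish = BABYLON.MorphTarget.FromMesh(swishMesh, "swish", 0);
        manager.addTarget(swish);

        // the skeleton animation keeps running; only the swish is driven by the morph influence
        scene.beginAnimation(cloth.skeleton, 0, 100, true);
        scene.registerBeforeRender(function () {
            swish.influence = 0.5 + 0.5 * Math.sin(Date.now() * 0.002);
        });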
  4. Kinect V2 skeleton

    Ok, I ordered my used Kinect V2 sensor, $39.9995 & free shipping, Saturday afternoon. Today, Monday its on my front porch before Noon. First order of business is to prepare the room. It really needs it with stacks of stuff, but also I have a motherboard with 3.0 USB, but the case it is in only has USB 2.0 ports on the top / front. Need to plug-in from the back. Might also need some sort of adapter, since the end of the sensor cable looks female. Some picture I saw looks shows a piece I never got. Investigating, there is an adapter needed to be bought when connecting using USB. Probably is going to cost me more than the sensor. I have been combing over the large collection of examples / github projects, as well as the SDK. There was a javascript interface supplied by the SDK 1.8, but not included for 2.0. Even that seemed slightly clunky. If you go down to the comments of this, you have go to a sample program, compile it, and make changes to a config file to make the server part. The websocket client / server approach, which the "thing" in 1.8 may or may not have been implemented in, seems like the best way to do it. It might even allow python clients, depending on if a "websocket" is really any different from a socket. A free standing server program you might launch does not exist in all the things I have seen, so far. Having to install node.js, even though I have, restricts the usefulness to people who either cannot or do not want to have to install all this stuff. Seems like too much can go wrong. One of the repos, seemed to have a reasonable JSON output, but when I asked a question about a JS file referenced in the readme which was missing, I was told the project was proprietary. I saw one blog, that even mentioned babylonjs (about says he works for MS), that talked about using supersocket in a 22 line console app which wrapped everything in JSON. There is a REPO based on this, no license, & no actual binary that I can see. I do not see everyone building their own .net app from source. I do not know how to do it. The saga continues..
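    Just to sound out the client half of that websocket approach, a minimal sketch (the port & the JSON layout, a bodies array each holding a joints dictionary in meters, are assumptions; whatever server is used would dictate the real shape):

        // connect to a hypothetical local Kinect relay server
        var socket = new WebSocket("ws://localhost:8181");

        socket.onmessage = function (msg) {
            var frame = JSON.parse(msg.data);
            var body = frame.bodies[0];
            if (!body) return;

            for (var jointName in body.joints) {
                var j = body.joints[jointName];
                // sensor space is right handed & in meters; BJS is left handed, so at least one axis flips
                var pos = new BABYLON.Vector3(j.x, j.y, -j.z);
                // ... feed pos into whatever ends up driving the skeleton
            }
        };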
  5. I do not think a .babylon file even supports that. Making a set of shapekeys at key-frames, then animating in BJS seems like the only way.
  6. Kinect V2 skeleton

    @dbawel, do you have this, or is there a Blender Python plugin somewhere? There is a link to a .blend file in the first post. I think Blender can export an fbx from it. If you could see how well the converted skeleton performs, that would be great! I want to make the most perfect skeleton possible from MakeHuman for use with Kinect2. If the skeleton is poor, it sort of puts an upper limit on what can be done. I have no interest in making a Kinect2 interface for Blender myself, but might use one since it would fit right into my workflow. My final output will be using my pose interpolation, which is improving (float free). There is one important set of poses where IK might take a long time, or I might have to settle for low quality. I am quite able to perform it myself using mocap then keyframe reduce. Finding some mocap on the web, probably made for a different skeleton, is both unlikely & never going to translate right. I found eBay has new Kinect2 sensors at $80, so it definitely is within my budget. Need to check around first, though. I have other news on this proposal front, but still need to check some things first.
  7. Kinect V2 skeleton

    This topic is a follow up to one of the "what's next?" comments I made. I have added a facility to an addon for Blender which allows post-import operations on MakeHuman meshes / skeletons. The highlighted button converts the 160-some-odd-bone default skeleton to one made for Kinect2. The bones are named for the bone tail, which lines up with the joints of the sensor's output. There was an old topic about this, but: The people are gone. They shared nothing but pictures. This includes their skeleton, which could have been the reason for all their problems. Here is a .blend of my converted skeleton. Any comments would be appreciated, @gryff, @dbawel maybe? The skeleton units are in decimeters, which are easy to convert the sensor's meters to, but any solution should probably have a units multiplier. The .blend is already in weight paint mode, seen below. Just RIGHT click any bone to see its weighting on vertices. For the rest of this topic, I am just sounding out how I think this might be implemented in 3.2. DB, for your electronics company work, I think you are going to have time requirements which are too tight to wait for 3.2. First, I wish to use this for pose capture, but making changes to the Skeleton class (or QI Skeleton sub-class) to do live animating may actually help debug things & therefore be a twofer. In this skeleton class, if the bones array cannot be changed to a { [bone: string] : BABYLON.Bone } dictionary for compatibility reasons, I think there need to be some additional data elements & a method to switch on Kinetics, because calling getBoneIndexByName() 25 times every update is probably slower. The next thing is, yes, bones are basically a local matrix relative to the bone parent, & the Kinect returns data in its world space, BUT that is no reason to go to all the trouble of converting sensor output to local space (unless doing mocap, more later), WHEN skeleton.prepare() is just going to have to convert it back. Performance could suck. Having an alternate _computeTransformMatrices() seems like the level to branch off to convert from Kinect space to world space.
Here is a non-running mock up of all these changes:

        public onAfterComputeObservable = new Observable<Skeleton>();

        protected _kineticsDictionary : { [bone: string] : BABYLON.Bone };
        protected _useKinetics : boolean;
        protected _kineticsBody : any;
        protected _kineticsBodyIdx : number;
        protected _kineticsUnits : number;
        protected _kineticsFloorClipPlane : BABYLON.Quaternion;

        public switchToKinetics(val : boolean, units = 1, bodyIdx = 0) : void {
            this._useKinetics = val;
            if (val) {
                this._kineticsDictionary = {};
                for (var i = 0, len = this.bones.length; i < len; i++) {
                    this._kineticsDictionary[this.bones[i].name] = this.bones[i];
                }
                this._kineticsUnits = units;
                this._kineticsBodyIdx = bodyIdx;
            } else this._kineticsDictionary = null;
        }

        public incomingKineticsDataCallback(eventData : string) : void {
            var parsed = JSON.parse(eventData);
            this._kineticsBody = parsed.bodies[this._kineticsBodyIdx];

            // account for right handed to left handed here
            this._kineticsFloorClipPlane = new BABYLON.Quaternion(
                parsed.floorClipPlane.x,
                parsed.floorClipPlane.z,
                parsed.floorClipPlane.y,
                parsed.floorClipPlane.w
            );
            this._markAsDirty();
        }

        /**
         * @override
         */
        public _computeTransformMatrices(targetMatrix: Float32Array, initialSkinMatrix: BABYLON.Matrix) : void {
            if (this._useKinetics) {
                this._kineticsTransformMatrices(targetMatrix, initialSkinMatrix);
            } else {
                super._computeTransformMatrices(targetMatrix, initialSkinMatrix);
            }
            this.onAfterComputeObservable.notifyObservers(this);
        }

        protected _kineticsTransformMatrices(targetMatrix: Float32Array, initialSkinMatrix: BABYLON.Matrix) : void {
            // convert from Kinect space straight to world space here
            // ...
        }

For using this for capture, maybe a method in the Bone class, say worldToLocalMatrix(), which could be called by code monitoring onAfterComputeObservable. This is my current straw man. I do not even have the hardware right now. Thoughts?
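A rough sketch of how this straw man might be wired up from the outside (the websocket address is made up, and worldToLocalMatrix() is the proposed Bone method, not an existing one):

        // switch the skeleton over; with a decimeter skeleton & meter sensor output, units = 10
        skeleton.switchToKinetics(true, 10, 0);

        // feed every frame from a hypothetical local relay server straight into the callback
        var socket = new WebSocket("ws://localhost:8181");
        socket.onmessage = function (msg) { skeleton.incomingKineticsDataCallback(msg.data); };

        // for capture rather than live animating, record local matrices after each compute pass
        var capturedFrames = [];
        skeleton.onAfterComputeObservable.add(function (skel) {
            var frame = [];
            for (var i = 0; i < skel.bones.length; i++) {
                frame.push(skel.bones[i].worldToLocalMatrix()); // proposed method, does not exist yet
            }
            capturedFrames.push(frame);
        });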
  8. Fire Particles

    Thanks. This was the demo scene for QI 1.0. Tried to think of the worst possible skeleton exercise. Have to credit @SimonBiles. Seemed kind of pervy when I was looking at her Olympic floor routine over & over, taking screen shots, but I overcame. Let's see her do it over a fire! On an even closer relation to materials, the dog crap on the sign had a blinking HighlightLayer that made it look like neon. It broke in 3.0. I think I am going to make a simple pg, and a belated report, to try to get it operational again for 3.1. I have other signs in mind for this as well. @Sebavan, incoming.
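    For reference, that kind of blinking highlight is only a few lines; a minimal sketch (mesh name, color & timing are placeholders, not from the actual scene):

        var hl = new BABYLON.HighlightLayer("neon", scene);
        var sign = scene.getMeshByName("sign");
        hl.addMesh(sign, BABYLON.Color3.Green());

        // toggle the layer on a timer to get the neon blink
        var on = true;
        setInterval(function () {
            on = !on;
            hl.isEnabled = on;
        }, 500);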
  9. Doubt about inverted texture

    Yeah, as soon as I saw this I had a pretty good idea of what the problem was, but saw they were using a different Blender output option, so I moved on. This was not my problem. FYI, the default setting for the invertY arg of the Texture constructor is TRUE. In fact, there is no way inside the structure of a .babylon file to override that default. I guess the shaders were just designed to access textures inverted. I noticed when doing a full scale test of compressed textures using the mansion scene that things were off. The gravestone writing was upside down.
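    For anyone hitting the same thing, the flag sits right in the constructor; a minimal sketch (the url is a placeholder):

        // 3rd arg is noMipmap, 4th is invertY; invertY defaults to true when omitted
        var tex = new BABYLON.Texture("textures/gravestone.jpg", scene, false, false);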
  10. Fire Particles

    FYI, there is now a fire material. I used it in this scene, twice. The gingerbread man briefly uses the material with different textures. There is no single correct way, though.
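    A minimal sketch of the fire material, which lives in the materials extension rather than babylon.js itself (texture file names are placeholders):

        var fire = new BABYLON.FireMaterial("fire", scene);
        fire.diffuseTexture = new BABYLON.Texture("textures/fire.png", scene);
        fire.distortionTexture = new BABYLON.Texture("textures/distortion.png", scene);
        fire.opacityTexture = new BABYLON.Texture("textures/candleopacity.png", scene);
        fire.speed = 5.0;

        var plane = BABYLON.Mesh.CreatePlane("firePlane", 2, scene);
        plane.material = fire;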
  11. Avatar Animation via Kinect v2

    That is always the current one. Actually, in this case, that will be the same place I got the position to add the incremental to. Also, animations or interpolate targets are queued. When put on the queue, you specify an amount left, up, & forward relative to the position the mesh, camera, light, & now skeleton will be at when the MotionEvent gets pulled off the queue. The system assumes the mesh is designed facing toward you, but it can also be designed facing away; a couple of sign switches in that case. Queuing really helps string a whole bunch of animations together, like morph targets for speech, then just forget about them at the application level.
  12. Avatar Animation via Kinect v2

    The growing # of methods in Bone, like getPosition, means I can just use the bone as the container. Just call this & get the last / current position, add my incremental, then build a target to interpolate to.
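    Roughly what that looks like with stock BJS calls (the increment value & the final interpolation step are placeholders for my own queue):

        // current world position of the bone, relative to the skinned mesh
        var current = bone.getPosition(BABYLON.Space.WORLD, mesh);

        // add the incremental movement for this MotionEvent
        var target = current.add(new BABYLON.Vector3(0, 0, 0.1));

        // ... hand target off to whatever is interpolating the bone toward it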
  13. Avatar Animation via Kinect v2

    Where am I: something happened while playing with BJS that could maybe be an idea. How it happened: as I said, I was playing further with IK than this, where I actually moved the IK target meshes "forward". What happened is the feet would not touch the ground. I found that if I made sub-poses of just the pelvis, where I moved it slightly down and forward in Blender and exported, the first step was close to perfect. I got ahead of myself, because that trick could not be repeated, or could it? You cannot transfer the forward to the mesh without it turning into a moon walk. The pelvis in this skeleton was not the root bone. The root was down on the ground doing nothing, with the pelvis as its only child. My animation system is POV movement & rotation based. For a mesh with a skeleton, isn't the local matrix of the root bone sort of a POV? Not fully coherent at this time.
  14. Avatar Animation via Kinect v2

    I have my doubts those other people are still around. I did look around very recently into Kinect. Here is the Current SDK Manual. The install is here. From your description of your current use, that might already be installed by you or the software you are running. One major problem I see is that the javascript reference for Kinect is for 1.8, and gone for 2.0. That may be overcome by this Github repo as a replacement for the Microsoft javascript solution. It was updated just 2 months ago, so it is probably Kinect 2. I assume your requirements are for live transfer rather than capture, since you can already do capture & your client could just buy the stuff you did. I was primarily seeking a capture capability with a very short workflow pipeline. I am in progress with my own commercial output at this time. With both voicesync & lipsync operational, or at least operational enough for me to use, I have switched back to armature experiments. I have expanded to start playing with IK directly in BJS, as well as poses exported out of Blender. I am starting to have a feeling that I have a way to deal with root bone translation which will really solve the "floating" effect. Also, adding key frames from a Kinect might be worth it if I did not have to run them through some long workflow pipeline. I cannot really work on this right now, above my own work. Root bone translation is my priority right now.
  15. Morph: 1. No, use shapekeys & one export. 2. The .babylon exporter exports shapekeys as morph targets. I do not know what other formats do with shapekeys, nor what the loader does for those formats. 3. I doubt you can use morph to replace skeleton poses. Also, a morph target copies an entire set of vertices; the file can get very big.