Making ALL cameras 3D


JCPalmer

Are you saying the anaglyph is worse? I found some red-blue glasses in a book on Mars, and except for color, it looked good. If you are saying the stereo stuff is worse: I did not compare side by side. It looked slightly "better", but I have no actual metrics to quantify that.


In fact you can't really detect those misalignments with glasses or with a dedicated stereo display, because our brain is pretty smart and tends to compensate for the defects. Those problems have to be checked by hand (well... by eye :P). It's not so difficult if you know what to look for, and the rule is simple: every pair of corresponding points in the two images must be on the same horizontal line. If you see a vertical shift, the stereo is not good, and the wider the shift, the harder it is for the brain to fuse, resulting in pain and strain for the viewer. Since real eyes are always on the EXACT same line, the images produced by the eyes are perfectly aligned at every point. That's what the brain loves.
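To make the rule concrete, here is a minimal sketch of measuring the worst vertical disparity, assuming you have hand-matched point pairs from the left and right images (the function and parameter names are illustrative, not any existing API):

function maxVerticalDisparity(leftPoints, rightPoints) {
    var worst = 0;
    for (var i = 0; i < leftPoints.length; i++) {
        // a horizontal difference is just parallax, but any difference
        // in y is a vertical misalignment the brain has to fight
        worst = Math.max(worst, Math.abs(leftPoints[i].y - rightPoints[i].y));
    }
    return worst; // in pixels; anything noticeably above 0 strains the viewer
}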


Glad you published an empirical test.  What might be good is if you could snap a window shot, then annotate it with the worst example you can find.

 

I really cannot run fast enough right now to think I can beat a 2.1 release.  Change any names you feel are important & PR ASAP.  This is a new feature which ideally should have been done in alpha, not at the tail end of beta, especially with all the re-factoring (though I am glad it was done).  We can always improve the results / performance, but any term changes after release will cause breaks in user code.  It actually works on a 3D TV, so we have a valid baseline.

 

I take your point about having more elements in the test scene, as opposed to moving a single object toward and away.  Once I can see the test illustrated in a picture, I will think along the lines of making the cursor shaped as a long horizontal line, or a cursor which moves a line.  That way you could easily do the test you describe.

 

I mentioned my repository backup to you before; I store the entire repository as a single OS file.  I will copy this file / repository off for later reference.


No need to design a cursor as a long horizontal line. To check alignment I simply open another window (Explorer, for instance) and drag it above the 3D scene, using the upper border of the floating window as a perfectly horizontal ruler ;)


So, I've just PRed the stereoscopic system refactoring.

 

What has changed, mainly:
- Renamed everything related to stereo according to the standard terminology
- Renamed some functions for better clarity
- Threw away the SUB_CAMERAID_A and SUB_CAMERAID_B constants. I found them irrelevant, particularly if we leave rigs open to kinds of setups we don't know in advance. We have the sub-cameras array and that should be enough for devs to use, and to extend the class for more complex rigs; it's up to the user to define their cameras and IDs.
- Put a generic rig parameters object inside the global camera class instead of every single VR and stereo param (to provide better flexibility later)
- Inverted the cameras (depth was calculated inside out)
- Merged babylon.anaglyphCamera.js into babylon.stereoscopicCamera.js (renamed babylon.stereoscopicCameras.js), because an anaglyph camera is a stereoscopic camera; only the final output is different, so there is no reason to treat them differently (quite the contrary, it would be disturbing for users to separate them)
- Added STEREOSCOPY_CONVERGENCE_MODE and STEREOSCOPY_PARALLEL_MODE constants for the _cameraRigParams.stereoMode flag, to let you implement your parallel mode alongside convergence mode, JCPalmer (if we decide to implement both and let users choose which one better suits their needs) - see the usage sketch after this list
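From the user side, a setup might then look something like this (a sketch only; these names follow the renaming discussed in this thread and may not match the final released API exactly):

// sketch: attach a stereoscopic rig to an ordinary camera
var camera = new BABYLON.ArcRotateCamera("cam", Math.PI / 2, 1.2, 50, BABYLON.Vector3.Zero(), scene);
camera.setCameraRigMode(BABYLON.Camera.RIG_MODE_STEREOSCOPIC_SIDEBYSIDE_PARALLEL, {
    interaxialDistance: 0.0637 // in meters, roughly the human average
});
// the new flag would let a rig pick between the two stereo computations
camera._cameraRigParams.stereoMode = BABYLON.Camera.STEREOSCOPY_CONVERGENCE_MODE;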

 

BUT there is still much work to be done, because:
1. The stereoscopy produced is not OK, sorry to tell you that JCP :( (when we rotate the camera, the alignment shifts vertically up and down and the parallax kind of rotates with the camera :wacko: ; that's a bit strange, because for ArcRotateCamera the alpha value is simply shifted on both sides, and that seems a logical approach :huh: ). The VR left and right cam computations are based on another method I don't totally understand ^_^ but it seems OK. Maybe we should use the same kind.
2. Regular stereoscopy usage needs to define the interocular value in a metric unit and not an angle unit (I'm talking about that halfSpace value, now renamed stereoHalfAngle, provisionally converted from the interaxialDistance value).
3. We need to provide stereoscopy for FreeCamera as well.

 

Oh, and I think that _getVRProjectionMatrix shouldn't be directly in the Camera class, it's a bit specific, no? What do you think?

 

Typically, in production a virtual stereo rig is used this way:
-> We have a camera we can manipulate as usual; that camera has a target we can move and an interocular distance we can change in real time. While moving the camera and objects in the scene, those three params (camera position, target position - defining where the focus point will be on screen - and interaxial distance - defining the amount of depth) are constantly modified.

Behind the scenes, this camera is in fact a helper acting exactly like a camera (it could be a camera that is not rendered), and the two rendered cameras (left and right) are simply parented to that central camera/helper, horizontally shifted to either side by half the interocular distance, and they all share the same target. This way we are not calculating the positions of the two cameras each frame; we simply benefit from the parenting system. The convergence is automatically set thanks to the shared target, and on interocular changes the two cameras are re-shifted. In stereoscopic production the target and the interaxial distance are strictly required, because they must be changed constantly to create proper, nice, "audience safe" depth. For a game engine that's maybe not as important as for a video, but we need to provide specialists what they are familiar with B)
Currently, the implementation is rather different. That's not a problem in itself; we should implement the least resource-consuming version, as long as people can use it like the usual stereoscopic cameras.
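A rough sketch of that production-style rig in BJS terms (illustrative only, not the current implementation; all names are made up for the example):

// a central helper with two parented eye cameras, as described above
var center = new BABYLON.TargetCamera("center", new BABYLON.Vector3(0, 1.6, -10), scene);
center.setTarget(BABYLON.Vector3.Zero()); // the shared target defines the convergence

var leftEye = new BABYLON.TargetCamera("leftEye", BABYLON.Vector3.Zero(), scene);
var rightEye = new BABYLON.TargetCamera("rightEye", BABYLON.Vector3.Zero(), scene);
leftEye.parent = center;   // parenting means no per-frame position math
rightEye.parent = center;

function setInterocularDistance(dist) {
    // shift each eye by half the distance along the helper's local x axis;
    // aiming both eyes at the shared target then produces the convergence
    leftEye.position.x = -dist / 2;
    rightEye.position.x = dist / 2;
}
setInterocularDistance(0.0637);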

 

Erm… that's strange, I cannot log in to the BJS docs anymore… (maybe it's because of the new doc project?)
I wanted to fix the "contribute" page (e.g. the Approved Naming Convention page), because now when changing or adding files it's no longer the gulpfile we have to edit, it's the config.json file.


Jeff,

 

I wanted to let you know that I tested the Xperia against Samsung, iPad, iOS, and PC, and other than fps, all of my GUI elements using bGui worked exactly the same - resolution dependent of course.  I didn't have the chance to test with the dialogue extension, but I can't imagine it's a Sony issue.  The waterproofing is simply a seal on the ports, but quite effective, as I dropped my tablet in the dish sink by mistake and it was perfectly fine.

 

Did I understand correctly that you benchmarked at only 5 fps on a post process?  If this is correct, try turning on your developer options and adjusting hardware scaling to 0.5.  Just use caution with the other settings.  I hope you were saying you only lost 5 fps.  Also, I'm not sure why you wouldn't be able to render interlaced odd/even frames and maintain 60 fps - scene dependent of course.  I'm also exploring web workers, as I have plenty of CPU left, and you should be able to thread a post-render process with the remaining overhead.  Just a few thoughts.  Great work on the new cameras! :)
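As a sketch of that web worker idea (hypothetical file name, and with the caveat that only CPU-side work can be offloaded this way, not the GPU postprocess itself):

// offload CPU-side post-render work to a worker so it does not block the
// render loop; "postWork.js" is a made-up file name for the worker script
var worker = new Worker("postWork.js");
worker.onmessage = function (e) {
    // consume the worker's results here without stalling rendering
};
scene.registerAfterRender(function () {
    worker.postMessage({ fps: engine.getFps() });
});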


Actually, the postprocess for stereoscopy is just there to split the view in two. Wouldn't it be more efficient to simply define the two camera viewports as half the screen size and not use a postprocess at all? (Except for anaglyph, of course.)

The only benefit I see in using a postprocess for stereoscopy is if we decide to compensate for the keystoning problem (that would be important to do, to provide perfectly brain-safe stereo).
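Something like this, assuming left and right eye cameras already exist (a sketch using the standard viewport API):

// split the screen via viewports instead of a postprocess;
// BABYLON.Viewport takes normalized (x, y, width, height) values
leftEye.viewport = new BABYLON.Viewport(0, 0, 0.5, 1.0);    // left half
rightEye.viewport = new BABYLON.Viewport(0.5, 0, 0.5, 1.0); // right half
scene.activeCameras.push(leftEye);
scene.activeCameras.push(rightEye);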


@db - First, there are controls in the recent tester to let you change both the FOV and the spacing.  Yes, that was 5 fps, not a 5 fps drop.  Web workers run on a core separate from the render loop, and all modern mobile devices are multi-core, but having CPU throughput to spare does not mean anything for this application.  Dropping hardware scaling is only good once admitting defeat.  Dialog-wise, my dialogs disappear now.  I was getting 60 fps with hidden dialogs & no post process.  The only thing added was the post process.

 

@Vousk - I originally had 2 viewports.  The viewports for side-by-side, same as VR, did not shrink the image though.  They only removed part of the image from the left & right, but the zombie (used for early tests) stayed the same size / shape.

 

Working right now on a canvas+ alternative, Intel XDK.  But before even that, I want to take a deep look at the engine's fps metering.  I do not need some kind of n-second average to be calculated and displayed.  I will settle for a frame counter: I will note when I switched it on, calculate the duration, and get fps by dividing the frame count by it, then write to console.  Using this will allow for consistency, but not interfere.


Hey,

I just made the changes to the tester to accommodate the name changes.  I see a problem / difference with how the interaxialDistance is handled in setCameraRigMode versus setCameraRigParameter.  One does a division by 0.0637.  You can see this in the updated tester: click on one of the 3D rigs and you see the first way; change the distance, & you see the other.
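In other words, something like this (a hedged reading of the reported behavior, not the actual source):

// one of these two paths divides the value by 0.0637 before storing it,
// the other does not, hence the two different results seen in the tester
camera.setCameraRigMode(mode, { interaxialDistance: d });
camera.setCameraRigParameter("interaxialDistance", d);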

 

If the CDN is not cut yet, could this still be made consistent for 2.1?


@JCP - all good points, my friend.  I like your statement that attenuating hardware scaling is only good once admitting defeat - this is so very true.  I'm curious to see how closely the calculations for metering fps mathematically mirror the rendering result on your tablet.  This should be good info and possibly provide further insight into BJS CPU management and caching.  I look forward to your next test(s), and thanks for playing this out to the end.  We all benefit from your hard work.  Lots of eyes watching this one. :rolleyes:  I even saw Oculus on the thread recently - they must wonder what the hell we're doing with our cameras over here - however, if they don't release the Rift soon, others will begin to relegate them to a secondary support position as well.  Their loss. :ph34r:

 

DB


Ok, am solely working on performance right now, using the convergence method of the final 2.1 BJS.  No changes have been made to the test scene other than performance metering, so I am not going to republish it.

 

I went with a logging approach, adding the smallest possible extra overhead to the render loop.  I found nothing of value that could be reused from Engine's fps metering.  Engine also has very low overhead, merely computing the duration of a frame in the render loop.  If you wish to do fps metering, however, it starts managing an array of the last 60 samples, then computes the fps from those samples every frame.  If using the debug layer, this value is displayed externally.
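Roughly this kind of thing (my sketch of the metering described, not the actual Engine source):

// a 60-sample rolling fps meter, as described above
var samples = [];
function recordFrameDuration(ms) {
    samples.push(ms);
    if (samples.length > 60) samples.shift(); // keep only the last 60
    var sum = 0;
    for (var i = 0; i < samples.length; i++) sum += samples[i];
    return 1000 / (sum / samples.length); // fps from the mean duration
}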

 

I just want an average over, say, a 10 or 20 second interval with no overhead.  Clicking buttons on the test scene after running 10 secs or so triggers a calc-log-reset sequence.  To prep for also checking on XDK, I have implemented this in a file close to the app.js that is generated when you start a project.  Here it is:
 

var frameCount;
var startTime;

function onAppReady() {
    if (navigator.splashscreen && navigator.splashscreen.hide) { // Cordova API detected
        navigator.splashscreen.hide();
    }

    var canvas = document.getElementById("renderCanvas");
    canvas.screencanvas = true; // for CocoonJS
    var engine = new BABYLON.Engine(canvas, true);
    var scene = new BABYLON.Scene(engine);

    shape_key.initScene(scene);
    createDialog(scene, scene.activeCamera);
    prepCloth(scene.getMeshByID("Cloth"), scene);

    frameCount = 0;
    startTime = BABYLON.Tools.Now;

    scene.activeCamera.attachControl(canvas);

    engine.runRenderLoop(function () {
        scene.render();
        frameCount++;
    });
}
// this is an XDK event; when not using XDK, place in <body onload="onAppReady()">
document.addEventListener("app.Ready", onAppReady, false);

// in order for logging to work in XDK, change "dev.LOG" in "init-dev.js" to "true"
function logPerformance(description) {
    var totalWallClock = BABYLON.Tools.Now - startTime;
    var fps = (frameCount / (totalWallClock / 1000)).toFixed(2);
    BABYLON.Tools.Log(description + " was performed @ " + fps + " fps");

    // reset for the next activity
    frameCount = 0;
    startTime = BABYLON.Tools.Now;
}
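A button handler in the tester then just calls it after letting a mode run for 10-20 seconds, e.g. logPerformance("anaglyph rig"); which logs the average fps and resets the counters for the next activity.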

Running this on the Sony / Android tablet, I got numbers very comparable to the on-screen values of canvas+.  Doing no 3D rendering produced 30 fps with the dialog visible & 60 when hidden.  When viewing any of the 3D modes, this dropped to 5-6 fps regardless of whether the dialog was visible or not.

 

Here is where I have new stuff to report.  I decided to also run the same tests on CocoonJS's canvas+ on the iPad.  The iPad does not support Miracast, so it cannot actually be used to display on a 3D TV, but it is a controlled way to try to find the cause.  (FYI, the Android numbers are without broadcasting to the TV, though they did not go down when doing so.)

 

For no 3D rig, the iPad numbers were 29 / 59 fps.  Almost the same.  For any of the 3D modes, the numbers also matched Android in that it did not matter whether the dialog was visible or not.  That is where the similarity ends though: the performance drop was not nearly as bad.  They were all in the 19-21 fps range.

 

One could extrapolate that on iPad, the parallel method might produce very acceptable performance.

 

Not ready to call what might be the reason for the difference.  Am going to follow through with the XDK version, to see if it might provide additional clues.

