Making ALL cameras 3D


JCPalmer

Here is the link one last time. https://googledrive.com/host/0B6-s6ZjHyEwUfkRGTUVVTUxaT01UZUlieGdCODF1blFZZHJFMnRJUHBpcG1KcnBjUVluYXc

 

Now have multiple buttons to hide the dialog: one with a play toggle & one without.

 

Even with CocoonJS, the full Android screen is no longer usable, at least on devices from manufacturers that do not have dedicated back or menu buttons.  Hope that was not traded off for waterproofing.  Stereogram vertical will never work on those devices, and it is a little annoying for stereogram horizontal too.  iOS does support true full screen in apps, but not Miracast.

 

Canvas+ has a built-in FPS meter in the app launcher.  Got 60 fps with no sub-camera.  Dropped to about 5 for all the others.  WebVR did well by making the sub-camera optional.  Think some gains could be made for the others by eliminating the pass post process.

 

Still, as a camera refactoring exercise, this was very successful.  Further camera deconstruction / modularization in future releases might also be good.  That is, pull the aiming / positioning out into its own class that supports a CameraInput interface (sketched below).  I noticed on the Sony tablet that you could add 2 different kinds of PlayStation gamepads.  If hardware is moving to plug-and-play, then having user-configurable games is a good goal.  You can still have classes to glue it back together.
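
For illustration, a minimal sketch of what such a CameraInput interface might look like; none of this is existing Babylon.js API, just the shape the idea suggests:

interface CameraInput {
    // wire / unwire DOM or gamepad event listeners
    attachControl(element: HTMLElement): void;
    detachControl(element: HTMLElement): void;
    // called once per frame by the owning camera to apply accumulated input
    checkInputs(): void;
}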


BTW "stereogram" is not at all the correct term for naming your cameras.

A stereogram is a very particular stereoscopic image.

Your cameras should be named Stereoscopic Cameras, and to respect the standardized nomenclature the different types are: Crossed Side by Side, Parallel Side by Side, Over Under, Row Interleaved, Column Interleaved, Page Flipping, etc.


The naming is in the UI part, which the scene added.  It is in the code as constants, though.  Actually, stereogram is a term I saw on an 1850s picture.  It might have been a brand name.  DK did ask for commits above.  Thanks.  I plan no PRs while typing with two fingers.  If someone else does one, fine.


Yes, horizontal should be Parallel Side by Side (the most used format), but be careful: in JCP's example, the horizontal camera is Crossed, not Parallel.

Vertical is Over Under.

 

And it would clearly be more correct to call them Stereoscopic Cameras and not Stereogram Cameras.

I'd pay a Duff to whoever manages to generate a stereogram camera  ^_^  (not impossible, but certainly complex).


Well, in fact, after a better look at the final result, those stereoscogramic ^_^ cameras don't seem correct in their position/orientation calculations.

The texture used for the ground is pretty blurry, and to check a stereo camera more 3D objects would be needed in the scene, but it looks like there is a slight offset problem. It is as if the second camera had been translated on the X axis, then rotated to look at the target, resulting in a position impossible for our eyes, leading to brain pain and strain after a period of viewing (or immediate dizziness for sensitive people).


After checking a bit, the first camera is offset on -X and the second on +X. Each camera is targeted at the main camera's target:

https://github.com/BabylonJS/Babylon.js/blob/master/Babylon/Cameras/babylon.targetCamera.ts#L259
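
A minimal sketch of that converged ("toe-in") rig, to make the geometry concrete. The function and variable names (and the exact sign conventions) are illustrative only; the real logic lives in targetCamera.ts linked above:

// Offset each eye along the rig's local X axis, then aim both at the
// shared target; re-targeting rotates the two frustums inward.
function positionEyes(main: BABYLON.TargetCamera, leftEye: BABYLON.TargetCamera,
                      rightEye: BABYLON.TargetCamera, halfSpace: number): void {
    var forward = main.getTarget().subtract(main.position).normalize();
    var right = BABYLON.Vector3.Cross(main.upVector, forward).normalize();

    leftEye.position = main.position.subtract(right.scale(halfSpace));
    rightEye.position = main.position.add(right.scale(halfSpace));

    leftEye.setTarget(main.getTarget());
    rightEye.setTarget(main.getTarget());
}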

 

 

Why do you say this is not what should be done? To me this is pretty legit (a good simulation of each eye).


If the two cameras are both offset in opposite directions, targeting the same point, this is OK then (as I said, the scene is not really optimal for visual checking, but I (thought I) saw some strange vertical shifts between the two eyes in the anaglyph version, and those kinds of offsets are generally due to bad camera positioning).


Oh well, I see what the point is. I'm on my phone this evening, so I will come back tomorrow to explain in detail, but simulating the eye system is (better than moving only one camera, but) not always the most advisable method for producing stereoscopic images (particularly with anaglyph).


Vousk,

Thanks for picking this up. Most of my effort has been in the plumbing & testing scene. Everything takes so long now. Too frustrating for me to get much done. PM me with a Google ID, and I will add you as a collaborator on that Google Drive directory. You cannot do much without a tester.

I thought I was capable of running some profiling on pic to see if there is any little thing to be done, with a big performance impact.

Jeff


Ok, I am managing to get both hands on my keyboard, due to both less pain & chair repositioning.  I was too close to the screen, so I just look down instead of at the screen.

 

I got the energy to do some more organized playing with a real 3D display.  The cross-eyed comment might be right.  Procedure (see the snippet after the list):

  • Set the FOV down to 0.2576, to get the tablecloth big enough. 
  • Played the scene just until the cloth was fully folded, then paused.  After that, used the hide toggle.
  • Looked best @ half space = 0.1, not much cross at all.
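
The same setup expressed as code, assuming the camera method from the changes later in this thread (the constants come straight from the list above):

// hypothetical replay of the procedure above
camera.fov = 0.2576;              // narrow FOV so the tablecloth fills the view
// ... play until the cloth is fully folded, pause, hide the dialog ...
camera.setSubCamHalfSpace(0.1);   // the value that looked best on the 3D display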

The other reason I am interested in true parallel is performance. Maybe parallel could be done with no sub-cameras.  That means the scene is rendered only once, with no need for the pass fragment shader; the interlace shader would just shift a little left or right.  It might not help anaglyph, but you cannot always save everybody.

 

Can a camera be made a little wider than the display, because each side would then use a slightly different part?


All those multi-camera setups are always called a "rig", and for stereoscopic ones there are standardised naming conventions (for the different stereo modes, but also for every variable used - for instance, the distance between the left and right cameras is called the interaxial distance; usually set to approximately the real interocular distance of 6.37 cm, it determines the overall amount of depth within a scene).

I think it's important to name things correctly, and we also need to think ahead to further steps; any kind of rig could be implemented (a cylindrical 360° rig with 9 cameras, for instance).

That's why I'm in the process of changing names everywhere - PR tomorrow, I can't for now.

 

Now, time for some stereo-related pieces of information, for those who are interested:

 

To shoot a stereoscopic scene (real or virtual) there are two main possibilities:

  • convergence mode: simulating eye rotations to focus on the target

The convergence angle determines where objects in the scene will fall relative to the screen plane (behind or in front - "jaillissement" in French, roughly "pop-out"; I don't remember the exact English term… <_< )

  • parallel mode: the eyes are always looking at an infinite distance

The convergence is simulated by modifying the parallax (the distance between an identical point in the left and right images) by horizontally shifting the two produced images (see the sketch below).
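
As a rough sketch of that parallel-mode arithmetic (all names hypothetical, not BJS API), the shift that puts the zero-parallax plane at a chosen convergence distance follows from similar triangles:

// Horizontal shift, in pixels, to apply to each image (in opposite
// directions) so that points at convergenceDistance get zero parallax.
function parallaxShiftPixels(interaxialDistance: number,   // e.g. 0.0637 world units
                             convergenceDistance: number,  // zero-parallax plane
                             focalLengthPixels: number): number {
    return 0.5 * interaxialDistance * focalLengthPixels / convergenceDistance;
}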

 

There is an endless debate among stereographers all around the world about which method is best. But usually, virtual 3D cameras use convergence, and cinema productions tend to use both, depending on the sequence to shoot.

In both cases, two cameras are required to get the images, because all the tiny differences between the two viewpoints are needed for the brain to correctly fuse the images and feel the depth.

 

In convergence mode, the "cross-eyed" (sometimes simply called "crossed") mode is not at all the right default stereo mode. That mode is usually dedicated to visual checks without any additional accessories (no glasses needed; you can fuse a cross-eyed image by squinting to get the depth, very funny - and/or useful - when mastered). Parallel mode is the one used by every dedicated side-by-side stereo device. I'll correct this in BJS.
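
To make the crossed / parallel distinction concrete: with a two-camera rig, the only difference is which half of the screen each eye's camera renders into. Viewport is the real BJS class; the two camera variables are assumed to exist:

// Parallel side by side: left eye on the left half, right eye on the right.
leftEyeCamera.viewport  = new BABYLON.Viewport(0.0, 0, 0.5, 1.0);
rightEyeCamera.viewport = new BABYLON.Viewport(0.5, 0, 0.5, 1.0);

// Crossed side by side: swap the halves, so crossing your eyes fuses them.
// leftEyeCamera.viewport  = new BABYLON.Viewport(0.5, 0, 0.5, 1.0);
// rightEyeCamera.viewport = new BABYLON.Viewport(0.0, 0, 0.5, 1.0);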

 

In convergence mode, we experience a 3D alignment issue called keystoning (this is the little problem I saw in JCP's example).
This is mainly because the real world is projected directly onto our two retinas, and those eye sensors are spherical and not parallel to each other when the eyes are rotated (and also our eyes have a brain connected to them that does a lot of behind-the-scenes work to correct optical deformations). In contrast, a virtual stereoscopic view of the world is composed of two rotated planes (while rotating the cameras, the frustums rotate at the same time) projected onto a single central screen, resulting in trapezoidal images... errm..., I have some difficulty explaining that in English <_< and without the ability to draw a figure; you can find clearer explanations than mine on the web (for instance here http://doc-ok.org/?p=77 ).
So, now that you understand better :lol: :lol: , we have an asymmetric geometric distortion between the left and right eyes to fix.
Generally this is corrected in post-production, but for a game engine it should be corrected directly at render time, with a slight lens morphing applied to each camera (not the same as for VR, but of that kind).
I'm not a math guy, so I won't implement this during the stereo camera refactoring :rolleyes:
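
For reference, the usual alternative to a post-render morph (not what is being proposed for BJS here) is to keep the two cameras parallel and give each an asymmetric "off-axis" frustum, so the trapezoidal distortion is never introduced in the first place. A sketch of the math, with hypothetical names:

// Frustum extents at the near plane for one eye. eyeOffsetX is negative for
// the left eye, positive for the right (half the interaxial distance).
function offAxisFrustum(eyeOffsetX: number, convergenceZ: number,
                        near: number, fovY: number, aspect: number) {
    var top   = near * Math.tan(fovY / 2);
    // Shift the left/right planes so both frusta share the same
    // zero-parallax window at convergenceZ.
    var shift = eyeOffsetX * near / convergenceZ;
    return { left:  -aspect * top - shift,
             right:  aspect * top - shift,
             bottom: -top,
             top:    top };
}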

 

In parallel mode we also experience a problem: the image obtained after modifying the parallax is less wide than the original camera width (since we are losing some info at the extreme left of the left image and the extreme right of the right image) (once again, a little drawing would be useful :P).
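
A small illustration of that width loss (hypothetical helper): shifting each eye's image by N pixels in opposite directions leaves a band on each outer edge with no valid picture data.

// Columns that still contain valid picture data after the parallax shift.
function usableWidth(imageWidthPixels: number, shiftPixels: number): number {
    return imageWidthPixels - 2 * shiftPixels;  // e.g. 1920 - 2 * 16 = 1888
}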

 

FYI, in stereoscopic production it's also required to avoid stereo window violations.
That occurs when an object appears in front of the stereo window and is cut off by the window. It's impossible in the real world: a window frame behind an object can't obscure that object. The brain can't make sense of that information, resulting in a painful artifact for the eyes.
But don't worry: as a 3D engine, we just need to provide a good camera rig. It's the job of the people using the rig to avoid window violations and to fine-tune all those particular stereo aspects.


Well, I found some energy and implemented parallel.  Since you are also modifying, I'll not PR unless asked to.  I did get a performance increase, & the results actually seem a little better.  Given this is a game engine, it sounds like parallel is the winner.

 

The increase is still not that much, from 5 fps to 10 (nothing was 60).  The only difference is the shader, so that is where the cost is.  If this can be improved, it should be much more viable.  For the stereoscopic modes, I changed what halfSpace is defined to be: here it represents the number of pixels of shift left or right.  Kept the same for anaglyph, radians.
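
As a usage sketch of the reworked meaning (the mode constants and methods are from the code below; the values are just examples):

// Interlace modes: halfSpace is now a pixel shift per eye.
camera.setSubCameraMode(BABYLON.Camera.SUB_CAMERA_MODE_OVERUNDER_STEREOSCOPIC, 4);
camera.setSubCamHalfSpace(6);   // retune at runtime; routed to the post process

// Anaglyph: halfSpace is still an angle (run through Tools.ToRadians).
camera.setSubCameraMode(BABYLON.Camera.SUB_CAMERA_MODE_ANAGLYPH, 0.5);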

 

Here is the shader:

#ifdef GL_ES
precision highp float;
#endif

const vec3 TWO = vec3(2.0, 2.0, 2.0);

varying vec2 vUV;
uniform sampler2D textureSampler;
uniform vec2 stepSize;     // 1 / width, 1 / height of the render target
uniform float width;
uniform float halfSpace;   // shift per eye, in pixels

void main(void)
{
    bool isRightOrBottom;
    vec2 texCoord1;
    vec2 texCoord2;
    float shift;
    vec3 frag1;
    vec3 frag2;

#ifdef IS_STEREOSCOPIC_HORIZ
    // side by side: split on x, shift each half horizontally in opposite directions
    isRightOrBottom = vUV.x > 0.5;
    shift = halfSpace * stepSize.x * (isRightOrBottom ? 1.0 : -1.0);
    texCoord1 = vec2(isRightOrBottom ? (vUV.x + shift - 0.5) * 2.0 : (vUV.x + shift) * 2.0, vUV.y);
    texCoord2 = vec2(texCoord1.x + stepSize.x, vUV.y);
#else
    // over under: split on y, but the shift is still horizontal
    isRightOrBottom = vUV.y > 0.5;
    shift = halfSpace * stepSize.x * (isRightOrBottom ? 1.0 : -1.0);
    texCoord1 = vec2(vUV.x + shift, isRightOrBottom ? (vUV.y - 0.5) * 2.0 : vUV.y * 2.0);
    texCoord2 = vec2(vUV.x + shift, texCoord1.y + stepSize.y);
#endif

    // black out texels whose shifted coordinates fall outside the source image
    if (texCoord1.x < 0.0 || texCoord2.x < 0.0 || texCoord1.x >= width || texCoord2.x >= width) {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    } else {
        frag1 = texture2D(textureSampler, texCoord1).rgb;
        frag2 = texture2D(textureSampler, texCoord2).rgb;
        gl_FragColor = vec4((frag1 + frag2) / TWO, 1.0);
    }
}

Here is the postprocess:

module BABYLON {
    export class StereoscopicInterlacePostProcess extends PostProcess {
        private _stepSize : Vector2;

        constructor(name: string, cam: Camera, isStereoscopicHoriz: boolean, private _halfSpace : number, samplingMode?: number) {
            // the define switches the shader between side-by-side & over-under layouts
            super(name, "stereoscopicInterlace", ["stepSize", "width", "halfSpace"], null, 1, cam, samplingMode, cam.getScene().getEngine(), false, isStereoscopicHoriz ? "#define IS_STEREOSCOPIC_HORIZ 1" : undefined);

            this._stepSize = new Vector2(1 / this.width, 1 / this.height);
            this.onSizeChanged = () => {
                this._stepSize = new Vector2(1 / this.width, 1 / this.height);
            };
            this.onApply = (effect: Effect) => {
                effect.setFloat2("stepSize", this._stepSize.x, this._stepSize.y);
                effect.setFloat("width", this.width);
                effect.setFloat("halfSpace", this._halfSpace);
            };
        }

        public setHalfSpace(halfSpace : number) : void {
            this._halfSpace = halfSpace;
        }
    }
}

Here are the changes to Camera:

public dispose(): void {
    // Remove from scene
    this.getScene().removeCamera(this);
    this._disposeResources();
}

private _disposeResources(): void {
    while (this.subCameras.length > 0) {
        this.subCameras.pop().dispose();
    }

    // Postprocesses
    for (var i = 0; i < this._postProcessesTakenIndices.length; ++i) {
        this._postProcesses[this._postProcessesTakenIndices[i]].dispose(this);
    }
}

// ---- 3D cameras section ----
public setSubCameraMode(mode: number, halfSpace = 0, metrics?: VRCameraMetrics): void {
    this._disposeResources();
    this._subCameraMode = mode;
    this.setSubCamHalfSpace(halfSpace);

    var camA : Camera;
    var camB : Camera;
    var postProcessA: PostProcess;
    var postProcessB: PostProcess;

    switch (this._subCameraMode) {
        case Camera.SUB_CAMERA_MODE_ANAGLYPH:
            camA = this.getSubCamera(this.name + "_A", true);
            camB = this.getSubCamera(this.name + "_B", false);
            postProcessA = new PassPostProcess(this.name + "_leftTexture", 1.0, camA);
            camA.isIntermediate = true;
            postProcessB = new AnaglyphPostProcess(this.name + "_anaglyph", 1.0, camB);
            postProcessB.onApply = effect => {
                effect.setTextureFromPostProcess("leftSampler", postProcessA);
            };
            break;

        case Camera.SUB_CAMERA_MODE_CROSSEDSIDEBYSIDE_STEREOSCOPIC:
        case Camera.SUB_CAMERA_MODE_OVERUNDER_STEREOSCOPIC:
            var isStereoscopicHoriz = this._subCameraMode === Camera.SUB_CAMERA_MODE_CROSSEDSIDEBYSIDE_STEREOSCOPIC;
            postProcessB = new StereoscopicInterlacePostProcess("st_interlace", this, isStereoscopicHoriz, halfSpace);
            break;

        case Camera.SUB_CAMERA_MODE_VR:
            camA = this.getSubCamera(this.name + "_A", true);
            camB = this.getSubCamera(this.name + "_B", false);

            metrics = metrics || VRCameraMetrics.GetDefault();

            camA._vrMetrics = metrics;
            camA.viewport = new Viewport(0, 0, 0.5, 1.0);
            camA._vrWorkMatrix = new Matrix();
            camA._vrHMatrix = metrics.leftHMatrix;
            camA._vrPreViewMatrix = metrics.leftPreViewMatrix;
            camA.getProjectionMatrix = camA._getVRProjectionMatrix;
            if (metrics.compensateDistorsion) {
                postProcessA = new VRDistortionCorrectionPostProcess("Distortion Compensation Left", camA, false, metrics);
            }

            camB._vrMetrics = camA._vrMetrics;
            camB.viewport = new Viewport(0.5, 0, 0.5, 1.0);
            camB._vrWorkMatrix = new Matrix();
            camB._vrHMatrix = metrics.rightHMatrix;
            camB._vrPreViewMatrix = metrics.rightPreViewMatrix;
            camB.getProjectionMatrix = camB._getVRProjectionMatrix;
            if (metrics.compensateDistorsion) {
                postProcessB = new VRDistortionCorrectionPostProcess("Distortion Compensation Right", camB, true, metrics);
            }
    }

    if (camA) {
        this.subCameras.push(camA);
        this.subCameras.push(camB);
    }
    this._update();
}

private _getVRProjectionMatrix(): Matrix {
    Matrix.PerspectiveFovLHToRef(this._vrMetrics.aspectRatioFov, this._vrMetrics.aspectRatio, this.minZ, this.maxZ, this._vrWorkMatrix);
    this._vrWorkMatrix.multiplyToRef(this._vrHMatrix, this._projectionMatrix);
    return this._projectionMatrix;
}

public setSubCamHalfSpace(halfSpace: number) {
    switch (this._subCameraMode) {
        case Camera.SUB_CAMERA_MODE_ANAGLYPH:
            this._subCamHalfSpace = Tools.ToRadians(halfSpace);
            break;

        case Camera.SUB_CAMERA_MODE_CROSSEDSIDEBYSIDE_STEREOSCOPIC:
        case Camera.SUB_CAMERA_MODE_OVERUNDER_STEREOSCOPIC:
            this._subCamHalfSpace = halfSpace;

            // will not be found if being called from setSubCameraMode()
            var p : PostProcess;
            for (var i = 0; i < this._postProcessesTakenIndices.length; ++i) {
                p = this._postProcesses[this._postProcessesTakenIndices[i]];
                if (p instanceof StereoscopicInterlacePostProcess) {
                    (<StereoscopicInterlacePostProcess> p).setHalfSpace(halfSpace);
                }
            }
    }
}
