
Making ALL cameras 3D


JCPalmer


This is an idea I have started to think about.  It is not quite fully baked, so let me say what I think would be a good strategic direction & why, first.

 

At Xmas, I bought myself a new 4K 3D Sony TV.  Have not done a lot with the 3D yet, but did run some samples from a channel on Roku, which I have from my prior setup.  Looked amazing.  When I viewed the samples with the 3D off on the TV, they were just 2 pictures side by side.  Am waiting for Sony's 2015 4K upscaling 3D Blu-ray player with "PlayStation Now" built in, but I digress.

 

There is also a screen mirroring feature on the Sony & probably other brands.  It uses something called Miracast: http://www.howtogeek.com/200796/what-is-miracast-and-why-should-i-care/ .  What if you could make either a website or an app, which the user could run on a mobile device, with a switch to go 3D?  If they had a 3D TV, then it might be unbelievable.  Great for device orientation, arc rotate, and possibly gamepad cameras.

- - - - - - - - -

Now the how:

I have nothing against Oculus, but having a whole set of duplicate cameras just for it is not great.  What if Oculus flops?  Oculus cameras might even be used to achieve this, except for the weird distortion required.  Might it not be better to refactor by putting an undistorted 3D capability into FreeCamera & ArcRotateCamera, turned on with set3D(true)?  You could also have a setOculus(true), which also distorts it.
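
To make the shape of that concrete, here is a minimal sketch of the proposed surface; set3D() and setOculus() are only the names suggested above, not existing Babylon.js API, and the scene setup is assumed.

// Sketch only -- set3D() / setOculus() are the names proposed in this post, not
// existing Babylon.js API, so this only shows what calling code might look like.
// Assumes `scene` is an existing BABYLON.Scene.
var camera: any = new BABYLON.FreeCamera("cam", new BABYLON.Vector3(0, 5, -10), scene);
scene.activeCamera = camera;

camera.set3D(true);      // would render undistorted left / right sub-views
camera.setOculus(true);  // would additionally apply the Oculus barrel distortion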

 

 

 

- - - - - - - - -

Well, thought I would bake it a little more.  First a correction: the final images ARE distorted.  Went back to this one Roku channel that said they were doing 3D, WealthTV.  Re-watched these samples, this time with the 3D off; they sort of looked like the old TV show "Lifestyles of the Rich and Famous".  It is obvious that their real camera made two 9 x 8 pictures out of two 9 x 16's.  I imagine James Cameron's over-under Fusion Camera System http://en.wikipedia.org/wiki/Fusion_Camera_System does the same thing in the opposite dimension.  Do not think we would need to fix the aspect ratio, just divide whatever the window was.

 

I took a picture of the screen, which unfortunately did not get it all.  Noticed 2 things: first, the pictures are almost identical spatially.  Yes, things are far in the background, but the 3D text in the foreground starts at nearly the exact same distance from the left.

 

Second, it looks like the 3D effect is produced by the colors being "cooked" on the right.  It was harder to see on the indoor shots, but it was always there.  Guess that is the other distortion.  https://www.dropbox.com/s/yu7d8vynmap5f8b/2015-05-18%2008.40.33.jpg?dl=0

- - - - - - - - - - -

I also looked at the Oculus source code.  Probably just leave that alone.  Maybe a redo of anaglyph might be better.  If AnaglyphArcRotateCamera & AnaglyphFreeCamera could be put inside of ArcRotateCamera & FreeCamera and generalized more, then a set3D() method could take constants: (None, Anaglyphic, side-by-side, over-under).
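
As a sketch of what those constants might look like (nothing here exists in Babylon.js yet; the names are only illustrations of the idea):

// Proposal sketch only; none of these names exist in Babylon.js.
enum Camera3DMode {
    None,        // plain 2D rendering
    Anaglyphic,  // red / cyan composite, like the existing Anaglyph* cameras
    SideBySide,  // two half-width views, for side-by-side 3D TVs
    OverUnder    // two half-height views, for over-under 3D TVs
}

// The generalized call on the camera base class would then read something like:
//     camera.set3D(Camera3DMode.SideBySide);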

 

Maybe the same trick used to turn SIMD on/off, which looks like it has no call overhead, could be employed here too.
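
If I read that SIMD switch right, the trick is to swap the implementation once when the mode changes instead of branching on every call.  A self-contained sketch of that pattern (names are made up, and strings stand in for real matrices):

// Sketch of the "swap the method once" pattern; Cam3D and the strings are placeholders.
class Cam3D {
    public getViewMatrix: () => string = this._getViewMatrix2D;

    public set3D(on: boolean): void {
        // one assignment here, so the per-frame path has no mode check at all
        this.getViewMatrix = on ? this._getViewMatrix3D : this._getViewMatrix2D;
    }

    private _getViewMatrix2D(): string { return "mono view matrix"; }
    private _getViewMatrix3D(): string { return "left + right view matrices"; }
}

var cam = new Cam3D();
cam.set3D(true);
console.log(cam.getViewMatrix()); // "left + right view matrices"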

- - - - - - - - -

Hi Jeff,

 

You don't stop thinking, do you?  I drive myself crazy pondering such things - but I suppose someone has to.  I went down the road of 3D with Jim on Avatar, and it's now a fairly complex path to something that doesn't completely ruin the experience.  Using anaglyph imagery is perhaps ok to implement, but it's much easier to grasp what the basic problems with three-dimensional imagery are when displaying on a 2D screen.  It is a very un-natural process, and can easily confuse the brain trying to interpolate.  Oculus and other similar devices such as the HoloLens have a short shelf life once newer technologies come to market; and the HoloLens isn't even released yet.  This is why Zuckerberg now realizes he made a bonehead decision, as Oculus will never recoup 2 billion dollars.  That's just insane.

 

The retina of the eye can be understood as a plug directly into the brain, since it is one of the two primary methods we use to collect information.  The new technologies forthcoming in about 2 years don't simulate natural light, but produce it across light fields.  However, in the mean time, such tools as anaglyphs might carry us through, but I would be curious if you might take a look at the new technologies in preparing for the very near future.  We need more good minds pondering this, as the 2D path is already saturated with too many crappy solutions.

 

DB

- - - - - - - - -

DB,

Is there anybody you don't know?  For a second there, I thought you said, by omission, that the Oculus Rift was on the market.  The site says Q1 2016.  Did some break-even math: $2 billion / $350 per unit = 5.7 million units (@ 100% margin).  Just because it will not pay off does not mean they would shut it down & eat the whole thing, though.  I think you have too much invested in Magic Leap to be objective.  Enough said on the non-anaglyph front.

 

Anaglyph has been around since 1852.  Given that 3D movies have made a comeback, after IMAX kept them alive for decades, people want to see them in their homes in 3D too.  I think BJS would do well to upgrade the anaglyph and put it inside the Camera base class.  Make it overridable for, say, arc rotate.  I can think of other implementations where this could pay off as well.  Since TVs can switch to processing any input as side-by-side or above-below 3D, 3D console games are a potential too.  I am pretty sure BabylonHx ports to them.

 

Being inside the Camera base class means you only have to put up UI before the game actually gets going, to allow the customer to switch it on.  All cameras would also have it, not just Free & arc rotate, but gamepad, virtual joystick, follow, etc.

 

If you have any insight on the Fusion camera system that would apply to virtual cameras, or other tips on implementing this, I am all ears.

 

Jeff

- - - - - - - - -

Hi Jeff,

 

 

I have been very lucky to work with a lot of great people.  But that's what happens when you get into technology early, and work 15-20 hours a day for 20 years; sometimes three productions at once.

 

I can tell you about the Fusion camera system, as I also know Vince Pace at Pace Productions in Burbank, where the Fusion system was developed.  Jim is a partner in the company, and has allowed Vince to pursue 3D beyond what anyone else is doing.  If you ever visit them, they have hundreds of 3D cameras, and usually more than half of them are rented out at any given time.  Sometimes they are supporting a hundred productions at once.  But Vince is always Jim's cinematographer.  To use the Fusion system, you must have two cameras at a specific distance and completely synced together in a single moving unit.  Unless you replicate this and convince Vince and Jim to release the interpolation software to the public, I don't see how you might use Fusion in WebGL.  However, it wouldn't be that difficult to replicate what Vince's team has built in WebGL.  You simply need to understand the optics, and then render two cameras at a fixed distance in your scene.

 

As for Oculus, I don't know why they haven't released the Rift yet, as they have thousands of Rifts available for development which are already mass produced.  At GDC there were hundreds on the floor.  I've been testing one for almost a year and I can't stand using it.  I thought they were releasing it this year, and there are several competitors who are releasing this year for less money.  So I think they are incredibly poor at business, unfortunately.  Even when they release, it might be a bit passé.  But I gave the students at UCSB an introduction to what Magic Leap is building, and they were all dumbfounded.  So I recommend taking a look at Magic Leap for the not too distant future, but also realize that the computation in rendering light fields is incredibly CPU heavy.

 

But I believe that within 4-5 years this won't need to be post; CPU speed will allow rendering in real time, which is why I'm working in WebGL.  I just bought a new tablet from Sony - an Xperia Z2 - and I have to say that it is amazing.  It's only 6mm thick (about the thickness of a nickel), can be used for hours under water without fail, has a 2.3 GHz quad core processor with 3 GB of RAM, has a very fast ARM GPU, and runs as fast as my new ASUS laptop.  So we're getting really close, and Magic Leap has hired the co-creator of the Xbox, whom I met with 2 weeks ago.  Hardware appears to be the least of our problems at this time.

 

I hope this info helps.

 

DB

- - - - - - - - -

Actually, if you look at the code of OculusCamera, there is not that much code.  The interesting camera is the WebVRCamera, which is targeted to be more standard (the only difference is how we handle the inputs).

 

I like your idea of having the distorted rendering baked into Babylon.Camera itself, letting children only control the input.  But we have to think about how we can define specific values like fov, eye distance and so on.

 

So for me this is a great idea which just needs to be shaped.

- - - - - - - - -

What Vince does with the distance between the cameras is keep the very same proportion as the human pair of eyes.  This is called the IPD, or interpupillary distance.  When you are fitted for eyeglasses, this is measured, and there is an average range that exists for humans.  This range is 54mm to 68mm, and we typically default to 60mm in general calculations.  A child's IPD can be as narrow as 48mm, and I would leave the IPD for VR cameras as a variable.  This is less dependent on the person than it is on the scene parameters.  So anyone who would want to use this "new" camera would adjust the IPD and focal length custom to their scene, based on focal length and distance to objects in the scene.

 

In working on stereoscopic film conversion in the past, we adjusted the IPD as a fixed (constant) setting for a scene, and it works for all people, as the pupils can interpolate this on an (x,y) plane (the camera plane).

 

It should be very easy to simply add this variable to the VR camera, and allow it to be set by the developer based upon their own preferences and the device they are developing for.
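
A tiny sketch of what that variable might boil down to (my own names; the only numbers taken from the above are the 54-68mm range and the 60mm default):

// Sketch only -- not Babylon.js API.  The developer picks the scene-unit
// equivalent of a 54-68mm IPD (60mm default) to suit their scene scale.
const DEFAULT_IPD_MM = 60;

function interaxialInSceneUnits(sceneUnitsPerMeter: number, ipdMm: number = DEFAULT_IPD_MM): number {
    return (ipdMm / 1000) * sceneUnitsPerMeter;
}

// The two sub-cameras would then sit symmetrically about the rig,
// offset along its right vector by half the interaxial each.
function subCameraOffsets(ipdSceneUnits: number): { left: number; right: number } {
    return { left: -ipdSceneUnits / 2, right: ipdSceneUnits / 2 };
}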

- - - - - - - - -

I am not ready for the stuff being talked about yet.  I am 5 hrs into implementing.  I am doing all the refactoring and straightening out of stuff, so that NO ONE except Camera implements _update().  _update() is now defined on Camera as:

public _update(): void {
    this._checkInputs();
    if (this._subCameraMode !== Camera._SUB_CAMS_NONE) {
        this._updateSubCameras();
    }
}

public _checkInputs(): void {
}

All the _update() methods in subclasses have been renamed _checkInputs().
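
So, illustratively (not the real classes, just the shape of the contract), a subclass now looks like:

// Illustrative sketch of the new contract, not the actual Babylon.js classes:
// the base class owns _update(); subclasses only override _checkInputs().
class CameraSketch {
    public _checkInputs(): void { }
    public _update(): void {
        this._checkInputs();
        // ...then the sub-camera handling shown above
    }
}

class GamepadCameraSketch extends CameraSketch {
    public _checkInputs(): void {
        // read the gamepad state and move this camera -- previously this body was _update()
    }
}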

 

In order to get things to compile, I have to avoid using private members in Camera that are duplicates of the old cameras, e.g. _leftCamera.  Since left and right are not always valid, I switched to A & B, and they are just indexes into subCameras.  All new statics are:

private static _SUB_CAMS_NONE = 0;
private static _SUB_CAMS_ANAGLYPH = 1;
private static _SUB_CAMS_HORIZ_STEREOGRAM = 2;
private static _SUB_CAMS_VERT_STEREOGRAM = 3;
private static _SUB_CAMS_OCULUS = 4;

private static _SUB_CAM_A = 0;
private static _SUB_CAM_B = 1;

The 2 stereograms are implemented right now without a post process, just viewports.  I hope to get a sample scene up, then figure out stuff like fov, & whether there should be a color brightener on the B sub-camera for stereograms.

- - - - - - - - -

I'm certain you're not ready to implement more at this time, but for future reference here is some additional info I came across for what they are calculating within the Fusion algorithm.

The Depth Bracket of your scene refers to the actual distance between your closest object in the frame and the furthest object.  The Parallax Budget refers to your calculated maximum positive parallax and desired maximum negative parallax represented in percentage of screen width.  For example if I determine through a simple calculation that my positive parallax should never exceed 0.7% of screen width and I have determined that my negative parallax should not exceed 2% of screen width, then my total Parallax Budget is 2.7%.   The Depth Bracket must be able to be squeezed into the Parallax Budget.  There are many algebraic formulas to determine the proper interaxial distance to achieve this.

The native parallax for a given screen size simply refers to what percentage of screen width will equal the human interocular.  If you are using 2.5 inches as the baseline interocular and you know your presentation screen will be 30 feet wide (360 inches), then just divide 2.5 by 360.  2.5 ÷ 360 = 0.007, or 0.7%.  Therefore the Native Parallax of a 30 foot screen is 0.7%, so we should make sure to keep our maximum positive parallax under 0.7% of screen width if we plan to show our footage on a 30 foot wide screen.  If we shoot for a 65” 3DTV, then we can get away with over 3% positive parallax.

The 1/30 rule refers to a commonly accepted rule that has been used for decades by hobbyist stereographers around the world.  It basically states that the interaxial separation should only be 1/30th of the distance from your camera to the closest subject.  In the case of ortho-stereoscopic shooting that would mean your cameras should only be 2.5” apart and your closest subject should never be any closer than 75 inches (about 6 feet) away.

Interaxial x 30 = minimum object distance
or
Minimum object distance ÷ 30 = Interaxial

If you are using a couple standard 6″ wide camcorders in a side by side rig as close as they will fit together then the calculation would look like: 6” x 30 = 180 inches or 15 feet.  

The 1/30 rule certainly does not apply to all scenarios.  In fact, in feature film production destined for the big screen we will typically use a ratio of 1/60, 1/100 or higher.  The 1/30 rule works well if your final display screen size is less than 65 inches wide, your cameras were parallel to each other, and your shots were all taken outside with the background at infinity.  When you are ready to take the next step to becoming a stereographer you will need to learn about parallax range and the various equations available to calculate maximum positive parallax (the parallax of the furthest object), which will translate into a real-world distance when you eventually display your footage.
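
The arithmetic above fits in a few small helpers; this just restates the formulas, and the function names are mine:

// Restates the rules of thumb above; names are my own.

// Native parallax: the fraction of screen width that equals the human interocular.
function nativeParallax(interocularInches: number, screenWidthInches: number): number {
    return interocularInches / screenWidthInches;   // 2.5 / 360 = 0.007, i.e. 0.7%
}

// The 1/30 rule: interaxial = closest object distance / 30 (use /60, /100, ... for big screens).
function interaxialFor(minObjectDistance: number, ratio: number = 30): number {
    return minObjectDistance / ratio;
}

function minObjectDistanceFor(interaxial: number, ratio: number = 30): number {
    return interaxial * ratio;                      // 6" rig * 30 = 180 inches (15 feet)
}

console.log(nativeParallax(2.5, 360));   // ~0.007
console.log(minObjectDistanceFor(6));    // 180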

DB

- - - - - - - - -

Well, I am close to a working sample scene.  The buttons for 'Stereogram Horizontal' & 'Vertical' each do the opposite, but they do it.  'None' oddly makes gryff's zombie disappear.  Then there is the Dialog System invading the scene.  This is because, in the layer mask work done for specialty cameras, Camera's default was left at 0xFFFFFFFF.  The Dialog System merely changes scene cameras to match the mesh default, 0x0FFFFFFF.  I did not even know about subCameras till 2 days ago.  I am in that file, so I am just going to change the Camera default to fix it.

(attached screenshot: post-8492-0-55111300-1432229736.png)

I will post a link to the scene once the above is fixed.  Leaving some more thoughts on how it went till then.

 

DB, just an explanation of how I got to the Fusion Camera System.  In 2009, I saw a James Cameron interview on a cable TV show http://www.g4tv.com/videos/48219/Avatars-Cameron-Pace-3D-Camera-Rig-Review/.  He showed a camera which had up-down cameras, not right-left.  Fast forward to when I was making my first post in this thread.  The Sony has an up-down 3D setting.  I remembered the show.  I looked up 'Avatar' & saw Fusion was used.  Do not know yet how much of this translates to the virtual world, but it is good to have that info, thanks.

 

- - - - - - - - -

To start here is the link: https://googledrive.com/host/0B6-s6ZjHyEwUfkRGTUVVTUxaT01UZUlieGdCODF1blFZZHJFMnRJUHBpcG1KcnBjUVluYXc

It took longer, since my last use of viewports was with orthographic cameras.  You can stretch / compress easily with an orthographic camera just using the viewport.  After I fixed all those minor problems, I realized that the stereograms were just drawn with a different window size, not compressed.

 

I figured out after a while I would need a post process.  The stereogram vertical seemed easy.  The zombie was now smaller in height as required, but unfortunately smaller in width too.  Here I just mapped the x into 0.25 to 0.75.  But I cannot figure out what to do about stereogram horizontal.  Any thoughts?  Here is the fragment shader:

#ifdef GL_ES
precision highp float;
#endif

// Samplers
varying vec2 vUV;
uniform sampler2D textureSampler;
uniform float isStereogramHoriz;

void main(void)
{
    vec2 coord;
    if (isStereogramHoriz == 1.0)
        coord = vec2(vUV.x, vUV.y);
    else
        coord = vec2(0.25 + vUV.x / 2.0, vUV.y);

    gl_FragColor = vec4(texture2D(textureSampler, coord).rgb, 1.0);
}

I put in a switch to skip postprocessing, just to see how the Stereogram vertical looks without the compression.

 

Also, most of the code is added at the bottom of Camera:

// ---- 3D cameras section ----
// skipPostProcess is a temp arg
public setSubCameraMode(mode : number, halfSapce : number, skipPostProcess? : boolean) : void {
    // not likely in production that any prior sub cams, but in dev maybe
    while (this.subCameras.length > 0) {
        this.subCameras.pop().dispose();
    }

    this._subCameraMode = mode;
    this._subCamHalfSapce = Tools.ToRadians(halfSapce);

    var camA = this.GetSubCamera(this.name + "_A", true );
    var camB = this.GetSubCamera(this.name + "_B", false);
    var postProcessA : PostProcess;
    var postProcessB : PostProcess;

    switch (this._subCameraMode) {
        case Camera._SUB_CAMS_ANAGLYPH:
            postProcessA = new PassPostProcess(this.name + "_leftTexture", 1.0, camA);
            camA.isIntermediate = true;

            postProcessB = new AnaglyphPostProcess(this.name + "_anaglyph", 1.0, camB);
            postProcessB.onApply = effect => {
                effect.setTextureFromPostProcess("leftSampler", postProcessA);
            };
            break;

        case Camera._SUB_CAMS_HORIZ_STEREOGRAM:
            camA.viewport = new Viewport(  0,   0, 0.5, 1.0);
            if (!skipPostProcess) postProcessA = new StereogramCompressionPostProcess("horiz comp left", camA, true);

            camB.viewport = new Viewport(0.5,   0, 0.5, 1.0);
            if (!skipPostProcess) postProcessB = new StereogramCompressionPostProcess("horiz comp rite", camB, true);
            break;

        case Camera._SUB_CAMS_VERT_STEREOGRAM:
            camA.viewport = new Viewport(  0,   0, 1.0, 0.5);
            if (!skipPostProcess) postProcessA = new StereogramCompressionPostProcess("vert comp top" , camA, false);

            camB.viewport = new Viewport(  0, 0.5, 1.0, 0.5);
            if (!skipPostProcess) postProcessB = new StereogramCompressionPostProcess("vert comp bot" , camB, false);
            break;

        case Camera._SUB_CAMS_OCULUS:
            camA.viewport = new Viewport(  0,   0, 0.5, 1.0);
            camA._OculusWorkMatrix = new Matrix();
            camA._OculusHMatrix = OculusLeftHMatrix;
            camA._OculusPreViewMatrix = OculusLeftPreViewMatrix;
            camA.getProjectionMatrix = camA.getOculusProjectionMatrix;
            postProcessA = new OculusDistortionCorrectionPostProcess("Oculus Distortion Left", camA, false, OculusRiftDevKit2013_Metric);

            camB.viewport = new Viewport(0.5,   0, 0.5, 1.0);
            camB._OculusWorkMatrix = new Matrix();
            camB._OculusHMatrix = OculusRightHMatrix;
            camB._OculusPreViewMatrix = OculusRightPreViewMatrix;
            camB.getProjectionMatrix = camB.getOculusProjectionMatrix;
            postProcessB = new OculusDistortionCorrectionPostProcess("Oculus Distortion Right", camB, true , OculusRiftDevKit2013_Metric);
    }

    if (this._subCameraMode !== Camera._SUB_CAMS_NONE) {
        this.subCameras.push(camA);
        this.subCameras.push(camB);
    }
    this._update();
}

private getOculusProjectionMatrix(): Matrix {
    Matrix.PerspectiveFovLHToRef(OculusAspectRatioFov, OculusAspectRatio, this.minZ, this.maxZ, this._OculusWorkMatrix);
    this._OculusWorkMatrix.multiplyToRef(this._OculusHMatrix, this._projectionMatrix);
    return this._projectionMatrix;
}

/**
 * needs to be overridden in ArcRotateCamera & TargetCamera, so sub has required properties to be copied
 */
public GetSubCamera(name : string, isA : boolean) : Camera {
    return null;
}

/**
 * needs to be overridden in ArcRotateCamera, adding copy of alpha, beta & radius
 * needs to be overridden in TargetCamera, adding copy of position, and rotation for Oculus, or target for rest
 */
public _updateSubCameras() {
    var camA = this.subCameras[Camera._SUB_CAM_A];
    var camB = this.subCameras[Camera._SUB_CAM_B];
    camA.minZ = camB.minZ = this.minZ;
    camA.maxZ = camB.maxZ = this.maxZ;
    camA.fov  = camB.fov  = this.fov; // original Oculus did not do this

    // only update viewport, when ANAGLYPH
    if (this._subCameraMode === Camera._SUB_CAMS_ANAGLYPH) {
        camA.viewport = camB.viewport = this.viewport;
    }
}
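
For reference, switching one of these modes on from the tester scene then looks roughly like this (hypothetical usage: it assumes `scene` exists, the constants are private today so they are accessed loosely, and the 4 degree half-space value is only a placeholder, not a tuned one):

// Hypothetical usage sketch, not tested code.
var Cam: any = BABYLON.Camera;
var camera: any = new BABYLON.ArcRotateCamera("cam", 0, 1.2, 25, BABYLON.Vector3.Zero(), scene);
scene.activeCamera = camera;

camera.setSubCameraMode(Cam._SUB_CAMS_HORIZ_STEREOGRAM, 4);   // side-by-side
// camera.setSubCameraMode(Cam._SUB_CAMS_NONE, 0);            // back to plain 2D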
- - - - - - - - -

You don't need to compress, as both images should be interlaced and end up very close to the same resolution upon display.  I don't see where you are rendering every other line and then interpolating to interlaced images.  This is why they use vertical cameras, so that the camera is easier to manage in the real world.  I hope this makes sense.

- - - - - - - - -

Oh yeah, that helped.  Scrap the viewports like the Oculus process was using & switch to a process more similar to ANAGLYPH.  Make camA not actually display, with isIntermediate = true.  Not sure if I can get away with just one shader.  I see camA's frame buffer is passed to camB's postprocess via effect.setTextureFromPostProcess.  Will check if there is something like effect.setTextureFromCamera().

 

Will not get to this till later, but I smell blood.  This is going to work!  I will also change the sample scene to one that is more "gonzo".  Want some actual movement of meshes foreground-background wise.  Have a flying tablecloth test scene too.

- - - - - - - - -

Ok, I updated the link.  Please forgive this long email, but tomorrow is a holiday & I am just doing a memory dump.

 

First, here is the shader & subset of setSubCameraMode()  to replace the ones above:

#ifdef GL_ES
precision highp float;
#endif

const vec3 TWO = vec3(2.0, 2.0, 2.0);

varying vec2 vUV;
uniform sampler2D camASampler;
uniform sampler2D textureSampler;
uniform bool isStereogramHoriz;
uniform vec2 stepSize;

void main(void)
{
    bool useCamB;
    vec2 texCoord1;
    vec2 texCoord2;

    vec3 frag1;
    vec3 frag2;

    // outer if branch should have no impact at all, since fragments will ALWAYS take the same branch
    if (isStereogramHoriz) {
        useCamB = vUV.x > 0.5;
        texCoord1 = vec2(useCamB ? (vUV.x - 0.5) * 2.0 : vUV.x * 2.0, vUV.y);
        texCoord2 = vec2(texCoord1.x + stepSize.x, vUV.y);
    } else {
        useCamB = vUV.y > 0.5;
        texCoord1 = vec2(vUV.x, useCamB ? (vUV.y - 0.5) * 2.0 : vUV.y * 2.0);
        texCoord2 = vec2(vUV.x, texCoord1.y + stepSize.y);
    }

    // cannot assign a sampler to a variable, so must duplicate texture accesses
    if (useCamB) {
        frag1 = texture2D(textureSampler, texCoord1).rgb;
        frag2 = texture2D(textureSampler, texCoord2).rgb;
    } else {
        frag1 = texture2D(camASampler, texCoord1).rgb;
        frag2 = texture2D(camASampler, texCoord2).rgb;
    }

    gl_FragColor = vec4((frag1 + frag2) / TWO, 1.0);
}

and

switch (this._subCameraMode) {
    case Camera._SUB_CAMS_ANAGLYPH:
        ...

    case Camera._SUB_CAMS_HORIZ_STEREOGRAM:
    case Camera._SUB_CAMS_VERT_STEREOGRAM:
        var isStereogramHoriz = this._subCameraMode === Camera._SUB_CAMS_HORIZ_STEREOGRAM;
        postProcessA = new PassPostProcess("passthru", 1.0, camA);
        camA.isIntermediate = true;

        postProcessB = new StereogramInterlacePostProcess("st_interlace", camB, postProcessA, isStereogramHoriz);
        break;

    case Camera._SUB_CAMS_OCULUS:
        ...
}
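
The StereogramInterlacePostProcess class itself is not shown here, but assuming it follows the same pattern as the anaglyph pair, its inside might look something like this sketch (the class body, stepSize choice and sampler wiring are my guesses; only the uniform names come from the shader above):

// Sketch of what StereogramInterlacePostProcess might contain -- not the committed class.
class StereogramInterlacePostProcessSketch extends BABYLON.PostProcess {
    constructor(name: string, camB: BABYLON.Camera, postProcessA: BABYLON.PostProcess, isStereogramHoriz: boolean) {
        // "stereogramInterlace" would be the fragment shader shown above
        super(name, "stereogramInterlace", ["stepSize", "isStereogramHoriz"], ["camASampler"], 1.0, camB);

        this.onApply = (effect: BABYLON.Effect) => {
            effect.setTextureFromPostProcess("camASampler", postProcessA);  // camA's full-size render
            effect.setBool("isStereogramHoriz", isStereogramHoriz);
            // one texel step, used to average the pair of rows / columns being interlaced
            effect.setFloat2("stepSize", 1.0 / this.width, 1.0 / this.height);
        };
    }
}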

I did not get to doing a better scene, but will work on it.

 

Think that I should be PRing this as is.  We are not going to get this in one shot no matter what.  I do not even have an Oculus Rift.  There are a number of things still to do:

  1. Test that Oculus still works on an actual device.
  2. Either delete the un-needed Oculus classes, or hollow them out to a couple of lines.  Would actually recommend deleting them.  Oculus is not a real product yet.  Devs must check a box saying so to buy one.  Get rid of the garbage now.  Also, that WebVR camera probably should not be hardcoded to only work with Oculus.  Another reason to delete them.
  3. Delete or hollow out the Anaglyph camera classes.  Does anyone really use them?
  4. Test the stereograms.  I have hardware (iPad) issues; more in a later section.  Beyond the initial test of seeing a single image with 3D turned on, we need intelligent ways to set separation.  I just have this hard-coded in the test scene & have no idea if I got lucky or not.
  5. As I worked with the cameras, I think it would really be good if TargetCamera could be collapsed into Camera.  There are only 2 cameras that directly subclass Camera: TargetCamera & ArcRotateCamera.  Those cameras have _getViewMatrix(), so the one for Oculus needs to be swapped out in the subclasses.  ArcRotateCamera has to have its own _getViewMatrix(), but might be eliminatable.

As I said before, I do not have all the hardware to test this.  I have a 3D TV, but the iPad does not support Miracast.  The iPad's only way to get to a TV is through an Apple TV box.  I am not buying that.  That's all I need, another way to get Netflix.  Apple also "leaked" this week that they have abandoned making an actual TV, but are working on a new version of Apple TV.  I need another way.

 

I was always going to get an Android tablet for testing.  Had been delaying, since my aging Samsung Galaxy S3 was good enough till now.  It kind of has Miracast, but it is just garbage artifacts upon connection.  DB, I like that you have a Sony, since I probably would not feel good about importing a non-US product.  Miracast is still fairly new.  If anything is going to work with a Sony TV though, it is going to be a Sony tablet.  Think the 8" form factor gives me the best coverage.  Where did you get yours?

- - - - - - - - -

This is a good start.  Rendering from 2 separate views can be used for practically any device or application.  The only elements that need to be user variables, depending on scene objects, are the IPD (distance between cameras) and the FOV.

- - - - - - - - -

Ok, more time.  Yes, I am now using #define for isStereogramHoriz.  DB, I think he wants to cause no changes for anaglyph users.  There is the hollow-out way though:

export class AnaglyphFreeCamera extends FreeCamera {
    constructor(name: string, position: Vector3, eyeSpace: number, scene: Scene) {
        super(name, position, scene);
        this.setSubCameraMode(Camera.SUB_CAMS_ANAGLYPH, eyeSpace);
    }
}

If Oculus is being dropped from BJS altogether, that would eliminate 1, 2, & 5 from the to-do list above.  Please let me know, so I can pull Oculus out of my new way too.  Unless you really meant "hollow out" like above.  Also, that WebVRCamera might still be good, adding your own 3D mode or not.  If deleting, I will just pack away my own copy.

 

Just to bring this up again.  If there was a way to avoid the PassPostProcess, that would seem to cut out a draw of the whole screen.

 

Finally, controlling the thing.  My take on it is we want to make fully operational scenes, assuming the customer does not have any 3D equipment.  If UI is put up at the beginning allowing the customer to customize the scene, the developer can switch it on as required.  Even if the customer does not have the equipment, they would probably view being asked in a positive light, and would now know the scene has capabilities they are not even using yet.

 

To that end, hiding direct access to fov & maybe minZ / maxZ, using getters / setters, would allow us to add code to set _subCamHalfSapce to the best value.  I do not know what that code would be, but know there should be minimal performance issues.  This stuff is probably changed very infrequently.  Any thoughts on this, or what the "code" should be?
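
One possible shape for that, purely as a sketch (the actual recalculation is the open question, so it is stubbed out; only _subCameraMode and _subCamHalfSapce come from the code above):

// Sketch only: wrap fov in a getter / setter so _subCamHalfSapce can be recomputed when it changes.
class CameraFovSketch {
    public _subCameraMode = 0;          // i.e. Camera._SUB_CAMS_NONE
    public _subCamHalfSapce = 0;
    private _fov = 0.8;

    public get fov(): number { return this._fov; }

    public set fov(value: number) {
        this._fov = value;
        if (this._subCameraMode !== 0) {
            this._subCamHalfSapce = this._bestHalfSpaceFor(value);
        }
    }

    private _bestHalfSpaceFor(fov: number): number {
        return this._subCamHalfSapce;   // TBD -- placeholder until a real formula is chosen
    }
}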

- - - - - - - - -

I have not pulled from the repo in a week, since I have changed so many files & did not want to cause a merge conflict on my repo till I had to.  Changed:

  • babylon.camera.ts - implemented most everything
  • babylon.arcRotateCamera.ts - renamed _update() to _checkInputs(); added new functions GetSubCamera() & _updateSubCameras()
  • babylon.targetCamera.ts - added new functions GetSubCamera(), _updateSubCameras(), _getSubCamPosition(), & _getOculusViewMatrix()
  • babylon.followCamera.ts - renamed _update() to _checkInputs()
  • babylon.freeCamera.ts - deleted _update()
  • babylon.Gulpfile.js - added stereogramInterlacePostProcess.ts

added:

  • babylon.stereogramInterlacePostProcess.ts
  • stereogramInterlace.fragment.fx

So, I did not know you made changes in the vr dir.  Looking at GitHub, you can rip out a lot more.  All that inner-camera class stuff is no longer needed.  Cameras should also not implement _update() anymore.  Think I should be doing commits, and at least a trial PR.  Please advise.

 

I made the stereogram process much closer to Anaglyph; it does not use viewports like the Oculus process.  The reason is, I want the postprocess to reduce the 2 sub-cameras.  Then I have the full sizes, so that I can average the rows or columns being interlaced.  Hence the passthru question.  Have not measured it, but Oculus seems like a dog compared to the others.  When you switch to it in the tester, both sides don't even switch in the same frame.

 

Also, that stupid oculusGamepadCamera.js keeps coming back in the camera directory.
