Multiple questions about VR camera


I have multiple questions about the VR camera.

1. There are multiple variables with hard-coded values defined in the source for VRDeviceOrientationFreeCamera. What is the significance of those? Variables like inter-pupillary distance, lens distance, lens-to-eye distance, resolutions, etc.: https://github.com/BabylonJS/Babylon.js/blob/master/src/Cameras/VR/babylon.vrCameraMetrics.js

2. Do we need to change the variables mentioned in #1 above if we have a large-scale model? Let's say we have a living-room model whose bounding-box dimensions run to 200 Babylon units, or even 2000. Does the logical distance between the two cameras used in the VR camera rig need to change accordingly? Ideally it should, or else there won't be perceivable depth when viewed using VRDeviceOrientationFreeCamera and a VR headset.

3. I can see the resolution (or DPI?) change when we switch to VRDeviceOrientationFreeCamera mode (our app has a switch to change cameras). A DOM-based overlay button that looks small under the normal ArcRotateCamera suddenly becomes bigger when we switch to VRDeviceOrientationFreeCamera. I assume this is what's responsible for the aliasing I mentioned in this earlier post, so it probably isn't true aliasing but low-resolution rendering that produces an aliasing-like effect.

4. How do we set the initial direction and orientation of VRDeviceOrientationFreeCamera?

5. When we tried the camera on a smartphone and rotated the device, the speed and degree of rotation were perfectly aligned with the device's orientation. But the axis around which it rotated seemed to be tilted (like the axis around which the Earth rotates). What could be causing this?





1. They define the device you are using (distance between the eyes, focal point, etc.). Most of the time these values work, but you may want to tweak them for specific devices.

2. They are all related to the device, not the scene.

3. The camera uses postprocesses, so you lose anti-aliasing. But the resolution should not change.

4. Pinging @RaananW for this one

5. Same as the previous one :)


Hey :)

4) The orientation cameras (VR, non-VR, WebVR, etc.) all get their orientation from the device, so you cannot set the initial orientation. Even if you do set an initial rotation/orientation (which you can, by simply changing the camera's rotationQuaternion), it will be overwritten the moment the device sends its first orientation event.

If you want a "global transformation" of the camera, meaning a quaternion that is always multiplied with the device's orientation, that is rather easy to add (and actually not a bad idea).
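The idea can be sketched in plain JavaScript (this is not Babylon API; `quatMultiply`, `yawOffset`, and `deviceOrientation` are illustrative names): keep a fixed offset quaternion and multiply it with whatever the sensor reports each frame, so the offset survives every incoming orientation event.

```javascript
// Illustrative sketch, not Babylon API. Quaternion components follow
// Babylon's (x, y, z, w) layout; this is the standard Hamilton product.
function quatMultiply(a, b) {
  return {
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z
  };
}

// A fixed 90-degree yaw offset around Y: the "initial direction" one
// would like the camera to face.
const half = Math.PI / 4;
const yawOffset = { x: 0, y: Math.sin(half), z: 0, w: Math.cos(half) };

// Identity stands in for whatever the device sensor reports this frame.
const deviceOrientation = { x: 0, y: 0, z: 0, w: 1 };

// Applied on every orientation event, the offset is never "overwritten".
const effective = quatMultiply(yawOffset, deviceOrientation);
```

In Babylon terms, the result of such a multiplication would be written into the camera's rotationQuaternion each frame instead of the raw device value.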

5) Not sure I fully understand. The orientation is set up as a look-at kind of thing: if your scene has something above you, tilting your device up will let you see it, and the same goes for all other directions. If you experience something different, I would be very happy to see an example so I can understand what went wrong.


@RaananW @Deltakosh Some follow-up questions now.

1. What can we do to eliminate aliasing? We used FXAA, but it doesn't make any difference. In this earlier post someone tried to explain the order of the pipeline with regard to the VR camera, but there is no clarity on how and where to use FXAA to reduce aliasing with the VR camera.

2. When using it inside a VR headset, I didn't perceive any sense of depth in the scene. It was just like a 360° video, where I could move my head to look around, but with no sense of depth. Is that because VRDeviceOrientationCamera just does exactly that: takes one render, shows it in split screen, and syncs orientation with camera rotation, with no two separate logical cameras placed a distance apart to generate two separate renders?



The following applies only if Babylon's VRDeviceOrientationFreeCamera is real 3D (two logical cameras set a distance apart).

I want to understand how the phenomenon of depth and distance in the real physical world maps onto Babylon's VR camera implementation. In the real world, for objects that are far away (let's say 2 miles) from our eyes, we don't see depth in them; they feel like a 2D image. But for close objects (say 6 inches to 1 foot) we can clearly perceive depth, which means there is a significant difference between what the left eye alone and the right eye alone sees, a difference that cannot exist for faraway objects.

There should be something in Babylon that corresponds to this. Model scale must affect the perceived sense of depth. In the VR camera rig, if the two cameras are x units apart, the model must be scaled accordingly; otherwise there won't be any sense of depth. Say the cameras are 10 units apart and there is an interior room model 100,000 units across: every object in the room would be so far from the cameras that there would be no perceived depth when viewed through a VR headset. Compare this with the real world: if we could somehow build an imaginary living room as big as an entire city, and we stood in one corner, we would not perceive any depth for objects in the opposite corner, because the distance between our eyes is far too small relative to the distance from eye to object.
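Assuming the rig really does render from two offset eye cameras, this reasoning can be put into numbers. A rough plain-JavaScript sketch (not Babylon API; the figures are illustrative): the vergence angle between the two eyes' lines of sight shrinks with distance, which is exactly why depth vanishes for faraway objects.

```javascript
// Rough illustration, not Babylon API. The vergence angle between the two
// eyes' lines of sight shrinks with distance, which is why far objects
// look flat. Eye separation and distance must use the SAME unit.
function vergenceAngleDeg(eyeSeparation, distance) {
  return 2 * Math.atan((eyeSeparation / 2) / distance) * (180 / Math.PI);
}

const eyeSeparation = 0.065; // ~65 mm: a typical human interpupillary distance

const near = vergenceAngleDeg(eyeSeparation, 0.3);  // object 30 cm away: strong depth cue
const far = vergenceAngleDeg(eyeSeparation, 3200);  // object ~2 miles away: effectively flat
```

Scaling the whole scene up 1000x while keeping the rig's eye separation fixed collapses every vergence angle toward zero, which is the intuition behind adjusting the rig's eye distance for very large models.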

So which parameter do we adjust to compensate for a large model in Babylon, so that the sense of depth stays as close to the physical world as possible?


Hi again, D-man!  Howya doon?  I hope well.

  "anaglyph" - I believe that is the word you seeketh.


As you can see... NOTHING.  heh.


There it is.  Left/right red/blue shift.  Put on your 3D glasses and dive in! 

Now I suppose you want a deviceOrientedAnaglyphFreeCamera, eh?  Find that custom inputs doc, and that should confuse you nicely.   :)

See, this is where the new camera custom inputs thing... has me baffled.  The newest theory is... EVERYTHING uses the universal camera.  But there are traits... both on the inputs side of cameras.... and on the move/render side.  For example, VR/webVR (2x eyes) and anaglyph cams don't render like an arc or free, and arcs don't move like frees.  None of these "frontside traits" are related to input devices whatsoever.  Input traits (control-device attachments) are backside traits.  Render-types and movement-types... are frontside traits.

So, the new custom inputs system for cameras... hasn't addressed this well.  It is ONLY for inputs, and its frontside traits are a mess of entanglements.

Ds... you need combined frontside traits.  You need anaglyph trait, and freeCam or arcCam traits, and possibly VR traits (left/right eye).  You need these three combined frontside traits, along with devOrient backside trait, and optional (fallback) everything else (mouse, gamepad, keys, touchpad).  Fall-back-siders.  heh

I just invented this "frontside traits" and "backside traits" terms... just now, as you likely assumed.  But... this is the issue.  Is the new custom inputs thing ONLY for doing backside traits?  Do we have a frontside attachment system where we can attach Arc-ness and Anaglyph-ness (and VR eyepiece shapes and nose-bridge widths)?  hmm.  Goofy Wingnut, eh?  :)  No answers.  Just more questions.


Inputs are only for...inputs :) Every camera has a dedicated set of inputs. For instance, the freecamera is deeply linked with mouse and keyboard. 

You can stop there if you want. But with the new input system you can create your own input or use additional ones. What about adding touch support to the freecamera? Easy as this:
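(The playground embed didn't survive here.) A minimal sketch of what that might look like, assuming a `scene` and `canvas` already exist; `addTouch()` is the free camera's inputs-manager helper:

```javascript
// Free camera starts with its default inputs (mouse + keyboard);
// addTouch() attaches the touch input on top of them.
var camera = new BABYLON.FreeCamera("cam", new BABYLON.Vector3(0, 5, -10), scene);
camera.inputs.addTouch();
camera.attachControl(canvas, true);
```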


Now on the other end, there is VR support.

There are already set-up cameras that you can find here:




*AnaglyphUniversalCamera (which is keyboard, mouse and touch)


But if you look deeply into the code you will see that any camera can enable anaglyph (or stereo) support. We provide the previous cameras for convenience only.

For instance, if you want a freecamera with anaglyph rendering:

freeCamera.setCameraRigMode(BABYLON.Camera.RIG_MODE_STEREOSCOPIC_ANAGLYPH, { interaxialDistance: 0.5});


So you have both a world where you don't want to mess with internals and just need to pick the right predefined camera,

and a twin world where you can do it yourself with just a few lines of code :)


Regarding VRDeviceOrientationCamera: like other VR cameras, it uses a dual-camera system:


As you can see here (https://github.com/BabylonJS/Babylon.js/blob/master/src/Cameras/VR/babylon.vrDeviceOrientationCamera.ts#L11), the camera uses the RIG_MODE.

To improve antialiasing, you can set compensateDistortion to false to disable the lens postprocess (for the record, WebGL does not allow AA on render-target textures).
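For example (constructor signature as in the sources linked above; verify against your Babylon version), passing false as the fourth argument skips the distortion postprocess:

```javascript
// With compensateDistortion = false there is no lens postprocess, so
// rendering goes straight to the main framebuffer where MSAA can apply.
var vrCamera = new BABYLON.VRDeviceOrientationFreeCamera(
    "vrCam",
    new BABYLON.Vector3(0, 1.7, 0),
    scene,
    false // compensateDistortion
);
```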

Here is the process to create both cameras:


Regarding metrics, most of the time there is no need to change these values:

Now, going back to your initial question: you should play with interpupillaryDistance and lensSeparationDistance.
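A sketch of how that might look, starting from the default metrics (property names as in the vrCameraMetrics source linked earlier; the 0.0635 values are illustrative, not recommendations):

```javascript
// Start from the defaults and override only the two distances mentioned above.
var metrics = BABYLON.VRCameraMetrics.GetDefault();
metrics.interpupillaryDistance = 0.0635; // distance between the virtual eyes
metrics.lensSeparationDistance = 0.0635; // distance between the headset lenses

// The metrics object is the fifth constructor argument.
var vrCamera = new BABYLON.VRDeviceOrientationFreeCamera(
    "vrCam", new BABYLON.Vector3(0, 1.7, 0), scene, true, metrics);
```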



Wow, thanks for the excellent info, DK!  I need to read it thoroughly.  Let's leave metrics aside, for the moment.  :)

User wants VRAnaglyphDeviceOrientationFreeCamera, as best I can determine.

VR and Anaglyph mixed... seems commonly needed.  User needs stereoscopic (two eyeports), anaglyph (3D depth), and freeCam combined, with devOr input.


Can you help us add VR-ness (dual eyes) to that demo, DK?  Can that be done?

And then we'll add device orientation as a custom input later, maybe.  So, VRAnaglyphFreeCamera with added devOr.  Possible? 

It seems that the pre-made convenience cameras... avoid combining VR (dual eyes) and anaglyph (3D depth) together.  It seems we are allowed to have anaglyph, or VR, but not both at the same time. 

But it's likely I'm wrong.  :)  EVERYthing is possible with BJS/JS, in theory.  :)

I think I might have my terms confused.  Anaglyph = dual image rendering red-shift/blue-shift... but NOT necessarily dual eye ports (which I call VR).  Single eyeport anaglyphs (like the above demo) can be used with cardboard 3D glasses of olden days.

VR = Oculus-like dual eye-port rendering... and would not work well with old cardboard 3D glasses.

Is this the correct terminology?  See why "trait adding" on the frontside, would be cool?  Just add VR-ness to our demo... and we're done.  (such a feature would likely break backward compat forever)

"Stereoscopic" is a wobbly (information-less) term to use in BJS, because it doesn't say if the view is VR (dual eye-port rendering) or not.  The term "stereoscopic" applies for both methods... dual eyeport (Oculus) or old 3D glasses (full-screen anaglyph, like the above demo).  VR'ing seems like an add-on frontside trait... something a user might turn on/off often, switching easily from Oculus to old cardboard glasses ... but both having anaglyph depth effect.

I bet I'm still confused, though.  :)  Maybe dual-eyeport never uses anaglyph, because Oculus-like things automatically shift the two eyeports to cause depth-effect all on their own?  No need to use anaglyph-ness when dual eyeports are active?  Device does anaglyph red-blue-shift FOR us?  *shrug*  I'm SO stuck in the 80's.  heh


Thanks JC, but where is the red/blue shift?  Or should the red component be in the left cam, and the blue in the right?  I have SO much to learn, sorry.

I wonder if I have an old Viewmaster around here somewhere, and some wheels.  :)  I need basic training in stereoscopics.


@Wingnut I do not know if there is a need to do stereoscopic and anaglyph.  It is either one or the other.  The op is using vrDeviceOrientationCamera.  That one is preset to do RIG_MODE_VR.  You are off in space, my friend.

op is saying his scene is just not "feeling it".  He never mentions his hardware.  Do not think Anaglyph is really an option.  I made the stereos.  Tested on my 3D tv.  Not perfect yet.  Not a priority for me to do much more right now.  Maybe 2.7, or 2.8.

Many people have not liked their results.  They should find the documentation of the distortion metrics published by their hardware vendor, if it exists.  DK seems to imply interpupillaryDistance and lensSeparationDistance are scene- or user-specific.

