Babylon.js camera internal parameters: focal length, fov and maybe distortion


bablylon

Hello

Physical perspective cameras can be described by internal parameters (focal length, sensor size, distortion and pixel size) and external parameters (position, rotation). I am trying to create an accurate reproduction of a physical scene using babylon js.

I know that the external parameters, position and rotation, can easily be changed. Do any of you know whether internal parameters such as focal length, fov, etc. can be changed? If not, what are the default values? How are camera.fov and camera.fovMode related? What units does camera.fov expect?

Thanks in advance


Hi... welcome to the forum! 

Have you been to our camera tutorial?  Inside it... are links to the API pages for the primary camera-types.  At the API pages, you can view all the properties and methods available for each, whether created at that layer, or inherited from another camera.  There are other tutorials near that area... that talk about depth rendering... both linear and log.

Be sure to notice the hierarchy indicators at the API pages.  For example, let's visit the followCamera API...  http://doc.babylonjs.com/classes/2.4/FollowCamera

See the blue boxes at the top?  You bet.  FollowCamera is a subClass of targetCamera, which is a subClass of Camera, etc. 

Unfortunately, clicking on the blue boxes is currently broken, but you can go down two lines and do some clicking... back-walk up the hierarchy.

Custom cameras and overloading (adding more knobs and dials to a camera you instantiated)... is certainly allowed, too.  Nearly everything allowed in webGL... is also allowed with Babylon.js.  In other words, BJS has EXCELLENT cameras that are easy to use and totally fun.  :)  Plenty of knobs and dials, and easily add more.

A look at the base camera class .fov property shows that it is a number... and it's likely a floating point number.  BJS does not restrict the range of that number in any way, and lets webGL implementations manage any out-of-range errors.  This keeps the BJS framework COOKIN', speed-wise, and it lets mad scientists like us... freely torture our graphics cards.

In writing our camera tutorial, SOME things were not included.  If we included every little detail, the camera tutorial would become too difficult to read.  It is meant as an introductory document... and not ALL knobs and dials are documented.  One example... is camera.fovMode.  I don't think it is talked about in the tutorial, and little information is available on the camera class API page.  But, I see it listed.

Follow me... on a research mission, if you wish:

First, go to the BJS source code... navigating-into or 'fovMode'-searching your way to:  src/cameras/babylon.camera.js.

After some more searching or snooping, we find this...  https://github.com/BabylonJS/Babylon.js/blob/master/src/Cameras/babylon.camera.js#L31

camera.fovMode = Camera.FOVMODE_VERTICAL_FIXED.  A static property.  Ok, statics are usually set at the bottom of the source code... so let's look.

https://github.com/BabylonJS/Babylon.js/blob/master/src/Cameras/babylon.camera.js#L560

There we go... vertical-fixed == number 0, and horizontal-fixed == number 1.  More 'fovMode' searching within that document... and you can start to see HOW it's used.  Soon, you are writing your own docs for .fovMode... in your head, right?  The static properties are aptly named, and with a few playground experiments, you could easily see the effect of both "fixed" modes.
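A quick sketch of what those two flags mean in practice. The numeric values below are mirrored from the source; the helper function is hypothetical, my own illustration, and not a Babylon.js API:

```javascript
// Mirrored from the source: the two fovMode statics are plain numeric flags.
const FOVMODE_VERTICAL_FIXED = 0;   // camera.fov fixes the VERTICAL angle (the default)
const FOVMODE_HORIZONTAL_FIXED = 1; // camera.fov fixes the HORIZONTAL angle

// Hypothetical helper: given the fixed fov (radians) and the viewport
// aspect ratio (width / height), compute the angle along the other axis.
function otherAngle(fov, aspect, fovMode) {
  const halfTan = Math.tan(fov / 2);
  return fovMode === FOVMODE_VERTICAL_FIXED
    ? 2 * Math.atan(halfTan * aspect)  // horizontal angle follows the aspect ratio
    : 2 * Math.atan(halfTan / aspect); // vertical angle follows the aspect ratio
}
```

So with the default vertical-fixed mode, widening the canvas widens the horizontal view while the vertical angle stays put... and horizontal-fixed does the opposite.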

Back at the camera api, we see that camera.fov is set to 0.8 by default.  0.8 what?  Who knows?  Nobody is saying anything.  By keeping our camera tutorial lightweight, we have omitted some rather important information, eh?  The 0.8 is likely radians, making the 0.8 == a bit more than 45 degrees.  But how would anyone know, eh? 
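For anyone wanting to sanity-check that "likely radians" claim, the conversion is one line of plain JS (nothing Babylon-specific):

```javascript
// Sketch: if camera.fov is radians, the default of 0.8 converts to degrees like so.
function radiansToDegrees(rad) {
  return rad * 180 / Math.PI;
}

console.log(radiansToDegrees(0.8)); // roughly 45.8 -- "a bit more than 45 degrees"
```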

How would anyone know there was a camera.fovMode, too, huh? 

Sorry about that.  There has been SOME talk of dividing the camera tutorial into separate files, and then adding more information.  It has not yet happened and it has not yet been determined if that move would be wise or not.  Just possibly, we need two docs... Introductory Cameras and Advanced Cameras.  Do you think that would be a good idea?

You ask good questions, @bablylon, and you found some weaknesses in our docs (thx for the help).  I guess we have some work to do.  I hope this helps with your questions.  Welcome again... good to have you with us.  Be well, talk soon.


Hi Wingnut

Thank you. That was an impressively detailed and well-researched answer. Totally unexpected. What is your association with Babylon.js?

I learnt a great deal about fov in Babylon.js from your explanation. What would you say is the best way to specify a focal length for the camera? One way could be to override the projection-matrix functions; is there a better way?


Hiya B!  Thanks for the kind words... real nice of ya! 

As for the focal length, I have no idea.  (sorry)  But a forum search returned with some magic beans. 

I went web-searching, as well... found some interesting reading.  Saw the words "intrinsic" and "extrinsic"  (matrices)... did a little deeper searching.

WAY over MY head, but if I keep reading things like that, it won't be over my head much longer.  :)  (that's a lie, matrices will ALWAYS be over my head)  haha

Let's hope smarter people come visit this thread, and help. 

Be sure to check out that "spaces" topic (seen in the forum search results).  In that thread, @dbawel has 3 consecutive, long posts... about... some heavy stuff.  DBawel worked/works amongst movie and high-tech amusement ride folk... with mocap tracking and aspect ratios and all sorts of "space" stuff.  I can't understand most of what he talks about, but I KNOW I'm getting smarter each time I read his words.

Within my HIGHLY-limited intelligence breadth, I would say "no", there is no better way.  But keep in mind... I'm wrong about things... 68% of the time.  Don't put much weight on my words, okay?  :)  Maybe DB will come visit, and give us his ideas on this subject.  I won't understand them, but you likely will, B.  (That's IF dbawel isn't all tanked up on the mushrooms, like he sometimes is.)  heh.  Just kidding... maybe.
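One standard-optics note that may help while waiting for the smarter folk: under a pinhole-camera model (textbook optics, not a Babylon.js API -- the helper name and the example numbers below are mine), a physical focal length and sensor size convert to a vertical fov that could then be assigned to camera.fov:

```javascript
// Pinhole-model sketch (hypothetical helper, not part of Babylon.js):
// vertical fov = 2 * atan(sensorHeight / (2 * focalLength)), in radians.
function focalLengthToFov(focalLengthMm, sensorHeightMm) {
  return 2 * Math.atan(sensorHeightMm / (2 * focalLengthMm));
}

// Example: a 50 mm lens on a full-frame sensor (24 mm tall)...
const fov = focalLengthToFov(50, 24); // ~0.471 radians (about 27 degrees)
// ...which could then be assigned: camera.fov = fov;
```

That covers focal length and sensor size; lens distortion is a different beast and would need a post-process or custom shader.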

What's my association with BJS?  Well, I'm in love with the framework... mostly for art and storytelling... not much shoot 'em up interest.  I am one of the custodians for the docs... primarily tutorials and overviews sections (I fix stale links), and... all you guys and gals are my dear acquaintances... mathematicians, artists, designers, webGL mad scientists... I REALLY just like hanging around with all of you guys.  I learn at 10,000 mph when I camp this forum.  I have not written any core code... except I contributed light.setDirectionToTarget() to all lights that use .direction.  Just exciting, huh? :)

What's your story, B?  Wha cha doon?  Wha cha into?  Where ya been?  (But don't let me pull you too far off-topic, of course.)  Bye again.

 


Oh @Wingnut - I can't believe how well you know me from reading strictly written dialogue. I hope we can get together one day to share a "brew" face to face. ;)

Anyway, I'm really glad to hear @bablylon is asking these questions, as I don't know many people who have ever wanted or been required to have such knowledge. However, this will all change in a couple of years when Light Field Rendering becomes the standard by which we measure light absorption, reflection, camera sensors, digital displays, etc. So now is the time to learn, if you want to stay in front of the rest of the world (as I know @Wingnut takes very seriously.)

The best reference I can find online is a white paper by K. Grauman, written while at Berkeley (Kristen Grauman, if I am correct), which can be found here: http://www.eecs.berkeley.edu/~trevor/CS280Notes/02Image%20Formation.pptx

It helps if you know a little linear algebra, though it's not entirely necessary, as the diagrams are excellent for understanding the basic physics of digital camera properties. Make sure to view every page. I can also say that I had to learn all of this the hard way: not only syncing multiple cameras with different physical properties such as lenses and sensor deltas (even using the exact same physical camera models), but then mapping each camera's unique "distortion map" to calculate an object's position in 4-dimensional space down to the sub-pixel using 2D images. But it was invaluable in understanding digital rendering. Perhaps this sounds completely irrelevant on the face of the matter, but once you understand the physical properties of light, rendering becomes second nature - as does understanding the translation of digital geometry to fit our physical senses.

I should stop there, as the white paper provides all of the information and math required to understand the matrix properties of camera FOV, which includes intrinsic (pixel) and extrinsic (world view) matrix properties.

I'll just add that this is extremely important in the calculations of all digital rendering, but becomes far more complex in understanding Light Field Rendering. To simplify: imagine this in a quantum world, where you will be able to view any object from any viewpoint simultaneously, as well as consider all possibilities in a single point calculation (within limits, of course) - which is required to calculate believable LFR. This is far outside what (most) current physical cameras and digital renderers calculate, but it will be inescapable in the very near future - and the time is now if you want to know what is hitting the consumer display market in a couple of years.

I'm happy to answer any questions concerning cameras, renderers, etc., and most any developer can currently build anything within the current limits of digital rendering, such as in the WebGL framework and Babylon.js (which I love :) ) - but soon it all changes.

Cheers,

DB


Thanks for the wonderful responses. 

I tried to get things working without having to dig deeper into the source code but have been unsuccessful.

Is there a paper that explains the math specific to Babylon.js? (I understand the theory and math of projective geometry to some extent - just not how Babylon uses it.) And is there something that explains how the system is designed, how control flows through the code, and what the purpose of each function is? (See below for one such source of confusion.)

babylon.math has a function called PerspectiveFovLHToRef which is used to get the projection matrix for the camera. I noticed that the last two lines of the function suggest that the matrix has at least 16 entries. Studying the matrix has led to the following questions:

  1. A projection matrix takes P³ to P² (3D projective space to the 2D image plane). This would imply that the projection matrix should be 3x4. However, the matrix in the source code clearly has more entries. Why are those extra dimensions needed?
  2. Why does the projection matrix need to know znear and zfar (probably to determine the view frustum)?
  3. Why do the diagonal terms in the top two rows of the matrix differ by a factor of engine.aspect? Shouldn't they be the same?

I am sure there is some smart linear algebra going on. Is there a source?

PS: @Wingnut I am working on a side project that is trying to accurately reproduce images taken by cameras.


@bablylon - there was a post last year discussing this in reasonable detail that should provide you with answers to most of your questions:

As for an answer to your question #2:

8 hours ago, bablylon said:

Why does the projection matrix need to know znear and zfar (probably to determine the view frustum)?

Generally, as in OpenGL and DirectX, the frustum is used to generate a coordinate system relative to the camera (and its properties), providing the information needed to calculate values such as the position of a vertex within this local coordinate system. This is how fov and other values are calculated in the "virtual" space of the camera, to approximate how light passes through a physical lens and render a reasonably accurate image relative to a physical camera. I haven't looked at the math specific to babylon.js, but I assume you'll find the computation of the frustum is similar to that of OpenGL and DirectX - as well as other APIs.
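To make that concrete, here is a sketch of the standard D3D-style left-handed perspective matrix as a flat array of 16 numbers. To my understanding this matches the formula in PerspectiveFovLHToRef, but verify against the Babylon.js source; the function below is my own illustration, not the library's code:

```javascript
// Sketch of a left-handed perspective projection (a hypothetical stand-in
// for PerspectiveFovLHToRef -- check the Babylon.js source for the real one).
function perspectiveFovLH(fov, aspect, znear, zfar) {
  const t = 1.0 / Math.tan(fov * 0.5);
  return [
    t / aspect, 0, 0, 0,                       // x scale divided by aspect (question 3:
    0,          t, 0, 0,                       //   it keeps pixels square on a non-square viewport)
    0, 0, zfar / (zfar - znear),            1, // znear/zfar remap depth into [0, 1] (question 2),
    0, 0, -(znear * zfar) / (zfar - znear), 0  //   and that lone 1 copies z into w for the
  ];                                           //   perspective divide -- hence 4x4 (question 1)
}
```

In other words: the extra row exists because the division by depth is deferred to the homogeneous divide, and znear/zfar are baked in so depth lands in the range the depth buffer expects.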

I hope this helps.

DB


  • 1 month later...

@dbawel is all mushroomed-up, again... I can tell.  But when he is in that condition, he can see into the future.  Then, if you can get him talking, he'll tell you everything he sees, like he did in this thread.  It helps if you can speak Photon, though.  Many of those Photonic words he uses... don't translate smoothly to English.  :)

And don't let that talk of "change" scare anyone.  Local coding Gods will hide all that alien Photonian technology... under a nice chocolatey layer of user-friendly.  Deltakosh and the core team will not settle for anything less.  Custom cam inputs, SIMD, webworkers, observables, did you feel any bumpy roads when the core boys slid those advanced things under the BJS floorboards?  Nope.

It's cuz they're Gods.  :)  But DB... he's actually part alien (Photonian)... on his father's side, I believe.  I heard a rumor that he can turn mostly invisible... by bending light and space-time.  He'll do it for friends, but... I heard that after each time he does it, 3 days later... he has to deal-with a 42-hour workday.  Just one, but, still.  I guess it has something to do with his space-time un-bending. 

But I can still out-bend him in off-topicating a forum thread.  SO, neener, et al.  :)


@Wingnut - My Brother... how well you know me. I've been watching your posts lately, and am impressed. My expertise (if you can even call it that) is far more limited. Today, I'm just trying to declare "empty" variables for videoTextures and considering a post; but I'm embarrassed at my limited understanding of the Javascript language.

However, having said that, if you look at how OpenGL calculates a vertex in space from a camera's viewpoint and orientation, it's basically the same. I was going to write about the 16 matrix positions and what they represent, but I found a GREAT link that saved me what I would guess to be an hour of trying to explain on this forum. If you follow the link below, you'll find EVERYTHING you need to completely understand how the camera matrix works in WebGL, and if further explanation is needed by anyone, then I can most likely expand... but the following link just about covers it all.

http://webglfundamentals.org/webgl/lessons/webgl-3d-camera.html

It all harks back to OpenGL, and there is a far more technical link for the tech heads out there - but PLEASE read the above link first, as the next link strays a bit from general usage in WebGL, though it's still quite valuable. So only go here if you want to dive further into physical camera simulation computed to a 2-dimensional screen. For most, I would avoid this, but for some, it might help:

http://www.3dgep.com/understanding-the-view-matrix/

And if anyone has a quick and simple method to declare a variable as a videoTexture without media, to be used by jQuery to load at a later time, then I'm all ears. :D

DB

