
Hi g.  I think that's a spot light prop, not a point light prop.  There is talk about it in the tutorial.  It is the rate of power falloff over distance.  Set it to 0, and the spotlight keeps going, and going, and going.  :)  One could call it distance-reach resistance. The amount of dirt and dust particles in the air... causing drag on our photons.  hehe.  Photon friction.  Somebody stop me, please!

 

Oh, so you want to know the difference between spotlight.intensity and spotlight.exponent, you say?  I have no idea.  I have never gotten spotlight.intensity to work worth a hoot.  Our spotlights have this gruesome hotspot in their center that just refuses to be tamed.  .exponent works pretty well, though.  We will probably have to look into radial falloff eventually... softness/hardness around the edges of the spotlight circle.
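For anyone curious what an exponent like that typically does, the classic spot-exponent math can be sketched in plain JavaScript. This is my own illustration of the standard formula, not Babylon's actual shader code:

```javascript
// Classic spot-exponent angular falloff: brightness fades as you move off
// the spot axis.  cosAngle is the dot product of the spot direction and the
// direction to the lit point (1.0 = dead center, 0.0 = ninety degrees off).
function spotFalloff(cosAngle, exponent) {
  return Math.pow(Math.max(cosAngle, 0), exponent);
}

// exponent 0 => no falloff at all... the "keeps going, and going" case:
console.log(spotFalloff(0.5, 0)); // 1
// Higher exponents tighten the bright center and dim the edges:
console.log(spotFalloff(0.5, 2)); // 0.25
```

So a bigger exponent means a tighter, faster-fading cone, and 0 means the whole cone is at full strength.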

 

It all comes down to what 'beat' or 'song' BJS is going to march to.  Dad72 exemplified it when he said something like "other engines do it this way", and that was why BJS does it a certain way.  Now we are marching to the beat of Blender exporting, trying to honor the bloat and creeping featurism from that modeler.

 

I don't know where it's going.  Sceneloader might be the death of what could have been good, and completely dynamic.  We could have grown our models instead of importing them.  I hope DK has some really old versions of BJS on his drive, so that when all this fluff drives us batty, we can branch an old version and get back to the basic fun... with an API that one can learn in a few days.

 

Gryff, IF you ARE seeing an .exponent on Blender point lights, I'm pretty sure the folks who coded the Blender->babylon exporter would want to know that.  Maybe they now do.  :)  Be good.


I think that's a spot light prop, not a point light prop.

 

 

 

You are right of course - I had been playing with both the spot and point light in Blender and was not concentrating when I typed. I wondered if "exponent" meant it was used for "falloff", though I could not see what Blender parameters influenced it.

 

But that raises a question: both the point light and spot light have falloff properties in Blender, so why is there no 'exponent' for the point light? Think of lanterns, street lights, globe lights, table lights, etc. - lighting which has a limited range.

 

Every time I look at lights - I have questions.

 

cheers, gryff :)


Well, the point light's .intensity prop actually works pretty nicely, so, with a point light, intensity is the same as falloff... I suspect.  It should be that way with spotlight, too.  intensity = power = shoot-distance... I would think.  I speculate quite a bit, though, and because I do that, I'm wrong about 89.4% of the time when I say stuff.  :)
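One common way engines model the limited-range lights gryff mentions (lanterns, street lights) is a simple linear distance attenuation. This is a generic illustration of the idea, not Babylon's actual formula:

```javascript
// Simple linear range attenuation: full brightness at the light,
// fading to zero at `range`.  Illustrative only -- not Babylon.js source.
function rangeAttenuation(intensity, distance, range) {
  return intensity * Math.max(0, 1 - distance / range);
}

console.log(rangeAttenuation(1.0, 0, 10));  // 1   (right at the light)
console.log(rangeAttenuation(1.0, 5, 10));  // 0.5 (halfway out)
console.log(rangeAttenuation(1.0, 20, 10)); // 0   (beyond the range)
```

Under a model like this, intensity and falloff really are tangled together: scaling intensity also scales how far the light usefully reaches.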


with a point light, intensity is the same as falloff.

 

It should not be, though - and it's a poor workaround.  The main reason I was attracted to babylon.js was the ability to create a complete scene in Blender - meshes, cameras, animations and ... lights. I installed EasyPHP on my computer yesterday, so now I can view .babylon files locally - and my experiments with lighting have been disappointing.

 

I want to create mood and atmosphere - dark and  brooding - but that does not look possible right now. What I see in Blender may not be what I get in .babylon :(

 

I speculate quite a bit, though, and because i do that, I'm wrong about 89.4% of the time when i say stuff.

 

You do a fine job Wingy - I have downloaded a bunch of your demo files - to learn from them :)

 

TC

 

cheers, gryff :)


That might be a new (maybe pinned) forum topic. Blender Exporting - Before & After.  You always wanted your own "Goes On Forever" forum topic, right gryff?  ;)  You are free to answer deltakosh here, of course, but I bet if you started a new forum topic, named something like that, or similar, introduced its purpose in the first post, maybe DK would be willing to pin it.  Potentially, it is a pretty big topic.  Then you could use the second post to loosely quote dk's comment... and pour your heart out into an answer.  *shrug*

 

You don't need to be an expert.  I think deltakosh would be interested in hard facts, or opinions, or feelings, or any words on the subject.  I am speculating again, though, so I could be wrong.  :)

 

PS: Thanks for your kind words.  You have become a friendly team member and valuable asset as well.


Well, I hope gryff returns someday.

 

Meantime, i have begun work on the cameras tutorial #5.  First, I saved the original, as I always do.  Then I installed the version that I recently edited.  I removed touchCamera temporarily.  It will be re-added real soon, or included in a new tutorial that talks about our 5 'specialty cameras'.  I consider touchCamera a specialty camera.  Others may not agree.

 

The current camera tutorial only covers freeCamera and arcRotateCamera.  I can put touchCamera back in there quickly, if someone feels it needs to be done.  I am going to be working over the next few days... to add more to the current tutorial, or possibly make a tutorial called "specialty cameras".  I need to think about it.  Suggestions certainly welcome.

 

The section that WAS in the cameras tutorial about touch cameras... was just a simple introduction to them, anyway.  I think touchCameras will need to be talked about in much more depth... and it might not belong in the "basic series".  I don't know enough about touch cameras yet, to write much good information about them.  I will get right to work on it, though.  I realize that mobile devices use touch-events quite extensively, so I will not ignore the touchCamera too long.  I do not use mobile devices, but many others do.  The next few days of my life will be dedicated to learning all 5 of the "specialty cameras"... as fast as possible, and quickly writing as much as I can about them, somewhere.

 

Meantime, take a look at the fresh cameras tutorial, which covers free and arc.  Help me find and fix mistakes... and make comments and suggestions here.  Shrink and relocate the new picture, as wanted (or even remove it).  Adjust anything.  Be well!

 

PS:  DK - you and Davrous and friends... can write information documents in French, in ASCII, and I will do my best to quickly translate to English and also into MD, and make them look somewhat like a tutorial.  It doesn't have to be pretty, but type lots.  Reiterate much, so the software translator has lots of text to work with. With those software translations and with looking at the code, I should be able to quickly write some tolerable .md documents.  Just fire them at wingthing at charter.net.  I'll translate, markdown, and put them all on a new wiki menu... maybe something like
"The Trailblazer Tutorials"  :)  We can enter new territories fast, with bulldozers, even if it gets messy.  :)


You did a really great job on tutorials!

 

From my point of view, I think this tutorial (#5) should at least cover: free, arc and touch. We have special articles for virtual joysticks and Oculus. Anaglyph cameras are just free or arc with postprocesses.


Thanks.  I will get touchCam put back in there within a day or two.

 

You say "camera.lockedTarget = vector3 or other object".  By 'object', do you mean a mesh?  How about a light? I don't know why anyone would use a light as a target, but I'm still curious. :)

 

The docs claim it takes a 'type'.  Users won't know what  a 'type' is.  Should that be adjusted in the API for more clarity?  Is Temechon the man for that?  Should I PM him?  Should I post in documentation thread?  Is he still alive?  Thoughts?

 

This is what I see:

 

freecamera.setTarget() - vector3 ONLY, and does not lock.

freecamera.lockedTarget - vector3 or 'object'  (light too?) - locks - no extra args for offsets like mesh.lookAt()

freecamera.lookAt() - not planned.

 

mesh.lookAt() - vector3 ONLY + extra args for offsets.  NEVER a mesh, camera or light, but can use mesh.position, camera.position, or light.position - a-ok.  Never locks.

 

mesh.lockedTarget - not planned.

mesh.setTarget() - not planned. mesh.lookAt() is identical except for optional yaw, pitch, roll offsets.

 

I think that's all correct, yes?  :)  (My brain hurts)  hehe

 

Did anyone think about making freecam.setTarget accept a second arg - a boolean?

 

freecam.setTarget(mesh_or_vector3, locked_or_not)?

 

Then add a .target to freecam, and if it's set, it's a locked target.  If .target is clear, no locked target.  Then remove .lockedTarget, and use .target instead.  *shrug*  (Just ignore me)  :)
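The idea above could be sketched like this. Everything here is hypothetical: `setTargetCombined`, the `aimPoint` property, and the stand-in camera/mesh objects are mine, not real Babylon.js API:

```javascript
// Hypothetical combined setTarget -- NOT real Babylon.js API, just an
// illustration of the proposal: if .target is set, it is a locked
// (following) target; otherwise we do a one-time aim at a point.
function setTargetCombined(camera, meshOrVector3, locked) {
  if (locked) {
    camera.target = meshOrVector3;   // follow the object as it moves
  } else {
    camera.target = null;            // no locked target...
    // ...just aim once: a mesh contributes its position, a Vector3 is used as-is
    camera.aimPoint = meshOrVector3.position || meshOrVector3;
  }
}

// Plain stand-in objects, just for illustration:
const cam = { target: null, aimPoint: null };
const box = { position: { x: 0, y: 1, z: 0 } };
setTargetCombined(cam, box, true);
console.log(cam.target === box); // true
```

The appeal of the one-property approach is that "is the camera locked?" becomes a simple null check on `.target`.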

 

By the way, I really LOVE the pageup and pagedown keys I added to a freecam... for a project I was doing.  Should freecam have 6 keys?  Maybe SHIFTED up-cursor and SHIFTED down-cursor... make the freecam go up and down the Y-axis?  (I think that's called ped-up and ped-down in the TV industry... abbreviation for pedestal-up and pedestal-down).  That would make a freecam... be REALLY 'free', eh? (Again, just ignore me)  :)
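A minimal sketch of the ped-up/ped-down idea, using plain key handling. The key codes (33/34) and step size are my own choices for illustration, not Babylon.js defaults, and a real FreeCamera hook would go through its input handling:

```javascript
// Hypothetical ped-up/ped-down helper: PageUp/PageDown move the camera on Y.
const PED_STEP = 0.5;

function applyPedKey(camera, keyCode) {
  if (keyCode === 33) camera.position.y += PED_STEP;      // PageUp  -> ped-up
  else if (keyCode === 34) camera.position.y -= PED_STEP; // PageDown -> ped-down
}

// Wiring it up on a real page might look something like:
// window.addEventListener("keydown", e => applyPedKey(freecam, e.keyCode));

const stubCam = { position: { x: 0, y: 2, z: -10 } };
applyPedKey(stubCam, 33);
console.log(stubCam.position.y); // 2.5
```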

 

Moving onward:

 

I will re-install touchCam in basic 5.  What about deviceOrientation? 

 

Should I talk about the other 4 cameras at all, in Basic #5?  These 'special articles' for Oculus and VJ... should I link to those?  Do those special articles need to be made into md?  Someday? 

 

Does deviceOrientation have a special article? Link to it, from tutorial #5?  Will IT need a convert to md, someday? Or just talk about it in tutorial 5, maybe just after touchCam?

 

Oculus, Touch, VirtualJoyStick, and deviceOrientation are really types of inputs for cam control, right?

 

Anaglyph is NOT just a different input control cam, it is a completely different situation, yes?

 

For example, an anaglyph cam... can be controlled by Oculus, Touch, deviceOrientation, or VirtualJoystick, right?

 

SO many questions, huh?  *nod*.  Sorry.


Ok, 3:15 AM, and I am starting to see "the big picture" of the touchCamera.  It represents a step into the world of DOM gesture events (pointer events, and static gesture events, and manipulation gesture events).  There are 'layers' involved here which include our inputControllers, hand.js, and the code for the touchCamera itself.

 

The system looks like it is meant to keep the average user from being concerned over the details.  I think we will try to do the same (not too many details), in our tutorial, if everyone agrees.  I will probably give them a small "taste" of the system behind the touchCamera... just to give them some search 'fodder' (key words) so they can learn more on their own.  Someday, in a separate document, we could probably diagram how DOM gestures work with handJS and work with our inputControllers.  But for now, I think the users of the Playpen Series tutorials just want to know how it works for them.

 

But, I do think that somewhere, somehow, I need to include a quick sentence about hand.js intercepting eventListener additions and removals... and maybe telling them to see hand.js for more information about that.  Anyone have thoughts about that?  (thx)  I have seen some users try to build their own camera controllers, and maybe they will want to know that hand.js is involved in their eventListeners  *shrug*.
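For readers who want a rough mental model, the core trick of a pointer-events polyfill like hand.js can be caricatured as mapping native mouse/touch event names onto unified pointer events. This mapping is a simplification I wrote for illustration, not hand.js source:

```javascript
// Caricature of what a pointer-events polyfill does: translate
// browser-specific mouse/touch event names into unified pointer events.
// Real polyfills (like hand.js) also synthesize the events themselves
// and hook addEventListener -- this only shows the name mapping.
function toPointerEvent(nativeType) {
  const map = {
    mousedown:  "pointerdown",
    mousemove:  "pointermove",
    mouseup:    "pointerup",
    touchstart: "pointerdown",
    touchmove:  "pointermove",
    touchend:   "pointerup"
  };
  return map[nativeType] || nativeType; // unknown events pass through untouched
}

console.log(toPointerEvent("touchstart")); // "pointerdown"
console.log(toPointerEvent("mouseup"));    // "pointerup"
```

That unification is why camera code can listen for one family of pointer events and still work with both mice and touch screens.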

 

I suppose we need an "Everything You Ever Wanted to Know About Hand.js - For Dummies" document, someday.  :)

 

PS: Who is Simon Ferquel?  What does he call himself... here?  Is he here?  Apparently he has done quite a bit of work with Oculus and anaglyph operations, which appear to be IE11-only.  I did an English translation of a blog post by him.  And it looks like davrous... and a chap named Eric Vernie are involved, too.  Trailblazers!  Yay!  Thanks, you guys!


Too many questions :)

 

By object I mean something with a position property

 

Temechon is the good guy for docs

 

DeviceOrientation could be in tutorial #5

For special camera I think links are good ideas

 

Anaglyph cannot be controlled by Oculus. It is a really independent camera.

 

 

Agree about details; Keep it simple :)

 

And, for instance, hand.js is not required anymore for IE11 (because IE11 directly supports pointer events).


Hi.  Thanks for info.

 

The new camera tutorial is installed.  I lightly covered all 8 cameras (2 anaglyph).  I will be proofreading and testing links for a few days. Please do not make edits for 2-3 days, but you can tell me things that need changing, adjusting, and fixing... here in this topic thread... and I will do them.  I am sure I have made many mistakes.

 

I did not include a 'constructor' for the OculusOrientedCamera, yet. I am still a bit short on knowledge about it, but I provided plenty of links for users to learn about it.  I may need some expert help with the constructor.  :)

 

Also, we can drop "specialty" too... I made that up, of course.  They just seemed "special" to me... special purpose.  *shrug* :)

 

Be well!


Ok, "specialty" is removed... but I just replaced it with "unique purpose".  There is still a separation between the first 2 cameras and the last 5 cameras... in the document.  I can completely remove that separation and let the document flow right to the bottom without any sub-categorizing of the last 5 cameras.  I'm easy.  :)

 

That would make the bottom 5 cameras... not seem 'unique' at all.  It would make those bottom 5 cameras look just as common as any other bjs camera.  Thoughts?  thx.

 

The last 5 cameras ARE each 'unique', though... as they all require unique gear.  touch - uses a touch pad/screen, devOrient... uses a mobile device with tilt sensors, oculus uses the O.R. headset, anaglyph needs 3d glasses, and virtualJoystickCam puts 'things' on the user's screen.  Maybe think carefully before 'grouping' the 'unique purpose' cameras...  in-with freecam and arcrotate.  *shrug*

 

Speaking of vjcam and the virtual joysticks, I need to add some more teaching about that.

 

What if user wants them reversed... cyan on right, yellow on left?

 

What if user wants axes inverted? 

 

What if user's scene.clearColor is cyan or yellow? 

 

If user wanted to explore BABYLON.VirtualJoystick in the API, could it be done?  Could they find VirtualJoysticksCamera.leftjoystick._joystickColor and change its value?  Would they know it used a string instead of a Color3?  Would they know which strings are allowed?  Do I ask enough questions?  ;)

 

It IS kind of cool that a virtual joystick is a piece of canvas, and not a mesh.  That is why web colors and DOM node stuff is used on vjCanvas.  Pretty cool.  I like it.  That overlaid-canvas system is good for all kinds of GUI uses. 

 

Do you know of a document where you and/or davrous talk about the virtual joysticks?  thx.  No need to answer my questions about how they work.  :)  I understand them.  I could write a separate tutorial about them, I think. 

 

But the api needs updating so users can browse a BABYLON.VirtualJoystick.... and learn its properties.  Our API is only rated to 1.9.0, though.  Work work work.  :)  Should I donate some money to the Temechon beer fund?  Are we going to have a picnic sometime this summer?  Can you get Microsoft to buy my plane ticket to it?  :)  PARTY!!  I'd love to meet you guys.  Someday, maybe.


Yeah, I found a document about the virtual joysticks... http://blogs.msdn.com/b/davrous/archive/2013/02/22/creating-an-universal-virtual-touch-joystick-working-for-all-touch-models-thanks-to-hand-js.aspx.  As far as Temechon goes, I don't want to bother him.  Last time I PM'd him, I told him about .fov being listed twice, and it never got repaired, and I haven't seen him since then.  I am scared that he is fed up with maintaining the API.  It is now two BJS versions behind, and I don't know anything about him.  I take it he is not one of your colleagues in the Paris MS offices?

 

I worry about Temechon.  Maybe he is overwhelmed and/or has some real life troubles.  I would hate to lose his knowledge, good hard work, and friendliness.


Ok, I guess I am done editing on the camera tutorial for the moment.  Anyone else who wants to edit on it, fix my mistakes, clarify, add things, please feel free to do so.  Thanks for the delay.  Maybe I/we should shrink that new picture?  It's pretty big and "in your face".  :)

 

I was also thinking about a small picture down in the Oculus and anaglyph area...  showing a stereoscopic scene... to show the red-cyan shift of 'eyeSpace'.  *shrug*  But it's real easy for users to construct an anaglyphFreeCamera or an anaglyphArcRotateCamera using the constructors I have provided... and SEE the red-cyan shift for themselves.  And then they can play with the .eyeSpace property and it's all real easy... 3D glasses or not.

 

I have not played with the Virtual Joysticks very much, but it seems they could be used with ANY camera.  Our VirtualJoysticksCamera activates Virtual Joysticks automatically, though. 

 

I don't know much about the Oculus Rift, but I suspect that IT controls the camera... via head-tracking.  So it would not be a good candidate for VJ (virtual joysticks).  But anaglyphFreeCamera could use them nicely.

 

Although I have no touch devices, I think the VJ camera is touch-ready.  I used my mouse to control them during my testing of the constructor in the tutorial.  In David Rousset's video, we don't see his thumbs on the screen, but I think that is because of the way the video is recorded.  I think he is controlling those VJ's... with his thumbs.  He has both joysticks active at the same time, and that would require two mice, two joysticks, or... a pair of David Rousset thumbs.  :)

 

Hope everyone is well.


Wake up, Tutorial Talk topic!  :)

 

   Hi gang.  I have noticed that all of our github wiki-based tutorials have a "menu" (pages) on the right side, these days (like this).  This reduces the available width for tutorial content.  It makes many of our code examples in our tutorials... word wrap.  Did a template get changed somewhere, and do we editors... or readers... have the power to remove the right side menu?  Anyone know?  Thanks for any information on that.

 

To go a step further, github wiki pages do not 'scale'.  What I mean by 'scaling' is... the text on the pages does not re-FLOW... when using control-mousewheel to change font sizes, or when resizing (restoring) the browser window.  This often happens when the CSS uses "px" for its sizing... instead of percentages.  Does anyone know if we can do our own stylesheets for the Babylon.js wiki pages, or maybe load a stylesheet last, so we can do our own style overrides?  I can read about it myself, too.  I was just hoping to take a shortcut to knowledge, here.  :)

 

For those who have never seen great scaling...

http://www.blender.org/documentation/blender_python_api_2_63_2/info_overview.html

 

You can control-mousewheel that puppy to HUGE fonts before that scrollbar on the bottom of the screen turns on.  That's because the text re-flows.  I like webpages that do good scaling.  Maybe it is because I am getting old and need big fonts.  :)

 

My primary concern, though, is the loss of page width caused by the right-side PAGES menu and I am wondering IF that was something chosen by Babylon.js admin, or IF it is something that github wiki admin forced upon us, or what.  All comments welcome, as always.  I hope everyone is well.

