dbawel last won the day on August 28 2016

dbawel had the most liked content!


About dbawel

  • Rank
    Advanced Member
  • Interests
    Film, Television, Animation, Technology

  1. This is the real deal: https://www.amazon.com/POWER-MOGA-Pro-Electronic-Games/dp/B00FB5RBJM/ref=pd_sbs_63_t_0?_encoding=UTF8&psc=1&refRID=SPS0CX8V0DNMMH3AWQVA DB
  2. @MrVR - The MOGA Pro controller is the best on the market, and cheap to buy. It has two different modes to connect to your phone or device, and mode A will act as close to the Xbox controller as possible. It's completely plug and play, and even if you aren't developing, it will provide the best control for any VR game you have. But for devs, it will provide all of the standard Xbox control schemes. DB
  3. @letsbrostudio - I hope this doesn't piss too many people off, but WebGPU can bite my sack. Apple has been the biggest nightmare and worst control freak I've had to deal with for more than 20 years. They have cost me loads of money and resources trying to play their game strictly by their rules, and I wised up years ago and never looked back. At least @Pryme8 is looking forward to WebCL, which looks promising, but let's not get ahead of ourselves. As for WebGPU, I wouldn't touch it with another dev's coding stick. Just my opinion, so please don't hate on me. Although I still hold complete contempt for Apple and wish they would be willing to join the rest of us and get off their unrealistic, Godly high horse; or simply go away. Enough said. DB
  4. @juanmajr93 - Try this - http://www.babylonjs-playground.com/#27FN5R#19 DB
  5. @MrVR - Here is a PG with the camera streaming as a video texture: https://www.babylonjs-playground.com/#1R77YT#4 DB
  6. @juanmajr93 - It appears you're working within the Oculus environment, as seen in the reference image you provided above. However, the way I work is to avoid plugging the Samsung phone into the GearVR, and simply launch the browser in landscape mode. I then use a MOGA Bluetooth controller to control not only my phone's UI, but also my content or game. This is the fastest and simplest method I've found to develop for multiple devices quickly and efficiently; once the game is functioning as desired, it's easy to port to practically any VR device. Working this way simply allows for speed and flexibility. DB
  7. @hunts - It appears you're looking to use pop() as you might expect - to restore the previous state of a mesh or property. However, I don't believe that pop() alone will achieve this in BJS. It's been a while since I've attempted this, but pop() didn't behave as I might have expected - although I haven't attempted to use it since prior to Babylon.js v2.0. Now that it is clear what you want to achieve, I expect other devs on this forum will explain either why this does not work as might traditionally be expected, or how you can achieve it without having to dispose and restore or push manually. I wish I could be more help currently, but @Deltakosh, @davrous, or one of the big guns can most likely help us both better understand the usage and/or limitations of pop() in BJS. I'll be looking forward to the answer as well. Mentioning these user names specifically in this post should get their attention - so I might expect an answer soon. I could also list a few others who I'm certain know the answer, but I don't like to name names. But you guys know the list. Cheers, DB
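To make the point above concrete: a minimal, library-free sketch of why pop() alone can't "undo" a mesh change. Array.prototype.pop() only removes and returns the last element of an array you maintain yourself, so restoring a previous state requires that you pushed a snapshot of that state first. The mesh object here is a plain stand-in, not a real BJS mesh:

```javascript
// History stack of position snapshots, maintained manually.
const history = [];
const mesh = { position: { x: 0, y: 0, z: 0 } }; // stand-in for a BJS mesh

function saveState(m) {
  // Push a copy, not a reference, or later edits would corrupt the snapshot.
  history.push({ ...m.position });
}

function restoreState(m) {
  const prev = history.pop(); // undefined if nothing was ever saved
  if (prev) m.position = prev;
}

saveState(mesh);                      // snapshot { x: 0, y: 0, z: 0 }
mesh.position = { x: 5, y: 0, z: 0 }; // mutate the mesh
restoreState(mesh);                   // position is back to x: 0
```

The same pattern works for any property (rotation, scaling, material settings) - the key is that pop() restores nothing by itself; it only hands back whatever you explicitly saved.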
  8. @juanmajr93 and @MackeyK24 - I have been working with the white GearVR for over a year, and find it simple to develop for. I also have the black GearVR, and it doesn't improve anything for me personally - and realize that most users currently use the white GearVR, which is your target audience. I also have the S5, S6, and S7 - and the main difference, other than GPU speed, is that the S6 is prone to overheat. Since any of the three can overheat with intensive use and shut down on their own, buy cheap gel packs; don't freeze them, but simply place the room-temperature gel pack on the back of the phone after plugging it into the GearVR. Otherwise, if you're pushing any heavy use of your GPU, you'll shut down in about 20 minutes, which sucks if you're trying to work (develop). As for my own personal process, I generally use the Samsung internet browser, with the browser elements off-display to take my media full screen. And certainly don't launch the Oculus app, as it is a pain to develop through this medium. I don't find any advantage in using the WebVR API, as I build my scenes natively in BJS and JavaScript and use the WebVRCamera - it's that simple. In my opinion, and after developing for more than a year on the GearVR, this is the quickest and easiest way to build content, as using the WebVR API means slowing down my workflow until I have the app developed. Then I can port wherever I need - whether it's GearVR, Cardboard, Vive, etc. Just my opinion, but I personally would advise keeping it as simple as possible until the content is practically complete and working natively in the Samsung browser for GearVR. And for other VR/AR devices, I keep it equally simple. DB
  9. @Hans - Yes, this can become more complex than necessary - but you can orient your camera and/or scene gravity to create the appearance of X or Z negative mass. So if your physics simulations are relatively uniform throughout your scene and over time, then you can often achieve the behaviors you require by setting scene.gravity to create a negative mass for all objects - which can also be changed conditionally over time. In addition, you can set procedural animation for each mesh in an array, or for separate arrays and conditions, which will work in addition to any impulses you might be including in your simulation. A typical example is to create a condition such that if your mesh is moving at less than your desired velocity along any specific vector, you add a value to the object's own mesh.position.z or mesh.position.x; this will translate your mesh in the direction of whatever vector you desire - and your impulses will still behave as expected, simultaneously with the addition to the object's own position, independent of the impulse (subjectively). Many devs haven't yet discovered that you can add values to a mesh's own position to create movement in addition to any physics impulses already in play on the mesh's impostor - which allows you to change the direction (position) of movement, set a speed limit on collisions, and many other possibilities for animation. So there are many solutions to what might be perceived as a problem, and these can all be overcome. And I've learned from working on projects with @Pryme8 that it is better to set your own object's movement, velocity, etc. - or change in position - and to avoid using impulses altogether in your physics simulations. However, both of these methods to translate your mesh through world space are valid - separately and together. It simply takes a little experimentation to create the desired movement for your specific scene or instance.
However, in my experience, I now find far more control in avoiding impulses, and conditionally setting the change in position, acceleration, and velocity for any mesh in a physics simulation with my choice of physics engine, while maintaining every other aspect of the simulation - as this allows me far more flexibility in any simulation and scene. I personally find cannon.js easier to use in these circumstances. Oimo.js is great as well, but a bit more complex to navigate in such conditions. DB
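The conditional "nudge" described above can be sketched without any physics engine at all. This is a minimal illustration, not BJS API: if a body's velocity along an axis falls below a target, add a small delta directly to its position, independent of whatever the engine does with impulses. The function name and the threshold/delta values are my own:

```javascript
// If the body is moving slower than minVelocityZ along Z, translate it
// directly; this stacks with any engine-driven motion on the same frame.
function nudgeAlongZ(body, minVelocityZ, nudge) {
  if (body.velocity.z < minVelocityZ) {
    body.position.z += nudge; // direct position change, no impulse involved
  }
  return body;
}

// Illustrative stand-in for a physics body (a BJS mesh + impostor in practice).
const body = { position: { x: 0, y: 0, z: 0 }, velocity: { x: 0, y: 0, z: 0.1 } };
nudgeAlongZ(body, 0.5, 0.05); // velocity 0.1 < 0.5, so position.z becomes 0.05
```

In a real scene you would run a check like this inside the render loop (e.g. scene.registerBeforeRender) for each mesh in your array, reading the velocity from the mesh's physics impostor.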
  10. @Rolento - Unfortunately, I don't see how you might apply a third texture to the mesh in your PG scene to accomplish your desired goal of smoothly blending the texture channels in your multi-materials. The simplest solution I can think of doing quickly and easily is to "generate" a separate mesh duplicating the faces of your existing mesh where you wish to blend materials, and apply a texture to this duplicate mesh which, by masking, blends the multi-materials on your current PG mesh. Alternatively, you can create a simple shader to accomplish this. Take a look at ShaderBuilder in BJS, and you should have all you need - however, there are issues with shaders and lights (such as shadows) in BJS, so be aware of the limitations of shaders before you try to implement one, so you don't waste your time. So in my opinion, the first suggestion is the quickest and most flexible approach I'm personally aware of. DB
  11. @hunts - As you're already setting the U and V scale for the diffuseTexture in your material properties, what specifically are you trying to achieve that you are not currently able to do? DB
  12. @Hans - I'm not certain why you might want to change the world-scale gravity dynamically, but this is certainly something you can accomplish; and I find this action more controllable using Oimo instead of Cannon - based upon many experiments taking all settings in each physics extension into consideration. But if I were personally attempting to alter the effect of gravity using either Oimo or Cannon as the physics engine, I would change the mass of my objects using a variable passed to each mesh's physics impostor; this allows a single change in the variable to affect as many objects as required at the time. It also allows you to set the behavior of each mesh or group of meshes very specifically - as opposed to achieving the desired behavior through a single global setting, with which it is far more difficult to get the detailed result you want from the physics simulation. So it is far more flexible and simpler to change the impostor's mass over time and/or to make a single adjustment to each impostor's mass - however, make certain you call mesh.physicsImpostor.forceUpdate() to apply the change in mass to each object's physics impostor. There are other methods you can apply, such as scene.gravity = new BABYLON.Vector3(X, Y, Z); but I personally find that changing the mass of your physics impostors provides far more flexibility and more specific control over the objects in your scene. DB
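A minimal sketch of the "one variable drives many impostors" idea from the post above. The helper name is my own; it assumes an array of Babylon.js meshes that each already have a physicsImpostor, and uses the impostor's mass property and forceUpdate() call mentioned above:

```javascript
// Set the mass of every impostor in a group from a single value, then
// force each impostor to push the change into the underlying engine.
function setGroupMass(meshes, newMass) {
  for (const mesh of meshes) {
    mesh.physicsImpostor.mass = newMass; // update the stored mass
    mesh.physicsImpostor.forceUpdate();  // apply it to the engine's body
  }
}
```

Changing one variable and calling setGroupMass(crates, 0.25) (a hypothetical usage) then retunes an entire group at once, rather than fighting a single global gravity setting.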
  13. @david028 - As long as you set your camera at a far enough distance from your target mesh(es), the entire area within your canvas will be used to display the area defined by your top, bottom, left, and right. So to display a plane with the dimensions of 180 x 90, which was necessary for my scene, I used values from -90 to 90 in X (180 world units), and values from -45 to 45 in Y (90 world units). This resulted in an orthographic viewport which fit my planar mesh perfectly. If you set up a Playground scene, I and others can help you solve any additional parameters you might need to replicate an ortho view as seen in 3D apps outside of the WebGL environment. Simply start with the settings above in a PG scene and test - and we can work from there quickly to accomplish whatever you might require. For me, the PG helped me quickly understand all I needed to know about the ortho camera in BJS. I wish I could find the PG scene I built, but I cannot. However, if you are truly stuck on an aspect of this, then I can build you a PG scene. But it's always best to build it yourself, as you'll answer your own questions quickly. DB
  14. @david028 The simplest method I've found is to define your own orthographic camera in BJS. Below is code I just stripped from a recently delivered project:

      var camera = new BABYLON.FreeCamera("camera1", new BABYLON.Vector3(0, 100, 0), scene);
      self.camera = camera;
      camera.setTarget(BABYLON.Vector3.Zero());
      //camera.attachControl(canvas, false); // Not needed unless debugging in camera or for some other reason.
      camera.mode = BABYLON.Camera.ORTHOGRAPHIC_CAMERA;
      camera.orthoTop = 45;
      camera.orthoBottom = -45;
      camera.orthoLeft = -90;
      camera.orthoRight = 90;

  Just remember to define the size of the viewport relative to your camera's position from the target (I highly recommend setting a target), and/or its position in space relative to the geometry you're rendering. This allows not only camera control, but any size in BJS world-scale units that you wish your ortho view to be. DB
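As a small convenience, the four ortho bounds used above can be derived from the visible width and height you want in world units. This helper is my own, not a BJS API - it just centers the view on the camera's target:

```javascript
// Turn a desired visible area (world units) into centered ortho bounds.
function orthoBounds(width, height) {
  return {
    left: -width / 2,
    right: width / 2,
    top: height / 2,
    bottom: -height / 2,
  };
}

// For a 180 x 90 plane, this reproduces the values in the snippet above.
const b = orthoBounds(180, 90); // left: -90, right: 90, top: 45, bottom: -45
```

You would then assign b.left, b.right, b.top, and b.bottom to camera.orthoLeft, camera.orthoRight, camera.orthoTop, and camera.orthoBottom respectively.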
  15. @Deltakosh DK - Thank God you came along, as I spent over an hour yesterday trying to solve this for @tranlong021988. I came back today because I had reconstructed the scene several times and gotten the same response. I simply didn't want to look like the village idiot, as bugs of this nature - with known lights on standard materials - don't often pop up. Next time, I'll post any results I find sooner. DB