Everything posted by Amarth2Estel

  1. Wow! Very fast! Thank you very much @Deltakosh! I really appreciate it.
  2. Hi @Deltakosh! Thank you for your answer! If it could be moved inside the renderingGroup, it would have the perfect behavior for me. What I still don't get is why, in the PG, before the first click, the scene is as I want it to be (with the bounding box back lines not rendered). It looks like the showBackLines attribute of the BoundingBoxRenderer is not taken into account when renderingGroupIds are set.
  3. Hi guys! I ran into an unexpected behavior with the bounding box renderer and I wonder whether it is a bug or I am just missing the point. What I do: I disable showBackLines on the bounding box renderer. I create 2 boxes, one in front of the other, and make them show their bounding boxes. On click, I switch the front box's renderingGroupId between 0 (initial) and 1. What happens: Before the first click, the scene is as expected: only the visible edges are drawn for both boxes. After the first click (front box renderingGroupId set to 1), the front box still has only its visible edges displayed, but the box behind it (whose renderingGroupId is still 0) has ALL its edges displayed. After the second click (front box renderingGroupId back to 0), both boxes have ALL their edges displayed. Here is a PG showing this behavior: Every answer is welcome to help me understand how to keep the "before click" behavior with only the visible edges drawn, even when there are multiple renderingGroupIds in the scene. Thank you very much
  4. Hello Distraub! I did some tests with your PG. The first thing I can say is that even with a negative Y position, the picked point may be the one you are looking for: when using the pickWithRay method instead of intersectWithMesh, even with the not-working Y position on your alternate sphere, it works as expected. I think it is because pickWithRay gets the first hit from source to target, while intersectWithMesh has another behaviour: Hope it will help
  5. Hi Wingnut! Thank you for your answer! Yes, this is an augmented reality project and you are completely right, I should have mentioned it. I will edit my post. But, from what I saw, it seems that most AR apps use a video stream and a marker with a specific pattern. Using pattern recognition, the marker can be found... and the magic of math happens! I won't use a specific marker because I cannot ask users to print a carpet-wide pattern and because I only want to use static images. However, I am well aware of the relation between these two approaches. I found an algorithm named "POSIT" which seems to fit my needs and data. It seems to compute the pose of the camera from points taken in 2D. It exists for non-coplanar points (the case of my problem) and coplanar points (markers with specific patterns). Video Source code Maths and algorithm alternatives I will go on eating math all day until I find an answer! I will post here! About your freeway-noise-reduction project, it seems great! Too bad you cannot continue this work 😕 ! I completely agree with you about spatialization data. I hope it will become commonplace as soon as possible!
  6. Hi everybody! I'm working on an Augmented Reality problem that involves math too complex for me. To be honest, I'm not sure there is a solution to this problem. Any help is of course welcome (even if this help consists in telling me "Impossible, you cannot go from an N dimension to an N+1 dimension"). I would like to place a 3D object on a photo uploaded by a user. My intuition is that by asking the user to place the origin and the axes of the coordinate system, specifying for each axis a "real" length from a known reference frame, it could be possible to determine the sequence of transformations that led to such a projection of the 3D coordinate system, and to deduce the position of the corresponding camera etc., to get the correct parameters to simulate the presence of a 3D object on a picture. This could of course only work if the user is able to draw the 3 axes and define their real length on the picture. Here is a nice drawing of the goal I would like to achieve: the user has entered the 3 axes on the projection picture and, thanks to a known distance, has dimensioned them. The magic of math goes through there... We are able to draw a 3D object in this scene: here is a perfectly drawn unit cube on the XOZ plane 😁 Here is a PG that reproduces the first part (the data entered by the user are directly entered in the code). I don't own that background, I just needed an image to show the problem. I thank you VERY MUCH in advance for your help! PS: I know that the field of view of the camera is something to be taken into account as well, but I think a simple slider may allow the user to change it to match the picture's FOV.
  7. Hello Jose Vicente! I don't know why your code doesn't have the expected behaviour. BUT, to deal with your face orientation issue, I think you should use the flipFaces method on your mesh. Here is a PG: Because a cloned mesh uses the same vertexData as the "original" mesh it has been cloned from, flipping the faces of the clone will flip the faces of the original mesh as well. The same thing applies to the instanced mesh. To change the orientation of the cloned mesh only, you should try to use makeGeometryUnique on it to un-link its vertexData from the vertexData of the original mesh... but it will lose the benefit of cloning. You can comment line 42 to see this behaviour. To apply a change (scaling, rotation or position) to the vertexData directly, you can use bakeCurrentTransformIntoVertices. Keep in mind that if you call this method on a mesh, all the meshes sharing its geometry (instances and clones) will also be affected. Hope it will help!
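  A minimal plain-JavaScript sketch (mock objects, not the real Babylon.js API) of why flipping a clone affects the original: clones reference the same index buffer, so reversing each triangle's winding order in that shared buffer flips every mesh that uses it, while copying the buffer first (conceptually what makeGeometryUnique does) decouples them.

  ```javascript
  // Reverse each triangle's winding order in an index buffer.
  // This is the conceptual effect of flipFaces on the shared geometry.
  function flipWinding(indices) {
    for (let i = 0; i < indices.length; i += 3) {
      const tmp = indices[i + 1];
      indices[i + 1] = indices[i + 2];
      indices[i + 2] = tmp;
    }
  }

  const sharedIndices = [0, 1, 2, 2, 3, 0];    // two triangles of a quad
  const original = { indices: sharedIndices };  // "original" mesh (mock)
  const clone = { indices: sharedIndices };     // clone references the SAME buffer

  flipWinding(clone.indices);
  console.log(original.indices); // the original is flipped too: [0, 2, 1, 2, 0, 3]

  // Conceptual equivalent of makeGeometryUnique: copy the buffer before
  // flipping, so only this mesh is affected.
  const uniqueClone = { indices: sharedIndices.slice() };
  flipWinding(uniqueClone.indices);
  ```

  The real methods operate on Babylon geometry objects, of course; the mock only shows the shared-buffer aliasing that explains the behaviour described above.
  
  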
  8. Thank you very much @Deltakosh! This perfectly fits my needs!
  9. Hi everybody, I am currently working on a project in which I use different levels of detail for my meshes. I also need to use CSG to perform Boolean operations. As you can see in this PG, despite refreshing the CSG at each frame, only the high-poly mesh is used in the intersection, even when a low-poly mesh is displayed. I have already considered several methods to achieve the desired result: In the case of CSG with static meshes, I could of course perform my Boolean operations upstream, and use the results directly in my LOD system. In the case of CSG with dynamic meshes (whose vertex data, position, rotation or scale can evolve), I would have to catch the event of the switch from one LOD to another (is there an observable?) in order to do the Boolean operation between the actually displayed meshes. If the switch between LODs is not listenable/observable, I will probably have to recode part of the LOD system, but why not. Do you have other ideas to effectively combine these two features? It would be great if Babylon could handle this natively! Thanks in advance!
  10. Hello Babbleon! I don't know about dealing with the Texture only, but the dispose method of Material accepts additional parameters to force the disposal of its textures. Take a look at:
  11. Hi Leanderr! Optimization is the key! In short, to improve performance, you should reduce the number of draw calls. You can use instances, merge meshes, use lightweight texture files, etc. You should freeze matrices and materials whenever possible, to avoid useless computation. Especially in a static scene, there is a lot of "manual" optimization you can do to improve performance. Of course, have as few lights as possible in your scene: 1 can be enough in a lot of scenarios if you use tricks like ambient lighting or illumination maps. From the screenshot of the debug layer, I can see that there are 6 materials for 16 active meshes. You should try to merge meshes sharing the same material. ------- Once every little optimization is done in your scene, if the hardware is still not powerful enough to render the scene smoothly, you may disable rendering features. BabylonJS has its own SceneOptimizer to do it. You should have a look there: and here: To conclude, you can render your scene at a smaller resolution than your canvas, but the result will be stretched to the canvas resolution, so it will of course be blurry. In my opinion, this is something you should do only once you have already done every other optimization. To do so, just use engine.setHardwareScalingLevel(n) (with n > 1, since a scaling level greater than 1 means a lower rendering resolution). See Hope it helps!
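  A quick sketch of the arithmetic behind engine.setHardwareScalingLevel (plain JavaScript, not the Babylon.js engine itself): the engine renders into a buffer of roughly (canvas size / level) pixels and stretches the result to the canvas, so a level above 1 trades sharpness for fewer shaded pixels.

  ```javascript
  // Approximate render-buffer size for a given hardware scaling level.
  // level > 1 => fewer pixels => faster but blurrier; level < 1 => supersampling.
  function renderBufferSize(canvasWidth, canvasHeight, level) {
    return {
      width: Math.round(canvasWidth / level),
      height: Math.round(canvasHeight / level),
    };
  }

  // A 1920x1080 canvas at scaling level 2 is rendered at 960x540,
  // i.e. only a quarter of the fragment work.
  console.log(renderBufferSize(1920, 1080, 2)); // { width: 960, height: 540 }
  ```

  This is why the level should be greater than 1 when the goal is performance: a level below 1 actually increases the number of rendered pixels.
  
  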
  12. Hi Legallon! Concerning the import from your git repo to the playground, I think that is only a CORS problem. Concerning the wrong material, I think the problem is that your .OBJ says which material to use (line 3: mtllib camion.mtl). Just remove the link to the .mtl when you export from Blender (or directly remove this line from your .OBJ) to make sure the material used will be the one you define with Babylon. This should fix your problem. Hope it helps
  13. Hi Jouerose! I don't know much about the GUI, but from what I understand: linkWithMesh links your Control to a mesh, so the control's 2D position is recomputed each frame as the 2D projection of the mesh changes with every camera move. moveToVector3 places your Control at the 2D projection of a given 3D position at the moment you call it, and does nothing on subsequent camera moves. I think your camera, or scene, is not ready in your playground; that's why the projection of your Vector3 is not computed correctly and is set to (0,0,0). With executeWhenReady, you can see that your moveToVector3 is working... but it doesn't update your 2D position when the camera moves: If you want the same behaviour as linkWithMesh, you may call moveToVector3 each frame: Or you can create a mesh, set its absolute position and disable it. Maybe a bit heavier, but it may be easier to understand, using this "pivotMesh" as a 3D pivot for your Control
  14. Hi Timetocode! Well, for points 1 and 2... I don't think there is a perfect solution; it depends on what you need. I suggest that if you want a 'master' tree model to spawn in your scene, then using an OBJ loader should be enough. But maybe glTF or .babylon could be better, especially if you want additional information like lights etc. 3. You just have to disable the mesh: try setEnabled(false) and your mesh won't appear. It is a better way than just setting the opacity to 0 or isVisible to false, because a disabled mesh is not taken into account in Babylon's internal computations. 4. You may use instances: From my experience, you will get better performance by cloning your 'master' mesh, and then merging all the clones: It is incredibly powerful if these clones don't need to be edited later, but then it is going to be harder to remove/modify individual objects, as said in 5. (Do not forget to make sure clones or instances are enabled!) 5. Easy using instances. You may use subMeshes to do it while working with a big mesh made from merged clones, but it is way heavier. To conclude: I guess you should use instances if you know your loaded-and-spawned meshes will be modified later; clones and merging otherwise. Hope it will help!
  15. Hi Matdav! One solution could be to use 2 different meshes as the ground: -> 1 to display shadows only, using the ShadowOnlyMaterial. -> 1 to display reflections only, using a material with a mirrorTexture and without diffuse or specular attributes. You can see a PG of this solution here: Of course, there may be a more efficient way, perhaps using a single mesh-and-material pair... Have fun!
  16. Hello BlackMojito! I copy-paste below a part of: This is a very interesting page where you can find almost all the possibilities about shadows. Freezing shadows in a static world: in case you have a static game world (objects which cast shadows), there is no need to do the same shadow calculations 60 times per second. It can be enough to create and place a shadowMap only once. This greatly improves performance, allowing higher values for the shadowMap's resolution. Shadow generators can be frozen with: shadowGenerator.getShadowMap().refreshRate = BABYLON.RenderTargetTexture.REFRESHRATE_RENDER_ONCE; Ask the light not to recompute the shadow position with: light.autoUpdateExtends = false;
  17. Hi! Your shadows are there, but because of the camera direction you don't see them. Using an arc rotate camera with controls, you can see your shadows are projected, but outside of your original camera view. You will also see that the shadow does not look natural... it is because your character is literally flying: there are 2 Babylon units between him and the ground (Y component). Try working on their positions. By the way, don't forget to set sizeAuto to false when you create your default environment, otherwise your skyboxSize and groundSize are not taken into account.
  18. Hello! I think you may use the 'onSuccess' callback of your SceneLoader to do your material changes. At that point, the scene is not ready yet. Then, you just have to stop the renderLoop while the scene is not ready, and wait until it is ready to start it again. I was not able to test it, because well... my internet connection is too fast (I can't believe I am actually complaining about that). I guess you can also force the LoadingUI to be displayed instead of stopping the renderLoop. I think that in your own code (working out of the playground) it might be easier, as you have complete control over the moment you launch the renderLoop.
  19. Hi efxlab! You may use camera.setTarget(yourMesh.getBoundingInfo().boundingBox.centerWorld) or camera.lockedTarget = yourMesh; If your mesh is still not visible, perhaps your camera.maxZ is too small; then try to increase it. PS: Depending on the transformations you applied, you might need to use the refreshBoundingInfo() method to update the bounding info and to avoid your mesh being considered out of the frustum and therefore not drawn!
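  For readers wondering what centerWorld actually is, here is the idea in plain JavaScript (arrays standing in for BABYLON.Vector3, not the real API): the bounding box center is simply the midpoint of the world-space minimum and maximum corners, which is what makes it a good camera target.

  ```javascript
  // Midpoint of the world-space min/max corners of an axis-aligned
  // bounding box — conceptually what boundingBox.centerWorld holds.
  function boundingBoxCenter(minWorld, maxWorld) {
    return minWorld.map((min, i) => (min + maxWorld[i]) / 2);
  }

  // A box spanning (-1, 0, 2) to (3, 4, 6) is centered on (1, 2, 4).
  console.log(boundingBoxCenter([-1, 0, 2], [3, 4, 6])); // [ 1, 2, 4 ]
  ```

  This also explains why refreshBoundingInfo() matters: if min/max are stale after a transform, the computed center (and the frustum test) will be wrong.
  
  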
  20. Hi maecky! I think you will find interesting info here: To fix your problem, you just need to set mesh.material.needDepthPrePass to true. PG fixed: Edit: As written in the doc linked above, if you have to work with "real" transparency (I mean, not 0.9999), then setting separateCullingPass to true is better! PG with alpha 0.2:
  21. Hi efxlab! BABYLON.Mesh has no PBRMaterial attribute, but a single material attribute to deal with both StandardMaterial and PBRMaterial. You can get the material directly this way and dispose of it (it will dispose of the material only). Moreover, there are parameters in Material's dispose method to force disposing its textures. In the same way, there are parameters in Mesh's dispose method to dispose its material (if it exists) as well. Here is a PG showing these 2 solutions: With solution 1, textures are still in memory. With solution 2, disposing the meshes also cleans up all the now-unnecessary data.
  22. Hi Art Vandelay! There may be a more efficient way to do it, but here is a solution: Create a real mesh from your bounding box data (you will find in THIS post how to do it). Disable this mesh to avoid unnecessary computation and to be sure it is not drawn. Change the predicate function of your picking method to ensure you pick only 'MeshBuildFromBoundingBoxData' (you might use Tags if you have many meshes/bounding boxes in your scene). This way you will get the wanted intersection point!
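  The predicate from the last step can be sketched like this, with plain objects standing in for meshes (the mesh name 'MeshBuildFromBoundingBoxData' comes from the post above; scene.pick passes each candidate mesh to the predicate and only keeps meshes for which it returns true):

  ```javascript
  // Predicate restricting picking to the helper mesh built from the
  // bounding box data. With the real API this would be passed to
  // scene.pick(x, y, pickPredicate).
  const pickPredicate = (mesh) => mesh.name === "MeshBuildFromBoundingBoxData";

  // Mock "scene meshes" to show what the predicate filters out.
  const meshes = [
    { name: "ground" },
    { name: "MeshBuildFromBoundingBoxData" },
    { name: "player" },
  ];
  const pickable = meshes.filter(pickPredicate);
  console.log(pickable.length); // 1
  ```

  With Tags, the predicate body would test the tag instead of the name, which scales better when many such helper meshes exist.
  
  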
  23. Hi Everybody! I am not sure whether this is a bug or intended behavior, which is why I did not post in the bug section. When you set the infiniteDistance property to true on a skybox created from the MeshBuilder, the behavior is as expected: But when you set the same property on a skybox created from an EnvironmentHelper, it changes nothing: I saw in the source code of the TransformNode (line 735) that the parent attribute of the skybox is tested. Of course, with the EnvironmentHelper it is not null. Is it on purpose? "Solution" found while writing this post: Obviously I can get the skybox from the EnvironmentHelper and set its parent to null, but it doesn't look like clean coding to me. Apart from this, EnvironmentHelper is awesome for setting up a clean scene really fast! I love it.
  24. Hello Jean-Philippe! Here is a playground showing that working on camera.rotation.z is enough. The problem is that you are passing a BABYLON.Angle (which is an object) to the z attribute of the rotation Vector3, while it expects a float (an angle in radians). You can use the radians() method on your BABYLON.Angle to get the correct data!
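  The conversion that the Angle object encapsulates is just degrees-to-radians, shown here in plain JavaScript (a stand-in sketch, not the Babylon.js Angle class itself) — this is the float that camera.rotation.z actually expects:

  ```javascript
  // Degrees-to-radians conversion: Babylon rotation vectors expect plain
  // floats in radians, not Angle objects.
  const degreesToRadians = (degrees) => (degrees * Math.PI) / 180;

  console.log(degreesToRadians(90)); // ≈ π/2 (about 1.5708)
  ```

  So instead of assigning the Angle object to camera.rotation.z, assign the radians value it wraps.
  
  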
  25. Hello Yokewang! When you do a new BABYLON.Plane, you create a "mathematical" plane in 3D space. It is of course not rendered, because it is infinite. When you do BABYLON.MeshBuilder.CreateGround(..), you create a mesh which is coplanar with the XOZ plane of your scene, with a given size, number of vertices, etc. To make your mesh fit your mathematical data, you need to align your mesh with your data's normal (the rotation) and change your mesh's position to fit the offset of the math Plane. To do it, you can use: the alignWithNormal method on AbstractMesh, and the scale method on Vector3. Here is a simple playground showing this solution.
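  The math behind this fit can be sketched with plain arrays (a stand-in for Vector3, not the real Babylon API): for a plane a·x + b·y + c·z + d = 0, normalize the normal (that is what you feed to alignWithNormal) and place the mesh at the plane point closest to the origin, -d·n̂, which handles the offset.

  ```javascript
  // For a plane a*x + b*y + c*z + d = 0, compute:
  //  - nHat: the unit normal (for alignWithNormal)
  //  - position: the plane point closest to the origin, -(d/|n|) * nHat
  function planeAnchor(a, b, c, d) {
    const len = Math.hypot(a, b, c);
    const nHat = [a / len, b / len, c / len];
    const position = nHat.map((v) => -(d / len) * v);
    return { nHat, position };
  }

  // The plane y = 2, written as 0x + 1y + 0z - 2 = 0:
  console.log(planeAnchor(0, 1, 0, -2).position); // [ 0, 2, 0 ]
  ```

  After moving the ground to that position and aligning it with nHat, the mesh lies exactly on the mathematical plane.
  
  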