Amarth2Estel

Members
  • Content Count

    45

About Amarth2Estel

  • Rank
    Advanced Member


  1. Amarth2Estel

    showBoundingBox & renderingGroupId

    Wow! Very fast! Thank you very much @Deltakosh! I really appreciate it.
  2. Amarth2Estel

    showBoundingBox & renderingGroupId

    Hi @Deltakosh! Thank you for your answer! If it could be moved inside the rendering group, it would have the perfect behavior for me. What I still don't get is why, in the PG, before the first click, the scene is as I want it to be (with the bounding box back lines not rendered). It looks like the showBackLines attribute of the BoundingBoxRenderer is not taken into account when renderingGroupIds are set.
  3. Hi guys! I ran into an unexpected behavior with the bounding box renderer and I wonder if it is a bug or if I am just missing the point of it.
     What I do: I disable showBackLines on the bounding box renderer, I create 2 boxes, one in front of the other, and make them show their bounding boxes. On click, I switch the front box's renderingGroupId between 0 (initial) and 1.
     What happens: before the first click, the scene is as expected: only the visible edges are drawn for both boxes. After the first click (front box renderingGroupId set to 1), the front box still has only its visible edges displayed, but the box behind it (whose renderingGroupId is still 0) has ALL its edges displayed. After the second click (front box renderingGroupId back to 0), both boxes have ALL their edges displayed.
     Here is a PG showing this behavior: https://playground.babylonjs.com/#695QX6 (a minimal sketch of the setup is also shown below). Every answer is welcome to help me understand how to keep the "before click" behavior, with only the visible edges drawn, even when there are multiple renderingGroupIds in the scene. Thank you very much!
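
     For reference, a minimal sketch of the setup described above, assuming a standard playground scene and engine; the mesh names and sizes are illustrative, the real code is in the PG:

```ts
// Hide the hidden (back) edges of bounding boxes globally.
scene.getBoundingBoxRenderer().showBackLines = false;

// Two boxes, one in front of the other, both showing their bounding boxes.
const frontBox = BABYLON.MeshBuilder.CreateBox("front", { size: 1 }, scene);
const backBox = BABYLON.MeshBuilder.CreateBox("back", { size: 1 }, scene);
backBox.position.z = 3;
frontBox.showBoundingBox = true;
backBox.showBoundingBox = true;

// On each click, toggle the front box between rendering groups 0 and 1.
scene.onPointerDown = () => {
    frontBox.renderingGroupId = frontBox.renderingGroupId === 0 ? 1 : 0;
};
```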
  4. Amarth2Estel

    Having an issue with Ray.intersectsMesh

    Hello Distraub! I did some tests with your PG. The first thing I can say is that even with a negative Y position, the picked point may be the one you are looking for: https://www.babylonjs-playground.com/#KNB069#1 When using the pickWithRay method instead of intersectsMesh, even with the not-working Y position on your alternate sphere, it works as expected. I think it is because pickWithRay gets the first hit from source to target, while intersectsMesh has another behaviour: https://www.babylonjs-playground.com/#KNB069#2 Hope it will help!
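
    A rough sketch of the two approaches, assuming a ray and a sphere mesh already exist (the variable names are illustrative, not from the PG):

```ts
// Ray.intersectsMesh tests one specific mesh and returns a PickingInfo.
const meshHit = ray.intersectsMesh(sphere);

// Scene.pickWithRay walks the scene (optionally filtered by a predicate)
// and returns the first hit along the ray, from its origin towards its target.
const sceneHit = scene.pickWithRay(ray, (mesh) => mesh === sphere);

if (sceneHit && sceneHit.hit) {
    console.log("picked point:", sceneHit.pickedPoint);
}
```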
  5. Hi Wingnut! Thank you for your answer! Yes, this is an augmented reality project and you are completely right, I should have mentioned it. I will edit my post. But from what I saw, it seems that most AR apps use a video stream and a marker with a specific pattern. Using pattern recognition, the marker can be found... and the magic of math happens! I won't use a specific marker because I cannot ask users to print a carpet-wide pattern, and because I only want to use static images. However, I am well aware of the relation between these two approaches. I found an algorithm named "POSIT" which seems to fit my needs and data. It seems to compute the pose of the camera from points taken in 2D. It exists for non-coplanar points (the case of my problem) and for coplanar points (markers with specific patterns). (References: video, source code, maths and algorithm alternatives.) I will go on eating math all day until I find an answer! I will post here! About your freeway-noise-reduction project, it seems great! Too bad you cannot continue this work 😕! I completely agree with you about spatialization data. I hope it will become commonplace as soon as possible!
  6. Hi everybody! I'm working on an Augmented Reality problem that involves math too complex for me. To be honest, I'm not sure there's a solution to this problem. Any help is of course welcome (even if this help consists in telling me "Impossible, you cannot go from an N dimension to an N+1 dimension").
     I would like to place a 3D object on a photo uploaded by a user. I have the intuition that by asking the user to place the origin and the axes of the coordinate system, specifying for each of the axes a "real" length from a known reference frame, it could be possible to determine the sequence of transformations that led to such a projection of the 3D coordinate system, and to deduce the position of the corresponding camera, etc., to get the correct parameters to simulate the presence of a 3D object on a picture. This could of course only work if the user is able to draw the 3 axes and define their real length on the picture.
     Here is a nice drawing of the goal I would like to achieve: the user has entered the 3 axes on the projection picture and, thanks to known distances, has dimensioned them. The magic of math goes through there... We are able to draw a 3D object in this scene: here is a perfectly drawn unit cube on the XOZ plane 😁
     Here is a PG that reproduces the first part (the data entered by the user is directly entered in the code): https://playground.babylonjs.com/#2XZ6M5 I don't own that background, I just needed an image to show the problem. I thank you VERY MUCH in advance for your help!
     PS: I know that the field of view of the camera is something to be taken into account as well, but I think a simple slider may allow the user to change it to match the picture's FOV.
  7. Amarth2Estel

    Mesh clone and BackFace culling

    Hello Jose Vicente! I don't know why your code doesn't have the expected behaviour. BUT, to deal with your face orientation issue, I think you should use the flipFaces method on your mesh. Here is a PG: https://playground.babylonjs.com/indexStable.html#VI428J#4 Because the cloned mesh uses the same vertexData as the "original" mesh it has been cloned from, flipping the faces of the clone will flip the faces of the original mesh as well. The same thing applies to the instanced mesh. To change the orientation of the cloned mesh only, you should use makeGeometryUnique on it to un-link its vertexData from the vertexData of the original mesh... but it will lose the benefit of cloning. You can comment line 42 to see this behaviour. To apply a change (scaling, rotation or position) to the vertexData directly, you can use bakeCurrentTransformIntoVertices. Keep in mind that if you call this method on a mesh, all the meshes sharing its geometry (instances and clones) will also be affected. Hope it will help!
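
    A short sketch of what is described above (the mesh names are illustrative, not the ones from the PG):

```ts
// The clone shares the original's geometry, so this flips both meshes:
const copy = original.clone("copy");
copy.flipFaces(true); // true also flips the normals

// To affect only the clone, give it its own geometry first
// (at the cost of losing the memory benefit of cloning):
copy.makeGeometryUnique();
copy.flipFaces(true);

// Bakes the current scaling/rotation/position into the vertex data;
// every mesh sharing that geometry (clones, instances) is affected too.
copy.bakeCurrentTransformIntoVertices();
```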
  8. Amarth2Estel

    LODs and CSG

    Thank you very much @Deltakosh! This perfectly fits my needs! http://www.babylonjs-playground.com/#4LX0VC#1
  9. Amarth2Estel

    LODs and CSG

    Hi everybody, I am currently working on a project in which I use different levels of detail for my meshes. I also need to use CSG to perform Boolean operations. As you can see in this PG, despite refreshing the CSG at each frame, only the high-poly mesh is used in the intersection, even when it is a low-poly mesh that is displayed. I have already considered several ways to achieve the desired result:
    -> In the case of CSG with static meshes, I could of course perform my Boolean operations upstream and use the results directly in my LOD system.
    -> In the case of CSG with dynamic meshes (whose vertex data, position, rotation or scale can evolve), I would have to catch the event of switching from one LOD to another (is there an observable?) in order to do the Boolean operation between the meshes that are actually displayed. If the switch between LODs is not listenable/observable, I will probably have to recode part of the LOD system, but why not.
    Do you have other ideas to effectively combine these two features? It would be great if Babylon could handle this natively! Thanks in advance!
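
    One possible workaround, sketched under the assumption that the displayed LOD can simply be queried each frame with getLOD (lodRoot, otherMesh and intersection are made-up names):

```ts
// Rebuild the intersection from whichever LOD mesh is currently displayed.
let intersection: BABYLON.Mesh | null = null;

scene.onBeforeRenderObservable.add(() => {
    // getLOD returns the mesh the LOD system would render for this camera.
    const displayed = lodRoot.getLOD(scene.activeCamera!) as BABYLON.Mesh;

    const csg = BABYLON.CSG.FromMesh(displayed).intersect(BABYLON.CSG.FromMesh(otherMesh));

    if (intersection) {
        intersection.dispose();
    }
    intersection = csg.toMesh("intersection", null, scene);
});
```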
  10. Amarth2Estel

    dispose unused textures

    Hello Babbleon! I don't know about dealing with a Texture only, but the dispose method of Material accepts additional parameters to force the disposal of its textures. Take a look at: https://doc.babylonjs.com/api/classes/babylon.material#dispose
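
    For example (myMaterial is just a placeholder name):

```ts
// Material.dispose(forceDisposeEffect?, forceDisposeTextures?):
// passing true as the second argument also disposes the material's textures.
myMaterial.dispose(true, true);
```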
  11. Amarth2Estel

    Best performing material/light combination

    Hi Leanderr! Optimization is the key! In short, to improve performance you should reduce the number of draw calls. You can use instances, merge meshes, use lightweight texture files, etc. You should also freeze matrices and materials whenever possible, to avoid useless computation. Especially in a static scene, there is a lot of "manual" optimization you can do to improve performance. And of course, have as few lights as possible in your scene: 1 can be enough in a lot of scenarios if you use tricks like ambient lighting or illumination maps. From the screenshot of the debug layer, I can see that there are 6 materials for 16 active meshes. You should try to merge the meshes sharing the same material.
    -------
    Once every little optimization is done in your scene, if the hardware is still not powerful enough to render the scene smoothly, you may disable rendering features. BabylonJS has its own SceneOptimizer for that. You should have a look here: https://doc.babylonjs.com/how_to/optimizing_your_scene and here: https://doc.babylonjs.com/how_to/how_to_use_sceneoptimizer
    To conclude, you can render your scene at a smaller resolution than your canvas, but the result will be stretched to the canvas resolution, so it will of course be blurry. In my opinion, this is something you should do only once you have already done every other optimization. To do so, just call engine.setHardwareScalingLevel(n) (with n > 1; e.g. 2 renders at half resolution). See http://doc.babylonjs.com/api/classes/babylon.engine Hope it helps!
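
    A few of these calls as a sketch (the mesh and material names are placeholders):

```ts
// "Manual" optimizations for mostly static content.
myStaticMesh.freezeWorldMatrix(); // stop recomputing this mesh's world matrix
myMaterial.freeze();              // stop re-evaluating the material for changes
scene.freezeActiveMeshes();       // skip the active-mesh selection pass

// Render at half the canvas resolution (cheaper, but blurrier).
engine.setHardwareScalingLevel(2);

// Or let the SceneOptimizer degrade rendering options automatically to reach a target FPS.
BABYLON.SceneOptimizer.OptimizeAsync(
    scene,
    BABYLON.SceneOptimizerOptions.ModerateDegradationAllowed()
);
```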
  12. Amarth2Estel

    Change the color of an imported obj

    Hi Legallon! Concerning the import from your git repo to the playground, I think that is only a CORS problem. Concerning the wrong material, I think the problem is that your .OBJ says which material to use (line 3: mtllib camion.mtl). Just remove the link to the .mtl when you export from Blender (or directly remove this line from your .OBJ) to make sure the material used will be the one you define with Babylon. This should fix your problem. Hope it helps!
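
    Once the .mtl reference is gone, you can assign your own material after the import, for example like this (the "assets/" path is just for illustration; the OBJ loader from babylonjs-loaders must be available):

```ts
BABYLON.SceneLoader.ImportMesh("", "assets/", "camion.obj", scene, (meshes) => {
    // Override whatever material the loader assigned with our own.
    const truckMat = new BABYLON.StandardMaterial("truckMat", scene);
    truckMat.diffuseColor = new BABYLON.Color3(0.8, 0.1, 0.1);
    meshes.forEach((mesh) => (mesh.material = truckMat));
});
```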
  13. Amarth2Estel

    moveToVector3 on a control. how to make it work?

    Hi Jouerose! I don't know much about the GUI, but from what I understand: linkWithMesh links your Control to a mesh, so the control's 2D position is recomputed each frame, as the 2D projection of the mesh changes with every camera move. moveToVector3 places your Control at the 2D projection of a given 3D position at the moment you call it, and does nothing on the next camera moves. I think your camera, or scene, is not ready in your playground; that's why the projection of your Vector3 is not well computed and is set to (0,0,0). With executeWhenReady, you can see that your moveToVector3 is working... but it doesn't update your 2D position when the camera moves: https://www.babylonjs-playground.com/#XCPP9Y#524 If you want the same behaviour as linkWithMesh, you may call moveToVector3 each frame: https://www.babylonjs-playground.com/#XCPP9Y#527 Or you can create a mesh, set its absolute position and disable it, then use this "pivotMesh" as a 3D pivot for your Control. Maybe a bit heavier, but it may be easier to understand.
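
    The two workarounds, roughly sketched (control and targetPosition are assumed to exist already; the "pivot" name is made up):

```ts
// 1) Wait for the scene to be ready, then re-project on every frame
//    so the control follows the 3D point, like linkWithMesh would.
scene.executeWhenReady(() => {
    scene.onBeforeRenderObservable.add(() => {
        control.moveToVector3(targetPosition, scene);
    });
});

// 2) Or link the control to an invisible "pivot" mesh placed at that position.
const pivot = new BABYLON.Mesh("pivot", scene);
pivot.setAbsolutePosition(targetPosition);
pivot.setEnabled(false);
control.linkWithMesh(pivot);
```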
  14. Hi Timetocode! Well, for points 1 and 2, I don't think there is a perfect solution; it depends on what you need. If you want a 'master' tree model to spawn in your scene, then using the OBJ loader (https://doc.babylonjs.com/how_to/obj) should be enough. But maybe glTF or .babylon could be better, especially if you want additional information like lights etc.
      3. You just have to disable the mesh: try setEnabled(false) and your mesh won't appear. It is a better way than just setting the opacity to 0 or isVisible to false, because a disabled mesh is not taken into account in Babylon's internal computations.
      4. You may use instances: https://doc.babylonjs.com/how_to/how_to_use_instances From my experience, you will get better performance by cloning your 'master' mesh and then merging all the clones: https://doc.babylonjs.com/how_to/how_to_merge_meshes It is incredibly powerful if these clones don't need to be edited later, but then it is going to be harder to remove/modify individual objects, as said in 5. (Do not forget to make sure clones or instances are enabled!)
      5. Easy using instances. You may use subMeshes to do it while working with a big mesh made from merged clones, but it is way heavier.
      To conclude: I guess you should use instances if you know your loaded-and-spawned meshes will be modified later; clones and merging otherwise. Hope it will help! A rough sketch of both approaches is below.
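
      A rough sketch of clone-and-merge versus instances (treeMaster and the counts/positions are made-up names and values):

```ts
// Clone the loaded 'master' tree many times, then merge the clones into one mesh.
const clones: BABYLON.Mesh[] = [];
for (let i = 0; i < 100; i++) {
    const tree = treeMaster.clone("tree" + i);
    tree.position.set(Math.random() * 50, 0, Math.random() * 50);
    clones.push(tree);
}

// One draw call for the whole forest; `true` disposes the source clones.
const forest = BABYLON.Mesh.MergeMeshes(clones, true);

// Alternative when trees must stay individually editable/removable: instances.
const oneTree = treeMaster.createInstance("tree_0");
oneTree.position.set(10, 0, 10);

// Point 3: to make a spawned mesh disappear, disable it rather than making it invisible.
oneTree.setEnabled(false);
```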
  15. Amarth2Estel

    Shadows and mirroring

    Hi Matdav! One solution could be to use 2 different meshes as the ground:
    -> 1 to display the shadow only, using the ShadowOnlyMaterial.
    -> 1 to display the reflection only, using a material with a MirrorTexture and without diffuse or specular contributions.
    You can see a PG of this solution here: https://www.babylonjs-playground.com/#QHN8ZT Of course, there may be a more efficient way, perhaps by using a single mesh/material pair... Have fun!
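
    The core of that setup, as a sketch (the sizes and the reflected mesh are placeholders; ShadowOnlyMaterial comes from the Babylon materials library):

```ts
// Ground 1: shadows only.
const shadowGround = BABYLON.MeshBuilder.CreateGround("shadowGround", { width: 10, height: 10 }, scene);
shadowGround.material = new BABYLON.ShadowOnlyMaterial("shadowOnly", scene);
shadowGround.receiveShadows = true;

// Ground 2: reflection only (no diffuse/specular contribution).
const mirrorGround = BABYLON.MeshBuilder.CreateGround("mirrorGround", { width: 10, height: 10 }, scene);
const mirrorMat = new BABYLON.StandardMaterial("mirrorMat", scene);
const mirror = new BABYLON.MirrorTexture("mirror", 512, scene, true);
mirror.mirrorPlane = new BABYLON.Plane(0, -1, 0, 0); // reflect across the ground plane
mirror.renderList = [reflectedBox];                   // meshes visible in the reflection (assumed)
mirrorMat.reflectionTexture = mirror;
mirrorMat.diffuseColor = BABYLON.Color3.Black();
mirrorMat.specularColor = BABYLON.Color3.Black();
mirrorGround.material = mirrorMat;
```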