About Amarth2Estel

  • Rank
    Advanced Member


  1. Amarth2Estel

    Having an issue with Ray.intersectsMesh

Hello Distraub! I did some tests with your PG. The first thing I can say is that even with a negative Y position, the picked point may be the one you are looking for: when using the pickWithRay method instead of intersectsMesh, even with the not-working Y position on your alternate sphere, it works as expected. I think it is because pickWithRay gets the first hit from source to target, while intersectsMesh behaves differently. Hope it will help!
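A minimal sketch of the two calls compared above; `scene`, `origin`, `direction` and `sphere` are placeholders, not names from the original playground:

```javascript
// A ray from some origin along a direction, limited to a length.
const ray = new BABYLON.Ray(origin, direction, 100);

// scene.pickWithRay returns the nearest hit along the ray (here filtered
// to one mesh via the predicate).
const pickInfo = scene.pickWithRay(ray, (mesh) => mesh === sphere);
if (pickInfo.hit) {
    console.log("pickWithRay hit at", pickInfo.pickedPoint);
}

// ray.intersectsMesh tests a single mesh directly and can report a
// different point, which may explain the behaviour seen in the PG.
const meshInfo = ray.intersectsMesh(sphere);
if (meshInfo.hit) {
    console.log("intersectsMesh hit at", meshInfo.pickedPoint);
}
```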
  2. Hi Wingnut! Thank you for your answer! Yes, this is an augmented reality project and you are completely right, I should have mentioned it. I will edit my post. But from what I have seen, it seems that most AR apps use a video stream and a marker with a specific pattern. Using pattern recognition, the marker can be found... and the magic of math happens! I won't use a specific marker, because I cannot ask users to print a carpet-wide pattern and because I only want to use static images. However, I am well aware of the relation between these two approaches. I found an algorithm named "POSIT" which seems to fit my needs and data. It seems to compute the pose of the camera from points taken in 2D. It exists for non-coplanar points (the case of my problem) and coplanar points (markers with specific patterns). Video Source code Maths and algorithm alternatives I will keep eating math all day until I find an answer! I will post here! About your freeway-noise-reduction project, it seems great! Too bad you cannot continue this work 😕! I completely agree with you about spatialization data. I hope it will become commonplace as soon as possible!
  3. Hi everybody! I'm working on an Augmented Reality problem that involves math too complex for me. To be honest, I'm not sure there is a solution to this problem. Any help is of course welcome (even if that help consists in telling me "Impossible, you cannot go from an N dimension to an N+1 dimension"). I would like to place a 3D object on a photo uploaded by a user. I have the intuition that, by asking the user to place the origin and the axes of the coordinate system, specifying for each of the axes a "real" length from a known reference frame, it could be possible to determine the sequence of transformations that led to such a projection of the 3D coordinate system, and to deduce the position of the corresponding camera etc., to get the correct parameters to simulate the presence of a 3D object on a picture. This can of course only work if the user is able to draw the 3 axes and define their real length on the picture. Here is a nice drawing of the goal I would like to achieve: the user has entered the 3 axes on the projected picture and, thanks to a known distance, has dimensioned them. The magic of math goes through there... We are able to draw a 3D object in this scene: here is a perfectly drawn unit cube on the XOZ plane 😁 Here is a PG that reproduces the first part (the data entered by the user are entered directly in the code). I don't own that background, I just needed an image to show the problem. I thank you VERY MUCH in advance for your help! PS: I know that the field of view of the camera has to be taken into account as well, but I think a simple slider may allow the user to change it to match the picture's fov.
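A tiny sketch of the fov slider mentioned in the PS, assuming a plain HTML range input with an id of `fovSlider` (an assumption, not from the post); Babylon cameras expose `fov` in radians:

```javascript
// Placeholder camera; any Babylon camera exposes a fov property (radians).
const camera = new BABYLON.FreeCamera("camera", new BABYLON.Vector3(0, 1, -5), scene);

// Hypothetical slider: <input type="range" id="fovSlider" min="20" max="120">
const slider = document.getElementById("fovSlider");
slider.addEventListener("input", () => {
    // Slider value in degrees, converted to radians to match the picture.
    camera.fov = Number(slider.value) * Math.PI / 180;
});
```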
  4. Amarth2Estel

    Mesh clone and BackFace culling

    Hello Jose Vicente! I don't know why your code doesn't have the expected behaviour. BUT, to deal with your face-orientation issue, I think you should use the flipFaces method on your mesh. Here is a PG: Because the cloned mesh uses the same vertexData as the "original" mesh it was cloned from, flipping the faces of the clone will flip the faces of the original mesh as well. The same applies to the instanced mesh. To change the orientation of the cloned mesh only, you should try makeGeometryUnique on it to un-link its vertexData from the original mesh's vertexData... but that loses the benefit of cloning. You can comment out line 42 to see this behaviour. To apply a change (scaling, rotation or position) directly in the vertexData, you can use bakeCurrentTransformIntoVertices. Keep in mind that if you call this method on a mesh, all the meshes sharing its geometry (instances and clones) will also be affected. Hope it will help!
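A minimal sketch of the sequence described above; mesh names are placeholders:

```javascript
const original = BABYLON.MeshBuilder.CreateBox("original", {}, scene);
const clone = original.clone("clone");

// The clone shares the original's geometry, so flipping it would flip
// the original too. makeGeometryUnique gives the clone its own copy of
// the vertex data first.
clone.makeGeometryUnique();
clone.flipFaces(true); // true also flips the normals

// Baking writes the current transform into the vertex data itself, so it
// affects every mesh still sharing this geometry (clones and instances).
original.bakeCurrentTransformIntoVertices();
```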
  5. Amarth2Estel

    LODs and CSG

    Thank you very much @Deltakosh ! This perfectly fits my needs !
  6. Amarth2Estel

    LODs and CSG

    Hi everybody, I am currently working on a project in which I use different levels of detail for my meshes. I also need CSG to perform boolean operations. As you can see in this PG, despite refreshing the CSG at each frame, only the high-poly mesh is used in the intersection, even when a low-poly mesh is displayed. I have already considered several methods to achieve the desired result: In the case of CSG with static meshes, I could of course perform my boolean operations upstream and use the results directly in my LOD system. In the case of CSG with dynamic meshes (whose vertex data, position, rotation or scale can evolve), I would have to catch the event of switching from one LOD to another (is there an observable?) in order to do the boolean operation between the meshes actually displayed. If the switch between LODs is not observable, I will probably have to recode part of the LOD system, but why not. Do you have other ideas to combine these two features effectively? It would be great if Babylon could handle this natively! Thanks in advance!
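One way to sketch the "rebuild the CSG only when the displayed LOD changes" idea, using Babylon's standard `getLOD(camera)` call; `meshWithLODs` and `otherMesh` are placeholders:

```javascript
let lastLOD = null;

scene.onBeforeRenderObservable.add(() => {
    // getLOD returns the mesh currently selected by the LOD system.
    const currentLOD = meshWithLODs.getLOD(scene.activeCamera);
    if (currentLOD !== lastLOD) {
        lastLOD = currentLOD;
        // Redo the boolean operation with the mesh actually on screen.
        const a = BABYLON.CSG.FromMesh(currentLOD);
        const b = BABYLON.CSG.FromMesh(otherMesh);
        a.intersect(b).toMesh("csgResult", null, scene);
    }
});
```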
  7. Amarth2Estel

    dispose unused textures

    Hello Babbleon! I don't know about dealing with a Texture alone, but the dispose method of Material accepts additional parameters to force the disposal of its textures. Take a look at :
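The call referenced above, for illustration; `Material.dispose` takes optional flags, the second of which also disposes the textures the material references:

```javascript
material.dispose(
    true, // forceDisposeEffect
    true  // forceDisposeTextures
);
```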
  8. Amarth2Estel

    Best performing material/light combinaton

    Hi Leanderr! Optimization is the key! To put it briefly: to improve performance, you should reduce draw calls. You can use instances, merge meshes, use lightweight texture files, etc. You should freeze matrices and materials when possible to avoid useless computation. Especially in a static scene, there is a lot of "manual" optimization you can do to improve performance. And, of course, have as few lights as possible in your scene: 1 can be enough in a lot of scenarios if you use tricks like ambient lighting or illumination maps. From the screenshot of the debug layer, I can see that there are 6 materials for 16 active meshes. You should try to merge meshes sharing the same material. ------- Once every little optimization is done in your scene, if the hardware is still not powerful enough to render the scene smoothly, you may disable rendering features. BabylonJS has its own SceneOptimizer to do this. You should have a look there : and here : To conclude, you can render your scene at a smaller resolution than your canvas, but the result will be stretched to the canvas resolution, so it will of course be blurry. In my opinion, this is something you should do only if you have already done every other optimization. To do so, just call engine.setHardwareScalingLevel(n) (with n > 1; values above 1 lower the internal resolution). See Hope it helps!
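A short sketch of the manual optimizations listed above; mesh and material names are placeholders:

```javascript
// Merge meshes sharing the same material to cut draw calls
// (second argument disposes the source meshes).
const merged = BABYLON.Mesh.MergeMeshes([mesh1, mesh2, mesh3], true);

// Freeze what never changes in a static scene.
merged.freezeWorldMatrix();
merged.material.freeze();

// Last resort: render at half resolution; the result is stretched
// (and blurred) to fit the canvas.
engine.setHardwareScalingLevel(2);
```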
  9. Amarth2Estel

    Change the color of an imported obj

    Hi Legallon! Concerning the import from your git repo to the playground, I think that is only a CORS problem. Concerning the wrong material, I think the problem is that your .OBJ specifies which material to use (line 3: mtllib camion.mtl). Just remove the .mtl reference when you export from Blender (or remove this line directly from your .OBJ) to make sure the material used will be the one you define with Babylon. This should fix your problem. Hope it helps
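A sketch of the fix: once the mtllib line is gone, assign your own material in the import callback. The file name comes from the post; the path and colour are assumptions:

```javascript
BABYLON.SceneLoader.ImportMesh("", "models/", "camion.obj", scene, (meshes) => {
    // Material defined on the Babylon side, replacing the missing .mtl.
    const mat = new BABYLON.StandardMaterial("truckMat", scene);
    mat.diffuseColor = new BABYLON.Color3(1, 0, 0);
    meshes.forEach((m) => m.material = mat);
});
```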
  10. Amarth2Estel

    moveToVector3 on a control. how to make it work?

    Hi Jouerose! I don't know much about GUI, but from what I understand: linkWithMesh links your Control to a mesh, so the control's 2D position is recomputed each frame as the 2D projection of the mesh changes with every camera move. moveToVector3 places your Control at the 2D projection of a given 3D position at the moment you call it, and does nothing on subsequent camera moves. I think your camera, or scene, is not ready in your playground; that's why the projection of your Vector3 is not computed correctly and is set to (0,0,0). With executeWhenReady, you can see that your moveToVector3 works... but it doesn't update your 2D position when the camera moves. If you want the same behaviour as linkWithMesh, you may call moveToVector3 each frame. Or you can create a mesh, set its absolute position and hide it. Maybe a bit heavier, but perhaps easier to understand, using this "pivotMesh" as a 3D pivot for your Control
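Both workarounds above, as a minimal sketch; `control` and `targetPosition` are placeholders:

```javascript
// Option 1: re-project every frame to mimic linkWithMesh.
scene.onBeforeRenderObservable.add(() => {
    control.moveToVector3(targetPosition, scene);
});

// Option 2: an invisible "pivot" mesh the control is linked to.
const pivot = new BABYLON.Mesh("pivotMesh", scene);
pivot.setAbsolutePosition(targetPosition);
pivot.isVisible = false; // hidden, but still projected each frame
control.linkWithMesh(pivot);
```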
  11. Hi Timetocode! Well, for points 1 and 2... I don't think there is a perfect solution; it depends on what you need. I suggest that if you want a 'master' tree model to spawn in your scene, then an OBJ loader should be enough. But maybe glTF or .babylon could be better, especially if you want additional information like lights etc. 3. You just have to disable the mesh: try setEnabled(false) and your mesh won't appear. It is a better way than just setting the opacity to 0 or isVisible to false, because a disabled mesh is not taken into account during hidden Babylon computation. 4. You may use instances. From my experience, you will get better performance by cloning your 'master' mesh and then merging all the clones. It is incredibly powerful if these clones don't need to be edited later, but then it is going to be harder to remove/modify individual objects, as said in 5. (Do not forget to make sure clones or instances are enabled!) 5. Easy using instances. You could use subMeshes to do it while working with a big mesh made from merged clones, but that is way heavier. To conclude: I guess you should use instances if you know your loaded-and-spawned meshes will be modified later; clones and merging otherwise. Hope it will help!
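The instance-vs-clone-and-merge trade-off above, sketched with a placeholder `masterTree` mesh:

```javascript
masterTree.setEnabled(false); // hide the master without hidden-computation cost

// Instances: cheap to render, and each tree stays individually editable.
for (let i = 0; i < 100; i++) {
    const tree = masterTree.createInstance("tree" + i);
    tree.position.x = Math.random() * 100;
    tree.position.z = Math.random() * 100;
}

// Clone + merge: a single draw call, but individual trees are hard to
// modify or remove afterwards.
const clones = [];
for (let i = 0; i < 100; i++) {
    const c = masterTree.clone("clone" + i);
    c.setEnabled(true); // clones inherit the master's disabled state
    c.position.x = Math.random() * 100;
    clones.push(c);
}
const forest = BABYLON.Mesh.MergeMeshes(clones, true);
```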
  12. Amarth2Estel

    Shadows and mirroring

    Hi Matdav! One solution could be to use 2 different meshes as the ground: -> 1 to display the shadow only, using the ShadowOnlyMaterial. -> 1 to display the reflection only, using a material with a mirrorTexture and without diffuse or specular attributes. You can see a PG of this solution here : Of course, there may be a more efficient way, perhaps using a single mesh-and-material pair... Have fun!
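A sketch of that two-ground setup; ShadowOnlyMaterial comes from the Babylon materials library, and sizes, names and the reflected `box` are placeholders:

```javascript
// Ground 1: shadows only.
const shadowGround = BABYLON.MeshBuilder.CreateGround("shadowGround", { width: 10, height: 10 }, scene);
shadowGround.material = new BABYLON.ShadowOnlyMaterial("shadowOnly", scene);
shadowGround.receiveShadows = true;

// Ground 2: reflection only (black diffuse/specular so only the mirror shows).
const mirrorGround = BABYLON.MeshBuilder.CreateGround("mirrorGround", { width: 10, height: 10 }, scene);
const mirrorMat = new BABYLON.StandardMaterial("mirrorMat", scene);
const mirrorTex = new BABYLON.MirrorTexture("mirror", 1024, scene, true);
mirrorTex.mirrorPlane = new BABYLON.Plane(0, -1, 0, 0); // reflect across Y = 0
mirrorTex.renderList = [box];                           // meshes to reflect
mirrorMat.reflectionTexture = mirrorTex;
mirrorMat.diffuseColor = BABYLON.Color3.Black();
mirrorMat.specularColor = BABYLON.Color3.Black();
mirrorGround.material = mirrorMat;
```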
  13. Hello BlackMojito ! I copy-paste below a part of : This is a very interesting page where you can find almost all the possibilities about shadows. Freezing shadows in static world In case you have a static game world (objects which cast shadows) - there is no need to do the same shadow calculations 60 times per second. It could be enough to create and place a shadowMap only once. This greatly improves performance, allowing higher values of shadowMap's resolution. Shadow generators can be frozen with: shadowGenerator.getShadowMap().refreshRate = BABYLON.RenderTargetTexture.REFRESHRATE_RENDER_ONCE; Ask the light to not recompute shadow position with: light.autoUpdateExtends = false;
  14. Hi! Your shadows are there, but because of the camera direction you don't see them. Using an arc-rotate camera with controls, you can see your shadows are projected, but outside of your original camera view. You will also see that the shadow does not look natural... it is because your character is literally flying: there are 2 Babylon units between him and the ground (Y component). Try working on his position. By the way, don't forget to set sizeAuto to false when you create your default environment, otherwise your skyboxSize and groundSize are not taken into account.
  15. Hello! I think you may use the 'onSuccess' callback of your sceneLoader to make your material changes. At that point, the scene is not ready yet. Then, you just have to stop the renderLoop while the scene is not ready, and wait until it is ready to start it again. I was not able to test it, because, well... my internet connection is too fast (I can't believe I am actually complaining about that). I guess you can also force the LoadingUI to be displayed instead of stopping the renderLoop. I think this might be easier in your own code (working outside of the playground), as you have complete control over the moment you launch the renderLoop.
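The flow described above, as a minimal sketch; the path and file name are assumptions:

```javascript
engine.stopRenderLoop();
engine.displayLoadingUI();

BABYLON.SceneLoader.Append("models/", "scene.babylon", scene, () => {
    // onSuccess: the scene is loaded but not ready, so it is a safe
    // place to swap materials before anything is rendered.
    scene.meshes.forEach((m) => { /* material changes here */ });

    scene.executeWhenReady(() => {
        engine.hideLoadingUI();
        engine.runRenderLoop(() => scene.render());
    });
});
```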