UP_AND_ATOM

  1. I think the way to go is checking the mesh's normals and seeing if they're facing the camera. I'm working on that now.
  2. I'm trying to find if it's being blocked by anything - other meshes or parts of the same mesh.
  3. Just out of curiosity, is there a recommended way to check if a given vertex is visible to the camera? I've tried using scene.pick(x, y).distance and comparing it to the actual distance between the camera and the vertex, but that seems a little hacky and probably won't scale well for larger meshes.
  4. I'm a JS developer using Babylon.js for the first time, and right now I'm just trying to develop an understanding of what is happening under the hood. I've been programming forever, but have limited 3D experience, so Babylon.js looks like a perfect learning opportunity.

     I put together a simple program that takes a mesh, finds the screen coordinates of each of its vertices, and overlays triangles on a canvas on top of Babylon.js's renderCanvas. I'm getting some strange results, though. Most of the vertices are right where they should be, but others are completely wrong. When I do this with a cube, all the vertices look right, but if I open up the vertices array, what I'm seeing on-screen doesn't match the data. Screenshot: http://i.imgur.com/POtcnBr.png

     It's a bit simpler with a plane. All four vertices make a square using only X and Y, which is exactly what I would expect. For some reason, when I run my program, the screen coordinates for vertices 0 and 1 end up floating in space, while 2 and 3 are right where they should be. Screenshot: http://i.imgur.com/s4e0HlH.png

     Not sure if it helps, but it gets even weirder with a sphere. Screenshot: http://i.imgur.com/pQU9yXg.png

     I guarantee I'm just missing something simple, probably in the getScreenCoords function, but so far I haven't had any luck. Am I just misusing the indices array? It seems that each number in the indices array corresponds to a specific vertex. I have a feeling that's where it's failing, but so far I haven't been able to nail it down.
     Here's the function that does the work:

         engine.beginFrame = function() {
             box.rotation.y += 0.005;
             box.rotation.x += 0.003;
             ctx.clearRect(0, 0, drawCanvas.width, drawCanvas.height);
             ctx.fillStyle = 'rgba(255, 43, 0, 0.25)';
             ctx.strokeStyle = 'black';
             var vertexCoords = [];
             var vertices = box.getVerticesData(BABYLON.VertexBuffer.PositionKind);
             var indices = box.getIndices();
             for (var i = 0, len = indices.length; i < len; i += 3) {
                 for (var v = 0; v < 3; v++) {
                     var index = indices[i + v];
                     if (!vertexCoords[index]) {
                         vertexCoords[index] = getScreenCoords(BABYLON.Vector3.FromArray(vertices, index), box);
                         ctx.fillRect(vertexCoords[index].x - 4, vertexCoords[index].y - 4, 8, 8);
                     }
                 }
                 ctx.beginPath();
                 ctx.moveTo(vertexCoords[indices[i + 2]].x, vertexCoords[indices[i + 2]].y);
                 ctx.lineTo(vertexCoords[indices[i + 0]].x, vertexCoords[indices[i + 0]].y);
                 ctx.lineTo(vertexCoords[indices[i + 1]].x, vertexCoords[indices[i + 1]].y);
                 ctx.lineTo(vertexCoords[indices[i + 2]].x, vertexCoords[indices[i + 2]].y);
                 ctx.stroke();
                 ctx.fill();
                 ctx.closePath();
             }
         };

         var getScreenCoords = function(vertex, mesh) {
             var coords = BABYLON.Vector3.Project(
                 BABYLON.Vector3.TransformCoordinates(vertex, mesh.getWorldMatrix()),
                 BABYLON.Matrix.Identity(),
                 scene.getTransformMatrix(),
                 camera.viewport.toGlobal(engine)
             );
             return coords;
         };
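     [Editor's note: a likely culprit, hedged — positions come back as a flat array of floats, three per vertex, and if `Vector3.FromArray` takes a starting element offset (as I believe it does), the lookup above would need `index * 3` rather than `index`. A minimal Babylon-free sketch of that layout, using a hypothetical `vertexAt` helper:]

         // Positions are packed as a flat array: vertex i occupies
         // slots [3*i, 3*i + 1, 3*i + 2].
         function vertexAt(positions, index) {
             var offset = index * 3; // the offset is in array elements, not vertices
             return {
                 x: positions[offset],
                 y: positions[offset + 1],
                 z: positions[offset + 2]
             };
         }

         // A unit quad: 4 vertices, 2 triangles sharing vertices 0 and 2.
         var positions = [
             0, 0, 0,   // vertex 0
             1, 0, 0,   // vertex 1
             1, 1, 0,   // vertex 2
             0, 1, 0    // vertex 3
         ];
         var indices = [0, 1, 2, 0, 2, 3];

         var v = vertexAt(positions, indices[2]); // third corner of the first triangle
         console.log(v); // { x: 1, y: 1, z: 0 }

     Passing `index` directly would read the x of one vertex and the y/z of another, which matches the "some vertices are fine, others float in space" symptom.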
  5. I can't find references to scene.animate in the code or docs. Does it go by another name?
  6. I want to update the location of each mesh that's visible to the camera, but avoid any overhead from lighting and rendering, since I'm not using Babylon.js to actually do any displaying. I just want to be able to get the x and y (screen) coords of each vertex of certain meshes. I'm familiar with requestAnimationFrame, but I haven't been able to figure out what exactly I need to update every frame to keep the Babylon.js simulation going. Does that make sense?
  7. I haven't been able to locate that method. Do you happen to know where I can find it?
  8. I've got a fairly good understanding of how meshes keep track of their vertices, but how do they store data associated with faces? I've been searching through the documentation and digging into the mesh object locally, but I'm not understanding this yet. Are faces stored in a separate array, or is there an array that stores edge connections between vertices? How are faces stored and rendered? I've been able to extract screen coordinates for each vertex of a moving mesh, but I'm hoping to understand how that is turned into triangles during rendering. Thanks!
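     [Editor's note: in most mesh formats, including Babylon.js's, faces aren't stored in a separate structure — they are implicit in the index buffer, which lists vertex indices three at a time, one triple per triangle. A small sketch of how a renderer groups an index buffer into faces:]

         // Faces are implicit: every three consecutive entries in the index
         // buffer name the three vertices of one triangle.
         function trianglesFrom(indices) {
             var tris = [];
             for (var i = 0; i < indices.length; i += 3) {
                 tris.push([indices[i], indices[i + 1], indices[i + 2]]);
             }
             return tris;
         }

         // Two triangles forming a quad; vertices 0 and 2 are shared between faces.
         var tris = trianglesFrom([0, 1, 2, 0, 2, 3]);
         console.log(tris); // [[0, 1, 2], [0, 2, 3]]

     Sharing indices is the point of the scheme: a quad needs only 4 stored vertices instead of 6, and the GPU transforms each shared vertex once.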
  9. I actually hadn't thought to override the render function, but I'll try that next. I'm sure the cost of rendering isn't very high, but I did want to run some tests. Thanks for the idea!
  10. Sure, but that doesn't reduce the overhead for actually doing the rendering. I can hide it easily but I'm hoping to avoid a performance hit for calculating data that I won't end up needing.
  11. Well what I'd like to do is draw lines on a different canvas based on the position of the 3D objects in the scene. I don't want to actually show the scene, but instead just use it to calculate the locations of objects. When I do what you suggested, it just hides the background of the scene but the objects are still rendering. Is there a method that does everything that scene.render() does, but without actually drawing to the canvas?
  12. Wait, I got it! I had to do this with the results:

          BABYLON.Vector3.Project(coords, BABYLON.Matrix.Identity(), scene.getTransformMatrix(), camera.viewport.toGlobal(engine));

      It's working perfectly now. Thanks so much for your help.
  13. Great, this is very helpful. The results I'm seeing are slightly off, but much closer than what I had before.

      I've got two canvases positioned on top of each other, one for 2D and one for 3D. The 3D one has a cube that's being translated, rotated, and scaled, and is attached to a parent sphere that is rotating along the X axis. The camera is bobbing up and down. I wanted to get as many things changing the on-screen coordinates of the cube as possible, so I could be sure I didn't leave anything out of the equation. The coordinates I get from TransformCoordinates are all very small, though. I tried multiplying them by 50, and when I do, they look mirrored from what is being rendered on the 3D canvas.

      Here's the relevant function:

          var sphere = BABYLON.Mesh.CreateSphere("Sphere", 16, 1, scene);
          var box = BABYLON.Mesh.CreateBox('Box', 2, scene);
          box.parent = sphere;
          var cameraMovingUp = true;
          box.position.x = 2.5;

          engine.beginFrame = function() {
              if (cameraMovingUp) {
                  camera.rotation.x += 0.0025;
                  box.scaling.x += 0.01;
                  box.scaling.y += 0.005;
                  box.position.x += 0.025;
                  if (camera.rotation.x > 0.15) {
                      cameraMovingUp = false;
                  }
              } else {
                  camera.rotation.x -= 0.0025;
                  box.scaling.x -= 0.01;
                  box.scaling.y -= 0.005;
                  box.position.x -= 0.025;
                  if (camera.rotation.x < -0.15) {
                      cameraMovingUp = true;
                  }
              }
              sphere.rotation.y += 0.05;
              ctx.clearRect(0, 0, drawCanvas.width, drawCanvas.height);
              ctx.beginPath();
              ctx.fillStyle = '#ff4500';
              var arr = box.getVertexBuffer(BABYLON.VertexBuffer.PositionKind)._data;
              var len = arr.length;
              for (var i = 0; i < len; i += 3) {
                  var vertex = BABYLON.Vector3.FromArray(arr, i);
                  var coords = BABYLON.Vector3.TransformCoordinates(vertex, box.getWorldMatrix());
                  ctx.fillRect(coords.x - 2, coords.y - 2, 4, 4);
              }
              ctx.strokeStyle = '#FF4500';
              ctx.stroke();
          };

      And a screenshot: http://i.imgur.com/qAMsh8Z.png You can see the red rectangles grouped together in the upper-left corner. I know I'm missing something important, but I can't figure out what. Can you point me in the right direction?
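     [Editor's note: the symptom fits drawing world-space coordinates directly onto the canvas. TransformCoordinates only applies the mesh's world matrix; the result is still in scene units near the origin, not pixels. After the view and projection transforms, coordinates land in normalized device space (x and y in [-1, 1], +y up), and the final viewport step — roughly what the working `Vector3.Project` call in post 12 performs — maps that to pixels. A sketch of that last step:]

         // Map normalized device coordinates (x, y in [-1, 1], +y up) to
         // canvas pixels (+y down). The y flip is also why raw world coords
         // look mirrored when drawn straight onto a 2D canvas.
         function ndcToScreen(x, y, width, height) {
             return {
                 x: (x + 1) * 0.5 * width,
                 y: (1 - y) * 0.5 * height // flip y for canvas space
             };
         }

         console.log(ndcToScreen(0, 0, 800, 600));  // center:   { x: 400, y: 300 }
         console.log(ndcToScreen(-1, 1, 800, 600)); // top-left: { x: 0, y: 0 }

     The 800×600 viewport here is just an example size; in practice the canvas dimensions come from the engine.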