Search the Community

Showing results for tags 'solved'.



Found 1638 results

  1. Hi, before I start, I have to be a pain: I can't share my work due to strict restrictions at my job, so I have to describe it as well as possible. Recreating the issue isn't really practical either (tl;dr: it takes too long), so please don't take that as rudeness! I will be very grateful for any help. I am coming across this error: this._emissiveTexture.isReadyOrNotBlocking is not a function. Does any reason why this would happen come to mind? From searching I can only find this and this; if the answer is in one of those two posts, it must be escaping me. To get the error, all I (think I am) doing is applying an emissive texture to a standard material that doesn't have one to begin with, so before I "apply" it, its value is undefined. If I think of any more info I will update ASAP. Many thanks. EDIT: maybe this will help - after adding a load of console.logs to see when the error triggers, it happens after I call scene.render(); actually it seems to happen even if I remove that line.
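     A minimal sketch of one common cause (an assumption, not a confirmed diagnosis for this project): the error appears when emissiveTexture is assigned something that is not a BABYLON.BaseTexture, e.g. a plain URL string, so the material cannot call isReadyOrNotBlocking() on it when scene.render() runs. The texture path below is hypothetical.

     var mat = mesh.material;                      // assuming a BABYLON.StandardMaterial
     // wrong: mat.emissiveTexture = "textures/emissive.png";   // a bare string has no isReadyOrNotBlocking()
     mat.emissiveTexture = new BABYLON.Texture("textures/emissive.png", scene); // a real Texture instance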
  2. Me again! I needed a way to create a screen capture of multiple sides of an object, and I felt like multi-view would be perfect for this. Except the screen capture functions only take input from a single camera, while in multi-view situations you end up with an array of cameras with different viewport rects. I've created a playground with the setup of what I'd like to do. One thing to note: the GUI seems to display on all 4 views, and the only active button is the one in the upper-right view. Is there a better solution that I'm not seeing, or something I can improve to make this work? https://www.babylonjs-playground.com/#9DHWBU Thanks!
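     One approach worth sketching, assuming the multi-view cameras live in scene.activeCameras: capture each view separately by passing each camera to the render-target screenshot helper, instead of grabbing the composed canvas.

     scene.activeCameras.forEach(function (cam, i) {
         BABYLON.Tools.CreateScreenshotUsingRenderTarget(engine, cam, { width: 512, height: 512 }, function (data) {
             console.log("captured view " + i + ", data URL length: " + data.length);
         });
     });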
  3. My goal is a 3d minimap or radar. My starting point was to use a simple arrow as a compass or objective pointer, which I tied to the camera and positioned appropriately: compass.parent = camera; The problem is when the resolution changes, ex. going from landscape to portrait mode, my compass in the top left disappears off the screen. So, is there a preferred way to position meshes relative to the screen, as in a UI element? I've considered using a second camera and setting up a room with just my minimap elements, or installing a second canvas with absolute positioning relative to the page, but these don't seem so great. I would also like to have my camera pan away from a character until that character hits the edge of the screen, which would prevent further panning in that direction. These seem like a similar problem, perhaps in reverse. Maybe each frame I shoot a ray into my scene and test for collision with the character? Or set up a plane to shoot a ray at to determine the correct world position to correctly place the UI elements/stop the camera with the given viewport? Any insight is greatly appreciated!
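     A sketch of the second-camera idea (names are taken from the post: camera is the main camera, compass is the arrow mesh): an orthographic overlay camera plus layer masks keeps the compass at a fixed screen position regardless of the main viewport's aspect ratio, without a second canvas.

     var uiCamera = new BABYLON.FreeCamera("uiCam", new BABYLON.Vector3(0, 0, -10), scene);
     uiCamera.mode = BABYLON.Camera.ORTHOGRAPHIC_CAMERA;
     uiCamera.orthoLeft = -1; uiCamera.orthoRight = 1;     // recompute these on resize to keep
     uiCamera.orthoTop = 1; uiCamera.orthoBottom = -1;     // the compass anchored to a corner
     uiCamera.layerMask = 0x20000000;                      // overlay camera only sees UI meshes
     compass.layerMask = 0x20000000;                       // main camera (default mask) ignores it
     compass.parent = uiCamera;
     compass.position = new BABYLON.Vector3(-0.8, 0.8, 5); // top-left of the overlay view
     scene.activeCameras = [camera, uiCamera];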
  4. Hi, I'm doing an Adobe extension using Adobe CEP (Common Extensibility Platform). Adobe CEP uses CEF to enable HTML5 content within an Adobe application, so any HTML/Javascript code within it should behave just as in a browser. I've created a simple extension by copy/pasting a babylonjs sample: <!doctype html> <!-- /************************************************************************* * ADOBE CONFIDENTIAL * ___________________ * * Copyright 2014 Adobe * All Rights Reserved. * * NOTICE: Adobe permits you to use, modify, and distribute this file in * accordance with the terms of the Adobe license agreement accompanying * it. If you have received this file from a source other than Adobe, * then your use, modification, or distribution of it requires the prior * written permission of Adobe. **************************************************************************/ --> <html> <head> <meta charset="utf-8"> <link rel="stylesheet" href="css/styles.css" /> <link id="hostStyle" rel="stylesheet" href="css/theme.css" /> <title></title> <style> html, body { overflow: hidden; width: 100%; height: 100%; margin: 0; padding: 0; } #renderCanvas { width: 100%; height: 100%; touch-action: none; } </style> <script src="https://code.jquery.com/pep/0.4.3/pep.js"></script> <script src="https://preview.babylonjs.com/loaders/babylonjs.loaders.min.js"></script> <script src="https://cdn.babylonjs.com/babylon.js"></script> </head> <body> <canvas id="renderCanvas" touch-action="none"></canvas> <script> var canvas = document.getElementById("renderCanvas"); // Get the canvas element var engine = new BABYLON.Engine(canvas, true); // Generate the BABYLON 3D engine /******* Add the create scene function ******/ var createScene = function () { // Create the scene space var scene = new BABYLON.Scene(engine); // Add a camera to the scene and attach it to the canvas var camera = new BABYLON.ArcRotateCamera("Camera", Math.PI / 2, Math.PI / 2, 2, new BABYLON.Vector3(0,0,5), scene); camera.attachControl(canvas, true); // Add lights to the scene var light1 = new BABYLON.HemisphericLight("light1", new BABYLON.Vector3(1, 1, 0), scene); var light2 = new BABYLON.PointLight("light2", new BABYLON.Vector3(0, 1, -1), scene); // Add and manipulate meshes in the scene var sphere = BABYLON.MeshBuilder.CreateSphere("sphere", {diameter:6}, scene); return scene; }; /******* End of the create scene function ******/ var scene = createScene(); //Call the createScene function // Register a render loop to repeatedly render the scene engine.runRenderLoop(function () { scene.render(); }); // Watch for browser/canvas resize events window.addEventListener("resize", function () { engine.resize(); }); canvas.addEventListener("mouseup", function(){ console.log("Mouse UP!"); }); canvas.addEventListener("mousedown", function(){ console.log("Mouse DOWN!"); }); canvas.addEventListener("click", function(){ console.log("Mouse Click!"); }); </script> </body> </html> This displays what it's supposed to display (a sphere), but dragging the mouse on top of the canvas doesn't rotate the camera. Scrolling the mouse wheel zooms in/out as it's supposed to though. I've added some listeners for mouse events and they all work. 
I tried doing something similar with threejs: <!DOCTYPE html> <html lang="en"> <head> <title>three.js webgl - orbit controls</title> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0"> <style> body { color: #000; font-family: Monospace; font-size: 13px; text-align: center; font-weight: bold; background-color: #fff; margin: 0px; overflow: hidden; } #info { color: #000; position: absolute; top: 0px; width: 100%; padding: 5px; } a { color: red; } </style> </head> <body> <div id="info"> <a href="http://threejs.org" target="_blank" rel="noopener">three.js</a> - orbit controls example </div> <script src="js/three.js"></script> <script src="js/OrbitControls.js"></script> <script src="js/WebGL.js"></script> <script>if ( WEBGL.isWebGLAvailable() === false ) { document.body.appendChild( WEBGL.getWebGLErrorMessage() ); } var camera, controls, scene, renderer; init(); //render(); // remove when using next line for animation loop (requestAnimationFrame) animate(); function init() { scene = new THREE.Scene(); scene.background = new THREE.Color( 0xcccccc ); scene.fog = new THREE.FogExp2( 0xcccccc, 0.002 ); renderer = new THREE.WebGLRenderer( { antialias: true } ); renderer.setPixelRatio( window.devicePixelRatio ); renderer.setSize( window.innerWidth, window.innerHeight ); document.body.appendChild( renderer.domElement ); camera = new THREE.PerspectiveCamera( 60, window.innerWidth / window.innerHeight, 1, 1000 ); camera.position.set( 400, 200, 0 ); // controls controls = new THREE.OrbitControls( camera, renderer.domElement ); //controls.addEventListener( 'change', render ); // call this only in static scenes (i.e., if there is no animation loop) controls.enableDamping = true; // an animation loop is required when either damping or auto-rotation are enabled controls.dampingFactor = 0.25; controls.screenSpacePanning = false; controls.minDistance = 100; controls.maxDistance = 500; controls.maxPolarAngle = Math.PI / 2; // world var geometry = new THREE.CylinderBufferGeometry( 0, 10, 30, 4, 1 ); var material = new THREE.MeshPhongMaterial( { color: 0xffffff, flatShading: true } ); for ( var i = 0; i < 500; i ++ ) { var mesh = new THREE.Mesh( geometry, material ); mesh.position.x = Math.random() * 1600 - 800; mesh.position.y = 0; mesh.position.z = Math.random() * 1600 - 800; mesh.updateMatrix(); mesh.matrixAutoUpdate = false; scene.add( mesh ); } // lights var light = new THREE.DirectionalLight( 0xffffff ); light.position.set( 1, 1, 1 ); scene.add( light ); var light = new THREE.DirectionalLight( 0x002288 ); light.position.set( - 1, - 1, - 1 ); scene.add( light ); var light = new THREE.AmbientLight( 0x222222 ); scene.add( light ); // window.addEventListener( 'resize', onWindowResize, false ); } function onWindowResize() { camera.aspect = window.innerWidth / window.innerHeight; camera.updateProjectionMatrix(); renderer.setSize( window.innerWidth, window.innerHeight ); } function animate() { requestAnimationFrame( animate ); controls.update(); // only required if controls.enableDamping = true, or if controls.autoRotate = true render(); } function render() { renderer.render( scene, camera ); }</script> </body> </html> The code above works fine and I'm able to control the camera. Do you have any idea why I'm being unable to control camera rotation with babylonjs? How can I debug this to figure out why the camera is not rotating? I'm an experienced C++ developer but very inexperienced in Javascript. Thanks in advance, Alex
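     A small diagnostic sketch (an assumption about the cause, not a confirmed fix): Babylon's camera inputs listen for pointer events, which is why pep.js is pulled in, so checking whether those events actually reach the canvas inside CEF narrows things down. If these handlers never log while the existing "mousedown" handler does, the pointer-event polyfill isn't kicking in inside the embedded browser.

     canvas.addEventListener("pointerdown", function (e) { console.log("pointerdown", e.pointerType); });
     canvas.addEventListener("pointermove", function (e) { console.log("pointermove", e.clientX, e.clientY); });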
  5. I've been working on an open world project that involves a lot of loading and unloading of assets. (A virtual museum of Earth history.) SceneLoader.LoadAssetContainerAsync() seems to be the only means of tracking which and when new assets have been loaded, as SceneLoader.LoadAsync() returns only the current scene. Where I'm running into trouble is the end of the life cycle. AssetContainer does not have a dispose() method. My attempts to manually create one have thus far ended in halt & catching fire. The removeAllFromScene() method will dispose of the assets* but I'm still left with an AssetContainer object from every SceneLoader call. If I clear the assets in a different way, the assets won't garbage collect, because the AssetContainer itself still references them. Has anyone worked on a similar scenario? How did you manage large volumes of load -> use -> forget? *AssetContainer.removeAllFromScene() seems to have a problem removing instanced objects, but I've been too lazy to create a proper bug report. I promise I will!
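     A sketch of one load -> use -> forget cycle, under the assumption that disposing the container's contents and then dropping the reference is enough for garbage collection (the URL, file name, and variable names are placeholders):

     BABYLON.SceneLoader.LoadAssetContainerAsync("assets/", "exhibit.glb", scene).then(function (container) {
         container.addAllToScene();
         // ... later, when the exhibit scrolls out of range:
         container.removeAllFromScene();
         container.meshes.forEach(function (m) { m.dispose(); });
         container.materials.forEach(function (m) { m.dispose(); });
         container.textures.forEach(function (t) { t.dispose(); });
         container = null; // drop the last reference so nothing pins the loaded assets
     });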
  6. Hello, I'm trying to create a text block inside a container of any defined size; when the text is larger, the textBlock grows inside the fixed-size container. So far, no problem. My problem is that I cannot get the calculations right for the scroll cursor: once dragged all the way down, it should land exactly on the last line of text. In this PG it works correctly, as I want, but the calculations only fit the size of that particular container: https://playground.babylonjs.com/#T2LHL3#1 If I change the width and/or height of the container, the top and bottom positions are no longer correct. I would like the result of the first PG regardless of the container size, but I can't manage it; I'm probably not going about it the right way. https://playground.babylonjs.com/#T2LHL3#2 Thanking you in advance for your help.
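     Without the playground specifics, only a generic sketch is possible: the maximum scroll offset is usually the overflow, i.e. the rendered text height minus the visible container height, and the cursor ratio maps linearly onto it. The helper below and its arguments are hypothetical.

     // hypothetical helper: map a cursor ratio in [0, 1] to a vertical offset for the text block
     function scrollTo(textBlock, container, cursorRatio) {
         var overflow = Math.max(0, textBlock.heightInPixels - container.heightInPixels);
         textBlock.top = -cursorRatio * overflow; // 0 shows the first line, 1 shows the last
     }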
  7. How do I convert Float32Array data into a Babylon.js Matrix? I read real device camera data in Float32Array format and need to turn it into a Babylon.js Matrix.
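     Assuming the device supplies 16 floats per matrix, BABYLON.Matrix.FromArray is the direct route; whether a transpose is needed afterwards depends on the row/column-major convention of the source data.

     var deviceData = new Float32Array(16);             // 4x4 camera matrix from the device (placeholder)
     var matrix = BABYLON.Matrix.FromArray(deviceData); // reads 16 values starting at offset 0
     // if the source uses the opposite major order, flip it:
     // matrix = BABYLON.Matrix.Transpose(matrix);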
  8. Having some texture inconsistencies with *.obj files from MagicaVoxel. Original: [screenshot]. Texture problem in the browser: the magenta roof's color is bleeding into the light blue walls and producing odd triangles. Imported from MagicaVoxel into Blender just to test a render: [screenshot]. Any idea what the problem might be, or how to test further? I made a few other meshes and had the same problem; somewhere one of the colors starts blending strangely with its neighboring voxels, but most of the model keeps crisp edges.
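     If the bleeding comes from bilinear filtering of MagicaVoxel's tiny palette texture (an assumption worth testing), forcing nearest-neighbour sampling on the loaded textures keeps adjacent palette colours from mixing. Run this after the OBJ has finished loading.

     scene.textures.forEach(function (tex) {
         tex.updateSamplingMode(BABYLON.Texture.NEAREST_SAMPLINGMODE); // no interpolation between palette entries
     });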
  9. Hi everyone, I've been trying to put together a small interactive scene where the user can click a Babylon GUI button to activate an animation. The main issue I'm having is the connection between GUI elements and the per-object event/action system. My idea was to run a JavaScript function from the button that then fires the action event; however, after looking at the documentation and a lot of googling, it doesn't appear that this can be done, at least not from the given triggers (short of checking every frame). Is there a better way of doing this? Thanks in advance
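     A sketch of the usual shortcut: GUI controls expose observables, so the button can start the animation directly instead of going through an ActionManager trigger. The mesh name and the 0-100 frame range below are placeholders.

     button.onPointerUpObservable.add(function () {
         scene.beginAnimation(targetMesh, 0, 100, false); // play the mesh's animation once on click
     });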
  10. Hi All, I'm seeing strange behaviour in NullEngine when using setPivotPoint with CSG. Same code, different results? (Blue background is the Playground, grey background is NullEngine.) https://www.babylonjs-playground.com/#8PIKRZ @Deltakosh do you have any idea? Thank you in advance.
  11. I am having a hard time getting a depth map to work within my project. I understand that the depth map should be a grey scale image with the elements closer to the camera being colored darker. In my project though, I only get a 1 bit image (the plane on the left has the depthMap as a diffuseTexture). I tried my depth map code on the playground and I get the desired outcome, but I can't port my whole project to the playground. Does anyone know what could cause this issue?
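     One thing that often explains an almost black-and-white depth map, offered as an assumption since the playground version works: the depth renderer normalises depth against the active camera's minZ/maxZ range, so an overly wide range in the project pushes nearly every value towards 1. A minimal sketch (plane and camera are the objects from the post):

     var depthRenderer = scene.enableDepthRenderer(camera);      // use the camera you actually render with
     plane.material.diffuseTexture = depthRenderer.getDepthMap();
     // tighten the range so depth values spread across the scene instead of clustering near white
     camera.minZ = 0.5;
     camera.maxZ = 100;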
  12. Hello, I'm new to BabylonJS. Is it possible to scale an object down but then have it think that its new scale is now 1? I am building an object out of unit cubes (scale 1 base cubes). I then merge the objects into a new mesh. Then I want to scale it down to be a new unit sized cube with scale 1. I will be using this in a recursive fractal routine that resizes the base cube but it expects it to be scale 1. I hope that makes sense. I've tried googling but perhaps I don't know the proper terminology. Thanks for any feedback. Skrapper
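     This sounds like what bakeCurrentTransformIntoVertices does: it pushes the current transform into the vertex data and resets the mesh's scaling back to 1. A minimal sketch, assuming merged is the merged mesh and size is its current edge length:

     var size = 4;                              // current edge length of the merged cube (placeholder)
     merged.scaling = new BABYLON.Vector3(1 / size, 1 / size, 1 / size);
     merged.bakeCurrentTransformIntoVertices(); // vertices now describe a unit cube...
     console.log(merged.scaling);               // ...and scaling reads (1, 1, 1) again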
  13. Hello everyone, I am wondering whether the "mesh.rotation" attribute works differently on built-in meshes and imported meshes. As shown in this PG: https://playground.babylonjs.com/#WQGZB5 I set the same rotation attribute on both:
      scene.registerBeforeRender(function() {
          rotationtest.rotation.y = 0.02;
          box.rotation.y = 0.02;
      });
      The imported mesh rotates at a constant speed while the built-in box stays put. It appears as if the imported mesh rotates incrementally around its local axis while the built-in box rotates by directly setting the rotation vector. What's happening under the hood? Can I make the imported mesh rotate like the built-in mesh (which makes much more sense to me)? Hope someone can point me in the right direction. Thanks a lot!
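     The behaviour described matches what happens when the imported mesh carries a rotationQuaternion (glTF-style loaders set one), which takes precedence over .rotation and, when both are present, ends up accumulating the Euler values frame after frame. A hedged sketch to make both meshes behave the same:

     if (rotationtest.rotationQuaternion) {
         // fold the quaternion back into Euler angles once, then .rotation works as on the box
         rotationtest.rotation = rotationtest.rotationQuaternion.toEulerAngles();
         rotationtest.rotationQuaternion = null;
     }
     rotationtest.rotation.y = 0.02; // now an absolute angle, like the built-in box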
  14. Hi, I'm fairly new to Babylon.js. I need to give meshes materials whose alpha/color properties depend on each mesh's location/distance/placement relative to the others (or on some internal mesh property that differs per mesh). The scene setup is very simple: I have thousands of cubes, all sharing the same material. Now I want each cube's alpha and color values to depend on how close it is to another mesh in the scene. This creates a dynamic range of alpha/color values that is always changing as the meshes move. I know I could give each mesh its own material, but that would hurt performance (and I need the meshes to share almost all material properties except alpha/color). Is there a way to share a material between thousands of meshes but still let each mesh have its own alpha value?
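     A sketch of two per-mesh knobs that work with a shared material (offered as options, not the only way): mesh.visibility modulates alpha per mesh, and a color vertex buffer gives a per-mesh tint that StandardMaterial picks up automatically when present. The variable cube is a placeholder for one of the thousands of boxes.

     // per-mesh alpha without touching the shared material
     cube.visibility = 0.4;

     // per-mesh tint via a color vertex buffer (RGBA per vertex)
     var count = cube.getTotalVertices();
     var colors = [];
     for (var i = 0; i < count; i++) {
         colors.push(1, 0.5, 0.2, 1); // same tint for every vertex of this cube
     }
     cube.setVerticesData(BABYLON.VertexBuffer.ColorKind, colors);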
  15. Hi guys. I'm new to BabylonJS and Blender. I was trying out animations exported from Blender to Babylon.js, and everything is working correctly more or less. But when I apply a copyLocation constraint to a cube whose target is a bone, it does not take effect in the Babylon scene in the browser. Please help me figure out what I'm missing. Are all the constraints supported in Blender also supported in Babylon.js? From the docs, I saw only the trackTo constraint. I've attached the .blend file; please have a look. Any help is very much appreciated. Thanks! full-body.ani-boned-posed-rest.animated.constrained.blend
  16. Other local server links such as http://localhost:1338/Playground/index-local.html are working fine, but with http://localhost:1338/tests/validation/index.htm I get: Cannot GET /tests/validation/index.htm
  17. function resetPack(scene) {
          console.log(scene.meshes.length);
          for (currentMesh = 0; currentMesh < scene.meshes.length; currentMesh++) {
              // console.log(scene.meshes[currentMesh].geometry.getTotalVertices());
              if (scene.meshes[currentMesh].geometry.getTotalVertices() == 24) {
                  console.log(currentMesh + ") Removing " + scene.meshes[currentMesh].name);
                  index = scene.removeMesh(scene.meshes[currentMesh], true);
                  console.log(index + " Removed");
              } else {
                  console.log(currentMesh + ") Not removing (" + scene.meshes[currentMesh].name + ") with (" + scene.meshes[currentMesh].geometry.getTotalVertices() + ") geometry");
              }
          }
      }
      In the above code, I am trying to remove all cuboid forms from my scene. All of my cuboid meshes are made with the same code:
      function createPackObject(shapeName, shape, dimensionsJSON, diffuseColor, offsetX, offsetY, alpha, scene) {
          // console.log('Creating ' + shapeName);
          switch (shape) {
              case "cuboid":
                  // console.log('Creating ' + shapeName + ' as a cuboid');
                  var packObject = BABYLON.MeshBuilder.CreateBox(shapeName, { width: dimensionsJSON[0].value, depth: dimensionsJSON[1].value, height: dimensionsJSON[2].value }, scene);
                  packObject.position = new BABYLON.Vector3(offsetX, ((dimensionsJSON[2].value / 2) + offsetY), 0);
                  packObject.material = new BABYLON.StandardMaterial(shapeName + "Material", scene);
                  packObject.material.diffuseColor = diffuseColor;
                  packObject.material.alpha = alpha;
                  packObject.material.wireframe = false;
                  break;
              default:
                  console.log('Shape "' + solutionSet.containers[container].spaceToFill.shape + '" not handled');
          }
      }
      function loadSolution(scene, camera, solutionID) {
          // ...
          // create shipping containers
          for (currentContainer = 0; currentContainer < solutionSet.containers.length; currentContainer++) {
              // ...
              createPackObject(containerShapeName + "I", containerInnerShape, containerInnerDimensions, new BABYLON.Color3(1, 0.87, 0.386), containerOffset, wallThickness, 0.55, scene);
              createPackObject(containerShapeName + "O", containerOuterShape, containerOuterDimensions, new BABYLON.Color3(1, 0.87, 0.386), containerOffset, 0, 0.55, scene);
              // ...
              for (currentItem = 0; currentItem < solutionSet.containers[currentContainer].items.length; currentItem++) {
                  createPackObject(itemShapeName, itemOuterShape, orientedDimensions, itemColor, itemOffset, 0, 1, scene);
              }
          }
      }
      For some reason, when I get to a mesh created for the containers, the next item mesh I try to remove slips past the function. Does anyone have any idea what might be happening, or whether there is a better way to remove a particular type of mesh from a scene?
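     One likely culprit to rule out (an assumption from the loop structure, not a verified diagnosis): scene.removeMesh shortens scene.meshes while the loop index keeps advancing, so the mesh that slides into the freed slot is never examined. Iterating backwards avoids the reindexing problem; a minimal sketch:

     function resetPack(scene) {
         for (var i = scene.meshes.length - 1; i >= 0; i--) {
             var mesh = scene.meshes[i];
             if (mesh.geometry && mesh.geometry.getTotalVertices() === 24) {
                 mesh.dispose(); // also frees GPU resources; scene.removeMesh(mesh) only detaches it from the scene
             }
         }
     }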
  18. Could someone explain a little bit about how meshes are positioned and when this information lives on the CPU vs. the GPU? How do animations affect this? I used to think that all movement was essentially changing the position, rotation, or scale of a mesh on the CPU, and that this information would then be sent to the GPU for rendering. But I've recently learned that bone-based animations leverage the GPU, which implies to me that the rendered position of a mesh (when using bone animations) is not going to be the same as the mesh position set in our JavaScript code. Is that true? I ask because I'm doing a lot of movement of objects in NullEngine (where there is no rendering) and I'm trying to sync the transforms of meshes across a network very accurately. Everything is working great so far, but I am doing my animations somewhat tediously on the CPU to reduce any potential errors (100% transform-based in JavaScript, no rigs/bones/animations). Here's a rough version of the type of code being used to move things, but I am curious to what degree bones and other types of animations can be used. https://www.babylonjs-playground.com/#YYH1CJ#13 (shows 100% Babylon-based animation... just using the transforms)
  19. I am getting the error: "Cannot merge vertex data that do not have the same set of attributes." when I try to merge a BABYLON.Mesh created using Vertex data with ones created with MeshBuilder. What attributes are different/am I missing? https://playground.babylonjs.com/#AGL702 (Line 26) Thanks
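     A quick way to see which attribute sets differ before merging (the mesh names below are placeholders for the two meshes in the playground):

     console.log(customMesh.getVerticesDataKinds()); // e.g. ["position", "normal"]
     console.log(builtMesh.getVerticesDataKinds());  // e.g. ["position", "normal", "uv"]
     // adding the missing buffer (a dummy uv set here) to the hand-built mesh lets MergeMeshes proceed
     var uvs = new Array(customMesh.getTotalVertices() * 2).fill(0);
     customMesh.setVerticesData(BABYLON.VertexBuffer.UVKind, uvs);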
  20. Hello! I'm upgrading my code from StandardMaterial to ShaderMaterial. My scene colors are:
      scene.clearColor = new BABYLON.Color3(0,0,0);
      scene.ambientColor = new BABYLON.Color3(1,1,1);
      Before, the StandardMaterial was initialized like this:
      mat = new BABYLON.StandardMaterial('material', scene);
      mat.ambientColor = new BABYLON.Color3(1,1,1);
      mat.diffuseTexture = texture.texture;
      mat.opacityTexture = texture.texture;
      The new ShaderMaterial is initialized like this:
      var route = { vertex: "custom", fragment: "custom" };
      var options = {
          needAlphaBlending: true,
          needAlphaTesting: true,
          attributes: ["position", "normal", "uv"],
          uniforms: ["world", "worldView", "worldViewProjection", "view", "projection"]
      };
      mat = new BABYLON.ShaderMaterial('shader', scene, route, options);
      mat.backFaceCulling = false;
      mat.setTexture('map', texture.texture);
      The shader code is below:
      BABYLON.Effect.ShadersStore["customVertexShader"] =
          "precision highp float;\r\n" +
          "// Attributes\r\n" +
          "attribute vec3 position;\r\n" +
          "attribute vec2 uv;\r\n" +
          "attribute vec4 color;\r\n" +
          "// Uniforms\r\n" +
          "uniform mat4 worldViewProjection;\r\n" +
          "// Varying\r\n" +
          "varying vec2 vUv;\r\n" +
          "varying vec4 vColor;\r\n" +
          "void main() {\r\n" +
          " vUv = uv;\r\n" +
          " vColor = color;\r\n" +
          " vec4 p = vec4( position, 1. );\r\n" +
          " gl_Position = worldViewProjection * p;\r\n" +
          "}\r\n";
      BABYLON.Effect.ShadersStore["customFragmentShader"] =
          "precision highp float;\r\n" +
          "varying vec2 vUv;\r\n" +
          "varying vec4 vColor;\r\n" +
          "uniform sampler2D map;\r\n" +
          "void main(void) {\r\n" +
          " gl_FragColor = texture2D(map, vUv)*vColor;\r\n" +
          "}\r\n";
      But the results are different (before vs. after). How do I make the ShaderMaterial color match the StandardMaterial color?
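     One thing that stands out in the snippet above, offered as a guess rather than a confirmed cause: the vertex shader reads a color attribute, but "color" is not listed in options.attributes and the mesh may not even have a color buffer, so vColor falls back to a default value and darkens the texture. A minimal sketch that mirrors the StandardMaterial's diffuseTexture + opacityTexture look is to sample the texture alone:

     BABYLON.Effect.ShadersStore["customFragmentShader"] =
         "precision highp float;\r\n" +
         "varying vec2 vUv;\r\n" +
         "uniform sampler2D map;\r\n" +
         "void main(void) {\r\n" +
         " gl_FragColor = texture2D(map, vUv);\r\n" + // color and alpha straight from the texture
         "}\r\n";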
  21. Hello, I want to control the camera position with the 'WASD' keys, more specifically the targetScreenOffset. I managed to do it, but now I want to add animation so the movement is smooth. However, when you hold a key down the animations overlap each other, causing some stutter. Also, when I rapidly press different keys there are some "jumps" from one position to another. Here's the example: https://playground.babylonjs.com/#92LK47#7. How could I fix it, maybe with some throttling?
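     A sketch of one way to avoid overlapping animations entirely: keep a single desired offset and ease toward it every frame, instead of spawning a new animation per key press. Key bindings and the 0.1 easing factor are placeholders.

     var targetOffset = camera.targetScreenOffset.clone();
     window.addEventListener("keydown", function (e) {
         if (e.key === "a") targetOffset.x -= 1;
         if (e.key === "d") targetOffset.x += 1;
         if (e.key === "w") targetOffset.y += 1;
         if (e.key === "s") targetOffset.y -= 1;
     });
     scene.onBeforeRenderObservable.add(function () {
         // move a fraction of the remaining distance each frame: smooth, and new key presses just retarget it
         var current = camera.targetScreenOffset;
         current.x += (targetOffset.x - current.x) * 0.1;
         current.y += (targetOffset.y - current.y) * 0.1;
     });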
  22. Hello everyone, I have been struggling with a strange problem when exporting a scene from 3ds Max to Babylon. Some of the meshes end up with strange see-through effects in Babylon, like this (you can see the wall and floor of the other room): Experimenting with the materials, I found that without the albedo/basecolor map the effect is gone (check out the slates on the ground, they look normal), while with the albedo/basecolor map the strange see-through effect happens (you can see the hidden faces of the slates, giving the illusion of steps). Related information:
      1. 3ds Max 2017
      2. Max2Babylon 1.3.0
      3. Metallic/Roughness PBR workflow; all materials in Max are physical materials
      4. Meshes have 2 UV channels, with the occlusionTexture (AO) on channel 2
      5. Meshes both with and without multi-materials can have this problem (as far as I can see, the effect is fairly random)
      6. In the see-through, only surfaces with albedo/basecolor maps can be seen; surfaces defined by color properties cannot
      7. The see-through only happens past a certain grazing angle: looking straight (perpendicular) at the surface there is no see-through, but as the line of sight gets more parallel to the surface, the see-through appears
      Has anyone had similar problems? Hope someone can point me in the right direction. Thanks a lot! [Edit] Further discovery: converting the problematic albedo/basecolor maps from .jpg to .png in Photoshop solved the problem. There must be something wrong with the .jpg files or some compatibility issue somewhere in the pipeline. I have attached the .jpg files; maybe they are overly compressed?
  23. Hello everyone! Currently, when I'm adding vertex attributes other than those specified in VertexBuffer (PositionKind etc.) via Mesh.setVerticesData(), it throws an error: Uncaught (in promise) Error: Invalid kind 'test'. I think it would be useful to be able to add attributes of any kind. For example: I have a terrain mesh with a custom shader that takes "color" (vec3) and "moisture, temperature" (vec2) vertex attributes. It uses moisture and temperature to mix between 4 greyscale ground textures (wet and hot; dry and hot; wet and cold; dry and cold) and colorizes the result. Now I have two options:
      - use UV2Kind, UV3Kind, etc. - it will work in this use case, but they are all vec2 and I may need a couple of vec3s in the future
      - manually create a Buffer and attach it to the mesh via mesh.setVerticesBuffer() - I'm trying to figure out how to do this. InstancedMesh uses setVerticesBuffer(): https://github.com/BabylonJS/Babylon.js/blob/c5e53c8f89b115a4c151d8dfce2e9eba12bd08b6/src/Mesh/babylon.mesh.ts#L1387 but I'm unsure how to use it with a typical, non-instanced mesh. I would be very thankful for an example.
      I wasn't sure whether to post in Bugs or Questions; please move the thread if you consider it more a question than a bug. EDIT: I figured out a way:
      var vertexBuffer = new VertexBuffer(this.engine, builtMesh.groundData, "test", false, false, 2);
      mesh.geometry!.setVerticesBuffer(vertexBuffer);
      But still, wouldn't it be more convenient with setVerticesData?
  24. Hello! I was wondering, is there still no way of utilizing the .mtl file that is exported alongside an OBJ file in Babylon.js? Reading the OBJ file loader and assets manager documentation, it seemed like that was the case; however, maybe I misunderstood or it's outdated? Thank you!
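     For what it's worth, as far as I know the OBJ plugin in babylonjs-loaders does read the .mtl referenced by the file's mtllib line when it sits next to the .obj. A minimal sketch (folder and file names are placeholders):

     BABYLON.SceneLoader.Append("models/", "house.obj", scene, function (loadedScene) {
         // materials parsed from the .mtl come in as StandardMaterials already assigned to the meshes
         console.log(loadedScene.materials.map(function (m) { return m.name; }));
     });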
  25. Hello, could you tell me how to use a blend equation other than GL_FUNC_ADD and GL_FUNC_SUBTRACT? For example, I want to use GL_MIN and GL_MAX. I've read about the Material.alphaMode property but cannot find a suitable option for my case.