Search the Community

Showing results for tags 'babylon.js'.

  1. Hi. I'm looking for someone who is able, willing, and has the time to create a 3D model web viewer for OBJ files. Some years ago I played a bit with Babylon.js (but I'm not a JS coder and not a professional in such things), and it's the only option I know of at this time for doing this on a website. What I want: a web viewer for an OBJ file (a game level/map), where the POV/camera view is movable so a visitor can fly through the map (caves). Attachment: an OBJ file, a map that should be visible/browsable on a website: mp_vault.obj
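A minimal sketch of such a viewer (the folder and file names are placeholders, and it assumes the babylonjs.loaders script is included so the OBJ plugin is available):

    var canvas = document.getElementById("renderCanvas");
    var engine = new BABYLON.Engine(canvas, true);
    var scene = new BABYLON.Scene(engine);
    // UniversalCamera gives free fly-through movement (arrow keys + mouse look).
    var camera = new BABYLON.UniversalCamera("fly", new BABYLON.Vector3(0, 2, -10), scene);
    camera.attachControl(canvas, true);
    new BABYLON.HemisphericLight("light", new BABYLON.Vector3(0, 1, 0), scene);
    BABYLON.SceneLoader.ImportMesh("", "models/", "mp_vault.obj", scene, function () {
        engine.runRenderLoop(function () { scene.render(); });
    });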
  2. Hello, I'm new to Babylon.js. I was wondering: is there a way to trigger an event when the camera and a mesh intersect? I tried putting the camera inside a box, and when these two meshes (the camera container and the object) meet I want to trigger some events, but it's not working as I thought. camera mesh collisions event | Babylon.js Playground (babylonjs.com) Is there any other possible solution? Thanks!
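One common approach, sketched below (not taken from the playground above): parent an invisible box to the camera and test it against the target mesh every frame with intersectsMesh. targetMesh stands in for whatever mesh should fire the event.

    var cameraBox = BABYLON.MeshBuilder.CreateBox("cameraBox", { size: 1 }, scene);
    cameraBox.isVisible = false;
    cameraBox.parent = camera;                       // the box follows the camera around

    var inside = false;
    scene.registerBeforeRender(function () {
        var hit = cameraBox.intersectsMesh(targetMesh, false);
        if (hit && !inside) {
            inside = true;
            console.log("camera entered the mesh");  // fire your event here
        } else if (!hit && inside) {
            inside = false;                          // camera left the mesh
        }
    });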
  3. Hi, does anybody know of a professional/real mobile game made with Three.js, Babylon.js, or PlayCanvas? It seems like no one is betting on these technologies; everyone goes for Unity or similar.
  4. I have followed almost all of the Three.js and Babylon.js game programming courses on Udemy and have created several small games in Unity3D as well. Most of the courses are targeted at beginners and/or are very brief. Regardless of the price tag, are there any in-depth, advanced HTML5 3D game programming courses for Three.js or Babylon.js?
  5. I have trouble getting the code to work properly in the playground, so I hope you get an idea of what I mean. I'm using cannon.js directly, not as the physics plugin in Babylon. I have a cannon body with a sphere shape, and I'm controlling the movement on the X and Z axes using keyboard input. On top of that I have a sphere mesh in order to verify the changes to the body. I use the handler below to rotate the body itself according to the mouse:

    window.addEventListener("mousemove", function (event) {
        var movementX = event.movementX || event.mozMovementX || event.webkitMovementX || 0;
        var movementY = event.movementY || event.mozMovementY || event.webkitMovementY || 0;
        this.sphereBody.rotation.y += movementX * 0.002;
        this.sphereBody.rotation.x += movementY * 0.002;
    });

The problem is that if I rotate, say, 50 degrees to the left and press the key bound to forward movement, the body moves straight ahead, not in the direction the body is rotated. I'm a little unsure what to do and how to accomplish this. This should be able to run server-side using Node, which it already does, but that has its limits, as I'm not planning to run Babylon on the Node server. I need a way to map the Z and X axes to the way the body is currently rotated. I guess I could use a camera on the client, but then the rotation has to be usable on the body server-side. I already checked quite a few examples, one being the Cannon.js FPS example. I can upload the files somewhere if needed.
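A hedged sketch of one way to do the "move in the facing direction" part with plain cannon.js (no Babylon needed server-side). It keeps a yaw angle from the mouse, writes it into the body's quaternion, and rotates the local forward axis by that quaternion to get the velocity direction; sphereBody and the speed value are assumptions, not the poster's code.

    var yaw = 0;
    window.addEventListener("mousemove", function (event) {
        yaw -= (event.movementX || 0) * 0.002;
    });

    function moveForward(sphereBody, speed) {
        // cannon.js bodies are oriented by a quaternion, not Euler angles.
        sphereBody.quaternion.setFromAxisAngle(new CANNON.Vec3(0, 1, 0), yaw);
        var forward = new CANNON.Vec3(0, 0, -1);          // local forward axis
        sphereBody.quaternion.vmult(forward, forward);    // rotate it into world space
        sphereBody.velocity.x = forward.x * speed;
        sphereBody.velocity.z = forward.z * speed;
    }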
  6. Please bear with me, I am completely new to game networking. Any help and input on this question will be appreciated! I have been playing around with BJS (babylon.js) for a while now, wanting to make a real-time multiplayer game. I have been searching around the web about game networking, and read this: https://github.com/gafferongames/gafferongames/blob/master/content/post/what_every_programmer_needs_to_know_about_game_networking.md It gave me a clear idea of how I should go ahead. But in all the forums, I hear people saying game networking is very hard, not to waste time on it right now, and so on. So I thought I'd give it a try. I used Node.js for the server with socket.io to communicate with the client back and forth. What I did was: when I receive the server's update about the location, I compare it to the location of the client (which was predicted, or more like computed on the client side) to check whether the difference is under 0.1 (or any number). If it is not, I redirect the player back to the location sent by the server. (There is still some minor jitter in the gameplay.) After doing the above, it felt like a piece of cake. I felt on top of the world! But I was still trying to figure out what people really meant by "hard". So after a bit more research, I found nengi.js, a game networking engine for Node.js. I noticed people comparing socket.io and nengi.js. Aren't they two different things? socket.io is used for bidirectional communication between client and server, and nengi.js is a game networking engine! This has created huge confusion in my head. Could anyone please help me with this? Also, please clarify whether the process I described above for client prediction is correct or not. If you need any more details/info, please let me know! Thanks a lot for reading through! Thanks in advance!
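For reference, a minimal sketch of the reconciliation check described above, assuming socket.io on the client and a Babylon.js mesh called player (the event name, fields, and threshold are illustrative, not from the post):

    socket.on("serverState", function (state) {
        // Compare the client-side predicted position with the authoritative server position.
        var dx = player.position.x - state.x;
        var dz = player.position.z - state.z;
        var error = Math.sqrt(dx * dx + dz * dz);
        if (error > 0.1) {
            // Too far off: snap back (smoothly interpolating instead would reduce visible jitter).
            player.position.copyFromFloats(state.x, state.y, state.z);
        }
    });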
  7. There are two buttons on my page for switching to an ArcRotate camera and to a Universal camera. When I click the Universal camera button, it switches immediately, but it always takes too many clicks on the ArcRotate camera button to switch to it. Why is that?

    var cameraType = { FREE: 0, ARCROTATE: 1, WALKTHROUGH: 2, UNIVERSAL: 3 };
    var cameraMode = { PERSPECTIVE: 0, ORTHOGRAPHIC: 1 };

    this.setCameraAsArcRotate = function () {
        _cameraType = cameraType.ARCROTATE;
    };
    this.setCameraAsUniversal = function () {
        _cameraType = cameraType.UNIVERSAL;
    };
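The snippet above only sets a flag, and the code that reacts to _cameraType isn't shown. As a point of comparison, a sketch of switching the active camera directly (the function name and camera parameters are mine, not the poster's):

    function switchToArcRotate(scene, canvas) {
        if (scene.activeCamera) {
            scene.activeCamera.detachControl(canvas);   // release input from the old camera
        }
        var cam = new BABYLON.ArcRotateCamera("arc", -Math.PI / 2, Math.PI / 3, 10,
                                              BABYLON.Vector3.Zero(), scene);
        cam.attachControl(canvas, true);
        scene.activeCamera = cam;
    }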
  8. Hi all, I am trying to draw a dynamic line by picking points with the mouse, and I also want to show the length (size) above the line. How can I achieve this? Is it possible in Babylon.js? Kindly suggest a test case to achieve this. Thanks.
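A rough sketch of one way to do it, assuming the GUI extension (babylon.gui) is loaded and there is a pickable mesh (e.g. a ground) under the pointer; the tiny invisible anchor box is just a trick to give the label something to follow:

    var ui = BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("labels");
    var points = [];
    scene.onPointerDown = function () {
        var pick = scene.pick(scene.pointerX, scene.pointerY);
        if (!pick.hit) { return; }
        points.push(pick.pickedPoint.clone());
        if (points.length < 2) { return; }
        var a = points[points.length - 2], b = points[points.length - 1];
        BABYLON.MeshBuilder.CreateLines("segment", { points: [a, b] }, scene);
        // Invisible anchor at the midpoint so the GUI label can be linked to a mesh.
        var anchor = BABYLON.MeshBuilder.CreateBox("anchor", { size: 0.01 }, scene);
        anchor.isVisible = false;
        anchor.position = BABYLON.Vector3.Center(a, b);
        var label = new BABYLON.GUI.TextBlock("len", BABYLON.Vector3.Distance(a, b).toFixed(2));
        label.color = "white";
        ui.addControl(label);
        label.linkWithMesh(anchor);
        label.linkOffsetY = -20;       // raise the text above the line
    };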
  9. Hello, I can teach you over Skype. If you share your screen I can tell you what you need to install and press. I can show you how to: set breakpoints in VSCode in a project with a few files (we will use AMD and RequireJS); bundle your source files for use in the browser (we will use Browserify and UglifyJS); compile Node.js server TypeScript scripts; connect your server with a client written in pure WebGL, Phaser, Pixi.js, Three.js or Babylon.js; write Jasmine specs (unit tests) for client and server; set breakpoints in Jasmine specs; deploy your TypeScript server and client on Heroku; connect your project with GitHub to automate deployment after each push. More about unit tests: I use TypeScript. I will give you my boilerplate for Jasmine and show you how to set it up. I will show you how to set breakpoints in TS code in VSCode, how to write mock objects for injected dependencies, and how to build your unit tests for production (in bundle.min.js). I use an AMD build with RequireJS in debug mode (to set breakpoints in VSCode), and a CommonJS build with Browserify/UglifyJS to build unit tests for production. My rate is $10 per hour.
  10. Hello, I thought about placing this on the demos and projects thread, but I decided to post it here as it is more a topic of which framework to use and why. I was hired by an elite software development group at Sony Electronics to help them navigate WebGL, build a pipeline to deliver content for the South By Southwest convention, and create a foundation for quickly developing games and online media for future projects. In short, I was tasked with escaping the limitations of 2D media and helping Sony move forward into 3D content, taking advantage of the WebGL rendering standards. This was no easy task, as I was hired Dec. 11th and given a hard deadline of March 5 to deliver 2 multiplayer games which were to be the focus of Sony's booth at SXSW in Austin, Texas. But first I had to run a quick evaluation and convince a very proficient team of engineers which framework was the best fit for Sony to invest considerable resources into for SXSW, and which was the right choice to take them into future projects. This was a huge consideration, as the WebGL framework to be chosen was to play a much greater role at Sony Electronics, considering the group I was assigned to works well ahead of the rest of the industry... developing what most likely will be native intelligent applications on Sony devices (especially smartphones) in the near future. These are applications which benefit consumers by making their day-to-day interactions simple and informative. Thus the chosen WebGL framework needed to be an element in displaying information as well as entertainment for a greater core technology which is developing daily in a unique tool set used by the software engineers to build applications, allowing Sony to remain the leader not only in hardware technology, but in the applications which consumers want to use on Sony devices. As I was working for Sony, I also had a greater task, as there were existing expectations that a game on Sony devices needed to be on par with what consumers were already experiencing with their PlayStation consoles. As unrealistic as this might initially appear, that had to be the target, as we couldn't take a step back from the quality and playability the consumer was already accustomed to. So back to the first task... selecting the WebGL framework for Sony Electronics to use moving forward. Rather than telling a story, I'll simply outline why there was little discussion as to which framework to choose. Initially Sony requested someone with Three.js experience, as is more often than not the case. So when they approached me for the position, I told them I would only consider it if they were open to other frameworks as well. They were very open to any framework, as their goal was not political in any way - they only cared about which framework was going to provide them with the best set of tools and features to meet their needs. And one might certainly assume that since Sony PlayStation is in direct competition with Microsoft Xbox, and Microsoft is now providing in-house resources to develop babylon.js, Sony Electronics might see a PR conflict in selecting babylon.js as their WebGL development framework. However, I'm proud to say that there was never a question from anyone at Sony.
I was very impressed that their only goal was to select the very best tools for the development work, to look beyond the perceived politics, to develop the very best applications for the consumer, and to fulfill their obligations to their shareholders by building tools that consumers want on their smartphones and other electronic devices. So once again... Three.js vs. Babylon.js. This was a very short evaluation. What it came down to was that three.js had far more libraries and extensions - however, this was not a strength of three.js, since there are no cohesive development cycles for three.js, and although many libraries, tools, and extensions exist, more often than not they are not maintained. So it was easy to demonstrate that practically any tool or extension we would require for the SXSW production would need myself or the team to update it to be compatible with the other tools we might use on the project. This is a failing of the framework, since each developer who writes an extension for three.js is writing for the specific compatibility their own project needs... not for the overall framework... as this is not within the scope of any developer or group of developers. Thus I find that it requires weeks if not months of maintenance in three.js prior to building content, just to ensure compatibility between all of the tools and extensions needed for most projects. As for babylon.js, the wheel is not generally re-invented as it is with three.js: most extensions are quickly absorbed into a cohesive framework - provided they have universal appeal - and this integration ensures compatibility, as there are fewer and fewer standalone extensions and instead an integrated set of tools which are thoroughly tested and used in production, revealing any incompatibilities quickly. The bottom line is that there are no alpha, beta, and development cycles in three.js, and thus no stable releases, whereas the opposite is true of babylon.js. There is a cohesive development of the tools, and Sony is smart enough to see beyond the politics and realize that having Microsoft support the development of babylon.js is a huge bonus for an open-source framework. And if anyone had to choose a company to support the development of a WebGL framework, or any framework, who better than Microsoft? With practically every other useful WebGL framework in existence spawned by MIT, most are barely useful at best. And why would anyone pay to use a limited WebGL framework such as PlayCanvas when Babylon.js is far more functional, stable, and free? This baffles me, and most anyone who has chosen babylon.js for even one project. The only argument against babylon.js is that the development of the framework is now supported in-house by Microsoft. But for myself and others, this is a positive, not a negative. I've been assured by the creators and lead developers of babylon.js that they have secured an agreement with Microsoft ensuring the framework remains open source and free. This ensures that anyone is able to contribute to and review all code in the framework, and that it remains in the public domain. Sony gets this, and we quickly moved forward adopting babylon.js as the WebGL framework within at least one division of Sony Electronics.
At the end of this post I'll provide a link to a YouTube news report covering not only the games we built for SXSW, but also the exciting new technology built on Sony phones which uses the phone's camera to capture a high-resolution (yet optimized) 3D scan of a person's head. This is only a prototype today, but it will be a native app on Sony phones in the future. So our task was not only to develop multiplayer games with 15+ simultaneous players in real time, but to have a continuous game which adds a new player as people come through the booth and have their head scanned using a Sony phone. This was an additional challenge, and I must say that I was very fortunate to work with a group of extremely talented software engineers. The team at Sony is the best of the best. All in all, it was an easy choice to pick babylon.js as the WebGL framework at Sony Electronics in San Diego. Below is a news report from SXSW which shows the new scanning technology in use, as well as a brief example of one of the games on the large booth screen. And using Electron (a stand-alone Chromium-based runtime), I was able to render 15 high-resolution scanned heads, vehicles for each head, animation on each vehicle, particles on each vehicle, and many more animations, collisions, and effects without any limitations on the game - all running at approx. 40 fps. The highlight of the show was when the officers from Sony Japan - the real people we work for - came through the booth and gave their thumbs up, as they were very happy with what we achieved in such a short time. And these were the people who wanted to see graphics and playability comparable to what the PlayStation delivered. And they approved. Link: Thanks to babylon.js. DB
  11. Hi there, I downloaded Blender 2.8 with the new glTF exporter. Now I'm stuck and my Babylon.js script won't open the 3D model anymore. If I drop the model into the three.js viewer it looks fine, and an old model made with Blender 2.79 also works fine in Babylon.js, but the new glTF (separate) export does not. Here's the console output from the Chrome browser: Thank you
  12. For now, I'm basically just trying to understand some of the code behind the "drag and drop" sample in the Babylon.js playground. I would eventually like to create a 3D arcade-style game, starting here by being able to move the meshes around WITHOUT the camera moving. But first things first: I need to understand the code that's here. I've read as much of the documentation and tutorials as I could, and I didn't see any previous post really related to what I'm looking for. There are some things I don't get with the event listeners: getGroundPosition(), onPointerDown, onPointerMove. I altered the scene and added it to my hosting; I removed everything but the sphere. http://portfolio.blenderandgame.com/draganddrop.html Posting the code here, with comments on the parts I don't get. With getGroundPosition(), I don't get the predicate. I'm not sure I entirely grasp the concept of a "predicate". I know that it's a callback that returns a boolean value: it returns true if the ground was clicked. I just don't know what purpose that boolean serves. What other piece of code does it impact? I'm not sure how it ties in. I notice that if you comment that part out, so that x and y are the only parameters, the meshes "sink" into the ground instead of staying on the surface. I want to understand it at a code level, though.

    var getGroundPosition = function () {
        var pickinfo = scene.pick(scene.pointerX, scene.pointerY, function (mesh) {
            return mesh == ground;          // ?? don't understand the purpose of the predicate
        });
        if (pickinfo.hit) {
            return pickinfo.pickedPoint;
        }
        return null;
    }

I have the same question about the scene.pick predicate in onPointerDown (this one returns true if the ground was NOT what was clicked on). One OTHER question about this: startingPoint is assigned the value of getGroundPosition(evt). I know it's passing the event, but getGroundPosition was defined WITHOUT parameters. So why is an argument being passed to it, and what difference does that make? I've never seen an example like that with addEventListener.

    var onPointerDown = function (evt) {
        if (evt.button !== 0) {
            return;
        }
        var pickInfo = scene.pick(scene.pointerX, scene.pointerY, function (mesh) {
            return mesh !== ground;         // don't understand the purpose of the predicate again
        });
        if (pickInfo.hit) {
            currentMesh = pickInfo.pickedMesh;
            startingPoint = getGroundPosition(evt);   // why is an argument passed if the function was defined with no parameters?
            if (startingPoint) {
                setTimeout(function () {
                    camera.detachControl(canvas);
                }, 0);
            }
        }
    }

Because of those two questions there is a LOT I don't get about onPointerMove, but hopefully once I understand the above it will help me figure out the rest. I don't want to bombard you with questions here on my first post. Muchas gracias to anyone who can help. I can't get any further with this and I don't want to give up and move on. Everything else I see with regard to movement is actually just moving the camera, not translating a mesh with a static camera view. This is the only sample I've come across that actually disengages the camera (at least only while the mesh is being moved).
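A brief illustration of the two points above (explanatory code, not part of the playground sample). The predicate tells scene.pick which meshes are allowed to count as a hit, so the returned pickedPoint is guaranteed to lie on the mesh you care about; and extra arguments passed to a JavaScript function that declares no parameters are simply ignored, which is why calling getGroundPosition(evt) is legal.

    // Only the ground can be hit: the returned point is always ON the ground plane,
    // which is why dragged meshes stay on the surface instead of sinking.
    var groundPick = scene.pick(scene.pointerX, scene.pointerY, function (mesh) {
        return mesh === ground;
    });

    // Everything EXCEPT the ground can be hit: used to find which draggable mesh was clicked.
    var meshPick = scene.pick(scene.pointerX, scene.pointerY, function (mesh) {
        return mesh !== ground;
    });

    function takesNothing() { return "called"; }
    takesNothing({ button: 0 });   // extra arguments are ignored; behaves the same as takesNothing()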
  13. My Babylon.js scene was working fine yesterday, but today it is showing me the following errors. I can definitely tell that this error is due to the babylon.gui.js file. I have imported the scripts in the following manner, and I used it to create the GUI like this: var advancedTexture = BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("UI"); Please help me with this. How can I resolve the error and get my scene working smoothly? Thanks if you can help!
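For comparison, a minimal include order that is expected to work: the GUI script has to load after the core babylon.js, since the BABYLON.GUI namespace only exists once both have loaded (the CDN URLs below are the commonly used ones, shown as an assumption because the poster's actual script tags are not visible here):

    <script src="https://cdn.babylonjs.com/babylon.js"></script>
    <script src="https://cdn.babylonjs.com/gui/babylon.gui.min.js"></script>
    <script>
        // ... engine, scene, camera setup ...
        var advancedTexture = BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("UI");
    </script>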
  14. I have created a 3D model of an HPC cluster rack using Blender and I have successfully loaded that model in Babylon.js with animations. All is working fine. But I want a mouse-hover effect on my model that pops up the text/label of the particular mesh. How can I achieve this? Please give me some ideas!
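A minimal hover-label sketch using an ActionManager and the GUI extension (mesh stands in for whichever rack mesh should react; this assumes babylon.gui is loaded):

    var ui = BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("hoverUI");
    var label = new BABYLON.GUI.TextBlock("hoverLabel", "");
    label.color = "white";
    label.isVisible = false;
    ui.addControl(label);

    mesh.actionManager = new BABYLON.ActionManager(scene);
    mesh.actionManager.registerAction(new BABYLON.ExecuteCodeAction(
        BABYLON.ActionManager.OnPointerOverTrigger, function () {
            label.text = mesh.name;        // show the mesh's name above it
            label.isVisible = true;
            label.linkWithMesh(mesh);
            label.linkOffsetY = -30;
        }));
    mesh.actionManager.registerAction(new BABYLON.ExecuteCodeAction(
        BABYLON.ActionManager.OnPointerOutTrigger, function () {
            label.isVisible = false;       // hide it when the pointer leaves
        }));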
  15. My aim is to render a 3D model on a web page and change its textures. I am using Babylon.js for this. So far I am able to render the 3D model and change its texture, but I am stuck on the final polish of the rendered view, also known as post-processing. I want to improve the quality of my rendered 3D model. I am enclosing a screenshot of what I have done (Figure 1) and what I want to achieve (Figure 2) so that you can get a clear picture of the issue.
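A starting point, sketched with the built-in DefaultRenderingPipeline (the parameter values below are only illustrative defaults to tweak, not a recipe for the look in Figure 2):

    var pipeline = new BABYLON.DefaultRenderingPipeline("default", true /* HDR */, scene, [camera]);
    pipeline.fxaaEnabled = true;               // anti-aliasing smooths jagged edges
    pipeline.bloomEnabled = true;              // soft glow on bright areas
    pipeline.bloomThreshold = 0.8;
    pipeline.imageProcessing.contrast = 1.2;   // overall tone adjustments
    pipeline.imageProcessing.exposure = 1.0;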
  16. I have a zip file containing multiple OBJ and MTL files on a Node.js server. I was wondering how to load the content of the OBJ files inside the zip file.
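One possible approach, sketched with JSZip (an assumption on my part, it is not mentioned in the post) plus the Babylon OBJ loader; the URL and file name are placeholders, and MTL references inside the OBJ would need similar handling:

    fetch("/models/level.zip")
        .then(function (res) { return res.arrayBuffer(); })
        .then(function (buf) { return JSZip.loadAsync(buf); })
        .then(function (zip) { return zip.file("level.obj").async("string"); })
        .then(function (objText) {
            // Hand the extracted text to the loader via an object URL, with an explicit ".obj" hint.
            var url = URL.createObjectURL(new Blob([objText]));
            BABYLON.SceneLoader.ImportMesh("", "", url, scene, function (meshes) {
                console.log("loaded", meshes.length, "meshes");
            }, null, null, ".obj");
        });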
  17. Hi, I'm new to Babylon.js. I've worked with Three.js and it was an awesome library, but I realized that I cannot easily generate a mesh from points and there were limitations. Is there a feature in Babylon.js to generate a triangulated mesh from a point cloud? You can view a sample point cloud here; it's just a curved surface for simplicity, and a Three.js model. I just want to know if this is possible with Babylon.js. https://codepen.io/brabbit640/pen/ZmpKpJ Thank you
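If the points have a known grid layout (as a curved surface like the CodePen sample usually does), you can triangulate them yourself and build a custom mesh with VertexData; reconstructing a surface from an arbitrary unordered point cloud is a different problem and is not shown here. A sketch, with points as a flat [x0, y0, z0, x1, y1, z1, ...] array of W x H grid points:

    function meshFromGrid(points, W, H, scene) {
        var indices = [];
        for (var y = 0; y < H - 1; y++) {
            for (var x = 0; x < W - 1; x++) {
                var i = y * W + x;
                indices.push(i, i + 1, i + W);          // two triangles per grid cell
                indices.push(i + 1, i + W + 1, i + W);
            }
        }
        var mesh = new BABYLON.Mesh("surface", scene);
        var vertexData = new BABYLON.VertexData();
        vertexData.positions = points;
        vertexData.indices = indices;
        vertexData.normals = [];
        BABYLON.VertexData.ComputeNormals(points, indices, vertexData.normals);
        vertexData.applyToMesh(mesh);
        return mesh;
    }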
  18. Dear all, we are working on this platform, built with Babylon.js and React: http://grafos.website/grafos/ Unfortunately it works only on some mobile devices, but not on all of them. How can we fix this? We had the basic model in Three.js and it worked fine across mobile devices, but we achieved all our functionality in Babylon. How can we make it work like this: https://www.grafosdesign.com/3dVisual_Interact/BioracerDEMO/index.html Thank you for your ideas/input. Cristina
  19. A part of my game's post-process render pipeline: downscale the render to 25% size; do some post-processing on the downscaled image; pass both the image before the downscale and the image after processing into a GLSL fragment shader with effect.setTextureFromPostProcessOutput(...); the fragment shader then outputs the low-res processed image overlaid on top of the original high-res render. Problem: the final render is pixelated. I guess the initial downscale made it so the shader doesn't use the higher-res input texture as the "base resolution"? What's going on here? How do I properly set fragment shader input textures of different resolutions in a post-process?
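A hedged sketch of one way to wire this up: capture the full-resolution frame with a PassPostProcess before the downscale, and give the final merge pass a ratio of 1.0 so it renders at full resolution and only samples the low-res chain as an extra input (the shader names here are hypothetical):

    var fullRes = new BABYLON.PassPostProcess("fullRes", 1.0, camera);                  // full-res copy of the frame
    var down = new BABYLON.PostProcess("down", "myEffect", [], null, 0.25, camera);     // 25%-ratio processing pass
    var merge = new BABYLON.PostProcess("merge", "myMerge", [], ["fullResSampler"], 1.0, camera);
    merge.onApply = function (effect) {
        // textureSampler (the default input) is the low-res result of the previous pass;
        // the full-res capture is bound as a second sampler.
        effect.setTextureFromPostProcessOutput("fullResSampler", fullRes);
    };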
  20. The default bloom post process extracts all pixels brighter than a threshold, blurs that image, and overlays it onto the original render. The problem is that it simply adds the RGB values of the 2 images together, resulting in already bright areas becoming way too bright. I want to use BABYLON.Engine.ALPHA_MAXIMIZED as the blend mode for bloom. How do I do that? Do I need to write a whole new bloom shader from scratch?
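There is no built-in switch for this that I know of, but one workaround is to skip the default additive merge and combine the blurred bright-pass with the scene yourself using a per-channel max, which is what an ALPHA_MAXIMIZED-style blend amounts to. A sketch (the shader, sampler, and variable names are mine; scenePass is assumed to be a PassPostProcess capturing the original render earlier in the chain):

    BABYLON.Effect.ShadersStore["bloomMaxMergeFragmentShader"] =
        "precision highp float;" +
        "varying vec2 vUV;" +
        "uniform sampler2D textureSampler;" +     // blurred bright-pass image (previous post-process)
        "uniform sampler2D sceneSampler;" +       // original render
        "void main(void) {" +
        "    vec4 bloom = texture2D(textureSampler, vUV);" +
        "    vec4 scene = texture2D(sceneSampler, vUV);" +
        "    gl_FragColor = max(scene, bloom);" + // per-channel max instead of additive blend
        "}";

    var merge = new BABYLON.PostProcess("bloomMaxMerge", "bloomMaxMerge", [], ["sceneSampler"], 1.0, camera);
    merge.onApply = function (effect) {
        effect.setTextureFromPostProcessOutput("sceneSampler", scenePass);
    };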
  21. Hello BJS community, I wrote and submitted the first Wikipedia article for babylon.js late last year. I've submitted it to Wikipedia twice now, and the last time they reviewed it I was told it should only be a week or two before it would be approved and posted. Even today, when I look at my submission online, it is still pending. I've communicated with several advisors for Wikipedia over the past 3 months since they changed their submission tools, but no one has been able to truly assist as of yet. I believe I got caught in the middle of their switching submission methods - which is why I've submitted twice now. But I don't believe I should submit again, as I'm sure this would cause problems; Wikipedia is the most understaffed and difficult website to deal with - although we all respect what they do with only limited donations and funds. It has been over 4 months now, and advisors have told me that the waiting list is normally no more than 4 to 6 weeks. If there is anyone in this community who has successfully posted a Wikipedia page and might be able to assist me in submitting and following up, it would be greatly appreciated by the whole community - since it then opens the door for others to add to my first article, and also to post their own articles about babylon.js in their native languages. As you might know, the first page on any topic is the most difficult to have reviewed and approved. DK is the only one to have read the article so far, and I believe we need the world to be able to research and understand what babylon.js is, the people who created it, and the history of its genesis. Thanks, DB
  22. Hi everybody! Is it possible to create an online editor with Babylon.js where users could create their own meshes and send the object by e-mail? Thank you so much in advance!
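The editing side is doable; for the "send by e-mail" part, a browser cannot attach files to an e-mail by itself, so the usual route is to serialize the mesh and post it to a server that sends the mail. A sketch (userMesh and the endpoint are placeholders):

    var serialized = BABYLON.SceneSerializer.SerializeMesh(userMesh);   // .babylon-format JSON object
    fetch("/api/send-mesh", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(serialized)
    });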
  23. Folks, I am currently using Phaser, but now I am trying to implement Babylon.js. I need to create a static 720*480 stage that will not be transformed into 3D; the stage should stay STILL, and it should be responsive based on the aspect ratio of the browser or the screen. I tried various options but didn't get any success. Thanks in advance.
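One way to get a fixed 720x480 design resolution that still fits any screen: render at the design size and scale the canvas with CSS, preserving the aspect ratio (element ids and variable names are illustrative):

    var DESIGN_W = 720, DESIGN_H = 480;
    var canvas = document.getElementById("renderCanvas");
    canvas.width = DESIGN_W;                      // the backing resolution stays fixed
    canvas.height = DESIGN_H;
    var engine = new BABYLON.Engine(canvas, true);

    function fitCanvas() {
        var scale = Math.min(window.innerWidth / DESIGN_W, window.innerHeight / DESIGN_H);
        canvas.style.width = (DESIGN_W * scale) + "px";   // CSS scaling keeps the 3:2 aspect ratio
        canvas.style.height = (DESIGN_H * scale) + "px";
    }
    window.addEventListener("resize", fitCanvas);
    fitCanvas();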
  24. While importing meshes designed in software like AutoCAD and Blender, I noticed that the color of a previously imported mesh gets applied to many meshes that are imported later. Then I realized that this is due to conflicting material IDs. Is there a way to modify the material ID of an imported mesh, or is there any other solution to this problem?
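One workaround, sketched below: right after each import, give every mesh its own uniquely named clone of its material so a later import cannot collide with it (the folder and file names are placeholders):

    BABYLON.SceneLoader.ImportMesh("", "models/", "rack.babylon", scene, function (meshes) {
        meshes.forEach(function (m, i) {
            if (m.material) {
                m.material = m.material.clone("rack_material_" + i);   // unique name/id per mesh
            }
        });
    });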
  25. Hey all, Something weird happens when I use the createInstance API of the Mesh class. Please check this: https://www.babylonjs-playground.com/#XSCH6V#2 An instance and a clone are created in the callback from the loaded meshes. The clone looks perfect, but the instance renders differently, even after I tried backFaceCulling = false and flipFaces. So my question is: how can I create an instance that looks the same as the one I get from clone()? Thanks a lot.
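A point worth keeping in mind, shown as a sketch: an instance always shares the source mesh's geometry and material, so fixes like backFaceCulling or flipFaces only affect instances when applied to the source mesh itself, whereas a clone can be given its own material afterwards (sourceMesh here stands for the loaded mesh in the playground):

    sourceMesh.material.backFaceCulling = false;    // shared material: affects the source and every instance
    sourceMesh.flipFaces(true);                     // bakes the flip into the shared geometry
    var inst = sourceMesh.createInstance("inst1");  // now renders like the source
    var copy = sourceMesh.clone("copy1");           // unlike an instance, a clone may be assigned a different material later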