Search the Community

Showing results for tags 'three.js'.

Found 65 results

  1. Hey guys. This is a simple RPG demo featuring characters from "Fate/Grand Order" (a derivative work), powered by "System Animator 11" (WIP), written by myself. PLAY: "System Animator" is originally a desktop gadget project, a fully customizable system monitor/music visualizer/animated wallpaper with a focus on visuals and fun. It runs on "Electron", which is basically a Chrome browser, so unsurprisingly System Animator itself is basically HTML5. In this upcoming WIP version, I plan to make it fully online and add some gaming features so that it can be used to make browser-based 3D games (mainly RPGs for now). If you want to know more about System Animator itself, check out the following page. For more info about the game itself (controls/copyright/license/credits/etc.), check out the following README file. The game has only been tested on Google Chrome and Firefox. It doesn't work on Edge right now, but it may work on other modern browsers. Bug reports and comments are most welcome.
  2. Hi! First post here. I'm currently experimenting with the gamepad API, trying to capture the motion/orientation of my DualShock 3 to build controls for a new game project. I was wondering if anyone has managed such a thing and made it work in the browser? It seems that WebVR exposes motion data through the gamepad.pose object, but so far I've had no luck reaching it on a regular controller. Has anyone managed this and could help? Thank you!
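For anyone hitting the same wall: on most browsers `gamepad.pose` is only populated for WebVR/WebXR motion controllers, so a plain DualShock 3 will usually report nothing there. A defensive reader like the sketch below (a hypothetical helper, not part of any library) makes it easy to see what a given controller actually exposes:

```javascript
// Hypothetical helper: safely read motion data from a Gamepad-like object.
// Standard gamepads (such as a DualShock 3) usually expose only axes and
// buttons; gamepad.pose is populated only for WebVR/WebXR controllers.
function readMotion(gamepad) {
  if (!gamepad) return null;
  if (gamepad.pose && gamepad.pose.orientation) {
    // WebVR/WebXR controller: pose.orientation is a quaternion [x, y, z, w].
    return { source: "pose", orientation: Array.from(gamepad.pose.orientation) };
  }
  // Fall back to the stick axes as a rough orientation proxy.
  return { source: "axes", orientation: gamepad.axes ? Array.from(gamepad.axes) : [] };
}
```

In a browser you would call this with the entries of `navigator.getGamepads()` inside the game loop; it is sketched against plain objects here so it can be checked without a controller attached.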
  3. Diamond Defense

  4. I want to make a 3D RPG. I already have the storyline planned out, and I have a 3rd-person camera set up in PlayCanvas, but since I'm not amazing at programming yet, we can use whatever game engine the programmer wants. The RPG will have 4 characters, including the main character. It will have a lot of action elements, and I want it to be very atmospheric, with low-poly graphics. I can try to help a little with the programming, but I probably won't be of much use. Thank you, and I hope you want to join our team!
  6. DogfightX is a browser-based 3D HTML5 game where you can play PvP and team fights. Play with or against your friends and take on original quests involving fast-paced combat, puzzles and skill. No installation required. Survive and shoot at others while trying to keep your own airplane alive!
  7. Babylon Voxel.js

    Hello! As my profile states, I am new here and rather new to Babylon.js as well. I found its ease of use and performance (over Three.js) good reasons to work with it. I have been working on a voxel game (i.e. Minecraft-ish) using Three.js, as there are so many voxel libraries already out there, while for Babylon there is pretty much nothing. For this reason, I would like to fill the void and, perhaps, find someone who is interested in helping out on the quest. I started with a small library for creating snow (called `voxel-snow`) and ported it as `babylon-voxel-snow` ( The idea is to make the transition from Three.js to Babylon.js as easy and painless as possible for people (like me) who have been using it for their voxel projects. Adding the prefix `babylon-` makes it extremely easy to find the counterpart for Babylon. Here are some other voxel libraries which currently exist only for Three.js:
    ☑ Voxel Snow ( --> Babylon Voxel Snow (
    ☐ Minecraft skin (
    ☑ Voxel walk ( --> Babylon Voxel Player (
    ☐ Voxel creature (
    ☑ Voxel critter ( -> Babylon Voxel Critter (
    ☐ Voxel builder ( -> Unneeded, as it can be imported with the Babylon Voxel Critter
    ☐ Voxel use (
    ☐ Voxel mine (
    ☐ Voxel carry (
    ☐ Voxel chest (
    ☐ Voxel inventory creative (
    ☐ Voxel items (
    ☑ Voxel clouds ( --> Babylon Voxel Clouds (
    ☑ Voxel skybox --> Babylon Voxel Skybox (
    As I go, I will try to slowly implement them for Babylon, so hit me up if you'd like to help out.
  8. Three.js subdivision surfaces

    Hi there! I'm trying to apply subdivision surfaces to a loaded JSON file with Three.js and SubdivisionModifier.js or BufferSubdivisionModifier.js, without any result. I searched and found various code snippets and topics, but none of them worked, which is why I'm looking for help here. My code with BufferSubdivisionModifier.js:

    function initMesh() {
        var loader = new THREE.JSONLoader();
        loader.load('js/cube.json', function(cubeGeometry, materials) {
            cubegeometryClone = cubeGeometry.clone();
            cubegeometryClone.mergeVertices();
            cubegeometryClone.computeFaceNormals();
            cubegeometryClone.computeVertexNormals();
            var modifier = new THREE.BufferSubdivisionModifier(1);
            smoothCube = modifier.modify(cubegeometryClone);
            // of course the cube appears if you replace smoothCube by cubeGeometry
            mesh = new THREE.Mesh(smoothCube, new THREE.MultiMaterial(materials));
            mesh.scale.x = mesh.scale.y = mesh.scale.z = 0.95;
            mesh.translation =;
            scene.add(mesh);
        });
    }

    No errors in the console. My code with SubdivisionModifier.js, which is the example I found most often:

    function initMesh() {
        var loader = new THREE.JSONLoader();
        loader.load('js/cube.json', function(cubeGeometry, materials) {
            smooth = cubeGeometry.clone();
            smooth.mergeVertices();
            smooth.computeFaceNormals();
            smooth.computeVertexNormals();
            var modifier = new THREE.SubdivisionModifier(1);
            // if I comment this line, the cube shows but, of course, with no surface smoothing
            modifier.modify(smooth);
            mesh = new THREE.Mesh(smooth, new THREE.MultiMaterial(materials));
            mesh.scale.x = mesh.scale.y = mesh.scale.z = 0.95;
            mesh.translation =;
            scene.add(mesh);
        });
    }

    The console shows: TypeError: undefined is not an object (evaluating 'v.x'). Any help will be appreciated; I'm going mad after 2 days of tries. Thanks
  9. Cloning a glTF object in a three.js scene

    Hello everyone, I've searched for two days for ways to clone a glTF object, but none of them works. I've tried deep-cloning the object, but nothing works. It seems the object is only added to the glTF render list once, when it is loaded, and the cloned body can't be rendered on screen. Here is the result of scene.add(obj.clone()):

    var gltfLoader = new THREE.GLTFLoader();
    gltfLoader.load('assets/model/gltf/tree/tree.gltf', function (data) {
        var gltf = data;
        var gltfobj = gltf.scene !== undefined ? gltf.scene : gltf.scenes[0];
        gltfobj.position.z += 5; = "tree";
        scene.add(gltfobj);
        var tree2 = gltfobj.clone();
        tree2.position.x += 1;
        scene.add(tree2);
    });

    The cloned object only shows a shadow in the scene. I've tested the ColladaLoader and its DAE object works well, so I don't know what is going wrong. What should I do to clone it in a three.js scene? Can anybody help? Thanks!
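Not an authoritative answer, but one frequent cause of invisible clones is that `Object3D.clone()` copies the node hierarchy while sharing material (and skinning) state with the original. A sketch of giving every cloned mesh its own material instance; it only assumes the three.js-style `clone()`/`traverse()`/`isMesh` interface:

```javascript
// Sketch, assuming three.js-style objects: clone the node graph, then walk
// the copy and replace each shared material with a private clone, so the
// copy no longer depends on the original's material state.
function cloneWithMaterials(root) {
  const copy = root.clone();
  copy.traverse(function (node) {
    if (node.isMesh && node.material) {
      node.material = node.material.clone();
    }
  });
  return copy;
}
```

With the loader code above, `scene.add(cloneWithMaterials(gltfobj))` would be the hedged equivalent of `scene.add(gltfobj.clone())`.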
  10. I have a scene with two individual meshes. It looks like this:

    this.loadFiles("gras", (gras) => {
        var particleMaterial = new THREE.MeshPhongMaterial(); = THREE.ImageUtils.loadTexture("models/planets/gras.jpg");
        particleMaterial.side = THREE.DoubleSide;
        this.mesh = new THREE.Mesh(gras, particleMaterial);
        this.loadFiles("rocks", (rocks) => {
            var particleMaterial = new THREE.MeshPhongMaterial();
   = THREE.ImageUtils.loadTexture("models/planets/rocks.jpg");
            particleMaterial.side = THREE.DoubleSide;
   = new THREE.Mesh(rocks, particleMaterial);
            callback(this);
        });
    });

    Now I want to merge the meshes together. But how can I combine the textures?

    this.loadFiles("gras", (gras) => {
        this.loadFiles("rocks", (rocks) => {
            var geometry = new THREE.Geometry();
            THREE.GeometryUtils.merge(geometry, gras);
            THREE.GeometryUtils.merge(geometry, rocks);
            var particleMaterial = new THREE.MeshPhongMaterial();
   = THREE.ImageUtils.loadTexture("models/planets/gras.jpg");
            particleMaterial.side = THREE.DoubleSide;
            this.mesh = new THREE.Mesh(geometry, particleMaterial);
            callback(this);
        });
    });
  11. Hey there, I've recently started to dig my way more into three.js in order to build my own image-viewer app as my first three.js project. I'm using three.js r83 and both the EffectComposer as well as the Shader/RenderPass from the three.js examples. (View on github) Since I'm familiar with other programming languages I was able to figure out a lot of stuff on my own, but currently I'm struggling with this specific problem: my app should be able to add post-processing effects to the currently viewed image. The post-processing part already works like a charm, but I would like to add more effects, as I want to experiment with some new possibilities for an image viewer. Since I'm obsessed with performance, I came up with some ideas on how to split the post-processing across different EffectComposers in order to keep the weight (number of shaders to render) on each composer low and therefore its performance high. What I did: after debugging both the EffectComposer and the Shader/RenderPass from the three.js examples, I came up with the idea of rendering a texture that I can re-use as a uniform in another composer later on. This would let me encapsulate and precompute whole post-processing chains and re-use them in another composer. While debugging through the ShaderPass, I found what I think is the key element to get this to work. I won't post the code here, as it's accessible via github, but if you look into ShaderPass.js at line 61 you can see the class's render function. The parameter writeBuffer is a WebGLRenderTarget and, afaik, it is used to store what the composer/renderer would usually put out to the screen.
    I've created 2 identical composers using the following code:

    var txt = testTexture;
    var scndRenderer = new THREE.WebGLRenderer({
        canvas: document.getElementById("CanvasTwo"),
        preserveDrawingBuffer: true
    });
    scndRenderer.setPixelRatio(window.devicePixelRatio);
    var containerTwo = $("#ContainerTwo")[0];
    scndRenderer.setSize(containerTwo.offsetWidth, containerTwo.offsetHeight);
    console.log("Creating Second Composer.");
    console.log("Texture used:");
    console.log(txt);
    var aspect = txt.image.width / txt.image.height;
    var fov = 60;
    var dist = 450;
    // Convert camera fov degrees to radians
    fov = 2 * Math.atan((txt.image.width / aspect) / (2 * dist)) * (180 / Math.PI);
    var scndCam = new THREE.PerspectiveCamera(fov, aspect, 1, 10000);
    scndCam.position.z = dist;
    var scndScene = new THREE.Scene();
    var scndObj = new THREE.Object3D();
    scndScene.add(scndObj);
    var scndGeo = new THREE.PlaneGeometry(txt.image.width, txt.image.height);
    var scndMat = new THREE.MeshBasicMaterial({ color: 0xFFFFFF, map: txt });
    var scndMesh = new THREE.Mesh(scndGeo, scndMat);
    scndMesh.position.set(0, 0, 0);
    scndObj.add(scndMesh);
    scndScene.add(new THREE.AmbientLight(0xFFFFFF));
    // PostProcessing
    scndComposer = new THREE.EffectComposer(scndRenderer);
    scndComposer.addPass(new THREE.RenderPass(scndScene, scndCam));
    var effect = new THREE.ShaderPass(MyShader);
    effect.renderToScreen = false; // set to false in order to use the writeBuffer
    scndComposer.addPass(effect);
    scndComposer.render();

    I then modified three's ShaderPass to access the writeBuffer directly. I added a needsExport property to the ShaderPass and some logic to actually export the writeBuffer's texture:

    renderer.render(this.scene,, writeBuffer, this.clear);
    // New code
    if (this.needsExport) {
        return writeBuffer.texture;
    }

    I then simply set needsExport to true for the last pass. After rendering this pass, the texture stored in the writeBuffer is returned to the EffectComposer.
    I then created another function inside the EffectComposer to just return writeBuffer.texture, nothing too fancy. The issue: I'm trying to use the writeBuffer's texture (which should hold the image that would have been rendered to screen if I had set renderToScreen to true) as a uniform in another EffectComposer. As you can see in code block 1, the texture itself isn't resized or anything. The texture has the right dimensions to fit into a uniform for my second composer, yet I'm constantly receiving a black image from the second composer no matter what I do. This is the code I'm using:

    function Transition(composerOne, composerTwo) {
        if (typeof composerOne && composerTwo != "undefined") {
            var tmp = composerOne.export();
            // Clone the shader's uniforms
            shader = THREE.ColorLookupShader;
            shader.uniforms = THREE.UniformsUtils.clone(shader.uniforms);
            var effect = new THREE.ShaderPass(shader);
            // Add the shader-specific uniforms
            effect.uniforms['tColorCube1'].value = tmp; // set the exported texture as a uniform
            composerTwo.passes[composerTwo.passes.length - 1] = effect; // overwrite the last pass
            var displayEffect = new THREE.ShaderPass(THREE.CopyShader);
            // Add the CopyShader as the last pass in order to display the image with all shaders active
            displayEffect.renderToScreen = true;
            composerTwo.insertPass(displayEffect, composerTwo.passes.length);
            composerTwo.render();
        }
    }

    Conclusion: to be completely honest, I don't have a clue what I'm doing wrong. From what I've read, learned while debugging and figured out so far, I would argue this is a bug. I would be really glad if someone could prove me wrong or suggest a new idea on how to achieve what I'm trying to do. If any more information is needed to answer this question, please let me know! Regards, Michael
  12. I am working for a wearable computing and augmented reality startup in Bremen, Germany: for improvements to our (PIXI.js-powered) web editor, which configures our augmented reality solutions, we are looking for a Web Application Developer (m/f) to join our team in Bremen (a job permit for the EU is required). It says full-time in the job description, but students looking for an internship are also very welcome! We are a team of people from all over the globe, so everyone in our team speaks English fluently, but German is a big plus. The job description (in German) is attached to this post. Please apply with your full resume, including school and other certificates as well as code examples (e.g. GitHub links) and references, to . Feel free to ask me for further details on the job. 162810_Ubimax_Stellenauschreibung_WebApplicationDeveloper.pdf
  13. Hello freelancers, my name is Dwayne and I am the technical account manager for Mass Ideation. I actually joined this forum years ago when I was learning Pixi, but right now I'm contacting you because we are looking for a developer with Pixi.js and/or Three.js skills. It's roughly a one-to-two-month project, due approximately October 7th - 14th, and there may be some testing/QA for about a week after. It's a web app built around a screenshot of a living room; the scene will be viewable from different camera views, and users will upload images into the app. We would need to make an editable scene that allows people to pull in images and place them in the scene. Users can select from a variety of Christmas trees, pick and place photos (as ornaments), customize the background/setting, add a wreath to the door, and even upload a family photo to hang on the wall. If you are interested, please fill out this survey with your skills and hourly rate. Best, Dwayne
  14. I have absolutely nothing to do with the following code; I just thought I'd share it, and I was slightly unsure which section it was best suited to. It's 1.6 GB with models, sounds etc. I haven't been able to load the hosted game, though, so I guess something is down. I wanted to share it because it seems to implement an authoritative server model, with physics and hit detection done on the server, and I'm pretty sure this is something quite a few users are looking for, based on threads on the forums. The server uses three.js too, but only for some basic vertex manipulation for the cannon heightfield, I guess. I have trouble installing canvas on Node, as a lot of people seem to have, so I haven't tested it yet.
  15. complex games with three.js?

    Hey! I want to create a game that can run natively on as many platforms as possible. So, being a front-end developer, I turned to WebGL. But I have a few questions:
    1) Is it possible to create big, complex and demanding games like Unturned, Risk of Rain, Hotline Miami, Minecraft, Terraria, Nidhogg or BattleBlock Theater, both graphically and technically?
    2) If you use an HTML wrapper to create an .exe, .apk etc., is the source code protected? Also, can you compile to consoles?
    3) I have read that you can code in C++ and compile it to JavaScript; is that functional? Also, is it possible to write in a high-level, strongly typed language and compile that to JavaScript? (I do not like weakly typed languages, and C++ is too low-level for me.)
    4) How come I can't find any big games made in WebGL (only tech demos, fancy websites and games on this forum)?
    5) When I looked around this forum I didn't see any three.js-based games. Why is that? I looked at the tech demos of many engines and three.js looked the most promising. Or is there something I missed?
    6) Is WebGL a smart choice for my project? In the end I don't want my game to be playable on the web, only standalone on PC, Linux and Mac (mobile and console if the project succeeds).
    7) What is the best engine to use for a 2D/2.5D game with some nice light effects?
    8) Does the Steam SDK for achievements, joining friends, the Steam controller etc. work well with WebGL?
    Thanks for reading; it would mean a great deal to me if you know an answer to one of my questions!
  16. I started an app about 4 months ago in three.js as a proof of concept. It's not a game but a shipyard inventory system. It maintains the positions of over 20,000 containers in 3 basic sizes and reflects the movement of over 200 vehicles of 5 different types. The prototype app positions the vehicles and then moves the containers around the yard. The vehicles and containers use models in the js format. The containers are wrapped in about 15 different textures that carry company logos in 3 basic sizes, along with a couple of default textures for when we don't have a matching company logo. Now the problems. With three.js I have noticed a massive memory leak that I think is caused by the adding and removal of containers. For performance I placed all the containers in just a few concatenated pieces of geometry to reduce draw calls; the vehicles I left alone. To remove a container I modify the geometry on the fly. That said, I'm not sure I should even do it that way, and there's no place just to ask questions like "can I do this?" or "does this work in three.js?". The other issue is that the library keeps changing versions with internal alterations, and as my code grows larger, modifying it becomes more difficult. Then I searched "three.js vs ?", found Babylon, read a few posts about it being more industrial-strength, and seeing a board that actually discusses the library, I'm intrigued. Does it make sense to change? Is the API similar or different? I saw the multi-thousand-sphere demos, but they had no textures; is it possible to apply image textures from an array? Does Babylon have a js loader for models and PNG loaders for textures? Is there a light that is like sunlight? Feel free to ask questions etc. Thanks
  17. Is WebGL ready for production

    Do you think WebGL is mature enough to be used in production? By ready I mean stability, performance and so on. Also, are there any WebGL frameworks mature enough for production use? Say you have a client who orders a 3D car configurator application that runs in a browser. Would you be willing to try to create a three.js app that renders all objects in 3D? Or would you prefer a 360-photo-based approach or some plugin? I'm asking because, first, I don't see many truly commercial WebGL applications, and second, it is really hard to find good feedback about WebGL among web developers. It seems like they're not really interested in this, IMO, fascinating technology.
  18. We have designed a multiplayer arena game with a Christmas theme; please check it out at We use pixi.js for 2D rendering and node.js for the multiplayer to function. Since it has just launched, there aren't many users online, so you might want to find a friend to try the game together. We use TypeScript for both client-side and server-side development; it saved us a lot of effort when building a complicated JavaScript project. Any comments are appreciated ~ Thanks!
  19. Hi, I am trying to create 2 cameras and control them with TrackballControls and OrthographicTrackballControls. The problem is that when I go from perspective to orthographic, the orthographic controls no longer work. When switching, I call control.dispose() and re-initialize the controls each time. Another problem: if I start with the orthographic camera I can zoom, pan and rotate it, but if I then switch to perspective and back to ortho again, I can't work with the camera anymore.
  20. Three.js Spine runtime with FFD?

    Since Spine is 2D, I guess I'll just post it here. Has anyone implemented a Three.js Spine runtime with FFD yet? I found one by makc, but it seems it doesn't have FFD yet. Thanks!
  21. I'm new to Three.js and I can't figure out how quaternions work: it seems they always refer to the object's local frame and not the global one. I've illustrated it here: The rotation quaternion of the green box is about the vector (0,0,1), but not in the global frame; it's in the projected frame of the green cube. How can I project the quaternion back into the global frame, so that the green cube rotates about the scene's (0,0,1) vector?
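For reference, the usual fix is the order of multiplication: composing the new rotation on the left applies it about the scene's (world) axes, composing it on the right applies it about the object's own, already-rotated axes. In three.js that is `object.quaternion.premultiply(q)` versus `.multiply(q)` (premultiply exists from roughly r77 on). In plain JS the distinction is just which operand comes first in the Hamilton product:

```javascript
// Hamilton product of two quaternions {x, y, z, w}; a plain-JS sketch of
// what three.js computes internally.
function multiplyQuaternions(a, b) {
  return {
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z
  };
}

// World-space rotation: new orientation = q (in world axes) composed on the
// left of the object's current quaternion. In three.js this corresponds to
// object.quaternion.premultiply(q).
function rotateInWorldSpace(objectQuat, q) {
  return multiplyQuaternions(q, objectQuat);
}
```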
  22. Hi folks, I'm putting together a Meetup in San Francisco on Nov 20 ( and I'm looking for speakers on Babylon.js and Three.js. The idea of the group is to foster 3D on the web, and since we are starting out we want to show the members 2 different libraries and let them decide which tools to move on with. It should be a 'friendly' battle. Any volunteers? Cheers, Silvio, Autodesk API evangelist
  23. tween rotate a camera in three.js

    I'm trying to build a globe in three.js. I've pretty much got everything I want working, but I'm having trouble getting the camera to rotate from one position to another when I click a button. I know the coordinates are correct, because with this simple code the camera jumps straight to the new position:

    camera.position.set(posX, posY, posZ);
    camera.lookAt(new THREE.Vector3(0, 0, 0));

    So, given that my destination is correct, I'm struggling to understand why the following function using tween.js doesn't work. I'd appreciate any help, as I'm really struggling with this.

    var from = { x: camera.position.x, y: camera.position.y, z: camera.position.z };
    var to = { x: posX, y: posY, z: posZ };
    var tween = new TWEEN.Tween(from)
        .to(to, 600)
        .easing(TWEEN.Easing.Linear.None)
        .onUpdate(function () {
            camera.position.set(this.x, this.y, this.z);
            camera.lookAt(new THREE.Vector3(0, 0, 0));
        })
        .onComplete(function () {
            camera.lookAt(new THREE.Vector3(0, 0, 0));
        })
        .start();

    Many thanks
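Two things worth checking, offered as guesses rather than a definitive diagnosis: tween.js only advances when `TWEEN.update()` is called every frame inside the render loop, and in newer tween.js versions `onUpdate` receives the tweened object as its first argument rather than binding it to `this`. The interpolation itself is just a per-coordinate linear blend, which can be sanity-checked offline:

```javascript
// What the tween computes each frame for the camera position: a linear
// blend of each coordinate between the start and end points, for t in [0, 1].
function lerpPosition(from, to, t) {
  return {
    x: from.x + (to.x - from.x) * t,
    y: from.y + (to.y - from.y) * t,
    z: from.z + (to.z - from.z) * t
  };
}
```

If the endpoints check out here but nothing moves on screen, the missing `TWEEN.update()` call in the animation loop is the most likely suspect.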
  24. Hi, I have just started to study three.js and I'm having some trouble writing a function that takes as arguments an object position (Vector3) and a time in milliseconds, and gradually rotates the camera to face it within that time; essentially a lerp version of the built-in lookAt method. First I tried using tween.js to get a smooth rotation transition. For the start and end parameters I created a dummy object and set its position, rotation and quaternion the same as the camera's, then used lookAt on it to face the object and stored its quaternion in a new variable, targetQuaternion. Then I used this variable as the target parameter of TWEEN.Tween to update camera.quaternion. I tried first with quaternions, to avoid gimbal lock, and then with rotations, but neither works:

    function rotateCameraToObject(object3D, time) {
        var cameraPosition = camera.position.clone();     // camera original position
        var cameraRotation = camera.rotation.clone();     // camera original rotation
        var cameraQuaternion = camera.quaternion.clone(); // camera original quaternion
        var dummyObject = new THREE.Object3D();
        // set dummyObject's position, rotation and quaternion the same as the camera
        dummyObject.position.set(cameraPosition.x, cameraPosition.y, cameraPosition.z);
        dummyObject.rotation.set(cameraRotation.x, cameraRotation.y, cameraRotation.z);
        dummyObject.quaternion.set(cameraQuaternion.x, cameraQuaternion.y, cameraQuaternion.z);
        // lookAt object3D
        dummyObject.lookAt(object3D);
        // store its quaternion in a variable
        var targetQuaternion = dummyObject.quaternion.clone();
        // tween start object
        var tweenStart = { x: cameraQuaternion.x, y: cameraQuaternion.y, z: cameraQuaternion.z, w: cameraQuaternion.w };
        // tween target object
        var tweenTarget = { x: targetQuaternion.x, y: targetQuaternion.y, z: targetQuaternion.z, w: targetQuaternion.w };
        // tween stuff
        var tween = new TWEEN.Tween(tweenStart).to(tweenTarget, time);
        tween.onUpdate(function() {
            camera.quaternion.x = tweenStart.x;
            camera.quaternion.y = tweenStart.y;
            camera.quaternion.z = tweenStart.z;
            camera.quaternion.w = tweenStart.w;
        });
        tween.start();
    }

    So this does not work. I also tried another approach: computing the angle between the camera vector and the object vector and using that angle as the target rotation:

    function rotateCameraToObject(object3D, time) {
        // camera original position
        var cameraPosition = camera.position.clone();
        // object3D position
        var objectPosition = object3D.position.clone();
        // direction vector from camera towards object3D
        var direction = objectPosition.sub(cameraPosition);
        // compute Euler angle
        var angle = new THREE.Euler();
        angle.setFromVector3(direction);
        // tween stuff
        var start = { x: camera.rotation.clone().x, y: camera.rotation.clone().y, z: camera.rotation.clone().z };
        var end = { x: angle._x, y: angle._y, z: angle._z };
        var tween = new TWEEN.Tween(start).to(end, time);
        tween.onUpdate(function() {
            camera.rotation.y = start.x;
            camera.rotation.y = start.y;
            camera.rotation.y = start.z;
        });
        tween.start();
    }

    This doesn't work either; eventually the camera rotates towards the object, but the rotation is not right. Any help? What is the correct way to write a lerp rotate function for the camera? Thanks in advance!
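One likely culprit in the second attempt: all three `onUpdate` lines assign to `camera.rotation.y`. Beyond that, tweening quaternion components independently (the first attempt) is not a valid rotation interpolation unless the result is renormalized; the standard tool is spherical linear interpolation, which three.js exposes as `THREE.Quaternion.slerp`. A plain-JS sketch of what slerp computes, usable from a tween's onUpdate with `t` running from 0 to 1:

```javascript
// Spherical linear interpolation between two unit quaternions {x, y, z, w}.
// Keeps constant angular speed along the shortest arc, unlike per-component
// linear blending.
function slerp(qa, qb, t) {
  let cosHalfTheta = qa.x * qb.x + qa.y * qb.y + qa.z * qb.z + qa.w * qb.w;
  let b = { x: qb.x, y: qb.y, z: qb.z, w: qb.w };
  if (cosHalfTheta < 0) {
    // Negate one endpoint to take the short way around.
    cosHalfTheta = -cosHalfTheta;
    b = { x: -qb.x, y: -qb.y, z: -qb.z, w: -qb.w };
  }
  if (cosHalfTheta >= 1.0) {
    // Endpoints coincide: nothing to interpolate.
    return { x: qa.x, y: qa.y, z: qa.z, w: qa.w };
  }
  const halfTheta = Math.acos(cosHalfTheta);
  const sinHalfTheta = Math.sqrt(1 - cosHalfTheta * cosHalfTheta);
  if (sinHalfTheta < 1e-6) {
    // Nearly opposite: fall back to the midpoint blend.
    return {
      x: (qa.x + b.x) / 2, y: (qa.y + b.y) / 2,
      z: (qa.z + b.z) / 2, w: (qa.w + b.w) / 2
    };
  }
  const ra = Math.sin((1 - t) * halfTheta) / sinHalfTheta;
  const rb = Math.sin(t * halfTheta) / sinHalfTheta;
  return {
    x: qa.x * ra + b.x * rb, y: qa.y * ra + b.y * rb,
    z: qa.z * ra + b.z * rb, w: qa.w * ra + b.w * rb
  };
}
```

The dummy-object trick above is sound for finding targetQuaternion; the change would be to tween a bare `{ t: 0 }` to `{ t: 1 }` and set `camera.quaternion` from `slerp(startQuaternion, targetQuaternion, t)` in onUpdate.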
  25. The problem: in the awesome Three.js, I can't figure out how to convert an EllipseCurve into a path that I can extrude along. In the example below, if I uncomment the LineCurve3, my square extrudes along it nicely. If I run it with the EllipseCurve, there are no errors, but nothing shows on screen. I have tried zooming the camera right out to make sure it's not off-screen for some reason. I know the EllipseCurve is being generated correctly, as I can draw it with a line material (not shown in the code below). The code:

    var radius = 1100;
    var degreesStart = 75;
    var degreesEnd = 30;
    var radiansStart = (degreesStart * Math.PI) / 180;
    var radiansEnd = (degreesEnd * Math.PI) / 180;
    // this won't seem to work as an extrude path, but doesn't give any errors
    var path = new THREE.EllipseCurve(0, 0, radius, radius, radiansStart, radiansEnd, true);
    // this works fine as an extrude path
    //var path = new THREE.LineCurve3(new THREE.Vector3(0, 0, 0), new THREE.Vector3(1000, 1000, 0));
    var extrusionSettings = { steps: 100, bevelEnabled: false, extrudePath: path };
    // draw a square to extrude along the path
    var sectionSize = [];
    sectionSize.push(new THREE.Vector2(0, 0));
    sectionSize.push(new THREE.Vector2(1000, 0));
    sectionSize.push(new THREE.Vector2(1000, 1000));
    sectionSize.push(new THREE.Vector2(0, 1000));
    var sectionShape = new THREE.Shape(sectionSize);
    var componentGeometry = new THREE.ExtrudeGeometry(sectionShape, extrusionSettings);
    var component = new THREE.Mesh(componentGeometry, material);
    group.add(component);
    scene.add(group);

    What I have tried: my attempts to make it work have all tried to extract the points from the curve into a path to use in the extrusion. The closest I felt I got was:

    // where 'path' is my EllipseCurve in the code above
    // (and then changing the extrusion settings to use 'ellipsePath' instead)
    var ellipsePath = new THREE.CurvePath(path.getSpacedPoints(20));

    This gave the error "Cannot read property 'distanceTo' of null". I can't seem to get my head around how the EllipseCurve relates to points that relate to a path. Can anyone point me in the right direction, please, or does anyone have code where you've come across the same problem? Many thanks.
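A hedged guess at the cause: `THREE.EllipseCurve` is a 2D curve (its points are Vector2s), while `extrudePath` expects a 3D curve, and `CurvePath` holds child curves rather than the array of points `getSpacedPoints` returns (hence the `distanceTo of null`). One workaround is to sample the ellipse yourself and rebuild it as a 3D curve, e.g. `new THREE.CatmullRomCurve3(points)` (or `SplineCurve3` in older builds). The sampling is plain parametric math:

```javascript
// Sample an elliptical arc into 3D points in the z = 0 plane, mirroring
// EllipseCurve's parameters (center, radii, start/end angle, direction).
function sampleEllipse3D(cx, cy, rx, ry, startAngle, endAngle, clockwise, divisions) {
  const pts = [];
  let delta = endAngle - startAngle;
  // Wrap the sweep so it runs in the requested direction.
  if (clockwise && delta > 0) delta -= 2 * Math.PI;
  if (!clockwise && delta < 0) delta += 2 * Math.PI;
  for (let i = 0; i <= divisions; i++) {
    const angle = startAngle + delta * (i / divisions);
    pts.push({ x: cx + rx * Math.cos(angle), y: cy + ry * Math.sin(angle), z: 0 });
  }
  return pts;
}
```

With the code above, something like `extrudePath: new THREE.CatmullRomCurve3(sampleEllipse3D(0, 0, radius, radius, radiansStart, radiansEnd, true, 50).map(function (p) { return new THREE.Vector3(p.x, p.y, p.z); }))` should then behave like the working LineCurve3 case (assuming a three.js build that ships CatmullRomCurve3).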