Shaders with THREE.JS - What's the best approach?


Paul Brzeski


I've just started using shaders in my game to portray water. I'm currently building a class to manage a stack of re-usable THREE.Material objects.

 

How does everyone else feel about the performance of the EffectComposer in THREE.JS? I'm currently just using a ShaderMaterial and pushing some uniform values around in my animation block, but I'm wondering if there are any performance gains I'm missing out on.

 

There seems to be a gap in the reading materials too - you can either read a very thorough tutorial on colouring at one end of the scale, or try to make sense of one of the many masterful pieces of work on ShaderToy at the other. There doesn't seem to be a good set of instructions for people who have gone beyond that first step of setting up a shader and making things happen, and who want to build on that knowledge with more complex stuff.

 

I'm thinking of going through some tutorials and courses specific to GLSL for OpenGL ES 2.0, but I'm a bit worried I'll have to learn way more than I need for a few simple effects.


I tend to have the same problem when it comes to shaders. THREE.JS has a 'general purpose material' shader that has most of the usual things one might want, but it was missing the ability to have an emission map (or 'glow' map). Because of how it works, it seems like you can't just 'add a bit of code in' without finding their shader, pulling it out, and re-doing it all yourself. The advice I found online was basically to use console.log to dump their shader to the console and then grab that text.
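For anyone who hits the same wall, a minimal sketch of that console.log trick - the stock shader sources are exposed on THREE.ShaderLib, so you can dump them straight to the console:

// Dump the built-in Phong shader so you can copy and modify it.
// THREE.ShaderLib holds the GLSL source for the stock materials.
console.log(THREE.ShaderLib['phong'].vertexShader);
console.log(THREE.ShaderLib['phong'].fragmentShader);
console.log(THREE.ShaderLib['phong'].uniforms); // the uniforms it expects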

 

It feels like a lot of shader operations should be something you can add at the code level without having to use intermediate buffers, and that kind of modular approach would make things a lot easier for people to get into it. I could imagine that some kind of node-based shader builder where you can put together individual bits of shaders to make the final effect would be a really valuable tool, sort of like how Blender's material editor works.

 

Honestly, such a thing probably exists already, I just don't know what it's called.


I suggest animating a normal map to create water - either generate it using Perlin noise, or animate a pre-created normal map of water.
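For the pre-created map route, a minimal sketch of what I mean (the texture file name and scroll speeds are placeholders; this uses the current Three.js API):

// Scroll a tiling water normal map each frame to fake motion.
var normalMap = new THREE.TextureLoader().load('water_normals.png');
normalMap.wrapS = normalMap.wrapT = THREE.RepeatWrapping;
var material = new THREE.MeshPhongMaterial({ color: 0x2266aa, normalMap: normalMap });

function animate(time) {
    requestAnimationFrame(animate);
    normalMap.offset.set(time * 0.00005, time * 0.00002); // slow diagonal drift
    renderer.render(scene, camera);
}
requestAnimationFrame(animate);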

If you're looking into full-on water simulation, I suggest checking out some stuff on voxel-based water approximation - it's definitely the fastest way of doing water physics.

 


Seriously? One of the most popular tutorials for three.js is on glow maps and god rays with ShaderMaterial and EffectComposer.

http://bkcore.com/blog/3d/webgl-three-js-volumetric-light-godrays.html


That is in fact the tutorial I eventually found and followed (it was a bit out of date, too, since all of the compositing code has been moved into separate .js files in more recent three.js versions). Note that it doesn't just add an emission layer to textures, though - it has to create a duplicate of the geometry, render it to a buffer, and then composite the buffer with the static model.
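For anyone following along, the shape of that multi-pass setup is roughly this - a hedged sketch only, where glowScene is the emissive-only duplicate and additiveShader stands in for a two-texture blend shader you'd define yourself:

// Pass 1: render the emissive-only duplicate scene into an offscreen composer.
var glowComposer = new THREE.EffectComposer(renderer);
glowComposer.addPass(new THREE.RenderPass(glowScene, camera));

// Pass 2: render the real scene, then additively blend the glow buffer on top.
var finalComposer = new THREE.EffectComposer(renderer);
finalComposer.addPass(new THREE.RenderPass(scene, camera));

var blendPass = new THREE.ShaderPass(additiveShader); // assumed blend shader
blendPass.uniforms.tGlow.value = glowComposer.renderTarget2.texture;
blendPass.renderToScreen = true;
finalComposer.addPass(blendPass);

// Each frame: glowComposer.render(); finalComposer.render();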

 

Basically, my problem was that in order to do something like:

 

fragColor = threeJSPhongMaterial + myStuff

 

I really had to either re-implement the Phong material, or do geometry duplication, multiple render passes, etc.

 

Because of the way shaders are loaded in, you can't really do a simple 'take this thing and add/multiply/whatever without knowing what this thing actually is' - you have to either do the compositing trick or make a copy of the Phong material and add your incremental adjustment. My thought is that, ideally, the combined shader should be created dynamically by software that lets you compose shaders by combining one another's outputs.

 

The model is that each 'shader plugin', if you will, takes in the usual stuff passed from a fairly thorough vertex shader - UV coordinates, textures, position data, light source coordinates and colors, and so on - as well as the output of other fragment shader plugins. The software then sticks together all the code used for this particular shader, along with all the inline composition effects, and gives you a single .fs file that you can then use. The shader plugins themselves would have their code exposed, so you can do code-level tweaks and new effects.
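Purely as a hypothetical sketch of what such a tool could do under the hood (every name below is made up), the builder is mostly string assembly:

// Each 'plugin' contributes a GLSL function; the builder concatenates them
// and emits a main() that combines their outputs with a user expression.
var plugins = [
    { source: 'vec3 phongBase(vec2 uv) { return vec3(0.5); /* stock Phong here */ }' },
    { source: 'vec3 emission(vec2 uv) { return texture2D(emissionMap, uv).rgb; }' }
];

function buildFragmentShader(plugins, composeExpr) {
    var header = 'uniform sampler2D emissionMap;\nvarying vec2 vUv;\n';
    var body = plugins.map(function (p) { return p.source; }).join('\n');
    return header + body +
        '\nvoid main() { gl_FragColor = vec4(' + composeExpr + ', 1.0); }';
}

// fragColor = phong output + my stuff, without touching either plugin's code:
var fragmentShader = buildFragmentShader(plugins, 'phongBase(vUv) + emission(vUv)');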



In the pass where the glow map is displayed, all the geometry is rendered without textures or colors except for the geometry with a glow map, which cuts down render time immensely. It's also way faster than passing the rendered scene through a bloom shader.


We're talking about different effects. I'm not talking about bloom or anything like that. All I wanted was to be able to, for any given model, have a texture channel that is displayed independent of illumination and added to the texture channels that are illumination-based. I know how to implement this; I'm just pointing out that the way shaders work makes that kind of simple addition annoying - you have to add it separately to every shader that should support it.

 

So it would be useful to create a framework that naturally handles combining effects for you without needing multiple passes. Multiple passes only make sense when you're performing some non-local task on a subset of the pixels - a blur that only hits the glowy bits, a bloom shader, etc. But it's kind of silly that if I wanted to, e.g., take the output of a Phong shader and invert it, I'd have to make a full copy of the Phong shader's code and then put the inversion in there (it also means that if the original shader is ever updated, I don't immediately benefit from that update).

 

That's basically what I'm getting at - software that lets you treat shaders less like independent atomic things and more like modules that can be linked together at the code level.


Ahhh - TheHermit, are you referring to doing multiple render() calls in your animation block? I saw some shader examples that did that.

 

What I really want to know is how the GLSL reflect() function can be used to implement a water reflection. It sounds like the reflection has to be processed in a second render pass, though.
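As far as I can tell, reflect() on its own only gives you a direction - you still need something to sample along it, like an environment cubemap. Something like this fragment shader sketch (the varyings are assumed to come from the vertex shader; Three.js adds the precision header for you):

uniform samplerCube envMap;
varying vec3 vWorldNormal;
varying vec3 vViewDir; // camera-to-surface direction, in world space

void main() {
    // Bounce the view ray off the surface and look up the environment.
    vec3 r = reflect(normalize(vViewDir), normalize(vWorldNormal));
    gl_FragColor = textureCube(envMap, r);
}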

 

I was really hoping for a way to implement stemkoski's example (http://stemkoski.github.io/Three.js/Camera-Texture.html) where I could somehow render a camera's viewpoint to a texture that could then be fed into a shader and blended in... but I'm not 100% sure how to do that.

 

Here's my current water shader:

http://langenium.com/play

 

Code:

https://github.com/paulbrzeski/Langenium/blob/master/public/game/scripts/effects/water.js

https://github.com/paulbrzeski/Langenium/blob/master/views/game/shaders/water.fragmentShader

https://github.com/paulbrzeski/Langenium/blob/master/views/game/shaders/water.vertexShader

 

I'm using a pre-generated noise map and layering it in different ways. I based this on stemkoski's shader for lava (http://stemkoski.github.io/Three.js/Shader-Fireball.html).



Nice water! That's exactly what I was talking about with the animated texture - are you applying it as a texture or as a normal/bump map? And that Camera -> Texture example you posted from stemkoski is the only way I can see of doing a planar reflection in three.js. Use the render-to-texture code from that example, but instead of having the camera at the cube's position, put it at the water's position looking at whatever the main camera is looking at.
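In rough code, something like this (waterHeight and lookTarget are assumed names; current Three.js API - older versions passed the target to render() directly):

var reflectionTarget = new THREE.WebGLRenderTarget(512, 512);
var reflectionCamera = camera.clone();

function renderReflection() {
    // Mirror the main camera about the water plane y = waterHeight.
    reflectionCamera.position.copy(camera.position);
    reflectionCamera.position.y = 2.0 * waterHeight - camera.position.y;
    reflectionCamera.lookAt(lookTarget); // whatever the main camera looks at

    renderer.setRenderTarget(reflectionTarget);
    renderer.render(scene, reflectionCamera);
    renderer.setRenderTarget(null);
    // reflectionTarget.texture can now be fed to the water shader as a uniform
}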



 

Both :)

 

There are two textures being used to distort the water, and a third to distort its height. Strangely, the frame rate drops more dramatically than I would've expected when I increase the resolution of the height map.



Are you using a bump map or a normal map? Bump maps are about twice as slow as normal maps.

Although it's expected that the frame rate would drop, considering 85% of your scene is water.



 

To be honest - I'm not 100% sure. Due to the performance issues though, I've opted to switch off vertex transforms in the water for now.

 

I'm working on the reflection and it seems to be getting there. 

 

Current changes:

- Ocean camera that tracks the player 

- Temporarily swapped out the water material for virtual THREE.WebGLRenderTarget objects to be filled by camera output

- Secondary scene with orthographic camera

 

As per an example about camera textures from stemkoski's site, I set up the second scene so that I could accurately reflect the current scene. There are now 3 render passes, though, which I'm not too happy about. I'm hoping to maybe reduce it to 2 and forget about the 2nd scene (just do the reflection in the shader, somehow). I think I also need to fix the ocean camera's rotation/position, as regardless of whether I use the 1st or 2nd render target, the reflection turns in the wrong direction (the 1st is a direct feed from the ocean camera, the 2nd is from the secondary scene that *should* deliver a reflection).

 

I haven't pushed my code out as it's definitely a very rough WIP, but here's a screeny (that ship on the water is a reflection - the scale is completely screwed, though...)

 

1172473_546874758711024_820040473_o.png


  • 4 weeks later...

Hello all,

 

There has been a lot of work in Three.js with mirror shaders, water effects, etc. in the past few weeks, in particular with the release of v61. Check out https://github.com/mrdoob/three.js/issues/3856 and http://threejs.org/examples/#webgl_mirror for more information and demos.



 

That is fantastic news!

 

Also, thanks heaps for this (http://stemkoski.github.io/Three.js/FlatMirror-Water.html) and all your other work. I constantly refer back to your examples - especially because they're frequently updated alongside THREE.JS itself.

 

It's resources like your site that make it possible for web devs like me to break into the games space :)


  • 3 weeks later...
  • 1 month later...

Below is an ocean scene I created using oTakhi Platform (which in turn is based on Three.js).

 

(You can visit the actual scene by clicking on the image.)

 

ocean.png

 

The technique is called Sum of Sines and it is all done by the shader.

 

Behind the scenes, it uses multiple sine waves travelling in different directions, with different amplitudes and wavelengths, to displace a plane's surface.

The surface is then modulated by a normal map.
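A trimmed vertex-shader sketch of the idea (the wave constants are arbitrary; Three.js declares position and the matrices for you):

uniform float time;

// One directional sine wave: amplitude, wavelength and speed per wave.
float wave(vec2 p, vec2 dir, float amp, float wavelength, float speed) {
    float k = 2.0 * 3.14159265 / wavelength;
    return amp * sin(dot(dir, p) * k + time * speed * k);
}

void main() {
    vec3 pos = position;
    // Sum several waves with different directions, amplitudes and wavelengths.
    pos.z += wave(position.xy, normalize(vec2( 1.0,  0.3)), 0.4, 8.0, 1.2)
           + wave(position.xy, normalize(vec2(-0.7,  1.0)), 0.2, 3.5, 2.0)
           + wave(position.xy, normalize(vec2( 0.2, -1.0)), 0.1, 1.5, 3.1);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}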

 

You can study the theory here: Effective Water Simulation from Physical Models

And the Cg code can be found here: nVidia Shader Library

 

It calculates reflection from an envMap (cubemap), and refraction by blending deep and shallow water colors.

To reflect and refract in real time, you can add a cubeCamera to dynamically generate the cubemap and send it to the shader each render loop.
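A minimal sketch of that against the current Three.js API (oceanMesh and oceanMaterial are assumed names; older versions constructed CubeCamera with a resolution instead of a render target):

var cubeTarget = new THREE.WebGLCubeRenderTarget(256);
var cubeCamera = new THREE.CubeCamera(0.1, 1000, cubeTarget);
scene.add(cubeCamera);

function animate() {
    requestAnimationFrame(animate);
    oceanMesh.visible = false;          // don't reflect the water into itself
    cubeCamera.position.copy(oceanMesh.position);
    cubeCamera.update(renderer, scene); // renders the six cube faces
    oceanMesh.visible = true;
    oceanMaterial.uniforms.envMap.value = cubeTarget.texture;
    renderer.render(scene, camera);
}
requestAnimationFrame(animate);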

 

Steve


  • 1 year later...

Sorry for the bump, but as announced in the project releases forum, I recently built a tool to help author GLSL / Three.js shaders. It's called ShaderFrog: http://shaderfrog.com . It can also import from ShaderToy and GLSL Sandbox, and then export to Three.js.

 

The tool lets you compose shaders without writing any code. You can see what that means in the example on the homepage, or dive right in by playing around with making composed shaders http://shaderfrog.com/app/editor/new .

