damian2taylor — Posted September 6, 2017

Hello,

I'm trying to write my own SSAO shader in forward rendering (not as a post-process) with GLSL. I'm running into some issues, and I really can't figure out what's wrong with my code.

It is implemented as a BABYLON.ShaderMaterial and set in a RenderTargetTexture, and it is mainly inspired by this well-known SSAO tutorial: http://john-chapman-graphics.blogspot.fr/2013/01/ssao-tutorial.html

For performance reasons, I have to do all the computation without projecting and unprojecting in screen space; I'd rather use the view-ray method described in the tutorial above.

- First, I compute the positions of the four corners of the camera's far plane in my JS code. They can be treated as constants, since they are computed in view space:

// Computing the 4 far-plane corners manually, in view space
var tan = Math.tan;
var atan = Math.atan;
var ratio = SSAOSize.x / SSAOSize.y;
var far = scene.activeCamera.maxZ;
var fovy = scene.activeCamera.fov;
var fovx = 2 * atan(tan(fovy / 2) * ratio);
var xFarPlane = far * tan(fovx / 2);
var yFarPlane = far * tan(fovy / 2);

var topLeft     = new BABYLON.Vector3(-xFarPlane,  yFarPlane, far);
var topRight    = new BABYLON.Vector3( xFarPlane,  yFarPlane, far);
var bottomRight = new BABYLON.Vector3( xFarPlane, -yFarPlane, far);
var bottomLeft  = new BABYLON.Vector3(-xFarPlane, -yFarPlane, far);

var farCornersVec = [topLeft, topRight, bottomRight, bottomLeft];
var farCorners = [];

// Serialize the vector coordinates into a flat array for the vertex shader
for (var i = 0; i < 4; i++) {
    var vecTemp = farCornersVec[i];
    farCorners.push(vecTemp.x, vecTemp.y, vecTemp.z);
}

- These corner positions are sent to the vertex shader; that is why the vector coordinates are serialized into the flat farCorners array.
- In my vertex shader, the signs of position.x and position.y tell the shader which corner to use for each vertex (see the sketch after this list).
- These corners are then interpolated in my fragment shader to compute a view ray, i.e. a vector from the camera to the far plane (its z component is therefore equal to the far-plane distance from the camera).

I get my depth buffer as a BABYLON.RenderTargetTexture with the DepthRenderer.getDepthMap() method. The kernel samples are distributed in a hemisphere with random floats in [0,1], most of them placed close to the origin using a linear interpolation.

As I don't have a normal texture, I compute the normals from the current depth buffer value with getNormalFromDepthValue() (source: http://theorangeduck.com/page/pure-depth-ssao). Finally, my getDepth() function gives me the depth value at the current UV as a 32-bit float.
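For reference, here is a minimal sketch of what such a corner-selecting vertex shader could look like. It is an illustration of the description above, not the poster's actual code: the farCorners uniform name, the corner ordering, and the full-screen-quad setup are assumptions.

// Hypothetical vertex shader for a full-screen quad (illustration only).
// Assumes `position` is in [-1,1] and `farCorners` holds the 4 serialized
// view-space corners in the order top-left, top-right, bottom-right, bottom-left.
attribute vec2 position;

uniform vec3 farCorners[4];

varying vec2 vUV;
varying vec3 vCornerPositionVS;

void main() {
    vUV = position * 0.5 + 0.5;

    // Pick the matching far-plane corner from the signs of position.x/y
    int index;
    if (position.x < 0.0 && position.y > 0.0) { index = 0; }       // top-left
    else if (position.x > 0.0 && position.y > 0.0) { index = 1; }  // top-right
    else if (position.x > 0.0 && position.y < 0.0) { index = 2; }  // bottom-right
    else { index = 3; }                                            // bottom-left

    vCornerPositionVS = farCorners[index];
    gl_Position = vec4(position, 0.0, 1.0);
}

The rasterizer then interpolates vCornerPositionVS across the quad, which is what produces the per-fragment view ray used below.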
Here is my fragment shader code (getDepth() and getNormalFromDepthValue() are defined as described above; NB_SAMPLES is injected as a #define):

uniform mat4 projection;       // Projection matrix
uniform float radius;          // Scaling factor for sample positions, = 1.7 by default
uniform float depthBias;       // 1e-5
uniform float power;           // 1.0 by default
uniform float far;             // Far-plane distance to camera
uniform vec2 noiseScale;       // (SSAOSize.x / noiseSize, SSAOSize.y / noiseSize), with noiseSize = 4
uniform sampler2D depthBufferTexture;   // Depth map from DepthRenderer.getDepthMap()
uniform sampler2D randomSampler;        // Random-vector texture
uniform vec3 kernelSamples[NB_SAMPLES]; // Hemisphere kernel samples

varying vec2 vUV;
varying vec3 vCornerPositionVS; // Interpolated position computed from the 4 far corners

void main() {
    // Get linear depth in [0,1] with texture2D(depthBufferTexture, vUV)
    float fragDepth = getDepth(depthBufferTexture, vUV);
    float occlusion = 0.0;

    if (fragDepth < 1.0) {
        // Retrieve the fragment's view-space normal
        vec3 normal = getNormalFromDepthValue(fragDepth); // in [-1,1]

        // Random rotation: rvec.xyz are the components of the generated random vector
        vec3 rvec = texture2D(randomSampler, vUV * noiseScale).rgb * 2.0 - 1.0; // [-1,1]
        rvec.z = 0.0; // Random rotation around the Z axis

        // View ray, from camera to far plane, scaled by 1/far so that viewRayVS.z == 1.0
        vec3 viewRayVS = vCornerPositionVS / far;

        // Current fragment's view-space position
        vec3 fragPositionVS = viewRayVS * fragDepth;

        // Build the TBN matrix
        vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
        vec3 bitangent = cross(normal, tangent);
        mat3 tbn = mat3(tangent, bitangent, normal);

        for (int i = 0; i < NB_SAMPLES; i++) {
            // Take the kernel sample from tangent space to view space
            vec3 samplePosition = tbn * kernelSamples[i];

            // Add the view-space kernel offset to the fragment's view-space position
            samplePosition = samplePosition * radius + fragPositionVS;

            // Project the sample position from view space to screen space:
            vec4 offset = vec4(samplePosition, 1.0);
            offset = projection * offset;
            offset.xy /= offset.w;             // Perspective division -> [-1,1]
            offset.xy = offset.xy * 0.5 + 0.5; // [-1,1] -> [0,1]

            // Get the depth at the sample's screen position:
            float sampleDepth = getDepth(depthBufferTexture, offset.xy);

            float rangeCheck = abs(fragDepth - sampleDepth) < radius ? 1.0 : 0.0; // Reminder: fragDepth == fragPositionVS.z

            // Range check and accumulate if the sample contributes to occlusion:
            occlusion += (samplePosition.z - sampleDepth >= depthBias ? 1.0 : 0.0) * rangeCheck;
        }
    }

    // Inversion
    float ambientOcclusion = 1.0 - (occlusion / float(NB_SAMPLES));
    ambientOcclusion = pow(ambientOcclusion, power);
    gl_FragColor = vec4(vec3(ambientOcclusion), 1.0);
}

A horizontal and a vertical Gaussian blur pass are applied afterwards to smooth out the noise produced by the random texture. My parameters are:

NB_SAMPLES = 16
radius = 1.7
depthBias = 1e-5
power = 1.0

The result has artifacts on its edges, and the close-range shadows are not very strong... Would anyone see something wrong or weird in my code?

Thanks a lot!
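For completeness, here is a minimal sketch of one pass of such a separable Gaussian blur. This is not the poster's actual blur shader: the uniform names (textureSampler, direction) and the 5-tap binomial kernel are assumptions; the second pass would simply be run with the perpendicular direction.

// One direction of a separable Gaussian blur (illustration only).
// direction = (1.0/width, 0.0) for the horizontal pass,
//             (0.0, 1.0/height) for the vertical pass.
uniform sampler2D textureSampler; // SSAO result from the previous pass
uniform vec2 direction;

varying vec2 vUV;

void main() {
    // 5-tap binomial weights (1,4,6,4,1)/16, summing to 1
    float weights[3];
    weights[0] = 0.375;  // center tap
    weights[1] = 0.25;
    weights[2] = 0.0625;

    vec3 result = texture2D(textureSampler, vUV).rgb * weights[0];
    for (int i = 1; i < 3; i++) {
        vec2 offset = direction * float(i);
        result += texture2D(textureSampler, vUV + offset).rgb * weights[i];
        result += texture2D(textureSampler, vUV - offset).rgb * weights[i];
    }
    gl_FragColor = vec4(result, 1.0);
}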
damian2taylor (Author) — Posted September 7, 2017

A question linked to this topic: I've looked through both the documentation and the source code of BJS, but I can't figure it out at all; maybe someone can help me with this. Since I get my depth buffer as a BABYLON.RenderTargetTexture with the DepthRenderer.getDepthMap() method, is the depth stored in it linear?

Thanks!
GameMonetize — Posted September 7, 2017

Hello! Here is how we generate it: https://github.com/BabylonJS/Babylon.js/blob/master/src/Shaders/depth.vertex.fx#L32

By the way, perhaps you could provide a repro in the PG so we can experiment along with you.
damian2taylor (Author) — Posted September 15, 2017

Hello,

Well, I understand that this depth is non-linear (so that it is more precise at near distances, for optimization reasons). My question is now: how do I get a linear depth in the range [0.0, 1.0]?

Thanks!!

PS: I read that post, but I didn't get it all; it is quite complex and reads more like a debate between experienced BJS developers. Plus, it's kinda old.
GameMonetize — Posted September 15, 2017

To get the depth as linear, I would recommend computing it in camera space and just dividing it by (maxZ - minZ).
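In shader terms, that suggestion could look like the following minimal vertex-shader sketch. This is an interpretation of the advice, not Babylon's actual depth shader: the uniform names (worldView, minZ, maxZ) are assumptions, and subtracting minZ (so the near plane maps exactly to 0) is an added detail beyond the literal advice.

// Illustration only: compute a linear depth in [0,1] from the
// camera-space (view-space) position of the vertex.
uniform mat4 worldView;   // world * view matrix (name assumed)
uniform mat4 projection;
uniform float minZ;       // camera near plane
uniform float maxZ;       // camera far plane

attribute vec3 position;

varying float vLinearDepth;

void main() {
    vec4 positionVS = worldView * vec4(position, 1.0);

    // Camera-space depth remapped linearly to [0,1]
    vLinearDepth = (positionVS.z - minZ) / (maxZ - minZ);

    gl_Position = projection * positionVS;
}

The fragment shader can then write vLinearDepth directly into the depth texture, and the SSAO pass can consume it without any unprojection.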