Showing results for tags 'rendertargettexture'.



Found 7 results

  1. Hello! When cloning a material (say, a BABYLON.StandardMaterial), everything is fine except for renderTargetTexture. PG: https://playground.babylonjs.com/#BLG0FL
     The material.clone() function calls renderTargetTexture.clone(), but renderTargetTexture.clone() creates a new, empty RenderTargetTexture with the same properties. By the way, I'm sure that adding scene.customRenderTargetTextures.push() at the end of the clone function would resolve this issue, but I don't think people want to create a new RTT each time they clone a material (it would kill the app, especially on iOS devices). I'm not sure which behaviour to adopt here; any ideas? Thanks!
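     In case it helps frame the trade-off, here is a plain-JavaScript sketch of the "share instead of re-create" option. The objects are stand-ins for the Babylon types, and cloneMaterialSharingRTT is a hypothetical helper, not a Babylon API:

     ```javascript
     // Plain-object sketch (no real Babylon types): when cloning a material,
     // share the existing renderTargetTexture by reference instead of
     // allocating a fresh, empty RTT for every clone.
     function cloneMaterialSharingRTT(material) {
       const clone = { ...material };            // stands in for material.clone()
       // Share the RTT instead of re-creating it (avoids one GPU target per clone):
       clone.renderTargetTexture = material.renderTargetTexture;
       return clone;
     }
     ```

     The design question from the post remains: sharing avoids the per-clone allocation cost, at the price of every clone rendering from the same target.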
  2. Hi everybody! First of all, I'm not sure whether I should begin a new thread or bump these ones:
     http://www.html5gamedevs.com/topic/19285-create-screenshot-with-post-processing/
     http://www.html5gamedevs.com/topic/17404-babylon-screenshot/
     With the new WebGL 2 support in BabylonJS, I got interested in multisample render targets. Until now, I used my own version of the createScreenshotUsingRenderTarget() function, which performs MSAA manually (it pixel-perfect downscales a renderTargetTexture 4x bigger than the requested size). Now I would like to rewrite it using the latest available features.
     What should I do to get an antialiased screenshot using a multisample render target? Just increasing the samples property of the RenderTargetTexture is not enough; I always get the following error:
     [.Offscreen-For-WebGL-098CA170]GL ERROR :GL_INVALID_OPERATION : glReadPixels
     which obviously comes from this line:
     var data = engine.readPixels(0, 0, width, height);
     https://github.com/BabylonJS/Babylon.js/blob/master/src/Tools/babylon.tools.ts#L627
     Here is a PG where I copy/pasted a lighter version of the createScreenshotUsingRenderTarget() function; the only real difference is the sampleCount parameter, which is used to increase the samples property: http://www.babylonjs-playground.com/#SDTEU#29
     Just press the 1, 2, 3 or 4 key to call the createScreenshot() function with 1, 2, 3 or 4 samples. Thanks for your help!
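     For reference, the manual MSAA described in the post (render at a larger size, then pixel-perfect downscale) can be sketched in plain JavaScript. downscaleRGBA is a hypothetical helper operating on a readPixels-style flat RGBA buffer, not part of Babylon:

     ```javascript
     // Box-filter downscale of a flat RGBA buffer (4 bytes per pixel), the way
     // a manual supersampling screenshot would do it: render at factor x the
     // requested size, then average each factor x factor block of pixels.
     function downscaleRGBA(src, srcW, srcH, factor) {
       const dstW = srcW / factor, dstH = srcH / factor;
       const dst = new Uint8Array(dstW * dstH * 4);
       for (let y = 0; y < dstH; y++) {
         for (let x = 0; x < dstW; x++) {
           for (let c = 0; c < 4; c++) {
             let sum = 0;
             // Average the factor x factor block of source samples for channel c.
             for (let sy = 0; sy < factor; sy++) {
               for (let sx = 0; sx < factor; sx++) {
                 sum += src[((y * factor + sy) * srcW + (x * factor + sx)) * 4 + c];
               }
             }
             dst[(y * dstW + x) * 4 + c] = Math.round(sum / (factor * factor));
           }
         }
       }
       return dst;
     }
     ```

     This is only the downscale step; the rendering itself (and whether WebGL 2's samples property can replace it) is exactly what the post is asking about.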
  3. Hi, I was experimenting with RenderTargetTexture and have some problems. Here is the playground: http://www.babylonjs-playground.com/#1OJL6I#7
     The scene is rendered to a texture using the camera txcam, and then the same objects are rendered to the main screen buffer. Here are the problems:
     1) Aspect ratio and camera for RenderTargetTexture. When you drag the splitting line (the one between the editor and the scene) or just disable the editor using the "-Editor" button, the image on the texture changes its aspect ratio, which is not good in my case. So I passed false for the doNotChangeAspectRatio parameter in the RenderTargetTexture constructor (true actually does nothing there; line 38 in the playground): http://www.babylonjs-playground.com/#1OJL6I#8
     Now the texture indeed does not change its aspect ratio, but it completely ignores the camera I chose for it (txcam) and uses the main scene camera instead.
     2) When you go back to the first playground (http://www.babylonjs-playground.com/#1OJL6I#7) and move the camera around with the mouse, the specular highlight on the rendered texture moves, but it should not move at all. That black ball is actually a light source (it appears in the main render as well); its position is not moving, so the specular highlight should not move either.
     I would appreciate it if someone could help. Maybe these are bugs, or am I doing something the wrong way? Regards, ceedee
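     In case it is useful for point 1): BABYLON.RenderTargetTexture exposes an activeCamera property that pins the texture to a specific camera, independently of the camera driving the main render (whether it interacts well with doNotChangeAspectRatio here is untested). A plain-object sketch of the idea, with stand-ins for the Babylon types:

     ```javascript
     // Stand-in objects, not real Babylon classes: the point is only that an
     // RTT with activeCamera set is rendered with that camera, and falls back
     // to the scene's active camera otherwise.
     const txcam = { name: "txcam" };
     const mainCamera = { name: "main" };
     const scene = { activeCamera: mainCamera };
     const rtt = { activeCamera: null };

     rtt.activeCamera = txcam; // pin the texture to its own camera
     // What a renderer would pick for the RTT pass:
     const cameraForRTT = rtt.activeCamera || scene.activeCamera;
     ```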
  4. Hi, I first created this topic in the questions section, but it now looks like a bug, so here is the original topic:
  5. Hi there, I have a minimap implemented by having a secondary camera in a RenderTargetTexture applied to a plane. In the 2.6 preview version, since a change committed on Dec 21st 2016, something has been broken: when I dispose an object in my scene, the entire plane which holds the minimap (the RenderTargetTexture of a camera) disappears too. This is obviously not the expected behaviour. I traced the problematic commit:
     commit 49e419016287753a9720a46114bbc605df69db79
     Merge: 55f1083 d0fbbc9
     Author: David Catuhe <david.catuhe@live.fr>
     Date: Wed Dec 21 12:23:33 2016 -0800
     Merge pull request #1622 from haxiomic/UseOpenGLProjectionMatricies
     Use OpenGL projection matricies over Direct3D for better depth precision
     To reproduce the problem, please download the attached zip and run index.html. The code has a setTimeout that disposes the sphere; you will see that the minimap disappears too. Then, when you modify index.html to reference good-babylon.js instead of bad-babylon.js, you will see that when the sphere disappears, the minimap does not. bad-babylon.js is the current preview version built from master, whereas good-babylon.js is from a version before the problematic commit.
     I hope someone knowledgeable about the changed area of code can fix the issue for us. bug.zip
  6. Hi everybody! This week I ran into some strange behaviour. I usually spend time looking for a solution on my own, and I did so again, but today I have the unpleasant feeling of being powerless.
     I'm trying to precompute a kind of PCSS map (soft shadows) to get nice-looking shadows in static scenes. For this purpose, I use renderTargetTextures with refreshRate = RENDER_ONCE, and I use three shaders, called in this order:
     1. The shadowGenerator's shader, which gives me the shadowMap of the blockers.
     2. The PCSS shader, which uses the shadowMap and gives me a PCSSMap.
     3. The material's shader, which simply displays the PCSSMap in real time.
     After creating each renderTargetTexture, I call scene.customTargets() to order the calls. And this works great!
     Now I would like to correct small artifacts, and for that I need to repeat steps 1 & 2 for each blocker separately. And here comes the drama. Let's take a look at the new call order:
     1. ShadowGenerator creates the blockerMap with the first blocker.
     2. PCSSGenerator creates the PCSSMap for the first blocker.
     1bis. ShadowGenerator creates the blockerMap with the second blocker.
     2bis. PCSSGenerator creates the PCSSMap for the second blocker AND mixes it with the last PCSSMap (the first one).
     1ter. ShadowGenerator creates the blockerMap with the third blocker.
     2ter. PCSSGenerator creates the PCSSMap for the third blocker AND mixes it with the last PCSSMap (the second one).
     3. Display the result.
     My issue is: I can't grab the last PCSSMap. After a few tests, I narrowed down when the issue appears and when it doesn't. Here is a big simplification of my PCSS fragment shader (it only outputs one of the two textures it takes as uniforms):
     uniform samplerCube blockerMap;
     uniform samplerCube previousPCSSMap;

     // This function samples the blockerMap. We don't care about the result.
     float sampleFunction() {
         for (int i = 0; i < POISSON_COUNT; i++) {
             vec4 s = textureCube(blockerMap, direction);
         }
         return 1.0;
     }

     void main(void) {
         // To fill
     }
     And here are 4 use cases:
     1. It returns the blockerMap, everything is OK.
     void main(void) {
         //sampleFunction();
         gl_FragColor = textureCube(blockerMap, direction);
         //gl_FragColor = textureCube(previousPCSSMap, direction);
     }
     2. It returns the previous PCSSMap, everything is OK.
     void main(void) {
         //sampleFunction();
         //gl_FragColor = textureCube(blockerMap, direction);
         gl_FragColor = textureCube(previousPCSSMap, direction);
     }
     3. It returns the blockerMap, everything is OK.
     void main(void) {
         sampleFunction();
         gl_FragColor = textureCube(blockerMap, direction);
         //gl_FragColor = textureCube(previousPCSSMap, direction);
     }
     4. It returns the blockerMap instead of the previous PCSSMap. What's wrong?
     void main(void) {
         sampleFunction();
         //gl_FragColor = textureCube(blockerMap, direction);
         gl_FragColor = textureCube(previousPCSSMap, direction);
     }
     As you can see, sampleFunction() works with the blockerMap and has absolutely no contact with previousPCSSMap. However, previousPCSSMap seems to be replaced by the blockerMap, and I have absolutely no idea how that's possible. As it's nonsense, I dare come here begging for your help...
     Some more info:
     - I use the shadowGenerator from Babylon.
     - I use my own PCSSGenerator, but it is a carbon copy of shadowGenerator; the unique difference is the shader which is called.
     - The last shader (the material's one) only displays the result; the issue should not come from there.
     - I verified 1000 times: I don't send blockerMap into previousPCSSMap in my code. Maybe it happens in a dark corner of the library, but I don't think so.
     - I systematically empty my cache between each shader modification.
     - Of course, my PCSS shader contains a lot of calculations and uniforms I didn't show here, but I commented out a lot of my code to obtain something really close to the use case above.
     I'm working on isolating the issue in a new project. Thanks! PeapBoy
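     For what it's worth, the pass ordering described in the post (each PCSS pass mixing with the previous pass's output) is the classic ping-pong accumulation pattern: two targets alternate roles so a pass never reads the texture it is writing. A minimal plain-JavaScript sketch, with flat arrays standing in for the cube maps and a made-up max-mix rule standing in for the real PCSS shader:

     ```javascript
     // Ping-pong accumulation: `prev` and `curr` swap roles each pass, so the
     // "previous PCSSMap" read by a pass is always a different buffer from the
     // one being written.
     function accumulatePasses(blockerMaps, renderPass) {
       let prev = new Float32Array(blockerMaps[0].length); // starts empty
       let curr = new Float32Array(blockerMaps[0].length);
       for (const blockerMap of blockerMaps) {
         renderPass(blockerMap, prev, curr); // curr = f(blockerMap) mixed with prev
         [prev, curr] = [curr, prev];        // swap: last result becomes next input
       }
       return prev; // after the final swap, prev holds the accumulated map
     }

     // Illustrative "pass" (an assumption, not the poster's shader): shade from
     // the blocker map, then mix with the previous accumulation via max().
     function examplePass(blockerMap, prev, curr) {
       for (let i = 0; i < curr.length; i++) {
         curr[i] = Math.max(blockerMap[i], prev[i]);
       }
     }
     ```

     The practical point is the swap: if the same texture is bound both as previousPCSSMap and as the current render target, drivers are free to return garbage, which can look exactly like one uniform being "replaced" by another.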
  7. Hello! I'm working on a project in which a moving light revolves closely around an arbitrary model, revealing the surface it has lit, in a "fog of war" style (but with precise lighting). (This model is "Ice Asteroid" by jesterelly and is under a CC-By license.) (The point light is just an example; it may as well be a directional light with a shadowGenerator.)
     To do this, I must keep track of once-lit areas to display them even when they are not illuminated at the moment. As this information can only be assigned to the whole surface (the camera is mobile, and the whole surface is concerned), I thought of using a texture to store it, "inside" the model material. I was kindly pointed toward RenderTargetTextures, which seem the way to go, but I struggle with implementing my plan (and I don't know if it is correct anymore):
     1) Use a new material = standard material + a light map texture, which stores all pixels of the diffuse texture ever lit (0/1 states), and display (or not) a fragment accordingly. Seems OK.
     2) The "light map" is an RTT:
     - onBeforeRender replaces the model material with a "detect light" custom material: its fragment shader output only conveys light/shadow information, and it is displayed on a plane to form the model UV map (but only with lit/unlit zones);
     - the rendering takes place, creating the "instant" light map;
     - onAfterRender puts back the almost-standard model material and makes an "addition" of the newly created light map and the previous one (final_uvpixel_value = uvpixel_value_1 + uvpixel_value_2), passing it as a texture to the model material.
     I scratch my head over these 2 points. In particular, I have no idea how to access the "light state" of the model surface presented as a flat texture. I tried to "unwrap" my model by replacing
     gl_Position = viewProjection * finalWorld * vec4(position, 1.0);
     with
     gl_Position = vec4(vDiffuseUV, 0., 1.0);
     in the standard material vertex shader, but so far I only got beautiful glitches (always on UV-mapped models).
     I can't think of another way to go for the moment. Any idea about this "real-time surface illumination texture" is welcome. (Thank you very much for reading this!)
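     The accumulation step in 2) can at least be sketched independently of WebGL: once a texel has been lit it stays lit, which amounts to a saturating add over the UV-space light map. Flat arrays stand in for the textures, and accumulateLightMap is a hypothetical helper, not a Babylon API:

     ```javascript
     // Saturating accumulation of an "instant" light map into the persistent
     // one: final_uvpixel = clamp(uvpixel_accumulated + uvpixel_instant, 0, 1).
     // Once a texel reaches 1 (lit), no later frame can darken it again.
     function accumulateLightMap(accumulated, instant) {
       for (let i = 0; i < accumulated.length; i++) {
         accumulated[i] = Math.min(1, accumulated[i] + instant[i]);
       }
       return accumulated;
     }
     ```

     On the GPU the same idea is usually a full-screen pass in UV space writing max(previous, instant) into a second render target, with the two targets swapped each frame so the pass never samples the texture it is writing.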