Everything posted by thrice

  1. You could also maybe create a ribbon, update it as the sphere moves, and use that to track the sphere's path, if that's what you are trying to do? (I'm assuming you're going for a 2D fog-of-war effect, i.e. Warcraft/Diablo?) idk
  2. Eh, not like that at least; you definitely can't store variables within the fragment shader itself. Part of your question sounds like it could maybe be solved with a vertex shader, which is mostly outside my knowledge domain, but maybe not. However, it also sounds like there might be a better way of doing what you are trying to achieve. You could probably use a custom render target to essentially bake the image as the sphere moves through it, or something to that effect. Then you'd just pass the baked image as a sampler to the main shader and use that to diff it. I don't have much experience with render targets, but that is essentially how I would approach the problem: bake the image as some sort of 2D map of the fog of war as it's lifted, so you only need to update it as the circle is moving. (You'd probably simulate that in another shader: pass in the heightmap texture and draw a circle on it, at the same scale as the main sphere/ground ratio. Then you'd move the sphere by just passing an xy offset for the circle as the sphere moves; the circle gets drawn on top of the heightmap, and that composite is the thing you'd be baking.)
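The bake-as-you-go idea above can be sketched without Babylon as a persistent 2D reveal map: a grid you stamp a circle into each time the sphere moves, which is exactly the role the render target texture would play. This is a minimal illustrative sketch in plain JavaScript (the grid, names, and `reveal` helper are all assumptions, not Babylon API):

```javascript
// A persistent "baked" fog-of-war map: 0 = hidden, 1 = revealed.
// In the real approach this grid would be a render target texture that a
// small shader stamps a circle into; here it's a plain array for clarity.
function createFogMap(width, height) {
  return { width, height, data: new Float32Array(width * height) };
}

// Stamp a revealed circle at (cx, cy) with the given radius. Called each
// time the sphere moves; earlier stamps are kept, which is the "baking".
function reveal(map, cx, cy, radius) {
  for (let y = 0; y < map.height; y++) {
    for (let x = 0; x < map.width; x++) {
      const dx = x - cx;
      const dy = y - cy;
      if (dx * dx + dy * dy <= radius * radius) {
        map.data[y * map.width + x] = 1;
      }
    }
  }
}

const map = createFogMap(16, 16);
reveal(map, 4, 4, 2);   // sphere starts here...
reveal(map, 10, 4, 2);  // ...then moves; both areas stay revealed
console.log(map.data[4 * 16 + 4]);   // 1 (revealed)
console.log(map.data[4 * 16 + 10]);  // 1 (still revealed after moving)
console.log(map.data[15 * 16 + 15]); // 0 (never visited)
```

The main shader would then sample this baked map (as a texture) to decide how dark each fragment should be.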
  3. Ya so, you'd either have to do the calculations on the JavaScript side and pass the result to the shader as a uniform, or something else, but you can't store computed variables in the shader for the next pass, if that's what you are asking. Are the sphere and the background two different textures you're passing into the shader?
  4. It's possible, just not directly from within the shader. Instead you pass the data to the shader on the JS side of things, i.e. assuming you have an instance of ShaderMaterial, pseudocode:

     ```javascript
     shader_material.setVector3('inputColor', new BABYLON.Vector3(1, 0, 0));
     shader_material.setFloat('time', 0.0);
     ```

     Then after the event occurs you can do something like:

     ```javascript
     let current_time = 0.0;
     const time_step = 0.01;
     scene.onBeforeRenderObservable.add(() => {
       current_time += time_step;
       shader_material.setFloat('time', current_time);
     });
     ```
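That accumulate-and-set pattern can be exercised without Babylon by mocking the two objects involved. Everything here (`MockObservable`, `MockMaterial`) is a stand-in for illustration; the real objects would be `scene.onBeforeRenderObservable` and your `ShaderMaterial`:

```javascript
// Minimal stand-ins for scene.onBeforeRenderObservable and ShaderMaterial,
// just to show the per-frame update of a 'time' uniform.
class MockObservable {
  constructor() { this.observers = []; }
  add(fn) { this.observers.push(fn); }
  notify() { this.observers.forEach((fn) => fn()); } // one "frame"
}

class MockMaterial {
  constructor() { this.floats = {}; }
  setFloat(name, value) { this.floats[name] = value; }
}

const onBeforeRender = new MockObservable();
const material = new MockMaterial();

let currentTime = 0.0;
const timeStep = 0.01;

// Register once; runs every frame. Note the accumulation: without
// `currentTime += timeStep` the uniform would stay constant.
onBeforeRender.add(() => {
  currentTime += timeStep;
  material.setFloat('time', currentTime);
});

for (let frame = 0; frame < 3; frame++) onBeforeRender.notify();
console.log(material.floats.time.toFixed(2)); // "0.03"
```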
  5. Hey Delta. I'm stuck on an older version of Babylon at the moment (3.1), so no glow layer for me. But also, I don't think the glow layer works with shader materials? (All my effects are shader materials.)
  6. Also, I tried using multiple cameras / layer masks, but the issue I'm having is that the pipeline knocks out the layer below it when it is applied. So in this PG, I am trying to make the shape mesh on top of the board have bloom applied, but not the board underneath. (Uncomment line 50 to see the issue I'm having with this strategy, but I'm hoping there may be a better way anyway?)
  7. Basically I'm looking to use the bloom post-process, but only for effect-type meshes that have baked illumination (projectiles and such), and maybe my main game board mesh. Anyone know an easy way to do this?
  8. The reason (or at least one of them) is that isVisible = false can be used to group meshes together, so that the child meshes are still active in the scene (like a TransformNode in newer Babylon versions). If you call setEnabled(false) instead, the child meshes will be removed from the rendering loop as well. I believe there are other uses I've encountered too; I just can't think of them off the top of my head.
  9. You should be able to make a custom shader material for this purpose; at least, that is what I do when I want the material to appear exactly as is. Something like:

     ```javascript
     BABYLON.Effect.ShadersStore['noLightFragmentShader'] = `
         precision highp float;

         uniform float time;
         uniform sampler2D textureSampler;
         varying vec2 vUV;

         void main() {
             vec4 texture_color = texture2D(textureSampler, vUV);
             vec4 final = texture_color;
             gl_FragColor = final;
         }`;

     BABYLON.Effect.ShadersStore['noLightVertexShader'] = `
         precision highp float;

         // Attributes
         attribute vec3 position;
         attribute vec3 normal;
         attribute vec2 uv;

         // Uniforms
         uniform mat4 worldViewProjection;

         // Varying
         varying vec2 vUV;

         void main(void) {
             gl_Position = worldViewProjection * vec4(position, 1.0);
             vUV = uv;
         }`;

     this.babylon = new BABYLON.ShaderMaterial('blah', this.scene, {
         vertex: 'noLight',
         fragment: 'noLight',
     }, {
         attributes: ['position', 'normal', 'uv'],
         uniforms: ['world', 'worldView', 'worldViewProjection', 'view', 'projection'],
     });
     ```
  10. Hmmm. I'm on OSX Sierra (10.12), Chrome v70.0.3538.110. Also seeing the issue on my MacBook Pro in Chrome, and in my actual game, which runs an older Electron/Chromium version (not sure which exactly). What are you running?
  11. Sorry, use this playground instead for the broken version. The one I posted originally was broken too, but I was doing dumb stuff to try to work around it, which makes it more confusing.
  12. Sure, though it's hard to demonstrate via screenshots due to the flickering. Here are videos of the Chrome version (broken) and the Safari version (working). Are you seeing the working version or the bugged version?
  13. Just opened the 2nd example on my iPhone X, however, and it runs flawlessly. So maybe a bug with the engine? Or something with autoplay? Though I don't see why that would be the case when the first example works in Chrome.
  14. So I am having issues with video textures and createInstance. I thought it was related to alpha blending/mode and whatnot, but I am able to get it working perfectly if I just use clone, or without createInstance. So I am wondering if it's an issue with VideoTexture, or just my createInstance implementation in my custom shader (though I use the same code elsewhere for createInstance with non-video textures, and as far as I can tell it works correctly). Anyway, here is a working playground version of what should be happening, using clone. Here is the fubar version. It really is quite fubar, with all sorts of different glitches: sometimes there is a big blue plane blocking one of the textures, sometimes the videos run at superspeed and don't seem to recognize where they are supposed to stop playing, sometimes one vid will play and not the other, and ONE time it worked perfectly but I couldn't reproduce it after that. So I am at a loss. Anyone have any ideas? (Also make sure to hit Run after you load each playground, and left-click to play the videos.)
  15. In case anyone else stumbles on this thread like I did yesterday, here is a working example of how you can fake this. Basically, rather than use a transparency channel, I discard black pixels and allow a discard threshold to be set, which will call discard on any pixel whose sum of r+g+b is below the threshold. I am, however, having some issues with video textures/instances, but I'm going to open a separate issue for that.
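The discard test described above is a one-liner in the fragment shader; a plain-JavaScript equivalent of the per-pixel decision (function name and values are purely illustrative) looks like:

```javascript
// Decide whether a pixel would be discarded: true when the sum of its
// RGB channels falls below the threshold, so near-black pixels drop out,
// faking a transparency channel without an actual alpha channel.
function shouldDiscard(r, g, b, discardThreshold) {
  return r + g + b < discardThreshold;
}

console.log(shouldDiscard(0.0, 0.0, 0.0, 0.1)); // true  (pure black)
console.log(shouldDiscard(0.5, 0.4, 0.3, 0.1)); // false (bright pixel)
```

In GLSL the same test would be `if (color.r + color.g + color.b < discardThreshold) discard;`.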
  16. Hey all, so I'm finally starting to promote my Babylon-based project a bit now that I have an actual site together at - The game itself isn't hosted there or anything (it will be a PC+Mac release), but I have been putting out a ton of playtest/demo sessions I've recorded on YT and posting them on the site (and will be throughout the next week). So anyway, if you're into digital CCGs, or CCGs in general, you should def check it out, as it's quite unique. I would also love any feedback, and am happy to answer any questions you might have! I should also say thank you to anyone on these forums who has helped me in any way up to this point. These forums are seriously great, and filled with helpful people. (Should my project make it into the big leagues, you can be sure I'll do a forum search and you can expect your name in the credits.)
  17. Conceptually, it's starting to make more sense after looking into the source code. I need to read more about shader attributes, as I don't think I really understood what they were doing until this point.
  18. Thanks delta -- that is mostly what I thought. I'm seeing different behavior, though: it is firing onWorldMatrixUpdated on every frame for all of its non-worldMatrixFrozen children, when none of those things should be changing. I am stuck on version 3.1.alpha5 at the moment, so if there were any bugs with any of the above that were fixed after 3.1.alpha5, that would be helpful to know, but I'll dig into the issue further.
  19. That said, I still think it would be really useful to have this built into the framework if it's possible, as a StandardInstancedMaterial or InstancedAtlasMaterial or something like that. My reasons: I tried using the sprite manager, but aside from the fact that I couldn't use it anywhere I wanted a non-billboard-mode mesh (which is probably 75% of the draw calls in my scene, i.e. cards, which need to be able to rotate), there were too many issues: different (worse) rendering quality when sprites are far from the camera, having to track parent/position manually with onWorldMatrixUpdated, translating parent size to scaling (which is a hack in itself), etc. But the main dealbreaker, and the reason I can't use it at all in most places in my scene, is the lack of rotation. If there were an InstancedAtlasMaterial or whatever, it would be hugely beneficial. Not only could my draw calls be reduced by a huge amount (of my roughly 100 draw calls on average, I estimate I could easily cut them in half, and that's without caching really large textures like my card images at all), but the biggest benefit would be having a consistent API to work with, i.e. not having to use a different mechanism entirely (SpriteManager, or a GUI texture) to optimize things. Also, almost all of the examples I've seen people post of using atlases/sprites with meshes (i.e. this library) seem fairly pointless to me for a great many use cases without something like the above, unless I am missing something. At least from my limited experience, draw calls in general seem like the #1 key performance indicator for optimizing speed, and when using a library like the one above, there is a good chance you end up with the same number of draw calls, since you have to create a new mesh (or clone) for each use of the texture.
(In almost all of my use cases, creating an instanced mesh with a new texture instead of an atlas saves me way more draw calls than using the atlas would.) Granted, that's not taking into consideration the complexity of the mesh being rendered, but in my scene 95% of my meshes are planes, and I haven't seen nearly the performance boost from clone + atlas compared to createInstance + non-atlas texture. The point is, having support for this would allow for the best of both worlds: total draw call annihilation.
  20. Hi Delta -- it makes sense in theory, however I am unsure how I would go about implementing/tracking that. If I'm understanding correctly, you are saying to use one shader material shared by many instances. In that case, how would I let instance 1 know to use uOffset X and instance 2 use uOffset Y, for example? If I'm using a shader material and I call, say, material.setVector2('uvOffset', new BABYLON.Vector2(1, 1)), then I'm still going to end up modifying the shader material on all instances. Unless I am missing something? (Also, I figured out the setVerticesData issue: I didn't copy over a false param from the texture constructor, as in the library, to invert it, because I had no idea what it was doing. Would love more named-param constructors, i.e. MeshBuilder style, in the future.)
  21. I'm using it in two Angular projects, one Angular 5, one Angular 1, so it's definitely not an Angular problem. It could, however, be related to canvas size as stated above. One other thing to check is scene.getEngine().getHardwareScalingLevel(): 1 is the default, anything < 1 = higher quality, anything > 1 = lower quality. I would also take the post-process out of the equation for now.
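The relationship between the hardware scaling level and the render resolution can be made concrete. This helper is illustrative (it mirrors the idea that the engine renders at the canvas size divided by the scaling level, which is why > 1 means lower quality):

```javascript
// Effective render resolution for a given hardware scaling level:
// level 1 = native, level > 1 = fewer pixels (lower quality, faster),
// level < 1 = more pixels (higher quality, slower).
function renderSize(canvasWidth, canvasHeight, scalingLevel) {
  return {
    width: Math.round(canvasWidth / scalingLevel),
    height: Math.round(canvasHeight / scalingLevel),
  };
}

console.log(renderSize(800, 600, 1));   // { width: 800, height: 600 }
console.log(renderSize(800, 600, 2));   // { width: 400, height: 300 }  (lower quality)
console.log(renderSize(800, 600, 0.5)); // { width: 1600, height: 1200 } (higher quality)
```

So a blurry canvas is often just a scaling level above 1 (or a CSS-stretched canvas), not a rendering bug.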
  22. The situation is this: while attempting to improve performance by doing some ugly things with sprites, which use the onAfterWorldMatrixUpdatedObservable hook, I noticed an issue in my scene. I have one main board mesh that almost everything except the player hands is parented to (not directly, but each player has a board which is parented to the main board mesh, etc.). The issue is, when I parent a mesh to the board and add an onAfterWorldMatrixUpdatedObservable hook, it gets called every single frame. Which leads me to believe that possibly every object in the main board mesh's descendant tree is also having its world matrix updated on every frame, which could hopefully account for some of the performance issues I've been having. (The same mesh, unparented, does not have onAfterWorldMatrixUpdated called each frame.) I browsed the codebase but didn't see anything obvious that could be causing this behavior, so I am wondering if anyone knows of any gotchas or non-obvious things that could be causing the world matrix update. I can likely solve it by recursively freezing the world matrix of all the children, but I would like to figure out what is going on as well.
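The recursive freeze mentioned at the end can be sketched as below. The mock node is a stand-in so the sketch is runnable here, but `freezeWorldMatrix()` and `getChildMeshes()` are the real Babylon mesh method names (treat the exact signatures as an assumption for your version):

```javascript
// Mock node mimicking the relevant surface of a Babylon mesh,
// purely for illustration.
function makeNode(name, children = []) {
  return {
    name,
    children,
    frozen: false,
    freezeWorldMatrix() { this.frozen = true; },
    getChildMeshes(directDescendantsOnly) { return this.children; },
  };
}

// Recursively freeze the world matrix of a mesh and all of its
// descendants, so the scene stops recomputing them every frame.
function freezeHierarchy(node) {
  node.freezeWorldMatrix();
  // `true` = direct descendants only, so recursion visits each node once
  for (const child of node.getChildMeshes(true)) {
    freezeHierarchy(child);
  }
}

const leaf = makeNode('card');
const playerBoard = makeNode('playerBoard', [leaf]);
const board = makeNode('board', [playerBoard]);

freezeHierarchy(board);
console.log(board.frozen, playerBoard.frozen, leaf.frozen); // true true true
```

The caveat is that frozen meshes no longer follow parent movement until you unfreeze them, so this trades correctness-on-move for per-frame cost.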
  23. Wondering if anyone has an example of using a spritesheet or texture atlas with instanced meshes, or if this could be added to the roadmap, as it seems like a pretty core/important feature? (Or is it not possible?) I've been having to do some pretty awful things in an attempt to work around the lack of support for it. Currently, none of the examples I've seen of using an atlas/sheet work with instanced meshes, since when you createInstance on a mesh it must use the same material; so if you modify the (uOffset, vOffset, uScale, vScale), you are mutating the texture itself. So I didn't think it was possible, but then I saw this post, and thought maybe it was. However, it looks like if you setVerticesData on an instanced mesh, it mutates the source; playground below for example. So is it even possible to have instanced meshes with different verticesData and still benefit from the merged draw calls, or not? ALSO: does anyone happen to know if the math changed in the setVerticesData calculations after Babylon 2.3? I've copied code from the babylon-atlas library, but it's returning the wrong frame, and it's upside down (and the frame data is in the correct format). So hopefully it's just a simple change to get it working, but math is hard.
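On the upside-down frame issue mentioned at the end: WebGL's UV origin is bottom-left, while atlas frame data is usually given with a top-left origin, so the V offset has to be flipped. A hedged sketch of the frame-to-UV conversion (the `{x, y, w, h}` pixel frame format and the function name are assumptions, not the babylon-atlas API):

```javascript
// Convert an atlas frame (pixels, top-left origin) into the uScale/vScale
// and uOffset/vOffset needed to sample that frame from the atlas texture.
// WebGL UVs have a bottom-left origin, hence the flip in vOffset; skipping
// the flip is a classic cause of upside-down / wrong frames.
function frameToUV(frame, atlasWidth, atlasHeight) {
  const uScale = frame.w / atlasWidth;
  const vScale = frame.h / atlasHeight;
  const uOffset = frame.x / atlasWidth;
  // top-left y -> bottom-left v: measure from the bottom of the atlas
  const vOffset = (atlasHeight - frame.y - frame.h) / atlasHeight;
  return { uScale, vScale, uOffset, vOffset };
}

// 256x256 atlas, 64x64 frame in the top-left corner:
console.log(frameToUV({ x: 0, y: 0, w: 64, h: 64 }, 256, 256));
// { uScale: 0.25, vScale: 0.25, uOffset: 0, vOffset: 0.75 }
```

This is also the kind of `false`/invert flag a texture constructor may apply for you, which would explain the behavior difference against the old library code.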
  24. When this was announced the other night, I tried frantically googling things, thinking the sky was falling. What I've learned from it (which may or may not be correct; correct me if I'm wrong): each browser has its own translation layer, and translates things differently on OSX. On PC, as davrious stated above, ANGLE does all the translation, down to DirectX. It looks like on OSX, ANGLE is aiming to translate over to Vulkan, which would fill the gap once OpenGL is deprecated. I haven't figured out what Mozilla or Safari are using (possibly a forked version of ANGLE for Firefox, and maybe something already translating to Metal for Safari?). Also, the thing I've still not been able to figure out concretely: WebGL on all non-desktop devices is based on OpenGL ES, which is completely separate from OpenGL. Some people are saying OpenGL ES support is NOT being deprecated and won't be affected. (So basically, even though Apple is saying they are deprecating OpenGL on iOS, it won't affect WebGL to any degree on phone/iPad.) So the good news is, as far as I can tell, the sky isn't falling, or at least not entirely.
  25. I get that we're shooting in the dark without it; that's why I've tried to be as detailed as I could with the profiling, so hopefully someone would see something that would lead me to the issue. Unfortunately, I can't produce a live repro at the moment due to the app being wrapped with Electron, and even then I still have separate Rails and Node servers that need to be running for it to work. That's a good idea on the .max file; I tried looking through the commits on GitHub, but it wasn't easy to see what all changed between versions. I'll give that a shot.