
Exca

Members
  • Content Count
    364
  • Joined
  • Last visited
  • Days Won
    12

Exca last won the day on March 30

Exca had the most liked content!

2 Followers

About Exca

  • Rank
    Advanced Member

Recent Profile Visitors

5581 profile views
  1. Calculate the angle between the mouse and the object and then set that as the object's rotation:

         const mouse = renderer.plugins.interaction.mouse.global;
         const dx = mouse.x - object.x;
         const dy = mouse.y - object.y;
         object.rotation = Math.atan2(dy, dx);

     Written from memory without testing, so not 100% sure everything is correct, but that's the basic idea. Also, depending on which way your object's "forward" direction points, you may need to add an offset to the atan2 result.
  2. Progressive web apps (PWA) are one way to create apps with web technologies without wrapping them in a native shell. If, on the other hand, wrapping is needed, then Cordova-based wrappers are still the most common option.
  3. I was thinking of doing that in 2D, but in a way that just looks like 3D. For example, if you animate 5 ropes with varying points you can get a semi-3D-looking set. Or if you create a mesh that you rotate and squash a bit and then move the vertices up/down, you can get something that looks like orthographic 3D without doing any real 3D work.
  4. Depending on the style, that could be done pretty easily either by deforming a plane mesh (on the GPU or CPU, the first option being much faster) or by using ropes and just moving the points along the x-axis (see the sketch below).
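     A minimal sketch of the rope approach, assuming a PIXI v5-style API (the texture name and all the numbers are placeholders); here the points are offset with a sine wave:

         const app = new PIXI.Application({ width: 800, height: 600 });
         document.body.appendChild(app.view);

         const texture = PIXI.Texture.from('ribbon.png'); // placeholder asset
         const segmentCount = 20;
         const segmentWidth = 800 / segmentCount;

         // Control points the rope is stretched over.
         const points = [];
         for (let i = 0; i < segmentCount; i++) {
             points.push(new PIXI.Point(i * segmentWidth, 0));
         }

         const rope = new PIXI.SimpleRope(texture, points);
         rope.y = 300;
         app.stage.addChild(rope);

         // Move the points every frame; the mesh deforms to follow them.
         let time = 0;
         app.ticker.add((delta) => {
             time += 0.05 * delta;
             for (let i = 0; i < points.length; i++) {
                 points[i].y = Math.sin(time + i * 0.5) * 20;
             }
         });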
  5. You could extract the frames from the canvas with toDataURL, or use the extract plugin to get the actual pixel data of a single frame. Then gather all of the frames and encode them into a video with some encoder; not sure if one exists for the browser, but most likely someone has made one. You can do stacked rendering in multiple different ways:
     - Have 2 containers that move at different speeds (sketched below).
     - Use overlapping canvases, each with its own renderer, and move them.
     - Give each object in the scene a depth value and move everything based on that.
     - Use render textures.
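     A minimal sketch of the two-containers variant, assuming a PIXI v5-style Application (the scroll speeds are made up):

         const app = new PIXI.Application({ width: 800, height: 600 });
         document.body.appendChild(app.view);

         const farLayer = new PIXI.Container();  // scrolls slowly
         const nearLayer = new PIXI.Container(); // scrolls fast
         app.stage.addChild(farLayer, nearLayer);

         // ...add sprites to both layers here...

         app.ticker.add((delta) => {
             farLayer.x -= 1 * delta;  // slow movement reads as far away
             nearLayer.x -= 4 * delta; // fast movement reads as close
         });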
  6. currentTime is a value that starts increasing when you start the context, which is why the timing is done with the current - start calculation. https://developer.mozilla.org/en-US/docs/Web/API/BaseAudioContext/currentTime 80 ms sounds like something is not being calculated correctly. I haven't used pixi-sound myself, so I'm not really sure what might be wrong with the progress event. If the audio is played without WebAudio (which shouldn't happen on a modern browser unless explicitly told to do so) then those delays sound possible.
  7. Usually with Web Audio you can use the AudioContext's currentTime and store the starting time when the song is started. From that you can calculate position = context.currentTime - soundStartedAt, which should have no delay at all (see the sketch below). Are you using pixi-sound? Checking that source code, the progress event looks like it should have correct timing. Can you check whether it is using WebAudio or HTMLAudio internally? The latter does not have exact timing available.
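     A minimal sketch of that bookkeeping with plain Web Audio (the play helper is a made-up example):

         const context = new AudioContext();
         let soundStartedAt = 0;

         function play(buffer) {
             const source = context.createBufferSource();
             source.buffer = buffer;
             source.connect(context.destination);
             soundStartedAt = context.currentTime; // remember when playback began
             source.start();
         }

         // Current playback position in seconds, read from the audio clock, no event delay.
         function getPosition() {
             return context.currentTime - soundStartedAt;
         }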
  8. An easy way to get better-looking downscaling is to find the point where your game starts to look bad and then, instead of scaling down the game containers/elements, keep that good-looking resolution and scale down the canvas element itself. That way the downscaling is not limited by WebGL and uses the algorithm the browser uses natively. A little bit hacky, but a well-working workaround that can be achieved with small effort (see the sketch below).
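     A minimal sketch of the idea, assuming a PIXI Application and a 1280x720 "good looking" resolution (the numbers are placeholders):

         const app = new PIXI.Application({ width: 1280, height: 720 });
         document.body.appendChild(app.view);

         function resize() {
             // The renderer keeps rendering at 1280x720; only the canvas element is scaled,
             // so the browser's native scaling does the downscale.
             const scale = Math.min(window.innerWidth / 1280, window.innerHeight / 720);
             app.view.style.width = `${1280 * scale}px`;
             app.view.style.height = `${720 * scale}px`;
         }

         window.addEventListener('resize', resize);
         resize();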
  9. You could make a filter that takes the world texture and the light mask as inputs, then draws the world only where the light has a value at that position and keeps everything else hidden. Something like this:

         vec4 world = texture2D(worldTex, uv);
         // Using only one channel for light; this could also be light color + alpha for intensity.
         float light = texture2D(lightTex, uv).a;
         gl_FragColor = mix(vec4(0.0, 0.0, 0.0, 1.0), world, light);
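     A sketch of wrapping that idea in a PIXI.Filter (v5-style API assumed; lightTexture is a placeholder for whatever render texture the light mask was drawn into, and inside a filter the filtered content arrives as uSampler):

         const fragmentSrc = `
             varying vec2 vTextureCoord;
             uniform sampler2D uSampler;  // the world, i.e. the content being filtered
             uniform sampler2D lightTex;  // the light mask

             void main(void) {
                 vec4 world = texture2D(uSampler, vTextureCoord);
                 float light = texture2D(lightTex, vTextureCoord).a;
                 gl_FragColor = mix(vec4(0.0, 0.0, 0.0, 1.0), world, light);
             }
         `;

         const lightFilter = new PIXI.Filter(undefined, fragmentSrc, { lightTex: lightTexture });
         worldContainer.filters = [lightFilter];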
  10. Do you have only a canvas on the page? Then Lighthouse won't give very much detail, as it has no components for analyzing canvases; it only covers the DOM side of things, load speeds and the like. Most likely the LCP is the canvas element and for some reason it fails to be analyzed, even though it should be handled similarly to images. Do you have an example of a page where it fails?
  11. How do you use it and what sizes are your textures? How complex is the scene you render to the RT? There are no additional hidden costs in rendering to a render texture vs. rendering to the screen, other than the additional memory usage.
  12. Just listen for the wheel event on the window/document/element (see the example below): https://developer.mozilla.org/en-US/docs/Web/API/Element/wheel_event
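     A minimal example:

         window.addEventListener('wheel', (event) => {
             // deltaY > 0 when scrolling down / away from the user.
             console.log('wheel delta:', event.deltaY);
         });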
  13. You have basically two options:
      1. Render the expensive stuff into a separate render texture (RT) and use it as you would any other sprite. Re-render the RT when things change (see the sketch below).
      2. Use two canvases and update the expensive canvas only when needed.
      [Edit] For option 1 you can use cacheAsBitmap = true to create an RT of a container and use that instead of the whole render list. Though I'd suggest a custom setup with your own RT for this, as debugging cacheAsBitmap can be a nightmare if there are errors.
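     A minimal sketch of option 1, assuming a PIXI v5-style API (buildExpensiveScene and the RT size are placeholders):

         const expensiveContainer = buildExpensiveScene(); // placeholder for the heavy content

         // Render the heavy content once into a render texture...
         const rt = PIXI.RenderTexture.create({ width: 1024, height: 1024 });
         app.renderer.render(expensiveContainer, rt);

         // ...and use the result like any other sprite.
         const cached = new PIXI.Sprite(rt);
         app.stage.addChild(cached);

         // Re-render the RT only when the expensive content actually changes.
         function refreshCache() {
             app.renderer.render(expensiveContainer, rt);
         }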
  14. I don't think the regular TilingSprite supports mirrored rendering out of the box. You could create a custom shader for it that detects whether it's on an odd or even tile and flips the uv coordinates based on that (the core of the idea is sketched below).
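     The flip itself is just a small piece of fragment-shader math. A sketch of the idea only, not TilingSprite's actual shader; uTiles (the repeat count) is an assumed uniform:

         const mirrorSnippet = `
             vec2 tiled = vTextureCoord * uTiles;
             vec2 cell  = floor(tiled);  // which tile we are in
             vec2 local = fract(tiled);  // position inside that tile
             // Flip the local coordinate on every odd tile to mirror it.
             local = mix(local, 1.0 - local, mod(cell, 2.0));
             gl_FragColor = texture2D(uSampler, local);
         `;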
  15. Check the network tab to see if you are still getting the CORS error. To fix it you would need to either run the game on the same domain as the images so that CORS rules don't apply, or have the server send a CORS header (Access-Control-Allow-Origin) that allows your domain or is just a wildcard. The renderer's null error might simply be due to the asset not being loaded because CORS blocks it.