Everything posted by themoonrat

  1. When I saw the title I thought I'd done a PR for it at some stage! It wasn't accepted into official Pixi because there were worries about the text API becoming a bit crazy. The hope for the future is that you could pass, say, a PIXI.Gradient into the fill property, and thus have no need for separate stop point and gradient type settings everywhere it's an option. But I use the contents of that PR in my games on my own Pixi fork
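For reference, the separate settings the post mentions do exist on TextStyle in Pixi v4. A minimal sketch, using a plain object with the v4 property names (the gradient type value is assumed to match PIXI.TEXT_GRADIENT.LINEAR_HORIZONTAL):

```javascript
// Gradient text fill via the v4 TextStyle options: a colour array for `fill`,
// plus the separate gradient type and stop settings referred to above.
const style = {
  fill: ['#ff0000', '#0000ff'],   // two-colour gradient
  fillGradientType: 1,            // PIXI.TEXT_GRADIENT.LINEAR_HORIZONTAL in v4
  fillGradientStops: [0.2, 0.8],  // stop position for each colour (0..1)
};
// In a real app: const text = new PIXI.Text('Hello', style);
```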
  2. Just to second using Howler... it's the best audio lib out there, IMO. Also seconding m4a and ogg: in my experience, MP3s have issues in browsers with seeking and looping cleanly. I'd also recommend having different compression settings, selected based on the device. Mono 96kbps might be good enough for mobile devices, but supply stereo 128kbps (or above) for PC players, where memory is unlikely to be an issue and better speakers are almost certainly being used.
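A sketch of that device-based selection; the file paths and quality names are hypothetical, and Howler's `src` array lets the browser pick the first format it can decode:

```javascript
// Pick an audio variant by device class, then hand the list to Howler.
function audioSources(isMobile, name) {
  const quality = isMobile ? 'mono-96' : 'stereo-128';
  return [`audio/${quality}/${name}.m4a`, `audio/${quality}/${name}.ogg`];
}
// Usage with Howler:
// const music = new Howl({ src: audioSources(isMobile, 'theme'), loop: true });
```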
  4. You shouldn't be using multiple renderers. Just use multiple containers, with the parent-most container having the mask on it, seeing through to the containers below it
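A sketch of that single-renderer setup. The helper only assumes an object with `addChild` and a `mask` property, like PIXI.Container; the Graphics call in the comment is the usual v4 way to build the mask shape:

```javascript
// One parent container carries the mask; everything added to it is seen
// "through" the mask shape.
function applyMask(container, maskShape) {
  container.addChild(maskShape); // commonly kept in the display list so its transform updates
  container.mask = maskShape;
  return container;
}
// In PIXI (v4):
// const shape = new PIXI.Graphics().beginFill(0xffffff).drawCircle(0, 0, 100).endFill();
// applyMask(stage, shape);
```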
  5. Nope, you've done the right thing with fromUrls. You don't want to cast it... they're different things. 'BaseTexture' and 'VideoBaseTexture' contain data about the actual image/video. A 'Texture' is information about which part of that base image/video to use. For example, if you have a spritesheet... that whole image is the BaseTexture, but you just want to draw certain sections of that image, and those would be the Textures pointing to different parts of the BaseTexture. So Sprites use a Texture, which contains information on how to use its BaseTexture
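To make the spritesheet example concrete, here's a hypothetical helper that computes the frame rectangles for a grid spritesheet; each frame would become one Texture pointing into the single shared BaseTexture (the PIXI calls in the comment assume the v4 API):

```javascript
// Compute per-frame regions for a grid spritesheet laid out left-to-right,
// top-to-bottom. Each region describes one Texture's window onto the BaseTexture.
function gridFrames(sheetWidth, sheetHeight, frameWidth, frameHeight) {
  const frames = [];
  for (let y = 0; y + frameHeight <= sheetHeight; y += frameHeight) {
    for (let x = 0; x + frameWidth <= sheetWidth; x += frameWidth) {
      frames.push({ x, y, width: frameWidth, height: frameHeight });
    }
  }
  return frames;
}
// In PIXI (v4):
// const base = PIXI.BaseTexture.fromImage('spritesheet.png');
// const textures = gridFrames(256, 256, 32, 32)
//   .map(f => new PIXI.Texture(base, new PIXI.Rectangle(f.x, f.y, f.width, f.height)));
```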
  6. Could you supply an example or fiddle with which we could take a look?
  7. And make sure to request it within a callback for a valid mouse or touch event. Helps cover the various browser prefixes
  8. There's PIXI Tween. Original: My version: TypeScript version:
  9. Ok, it might be because you're not adjusting the movement of the sprite by the delta. Take a look at the example and notice how it multiplies movement by delta, to account for slight inaccuracies in the timing of requestAnimationFrame. Lastly, if there are still issues after that... just host the page somewhere and access it via your browser. I make games to be accessed via the browser and the S7 runs like a dream. Maybe it's something to do with PhoneGap?
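A minimal sketch of delta-scaled movement; the speed constant is arbitrary, and the ticker line in the comment assumes the PIXI.Application ticker, whose delta is 1.0 at exactly 60fps:

```javascript
// Scale per-frame movement by the ticker's delta so effective speed stays
// constant even when requestAnimationFrame timing wobbles.
const SPEED = 5; // pixels per frame at exactly 60fps

function step(x, delta) {
  return x + SPEED * delta;
}
// app.ticker.add(delta => { sprite.x = step(sprite.x, delta); });
```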
  10. It's most likely because of your resolution property. As an example, if the width and height of the window are 1280x720, but devicePixelRatio returns 3, then internally Pixi is now trying to render the screen at 4K resolution. Ouch! Don't use devicePixelRatio is my advice for games
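The arithmetic behind the "Ouch!" is worth seeing: internal pixel count scales with the square of the resolution setting, so devicePixelRatio 3 means nine times the pixels to fill.

```javascript
// How many pixels the renderer actually has to fill for a given `resolution`.
function internalPixels(width, height, resolution) {
  return (width * resolution) * (height * resolution);
}
// internalPixels(1280, 720, 1) → 921600
// internalPixels(1280, 720, 3) → 8294400, i.e. 9x the work
```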
  11. You could also consider adding your events at the renderer level, and doing some maths to translate to your container, i.e. let location = new PIXI.Point(); renderer.plugins.interaction.on( 'pointerdown', ( event ) => { container.worldTransform.applyInverse( event.data.global, location ); } );
  12. @Gerente You may be interested in - which I think does some of what you are trying to achieve (it has the concept of being able to anchor a container to the edge of another container)
  13. PIXI supports 9-sliced sprites too, and you might have more success with filters on sprites than on Graphics. As always with WebGL, performance is best when you are making consecutive draw calls using the same base texture. Filters will always break that batching, so if you could put one filter over all of your sprite draw calls, rather than one filter per line, then that'd help
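A sketch of that "one filter over everything" restructuring. The helper only assumes container-like objects with `addChild` and a `filters` property, as on PIXI.DisplayObject; the filter itself is whatever PIXI filter you were applying per-line:

```javascript
// Hoist a single filter onto a shared parent instead of one filter per child,
// so batching is only broken once per frame rather than once per child.
function applySharedFilter(parent, children, filter) {
  children.forEach((child) => {
    child.filters = null;      // drop any per-child filter
    parent.addChild(child);
  });
  parent.filters = [filter];   // one filter, one batch break
  return parent;
}
// applySharedFilter(linesContainer, lineSprites, new PIXI.filters.BlurFilter());
```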
  14. A Container is nothing but a collection of other display objects, so it cannot be visually represented itself. You need a Graphics or a Sprite (both of which inherit from Container) to even give the container dimensions in the first place
  15. It's not what you want in terms of an automation tool that works with Pixi, but there is a Chrome plugin that detects internally rendered display objects, which may be of use
  16. I'd take a guess that Pixi, via the WebGL renderer, is uploading that texture to the GPU once (it's something the lib has control over). I'd then take a guess that the canvas renderer is uploading that texture to the GPU every frame. But just because WebGL _could_ do it doesn't mean you should. Take a look at - 99.9% of GPUs out there support a 4096x4096 texture size, yet only 29.9% support the next size up, 8192x8192. It probably works on your PC because you have a discrete GPU in it.
  17. It ground to a crawl for me too, and I was able to get a profile on it; the delay is all in the GPU. At the bottom you have a canvas that is 10000x10000. This is way too big. The maximum texture size I'd recommend is 4096x4096... but in my games I often play it safe with 2048x2048 for the canvas renderer, because of the time it can take to upload large textures to the GPU
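That advice reduces to a simple clamp; the 4096 default follows the support figures quoted above, and 2048 is the extra-cautious value mentioned for the canvas renderer:

```javascript
// Clamp a requested canvas/texture dimension to a safe GPU limit.
function safeTextureSize(requested, maxSize = 4096) {
  return Math.min(requested, maxSize);
}
// safeTextureSize(10000)       → 4096
// safeTextureSize(10000, 2048) → 2048 (play-it-safe canvas renderer limit)
```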
  18. Can something like that be popular? Sure! A Dark Room was a hit, and a modern-ish take on that... buttons instead of typing commands, but it was still a 'text adventure' IMO. But the best advice I can give is: would you enjoy making it? If you're a hobbyist and your target is to make a popular game, I think there's more chance of ending up disappointed (or not finishing it) than if you set out to make a game you want to make and have passion for.
  19. I'd recommend loading the game up in Chrome, with dev tools open, and looking at the network tab. After the game has loaded, you'll be able to sort by type, and see what textures the game has loaded, which then gives you a fair idea of how they are used
  20. Again, for the example you showed, those 'particles' are just part of the sprite animation of the winning symbol.
  21. Even if your textures are created dynamically at runtime, when you convert them to a texture you can scale that generated texture down. `const texture = renderer.generateTexture( displayObject, 0, 0.5 );`, for example, will create that texture at half the resolution. Looking at your render tree (using pixi-inspector and PIXI.utils.BaseTextureCache in the console): because of all these generated sprites, you're missing out on one of the tricks that makes WebGL so fast; rendering sprites from the same base texture. In WebGL, each time you change the baseTexture that the renderer has to render from, there is a slight penalty. If all images are from just a few base textures, then this penalty goes away. Each generated texture you're creating is from a different base canvas... so the optimisation that allows crazy levels of bunnies in the famous bunnymark can't occur in your game. Is there a reason you have to generate the assets in-game? If you _have_ to, then generate them all into one display object, convert that to a base texture, and manually create your own Textures from that
  22. There is no support for word wrapping for CJK languages ( - Boo. In any case, for these languages what is _actually_ required is word wrap with break words enabled; there are no spaces in CJK text, and word wrap usually just finds spaces in text. Break words allows word wrap to work without waiting for spaces. - Yay. However it's not quite as simple as enabling break words, as there are rules about which characters are allowed to be broken and which aren't - - Boo. So you'll need to code your own custom solution at around this line: - one that checks the text wanting to be broken up, to see whether the rules allow breaking between those characters or not
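For reference, these are the two style flags involved, written as a plain object with the v4 TextStyle property names; wordWrap alone only breaks on spaces, so CJK text also needs breakWords:

```javascript
// Word wrap plus break-words: the combination CJK text needs, since there are
// no spaces for the wrapper to find.
const cjkStyle = {
  wordWrap: true,
  wordWrapWidth: 300, // wrap width in pixels (arbitrary example value)
  breakWords: true,   // allow breaking inside "words"
};
// new PIXI.Text('...', new PIXI.TextStyle(cjkStyle));
```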
  23. As a basic 'is this mobile or desktop' check, - is a decent lib which Pixi uses internally. But a better option than just 'mobile or desktop' is to detect some device capabilities. At a base level: 'do you support WebGL'? If not, set the lowest settings due to the canvas fallback! But if it does support WebGL, you can query the hardware as to what is supported. Based off the official 'is webgl supported' code from .... imagine I'm inside that 'if ( gl ) {' statement: if ( gl.getParameter( gl.SAMPLES ) ) { const maxSamples = gl.getParameter( gl.SAMPLES ); } Now, 'samples' equates to 'can I do multi-sample anti-aliasing, and if so, how many samples can I do'. Older device hardware won't support this, so will report 0... whereas modern devices do. - is a great website to show the available parameters and what the typical results are. So in the above example, I could say "Well, if you support fewer than 4 samples, that's the lowest-performing 13% of devices, you're all low quality". You can do this for a number of parameters, like texture units available, max texture size supported, etc. - is another useful website to get that kind of report on the hardware you are running. So if a user has a specific performance issue, send them there and see if you can lower quality according to something their hardware doesn't support. As for how to degrade quality, the two things that really affect performance and are easy to change at build time are supplying lower resolution assets to lesser quality devices, and lowering the rendering resolution for lesser quality devices. Beyond that, you're looking at profiling your code on lesser quality devices to see where the bottleneck is. WebGL filters, for example, can often be a good candidate to remove on lower quality devices that still support WebGL
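The bucketing described above can be sketched as a pure function; the 4-sample threshold follows the figure quoted in the post, and the tier names are hypothetical. The commented lines show where the inputs would come from (gl.SAMPLES is a standard WebGL getParameter name):

```javascript
// Map WebGL availability and the MSAA sample count onto a quality tier.
function qualityTier(glAvailable, maxSamples) {
  if (!glAvailable) return 'low';   // canvas fallback: lowest settings
  if (maxSamples < 4) return 'low'; // older hardware, weak or no MSAA
  return 'high';
}
// const gl = canvas.getContext('webgl');
// const samples = gl ? gl.getParameter(gl.SAMPLES) : 0;
// const tier = qualityTier(!!gl, samples);
```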
  24. The actual renderers are canvas and WebGL, which you are free to create directly, but autoDetectRenderer is usually preferred, as it favours creating the WebGL renderer (much faster) but automatically falls back to canvas if WebGL isn't supported. You'd only have to write this fallback yourself anyway. Application is just a helper class that uses autoDetectRenderer in the background and provides commonly required basic functionality, like access to a ticker and getters for the renderer and view. If you don't want to use it, fine. We've found it very useful at Pixi for creating the examples, as it handles the common boilerplate code, leaving just the example code we want to show off
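The fallback being described is essentially this decision, sketched as a pure function; `isWebGLSupported` here is a stand-in for Pixi's own support check, and the commented call shows the usual way to let the library do it for you:

```javascript
// What autoDetectRenderer does on your behalf: prefer WebGL, fall back to canvas.
function chooseRenderer(isWebGLSupported) {
  return isWebGLSupported ? 'WebGLRenderer' : 'CanvasRenderer';
}
// Letting PIXI decide instead:
// const renderer = PIXI.autoDetectRenderer({ width: 800, height: 600 });
```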
  25. The doc with a single options object is the correct way going forward