Posts posted by canvasman

  1. 7 hours ago, themoonrat said:

    I'd just call destroy

As part of the DisplayObject's destroy function, it has the following code, which removes the object from any parent it has:

if (this.parent) { this.parent.removeChild(this); }

    Moonrat is the best! Thanks dude :)

  2. 7 minutes ago, ivan.popelyshev said:

1. Calling destroy() on sprites is actually overkill; the JavaScript GC will collect them anyway, just like any other object. It could be different if Textures held links to sprites (events, subscriptions), but they don't. You remove a sprite from its container and PIXI forgets about it.

2. The sprite's fields get slaughtered and it's not usable anymore; many things will just throw errors when you try to use them :)

Okay! Regarding question #1: would you call both removeChild() and destroy(), in that order, or just one of them, in my case where I don't need that same sprite later?
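    A minimal sketch of why the separate removeChild() isn't needed, using plain stand-in objects that mirror what DisplayObject.destroy() does internally (no real PIXI classes are used here):

```javascript
// Stand-ins mimicking PIXI's parent/child bookkeeping.
const parent = {
  children: [],
  removeChild(child) {
    const i = this.children.indexOf(child);
    if (i !== -1) this.children.splice(i, 1);
    child.parent = null;
  },
};

const sprite = {
  parent: null,
  destroy() {
    // Mirrors DisplayObject.destroy(): detach from the parent first.
    if (this.parent) this.parent.removeChild(this);
  },
};

parent.children.push(sprite);
sprite.parent = parent;

sprite.destroy(); // no explicit parent.removeChild(sprite) beforehand
console.log(parent.children.length); // 0 — destroy() already detached it
```

    So calling sprite.destroy() alone is enough; a preceding removeChild() is harmless but redundant.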

3. I am creating multiple sprites from the same texture in PIXI v5, and I have a few questions:


1. When I am not going to use a specific sprite anymore, do I need to remove it from the container with container.removeChild(sprite) before calling sprite.destroy()?

I call destroy() without arguments because I don't want to destroy the texture, which I might use later to create more sprites.


2. What is the difference between calling container.removeChild(sprite) and calling sprite.destroy() without arguments?

Does the GC collect everything related to the sprite either way (except the texture, which is in GPU memory)?



  4. 1 hour ago, Exca said:

I use autodetect with additional blacklisting on a per-device basis. If you have a modern computer/phone with a modern browser, then the GPU is basically always faster. The latest devices I have checked that didn't follow that rule are Android 4.x and first-gen iPads.

If multiple GPUs exist on a device, then the decision of which one to use is made at the OS level (at least I haven't heard of any other way).

failIfMajorPerformanceCaveat is used, to my knowledge; I last checked it in v3 though.

On my laptop, Chrome somehow used the worse GPU instead; after I changed it in the NVIDIA settings, Chrome started using the better one. I hope that's not a common situation.

Just checked that PIXI v4.x uses that failIfMajorPerformanceCaveat flag when checking for WebGL support, so autoDetectRenderer returns a canvas renderer if that check fails as well.

Do you just check whether the GPU string from the GL context is in a blacklist, or how does it work? Also, did you make the list yourself, or where did you get it?
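    Roughly what that failIfMajorPerformanceCaveat check looks like; a sketch only, with stand-in objects replacing a real browser canvas so the gating logic is visible:

```javascript
// Ask for WebGL but refuse a context the browser flags as slow (e.g.
// software rendering) — the same kind of check autoDetectRenderer makes
// before choosing WebGL over canvas.
function hasUsableWebGL(canvas) {
  const gl = canvas.getContext('webgl', { failIfMajorPerformanceCaveat: true });
  return gl !== null && gl !== undefined;
}

// Stand-ins for illustration: a "good GPU" canvas always returns a context;
// a "software GL" canvas returns null when the caveat flag is set.
const goodGpuCanvas = { getContext: () => ({}) };
const softwareGlCanvas = {
  getContext: (type, opts) => (opts && opts.failIfMajorPerformanceCaveat ? null : {}),
};

console.log(hasUsableWebGL(goodGpuCanvas));    // true  → use the WebGL renderer
console.log(hasUsableWebGL(softwareGlCanvas)); // false → fall back to canvas
```
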

5. I need to make a background for the game, so my options are:

1. Create a large image which fits the map

    2. Use tiling sprites

What are the pros and cons of those two, for both the canvas and WebGL renderers?

With a tiling sprite, is there less network usage when downloading? Is less GPU memory needed?

Is it slower to render tiling sprites spanning the size of the large image than to render just that large image?
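    On the memory side, a back-of-envelope comparison (uncompressed RGBA, 4 bytes per pixel; the sizes below are made-up examples):

```javascript
// GPU memory taken by an uncompressed RGBA texture.
const textureBytes = (w, h) => w * h * 4;

const fullMap = textureBytes(4096, 4096); // one large background image
const tile = textureBytes(256, 256);      // one small repeating tile

console.log(fullMap); // 67108864 (~64 MB)
console.log(tile);    // 262144 (~256 KB)
```

    Network savings are similar in spirit, although PNG/JPEG compression changes the exact numbers; render cost is a separate question from memory.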

6. If I have a lot of objects in my stage and I want to destroy the whole stage with its children, what would be a good way to exclude some of the textures or base textures from being destroyed? Should I remove them with removeChild() before doing stage.destroy(true), or could I somehow NOOP the destructive functions on the chosen textures? Looks like PIXI.Texture.WHITE has something like this going here.

Could that removeAllHandlers() function be used in my situation? Also, why are .on, .once and .emit NOOPed? Isn't .destroy enough?
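    One way to sketch the NOOP approach (the same trick PIXI.Texture.WHITE uses): overwrite destroy on the textures you want to keep before calling stage.destroy(true). Shown here with a stand-in texture object rather than a real PIXI.Texture:

```javascript
// Replace destroy() with a no-op so a destructive stage.destroy(true)
// can call it harmlessly.
function protectFromDestroy(texture) {
  texture.destroy = () => { /* NOOP: keep this texture alive */ };
}

// Stand-in texture for illustration:
const texture = {
  destroyed: false,
  destroy() { this.destroyed = true; },
};

protectFromDestroy(texture);
texture.destroy(); // what stage.destroy(true) would trigger
console.log(texture.destroyed); // false — the texture survived
```
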

  7. 14 minutes ago, ivan.popelyshev said:

You can manually iterate through "renderer.textureManager.managedTextures" and see what's alive; sum everything by area (width * height), 4 bytes per pixel. In v5 it's in "renderer.texture.managedTextures" or something like that, maybe "_managedTextures".

Okay, so in that list I can see everything which currently has some GPU memory reserved? Also, do textures which I get after the loader is done reserve GPU memory the whole time until they are destroyed?
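    The bookkeeping described above, as a small helper; since the managedTextures path varies between versions, it's taken as a plain array argument here, and the sample sizes are made up:

```javascript
// Estimate bytes held by managed textures: width * height * 4 (RGBA).
function estimateGpuBytes(managedTextures) {
  return managedTextures.reduce((sum, t) => sum + t.width * t.height * 4, 0);
}

// e.g. pass renderer.textureManager.managedTextures in v4 (path from the quote):
const sample = [{ width: 256, height: 256 }, { width: 512, height: 512 }];
console.log(estimateGpuBytes(sample)); // 262144 + 1048576 = 1310720
```
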

  8. 4 minutes ago, ivan.popelyshev said:

Yes, it is possible to create a texture from Image(); please look closer. I remember it was as easy as passing an ImageResource somewhere ...

No, fromImage is still a valid way; it was deprecated because there's a "from" function now: "Texture.from".

    Yes, you can just create one extra loader if you suddenly need to download one more texture.

Okay! So if I wanted, I could load an image into an Image() object and pass that to Texture.from() to create a texture from it? Btw, would that be a synchronous task when Texture.from() processes the data from the Image()?

    Thanks for your answer :)
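    A sketch of such a mini-loader, with the Image constructor and the texture factory passed in explicitly so the PIXI dependency stays an assumption rather than hard-wired (in real code you'd pass the browser's Image and PIXI.Texture.from):

```javascript
// Load one image and hand the resulting texture to a success callback.
// The textureFrom step itself runs synchronously once the image has loaded.
function loadTexture(url, ImageCtor, textureFrom, onSuccess, onError) {
  const img = new ImageCtor();
  img.crossOrigin = '';            // needed for cross-origin URLs
  img.onload = () => onSuccess(textureFrom(img));
  img.onerror = onError || (() => {});
  img.src = url;                   // starts the download
}

// Illustration with a fake Image that "loads" as soon as src is set:
class FakeImage {
  set src(value) { this.loadedFrom = value; if (this.onload) this.onload(); }
}
loadTexture('sprites.png', FakeImage, img => ({ source: img.loadedFrom }),
  tex => console.log(tex.source)); // 'sprites.png'
```
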

  9. 1 hour ago, jonforum said:

Can this maybe help you?

    Look at the size column for total GPU memory

And to answer you: this will maybe depend on your project.
On my side, sometimes I need to do it manually after loading my JSON.

Take a look with this, and also at pixi prepare.


Wait, what is bindTexture? I can't find any documentation on it.

    Thanks for the answer anyway :) 

10. After resetting the application's state and destroying all the textures, can I see anywhere whether there are still some textures left in GPU memory, or confirm that the GPU memory has been released and there are no memory leaks?

In the task manager I check the dedicated GPU memory. After adding some textures, resetting the application state, and destroying all textures, it still won't go back to where it was at the beginning.

So where, and at what values, should I look to check the GPU memory status?

Also, does every texture loaded with PIXI's loader reserve GPU memory even if it's not present in the scene anymore?

11. I am currently using PIXI's loader. It's working alright, but one loader can only load one set of images at a time. So when I start loading, I need to wait until it finishes before adding new image URLs to load, AFAIK?

I would just like a simple way to load images with just a success callback. Right now I have this hassle with multiple loaders, adding callbacks to them if an image isn't loaded yet, and otherwise returning the loaded texture.

EDIT: oh, looks like .fromImage is deprecated as of v5.

Is it possible to make a texture from an Image() object? Would it be complicated to create my own loader that way?

  12. 10 minutes ago, ivan.popelyshev said:

Yep, just updateText() it and store the texture somewhere; use it for a sprite later, or use that text later. You can also set the text resolution if you unset "autoResolution".



I am using PIXI v4.8.2; I couldn't find that autoResolution there :\

    Is the dev branch stable?

13. Is it possible to create a PIXI.Text with resolution 2 and cache its texture to use with sprites later?

My problem is that when I scale the stage (zoom out), texts jitter and become blurry. That doesn't happen when I set the renderer resolution to 2, but that slows down older computers.

A solution which works for both the canvas and WebGL renderers would be perfect.

  14. On 2/25/2019 at 9:58 AM, ivan.popelyshev said:

    no readPixels() for you. That means no "extract" plugin.

Okay, so if I don't use readPixels or that extract plugin, it should be fine? Do all of PIXI's internal functions work properly with a dirty canvas?

15. For loading some images I keep getting No 'Access-Control-Allow-Origin' errors in the console. I noticed that adding crossOrigin: "" to the object I pass to the loader.add() function makes it work, but I heard there are some consequences, or that it makes the canvas "dirty". So what are the actual consequences, and when should this method not be used?

  16. In the keyboard event, what property would suit best for game hotkeys?

I am thinking about using the event.key property. It would be very convenient, since there's no need to do number-to-character conversions unlike with event.keyCode, but are there any drawbacks I'm not aware of that would make it unsuitable for games?

I still see event.keyCode used a lot even though it's deprecated; will it be dropped?

    Share your thoughts? thanks :)
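    A sketch of an event.key-based hotkey map; the action names and bindings are made up. One drawback worth knowing: event.key follows the keyboard layout (a French AZERTY user pressing the physical W position sends "z"), which is why some games prefer event.code for positional controls:

```javascript
// Map event.key values directly to actions — no keyCode conversion needed.
const hotkeys = {
  w: () => 'move-up',
  ' ': () => 'jump',        // the space bar arrives as the literal " " string
  Escape: () => 'open-menu',
};

function handleKeydown(event) {
  const action = hotkeys[event.key];
  return action ? action() : null;
}

// Illustration with plain objects standing in for KeyboardEvents:
console.log(handleKeydown({ key: 'w' }));      // 'move-up'
console.log(handleKeydown({ key: 'Escape' })); // 'open-menu'
console.log(handleKeydown({ key: 'q' }));      // null (unbound)
```
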