visgotti

Members
  • Content Count: 18
  • Joined
  • Last visited
  • Days Won: 1
  • Rank: Member

visgotti last won the day on September 27 and had the most liked content!


  1. In the first scenario, I create the renderer like this:

         const screenWidth = window.innerWidth;
         const screenHeight = window.innerHeight;
         const canvas = document.getElementById('canvas');
         const renderer = PIXI.autoDetectRenderer({
           forceCanvas: true,
           width: screenWidth,
           height: screenHeight,
           antialias: false,
           roundPixels: true,
           resolution: 2,
           view: canvas,
         });
         renderer.view.width = screenWidth;
         renderer.view.height = screenHeight;
         renderer.view.style.width = screenWidth + 'px';
         renderer.view.style.height = screenHeight + 'px';
         renderer.view.style.display = 'block';
         renderer.view.style.position = 'absolute';
         renderer.view.style.top = '0';
         renderer.view.style.left = '0';
         renderer.view.style.zIndex = '-1';
         renderer.backgroundColor = 0x000000;

     then I do:

         const playerSpine = new PIXI.spine.Spine(r.player.spineData);
         playerSpine.skeleton.setSkinByName('base');
         playerSpine.x = window.innerWidth / 4;
         playerSpine.y = window.innerHeight / 4;

     The player ends up in the middle of the page, as I'd expect. But with the exact same code, if I create the renderer with forceCanvas: false, I no longer see the player in the middle of the screen, and I can't figure out how WebGL is positioning it. It seems like the WebGL and canvas renderers behave differently when resolution is set. There are no errors or anything. Does anyone have an idea why this is happening?

     edit: here's the simplest code snippet to reproduce the problem: https://github.com/visgotti/pixi-res-problem/
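     A minimal sizing sketch, assuming a Pixi v5-style renderer with the autoDensity option (not part of the original snippet): letting the renderer scale the backing buffer by resolution and set the CSS size itself avoids manually overwriting renderer.view.width/height, which is one place the WebGL and canvas paths can drift apart.

         // Sketch only: the renderer sets view.width/height to
         // innerWidth * resolution and the CSS size to innerWidth px,
         // so no manual view sizing is needed afterwards.
         const renderer = PIXI.autoDetectRenderer({
           width: window.innerWidth,
           height: window.innerHeight,
           resolution: 2,
           autoDensity: true, // assumption: v5 option (v4 used autoResize)
           view: document.getElementById('canvas'),
         });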
  2. Does anyone have experience optimizing/caching a DropShadow filter on spine animations? I was thinking I'd apply the filter, then each time a new unique combo of bones/trackEntries is played, I'll "hash" the combo (give it a unique string id based on frame index, skin name, etc.) and render the drop shadow to a render texture to be reused the next time that "hash" comes up in the animation. Has anyone done anything like this, or have any tips that can save me some time in the long run?
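     A rough sketch of that caching idea, assuming the DropShadowFilter from pixi-filters is already applied to the spine object and using a made-up frameHash key (the real frame identification would come from the spine track/skeleton state):

         const shadowCache = new Map();

         function getShadowTexture(renderer, spine, frameHash) {
           // Reuse a previously rendered shadow frame if this combo was seen before.
           if (shadowCache.has(frameHash)) {
             return shadowCache.get(frameHash);
           }
           // Otherwise render the filtered spine once and cache the result.
           // Assumes spine.filters already contains the DropShadowFilter.
           const texture = renderer.generateTexture(spine);
           shadowCache.set(frameHash, texture);
           return texture;
         }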
  3. lol crap... seems like my second implementation was all for naught. Thanks though, going to try this now. edit: yep, sure enough my original function would have worked just by adding the false flag to renderer.render. Thanks @ivan.popelyshev, it seems to be working awesome now; I can start focusing on optimizing my packing algorithm.
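     For reference, a one-line sketch of the fix being described, assuming the v4/v5-style signature renderer.render(displayObject, renderTexture, clear): passing false for the clear argument keeps what is already on the RenderTexture instead of wiping it before each render.

         // clear = false: draw the new sprite on top of the existing atlas contents
         renderer.render(sprite, renderTexture, false);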
  4. <deleted because it was so wrong>
  5. Here's a visual of what I'm trying to accomplish: as I add rects, each one gets added to the texture without changing the positions of anything else. This is meant to be run during the game loop, so the algorithm for finding an open spot is super fast but super shitty; I plan on making it better, but I just wanted to create a proof of concept first.

     As you can see, rects get added without changing the rest of the "texture", but I'm trying to figure out how to do this as a Pixi texture without having to re-render the whole "texture" each time I add a rect/sprite. I think I'm making the wrong assumption about how a RenderTexture works. Would it only work if I kept a PIXI.Container with all the elements and then re-rendered that PIXI.Container to a RenderTexture each time I add a new image? That would probably be too slow to dynamically add new images, and when it starts reaching the max texture size my shitty packing algorithm becomes even shittier, since I'd have to redraw the texture each time anyway. And then I'd have to recreate all the textures that use the RenderTexture as the base texture every time it updates? Yeah, I feel like I'm going down a rabbit hole of bad design decisions.

     Is there any efficient way to just add another image to a texture when the majority of the texture stays exactly the same besides one small rect area, so I can keep all the past textures I've created and not re-render a whole new RenderTexture? I have a feeling I'm just fundamentally wrong in my assumptions about how my library could have worked.
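     One detail worth sketching, assuming standard Pixi Texture/RenderTexture behaviour: sub-textures are only frames over a shared baseTexture, so they don't need to be recreated when more images are later rendered into the atlas (the sizes and regions below are hypothetical).

         // Frames cut from the render texture keep pointing at the same
         // baseTexture, so later additive renders don't invalidate them.
         const atlas = PIXI.RenderTexture.create(2048, 2048);
         const frameA = new PIXI.Texture(
           atlas.baseTexture,
           new PIXI.Rectangle(0, 0, 64, 64),
         );
         // ...rendering another sprite into a different region of `atlas`
         // (with clear = false) leaves frameA valid and unchanged.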
  6. I'm trying to have a RenderTexture grow as I dynamically load images. What I do is load the image as a texture, create a sprite, then render that sprite to the render texture. But every time I render a new sprite to the render texture, it seems to delete everything else on the render texture.

         public addSprite(id, sprite: PIXI.Sprite, deleteSprite = false): PIXI.Texture {
           if (this.mappedTextures.has(id)) {
             if (deleteSprite) {
               sprite.destroy({ children: true, texture: true, baseTexture: true });
             }
             return this.mappedTextures.get(id);
           }

           const { atlasIndex, rect } = this.addRect(id, sprite.width, sprite.height);

           if (!(atlasIndex in this.atlasRenderTextures)) {
             this.atlasRenderTextures[atlasIndex] = PIXI.RenderTexture.create(this.maxAtlasWidth, this.maxAtlasHeight);
           }
           const renderTexture = this.atlasRenderTextures[atlasIndex];

           // assign sprite position to the open rect
           sprite.position.x = rect.x;
           sprite.position.y = rect.y;

           // render sprite to render texture
           this.renderer.render(sprite, renderTexture);

           // make a reference to the new texture using render texture as base.
           const texture = new PIXI.Texture(
             renderTexture.baseTexture,
             new PIXI.Rectangle(rect.x, rect.y, rect.width, rect.height),
           );
           this.mappedTextures.set(id, texture);

           if (deleteSprite) {
             sprite.destroy({ children: true, texture: true, baseTexture: true });
           }
           return texture;
         }

     I'm trying to write a lib, https://github.com/visgotti/DynamicTextureAtlas, that provides a growing packed texture atlas, and right now I'm implementing a class that uses the algorithm to map the atlas to a RenderTexture with sprites. The packing algorithm isn't good, but I plan on optimizing it. The big thing I wanted to make sure of is that every time a new image is packed, the items already in the atlas don't get repositioned, so updating the RenderTexture is quick: all we need to draw is the newly added image instead of the whole packed sheet. Is there something fundamentally wrong with my approach?
  7. I was using http://kvazars.com/littera/ for v4, but it doesn't seem to work correctly for v5. Does anyone know of software that can convert my fonts into a usable .xml? Or can someone provide me with a valid .xml font file format?
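     For reference, a minimal example of the BMFont-style XML layout that Pixi's bitmap font loader expects; the font name, page file, and metrics below are made up:

         <?xml version="1.0"?>
         <font>
           <info face="MyFont" size="32" bold="0" italic="0" unicode="1" stretchH="100" smooth="1" aa="1" padding="0,0,0,0" spacing="1,1"/>
           <common lineHeight="36" base="29" scaleW="256" scaleH="256" pages="1" packed="0"/>
           <pages>
             <page id="0" file="myfont.png"/>
           </pages>
           <chars count="1">
             <char id="65" x="0" y="0" width="20" height="24" xoffset="0" yoffset="5" xadvance="21" page="0" chnl="15"/>
           </chars>
         </font>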
  8. Thanks for the quick responses. Sent a PM with info for that Slack invite.
  9. Awesome, thank you. I don't have experience with shaders, but I was going to look into using color replacement anyway for changing color palettes on some of my character clothing. Thanks for the info.
  10. Hi, right now I'm using tinting to change the color of my bitmap text, but I want to be able to change the stroke too. As of now the stroke is black and the inside is white, so changing the tint works perfectly, but what if I want to change the stroke? Is it worth having two sprites overlap each other, one with a white stroke and a transparent fill color and one with no stroke and a white fill color, then tinting each one appropriately? The alternative is to use a color replace filter and change the black and white that way, but I know filters can be extremely expensive while tinting is free. I'm wondering whether color replace works similarly to tinting under the hood and I can get away with it? Thanks
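      A sketch of that two-layer idea, assuming two bitmap font variants have been exported (one stroke-only with a transparent fill, one fill-only) and using v4-style names; none of these identifiers are from the original post:

          // Two copies of the same text, one per layer, tinted independently.
          // `container` is assumed to be an existing PIXI.Container.
          const strokeText = new PIXI.extras.BitmapText('Score', { font: '32px MyFontStrokeOnly' });
          const fillText = new PIXI.extras.BitmapText('Score', { font: '32px MyFontFillOnly' });
          strokeText.tint = 0xff0000; // stroke color
          fillText.tint = 0xffff00;   // fill color
          container.addChild(strokeText, fillText); // fill renders on top of the stroke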
  11. 
          fs.readFile('bunny.png', function(err, bunny) {
            img = new Image;
            img.src = bunny;
            var timeNow = Date.now();
            for (var i = 0; i < 500; i++) {
              ctx.drawImage(img, 50, 50, 50, 50);
            }
            var timeAfter = Date.now();
            console.log('time:', timeAfter - timeNow);
          });

          time: 807

      Yep, you're right. 800 ms for this... Looks like I'll have to write something that spawns enough processes so I can render scenes at my targeted framerate. No idea what other direction I can go in.
  12. I wouldn't say it's really too heavy; each bunny adds ~10 ms to the render.

          for (var i = 0; i < 25; i++) {
            var bunny = new PIXI.Sprite(texture);
            bunny.anchor.set(0.5);
            bunny.x = (i % 5) * 40;
            bunny.y = Math.floor(i / 5) * 40;
            mainStage.addChild(bunny);
          }
  13. Can't tell if this is sarcasm or actual advice..
  14. Hey, obviously I know PIXI.js is meant for HTML5, but I'm trying to have a server that records my game and converts it to a video. To do this I need to render images of the current canvas at at least 20 fps. In the web browser this would obviously be simple. The following code:

          var timeNow = Date.now();
          renderer.render(mainStage);
          var timeAfter = Date.now();
          console.log(timeAfter - timeNow);

      returns 1 ms in web browsers. In Node it returns anywhere from 180-300 ms. I'm shimming PIXI.js with jsdom and node-canvas. I don't understand the underlying technologies well enough to understand why there's such a big difference.