mobileben

Members · Content Count: 34 · Days Won: 3
Everything posted by mobileben

  1. By cutting do you mean generating new UVs? Or really generating a new texture? Since you are not fully describing things, it does make it more difficult for people to help you.
  2. What are you trying to do? When you say a texture needs to be cut into several ones, is how it is cut determined at runtime?
  3. Since you want to control the drawing of your layers, you may want to create a class which uses a container as a parent for the sprites. The reason for the class is that you can use it as the interface for controlling the layers (i.e. sprites). If you know the textures are inter-related and you have not already put them in the same atlas, you should. This eliminates the need for the underlying renderer to switch textures while rendering (i.e. depending on what the renderer is doing, there is a higher likelihood they can be drawn in the same draw call).
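A minimal sketch of what such a wrapper class could look like. This is framework-agnostic: `makeContainer` is injected so you could pass `() => new PIXI.Container()`; the class name and API here are my own, not part of Pixi.

```javascript
// Hedged sketch: a wrapper that owns a parent container and hands out named
// layers, so all draw-order decisions live in one place. `makeContainer` is
// injected (e.g. () => new PIXI.Container()); names/API here are hypothetical.
class LayerStack {
    constructor(makeContainer) {
        this.makeContainer = makeContainer;
        this.root = makeContainer();   // add this.root to the stage yourself
        this.layers = new Map();
    }

    // Create (or fetch) a named layer; creation order defines draw order,
    // since each new layer is appended as the last child of the root.
    layer(name) {
        if (!this.layers.has(name)) {
            const c = this.makeContainer();
            this.root.addChild(c);
            this.layers.set(name, c);
        }
        return this.layers.get(name);
    }
}
```

Sprites that share an atlas can then be grouped under the same layer, which keeps them adjacent in the display list and friendlier to batching.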
  4. @bruno_, thanks so much for the information. It was very helpful. When you say you use "dummy files", do you mean wrapper files for Cordova, or do you create dummy files for the ones that Cordova will supply? @mattstyles, thanks as well, and thanks for the note about testing often on the device. My idealized plan is to be able to switch easily between the browser and mobile, which is why I was wondering about the approach. From my early guesses, the part that will be more challenging is anything that has to go through the JS bridge, such as IAP, mainly because I'm using Typescript. What I think I may have to do is write those parts in Javascript, to avoid having to debug transpiled code, and use Safari developer mode to debug through the simulator. As both of your approaches are two-step, I assume that your index.html files, as well as possibly how the app starts (since Cordova fires deviceready, which we should be using to launch the game), are handled as one-offs? I did find I can use the merges directory to use custom index.html files per platform. Right now I have what may be a workable model to develop on, which would allow for dev on browser/mobile from the onset.
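To make the browser/Cordova startup switch concrete, here is one hedged way to pick the launch event. `startGame` is a hypothetical entry point and the injectable `env` parameter is purely my own device for testability; the `deviceready` event itself is the documented Cordova signal.

```javascript
// Hedged sketch: choose the startup event based on whether the Cordova runtime
// is present. `startGame` is a hypothetical entry point; `env` is injectable
// purely so the branch is easy to exercise outside a browser.
function bootWhenReady(startGame, env) {
    env = env || {
        hasCordova: typeof window !== 'undefined' && !!window.cordova,
        listen: (ev, cb) => document.addEventListener(ev, cb),
    };
    if (env.hasCordova) {
        // Cordova: plugins (IAP, etc.) are only safe after 'deviceready'.
        env.listen('deviceready', startGame);
    } else {
        // Plain browser build: DOM readiness is enough.
        env.listen('DOMContentLoaded', startGame);
    }
}
```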
  5. I currently have a setup where I use Webpack + Typescript for game dev. This is experimental, but it seems to be working well. I wanted to add Cordova into the mix to see how hard it would be to add support for mobile. One thing that strikes me is that Cordova also becomes a bit of a build system, but seems cumbersome for just HTML5 dev. My plan would be to do most dev in the browser, then move to mobile. One takeaway is that it does seem a bit easier to start off with Cordova, only from the perspective that it seems easier to drop a project into Cordova than to drop Cordova into your project. This really just means using the directory structure and some of the config that a Cordova project likes, and then putting my code, required packages, etc., around it. For people that are using Cordova, what is your workflow? Meaning, are you doing most dev in the browser in the directories and then doing Cordova builds as needed (i.e. ignoring Cordova until needed)? When you do an HTML5 build for the browser, is it completely devoid of Cordova? Or are you including Cordova and just going through a slightly different startup routine? I suppose this leads to the question of whether you are doing one distribution that runs on everything, or creating different distributions per platform. Also, are there any viable contenders out there to use instead of Cordova?
  6. I assume when you ask "render it as canvas" you are referring to rendering the `PIXI.Graphics` as is? In other words, something like:

```javascript
const gfx = new PIXI.Graphics();
// Do stuff to make a graphic
gfx.endFill();
app.stage.addChild(gfx);
```

Yes, you are better off converting to a texture and then creating a sprite. The actual `_render` for a graphic does a bunch of work to display the parts of the graphic. Simple graphic shapes like rectangles should be faster to draw; just how much work is involved relates to the complexity and whether or not the graphic is batchable. Sprites, on the other hand, just update vertices (presumably only if needed, although looking at the code it doesn't look like it has any dirty bits) and then render. Hmm, wondering. I'm not really a JS guy, but I've done some reading that suggests you can get some async stuff running. Has anyone dabbled with that? It would be super helpful if it is real async, since then things like texture creation could be done off the main thread. The only caveat, though, which I don't know if pixi can handle, is the need for some locks. I'm more used to multi-threaded game engines which have those features to help better hide latency.
  7. Also thinking about this some more. It seems to me your main problem really is the lines. For the rects (points), you really just need to create a rect, convert it to a texture, and then create a pool of sprites. Alternatively, you could create that sprite based on a texture image. I'd recommend a white rect which you then tint to the color you need. You can also scale it as you need. Just create a pool of sprites and use them as needed (and hide what you don't). This should give you good performance regarding the drawing of your points. The line is a bit more problematic. You probably need to define "real-time". Depending on your application, real-time isn't always real-time, meaning at times you can actually eat up more frames doing something. For example, is it still usable at 30fps or 20fps? For the line, when you zoom and scale, rather than invalidating, why not build the newer line "offline"? Then, when it's done, show that line and either destroy or invalidate the other. So perhaps a good solution is a pool of sprites for the points, and, rather than clearing, creating a new line and then hiding and invalidating/destroying the old. This is one of those ideal cases where being multi-threaded is helpful, since you can offload both the new line render and the destruction onto another thread.
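The pool idea above can be sketched generically. `create` and `onRelease` are injected so it would work with something like `() => new PIXI.Sprite(whiteRectTexture)` and `(s) => { s.visible = false; }`; the class and method names are my own, not a Pixi API.

```javascript
// Hedged sketch of a generic pool for the point sprites. `create` builds a new
// object (e.g. () => new PIXI.Sprite(whiteRectTexture)) and `onRelease` hides
// it (e.g. (s) => { s.visible = false; }). The API here is hypothetical.
class SpritePool {
    constructor(create, onRelease) {
        this.create = create;
        this.onRelease = onRelease;
        this.free = [];
    }

    // Reuse a freed object if one is available, otherwise build a new one.
    acquire() {
        return this.free.length ? this.free.pop() : this.create();
    }

    // Return an object to the pool instead of destroying it.
    release(obj) {
        if (this.onRelease) this.onRelease(obj);
        this.free.push(obj);
    }
}
```

The caller re-shows, re-tints, and re-positions a sprite after acquiring it, so nothing in the pool itself needs to know about Pixi.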
  8. I think for generating a texture from the line graphic you would do something like this; however, I will be quick to add that in my test it comes out a bit jaggy. The actual sine wave as a graphic is slightly jaggy as well, just not as bad. I also notice clipping. You should be able to draw the two and compare to see what I mean.

```javascript
const graphic = new PIXI.Graphics();
graphic.lineStyle(2, 0xff0000, 1);

const startX = 0, startY = 0;
const increment = 0.1;

graphic.moveTo(startX, startY);
for (let x = increment; x < 100; x += increment) {
    const y = Math.sin(x) * 20;
    graphic.lineTo(startX + x * 10, startY + y);
}
graphic.endFill();

let sineTex = app.renderer.generateTexture(graphic, PIXI.SCALE_MODES.LINEAR, window.devicePixelRatio);
let lineSprite = new PIXI.Sprite(sineTex);
```
  9. Ahh, okay, I saw that. I misunderstood. You talked about zooming/scaling so wanted to see how it would differ. Essentially the rough views are the same (rectangles and lines)?
  10. Why don't you post a few examples of what a graph looks like at different scales, so we can see the spectrum of visuals you are trying to achieve and how they interrelate?
  11. Good question. Curious, what are you trying to do (i.e. why don't you want to extend from PIXI.Point)? Are you using Typescript? If so, I notice that the objects are not created with prototypes.
  12. When scaling anything, you will potentially see artifacts. The severity depends on the direction (up or down) as well as how much and what you are scaling. Since you are doing an affine scale, you won't suffer any effects due to a change in aspect ratio. Are all those images the same size, BTW?
  13. There is one outstanding issue that I know of right now: webfonts. When I create a PIXI.Text, it still draws larger based on the px I set. Not double, but about 25% larger. I noticed that changing the value changes the level of blurriness, but too high and the font breaks down. The overall size remains the same. Is this expected? Another oddity I noticed: there seems to be a bug with setting the resolution. The call to updateText/updateTexture never occurs; you need to do something else to trigger it. So in my case, I read text.height afterwards, which causes updateText/updateTexture to be called.
  14. I have something that works. I'll put the solution here, as it isn't large, to get your feedback. I can create an issue and file a PR if you think it is worthwhile. I haven't yet committed the branch to my fork. As a recap of the why: I want to use native resolution for everything. Other OpenGL work I've done makes this easy, since it is always native. Hence there is no meaning to @#x-type files, for example. Everything is a pixel, and the layers above handle any form of device-pixel-ratio-style scaling. I wanted a good way to immunize against Android, as well as to use the small/medium/large asset approach rather than retina-suffix-named files. This means my app is initialized like the following:

```typescript
export const app = new PIXI.Application({
    width: gameArea.width,
    height: gameArea.height,
    view: <HTMLCanvasElement> document.getElementById('game-canvas'),
    antialias: true,
    transparent: false,
    backgroundColor: 0xFFFFFF,
    forceCanvas: canvasType == "canvas",
    resolution: 1
});
```

What I have done is introduce the concept of a fallback. The idea is that if prefixing is used, then it uses the prefix. If not, it checks for a fallback. If no fallback value exists, then it uses the default. I chose this approach because it is naturally handled by the code. Moreover, I needed to add an extra value (the fallback) because I have no control over changing the default. So while the concept of the defaultValue is nice from an API perspective, it is only explicitly set by BitmapText, and it takes the renderer resolution value, so we have no other way of controlling it. Code is as follows. I'm happy to make any changes/renames, etc.

```javascript
import { settings } from '../settings';

let FALLBACK_RESOLUTION_FOR_URL = 0;

/**
 * get the fallback resolution / device pixel ratio of an asset. This is used
 * in conjunction with getResolutionOfUrl
 *
 * @memberof PIXI.utils
 * @function getFallbackResolutionOfUrl
 * @return {number} fallback resolution / device pixel ratio. If 0, then ignore.
 */
export function getFallbackResolutionOfUrl()
{
    return FALLBACK_RESOLUTION_FOR_URL;
}

/**
 * set the fallback resolution / device pixel ratio of an asset.
 *
 * @memberof PIXI.utils
 * @function setFallbackResolutionOfUrl
 * @param {number} value - the fallback value if no filename prefix is set.
 */
export function setFallbackResolutionOfUrl(value)
{
    FALLBACK_RESOLUTION_FOR_URL = value;
}

/**
 * get the resolution / device pixel ratio of an asset by looking for the prefix
 * used by spritesheets and image urls
 *
 * @memberof PIXI.utils
 * @function getResolutionOfUrl
 * @param {string} url - the image path
 * @param {number} [defaultValue=1] - the defaultValue if no filename prefix is set.
 * @return {number} resolution / device pixel ratio of an asset
 */
export function getResolutionOfUrl(url, defaultValue)
{
    const resolution = settings.RETINA_PREFIX.exec(url);

    if (resolution)
    {
        return parseFloat(resolution[1]);
    }

    const fallback = getFallbackResolutionOfUrl();

    if (fallback)
    {
        return fallback;
    }

    return defaultValue !== undefined ? defaultValue : 1;
}
```

To use it, I simply do the following after I create the Application:

```javascript
PIXI.utils.setFallbackResolutionOfUrl(window.devicePixelRatio);
```
  15. Okay, I believe I have figured this out. While I eventually sort of got `npm link` working, it generates a bunch of errors. The first trick is that you need to link bundles/pixi.js, as this is where the actual standalone package exists. The next "trick" deals with linking. I discovered that running `npm link` also installs dependencies under bundles/pixi.js/node_modules. When it does this, it actually copies in the @pixi dependencies, whereas when `npm run build` is executed, it creates symbolic links to the generated node_modules under packages. This really threw me for a loop, since I had to determine why any changes I made were not showing up. Previously I thought that I needed to run `npm link` and then install. However, this was incorrect: if you run `npm link` first, the symlinks are not created. Also note that for `npm link` I had to use sudo, as on my machine my global node_modules is owned by root. All that being said, in the end I decided to forgo `npm link` and instead manually created symbolic links. This leads to the following steps:

1. Clone the repo (either the actual one or your fork).

2. Create a symbolic link (this replaces `npm link`):

```shell
ln -s REPO_DIRECTORY/bundles/pixi.js /usr/local/lib/node_modules/pixi.js
```

Where REPO_DIRECTORY is your repo directory. YMMV depending on where node is placed on your machine. If you're on Windows and don't have unix commands, you'll need to create the appropriate shortcut.

3. In your project directory, in the node_modules directory:

```shell
ln -s /usr/local/lib/node_modules/pixi.js pixi.js
```

If you already had pixi.js installed via npm, you will need to uninstall it first.

4. You can now install and build (you could have done this earlier as well). So, go back to REPO_DIRECTORY and:

```shell
npm install
npm run build
```

5. Back in your project directory, if you're running Webpack, you need to make sure your resolve has symlinks set to true in your webpack.config.js. Note that I just copied and pasted what I had:

```javascript
resolve: {
    symlinks: true,
    modules: ['src', 'node_modules'],
    extensions: ['.ts', '.tsx', '.js']
}
```

6. Profit. Another interesting quirk I noticed from this whole thing: to generate a proper .d.ts file, you need to make sure anything you export is commented properly. Otherwise you will find the code is generated (e.g. in pixi.js) but the declaration file will not have it defined.
  16. So I sort of have it working, but I'm finding some inconsistencies, no doubt caused by something I'm doing. I would assume that if I do an `npm run build`, it will build everything, including updating bundles/pixi.js/dist. What I have found is that I can make some mods, but when I re-run `npm run build`, the generated es.js files seem updated, yet the actual pixi.js doesn't have my changes. What I have done is insert a new function into getResolutionOfUrl.js. I've even tried renaming getResolutionOfUrl. After the first build, it never seems to try to rebuild these files, so I seem to be missing a trick. The generated pixi.js.d.ts file has the new function name; pixi.js does not. I tried to do an `npm run clean`, but that generates an error, which I put below. That directory, integration-tests/node_modules, doesn't exist. Oh, BTW, what did seem to work was creating the link in the bundles/pixi.js directory. However, this needed to be done prior to npm install and npm run build. If I can find out why my npm run build is not really building, then I should be able to work from there.
  17. Sigh. I tried `npm link`. However, I suspect that the way pixi.js is built is not conducive to using npm link. I'm basing this on the fact that the name of the package is actually "pixi.js-monorepo". So when the link happens, it creates that link, not the link you need. I had to manually create a symlink, which more or less is a warning to me that something is wrong. I believe I have all the links set up, but now when I try to build, it can't find any trace of pixi.js. This could be because the symlink in my project's node_modules is essentially the root directory of my local copy of the repo. There is no scoping folder (e.g. @pixi) like there is with a normal npm install. Are people forking the code and finding fast ways to iterate the coding cycle? There isn't much documentation on this, so unfortunately those with less build experience end up having an even bigger hill to climb. I know it's doable. I'll more or less need to simplify my cases and build a test case where I modify a pixi.js file, then move my changes in, build, and rinse and repeat. That comes across as clumsy and suboptimal, which makes me think I'm missing a trick. Heh, and yeah, Webpack sucks. Not sure what it is with the JS world, but they seem to love tools which are black boxes to most people.
  18. I wanted to experiment with some things in Pixi.js and then perhaps do a PR. Right now I'm looking for another way to override the resolution used by textures (as opposed to using @#x). I've forked and built Pixi.js. It dawns on me that easy "hacking" would be to directly use the exported pixi.js file, make mods there, and then later transfer them back into my fork. This way I could experiment and not have to always rebuild the fork. Quick note: I'm using Typescript and Webpack, and I was using npm to install pixi.js. My thought was to uninstall pixi.js and then copy my built version of pixi.js into my src directory. The problem with this is that my import doesn't work right. I am importing it using

```typescript
import * as PIXI from 'scripts/pixi'
```

The source file I copied over was the generated pixi.js file from bundles/pixi.js/dist. I did not copy over the map file (although I just tried that and it had no effect). But I now get errors such as

Where 'Application' is PIXI.Application. What is the best way to add in pixi.js without using npm? And I guess as a secondary question: can I just build my own distribution and copy it into node_modules? What I'm looking for is the most efficient way to experiment with and mod pixi.js. Early on I did try to use pixi.js directly, but found I ran into too many problems, so I jettisoned that approach and just used the npm package. Please note, I'm new to the whole Typescript/Webpack game and still learning, so I have more than likely introduced a lot of "user error".
  19. That's more or less the technique I'm using. I dunno, it seems like you don't sleep! Thanks so much for the code example; it's helpful. I usually go a slightly different route: I typically make graphic elements and physics elements components that get assigned to my game objects. BTW, in your constructor, when you position, shouldn't it be converting spaces (PTM)? I assume this is code you already have working, but I'm curious. I've never used p2, but I'm making the assumption that, like Box2D, one needs to always convert spaces to get it to work right.
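For the space conversion mentioned above, the usual Box2D-style pattern is a fixed pixels-to-meters ratio. The helper names and the ratio of 32 below are my own assumptions for illustration, not something from p2 or the posted code.

```javascript
// Hedged sketch of Box2D-style space conversion. PTM_RATIO = 32 is a common
// convention (32 px per meter), but the value and helper names are assumptions.
const PTM_RATIO = 32;

// Screen/pixel space -> physics/meter space.
function toPhysics(px) {
    return px / PTM_RATIO;
}

// Physics/meter space -> screen/pixel space.
function toPixels(meters) {
    return meters * PTM_RATIO;
}
```

The point of the conversion is that physics engines are tuned for human-scale meter values; feeding them raw pixel coordinates makes bodies behave as if they were enormous.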
  20. Okay, thanks! I'll take a look. I'm probably going to try some experimental code with the concept I described. It will provide some flexibility to people that don't use @2x, @3x, etc. (i.e. so they can do everything at native resolution, which includes textures being treated as native).
  21. @ivan.popelyshev, thinking about this some more: what if there were another function, similar to getResolutionOfUrl, which could be used to override resolution? What I'm noticing is that a few places reference getResolutionOfUrl. So if we were to "catch all of the places" for overriding resolution, the good place would seem to be wherever getResolutionOfUrl is applied. I've experimented with creating new Texture objects after updating the BaseTexture resolution. This has no effect until something new is created from it. Moreover, doing this doesn't work right if the BaseTexture is for a sprite sheet, since sprite sheet creation creates frames based on resolution. It seems like a reasonably clean solution which would also handle any strange devicePixelRatio an Android device uses. This is needed because, without doing something like this, sprites/images are always the wrong size when rendered (unless, of course, the proper @#x trailer is used).
  22. Thanks. Using 0 as the alignment value fixes my problems and I can compute proper bounds.
  23. Heh. I was just wondering if you were noting some other issues. I already have a working code base with Box2D and there don't seem to be any issues ... so I'm just double checking whether I was missing something.
  24. Okay, I think I found the answer to my own question. The trick seems to be to change the alignment of the lineStyle. Doing the following gives me the expected results:

```javascript
this.panel.lineStyle(this.menuDef.panel.stroke.thickness,
                     this.menuDef.panel.stroke.color.color,
                     this.menuDef.panel.stroke.color.alpha,
                     0);  // alignment of 0 draws the stroke on the inside
```
  25. I'm prototyping code to draw menus. Essentially the menu is a rounded rect with stroke lines. The interior rows are the individual menu items. Menu items are based on a colored rectangle backing with an icon and/or text. Each menu item, as well as the menu panel, can have its own colors. The menu is generated programmatically based on metadata. As part of the process, the max required width for each menu item is computed, and the overall size of the menu panel is based on this. In my tests, the menu panel and the menu items use different colors, so it's easy for me to see gaps. I have noticed that the panel size doesn't match my expectations. What I'm seeing is that the menu items' width is "smaller" than the interior of the fill of the panel. I see this as a leading and trailing gap between the menu item and the stroke lines of the rounded rect. Code to create the panel looks like the following:

```javascript
this.panel = new PIXI.Graphics();
this.panel.lineStyle(this.menuDef.panel.stroke.thickness,
                     this.menuDef.panel.stroke.color.color,
                     this.menuDef.panel.stroke.color.alpha);
this.panel.beginFill(this.menuDef.panel.backgroundColor.color,
                     this.menuDef.panel.backgroundColor.alpha);
this.panel.drawRoundedRect(0, 0,
                           details.width + this.extendedWidthCorr + this.menuDef.panel.stroke.thickness * 2,
                           details.height + this.extendedHeightCorr,
                           details.radius);
this.panel.endFill();

this.root.addChild(this.panel);
```

The current thickness is 4 and details.width is 190. You can ignore the "extended*Corr" items; they are effectively 0 right now and will be used later, since I plan on allowing the panel to be anchored to an edge, in which case I don't want to see the rounded corners along that edge. Also, ignore the inconsistency between the width and height handling; I'm currently focusing on why the width is not right. Note that when I create the application I have been using a resolution of 1. I'm on a Mac, so the devicePixelRatio is actually 2. I've also, as a test, set this to window.devicePixelRatio with the same results. When I define stroke thickness, is that "pixel accurate", or rather pixel close enough? It does not appear to be: the scale ratio of fill to stroke lines differs by roughly a factor of 2. Observationally, I have found that if the menu item width and the width used to create the panel are the same:

- If there is no lineStyle, then the width of both the panel and the menu item is the same (they properly overlap, which in this case means the widths are identical).
- If I use lineStyle, the menu panel is slightly larger than the menu item. It looks like it is close to one lineStyle width larger (which also seems to confirm my "roughly a factor of 2" from above).

I also noticed that even though the menu items are drawn at x = 0 (anchor is upper left), when line styles are drawn, and even though the panel is also at 0, 0, the starting draw positions are not the same. They should be aligned properly. What I see in this case is that the menu item background is slightly smaller than the menu panel, and the menu panel appears to be offset slightly in the draw, even though the x/y of the panel is 0, 0 just like the menu items (they are all children of the same container). I was going to attach an example, but I get an error indicating it can't create some directory for uploads and to contact support for assistance.