mobileben

Members · Content Count: 51 · Days Won: 3

Everything posted by mobileben

  1. @RomainMazB got it, I perused the code. Some unsolicited suggestions (feel free to ignore) that may make your life easier: make `onClick` part of the `CardSprite` class and avoid using `event` to derive the target. I would also recommend wrapping the code that makes the card interactive and listens for events in another `CardSprite` method. That lets you control enabling and disabling a card's interactivity, as well as its mouse-over listening, from one place. This is helpful if you have several cards where some are interactable and others are not, and the state can change depending on user actions.
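     To illustrate the suggestion above, here is a rough sketch of what wrapping interactivity into a `CardSprite`-style class could look like. This is illustrative only: the class name, `hoverOffset`, and the handler bodies are mine, and the wrapped object is only assumed to expose Pixi-style `on()`/`removeAllListeners()` methods.

     ```javascript
     // Illustrative sketch only: wrap enabling/disabling interactivity and the
     // hover listeners in one place instead of deriving the target from `event`.
     // `sprite` is assumed to expose Pixi-style on()/removeAllListeners().
     class CardSprite {
       constructor(sprite) {
         this.sprite = sprite;
         this.hoverOffset = 10; // illustrative hover lift, in pixels
       }

       // Single switch for "can the user interact with this card right now?"
       setInteractive(enabled) {
         this.sprite.interactive = enabled;
         if (enabled) {
           this.sprite.on('pointerover', () => { this.sprite.y -= this.hoverOffset; });
           this.sprite.on('pointerout', () => { this.sprite.y += this.hoverOffset; });
         } else {
           this.sprite.removeAllListeners('pointerover');
           this.sprite.removeAllListeners('pointerout');
         }
       }
     }
     ```

     With this in place, toggling which cards respond to the mouse becomes a single `setInteractive(true/false)` call per card.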
  2. @ivan.popelyshev I created an issue here: https://github.com/pixijs/pixi.js/issues/6149
  3. Nice code sample! Obviously it doesn't use `event.target`, but I think this type of approach works well.
  4. I think the problem is the implementation and how `target` gets assigned. Keep in mind that how the browser handles mouse events is different from how Pixi does. Pixi essentially transforms them into "pixi space", or rather the node hierarchy. The Mozilla documentation indicates that mouseout should have an assigned target. Based on the code base, `processInteractive` is responsible for assigning `target`, and that assignment is based on a hit, where a hit has several criteria, much of which depends on the point being within the DisplayObject. Since, on a mouseout, the point is no longer within the affected DisplayObject, `target` will be null. So what ends up happening (at least in my test cases) is that `target == null` and `currentTarget` is the parent. @RomainMazB you may want to think about creating a class for your cards. It would make them easier to manage.
  5. Bizarre. I'm nascent with Javascript/Typescript. Try this variant:

     ```javascript
     card.on('pointerout', event => { card.position3d.y -= 10; });
     ```

     It is probably grabbing the `this` context from the scope that `card` is in.
  6. Fair point about the new function objects 😁! I think the reason there is no target is that it fails the hit test. The actual DisplayObject in question no longer has the point within the object, hence target cannot be set to it (I looked at the code in a debugger). I believe in this case, if currentTarget is set, it is the parent. This is the code that should set the appropriate target (Pixi.js code):

     ```javascript
     if (displayObject.interactive) {
         if (hit && !interactionEvent.target) {
             interactionEvent.target = displayObject;
         }
         if (func) {
             func(interactionEvent, displayObject, !!hit);
         }
     }
     ```
  7. Have you tried pointerout? The currentTarget is set for that. I use pointerover/pointerout for my hover code. Are you just trying to animate an object on hover? I don't care about target. If card has the position3d property, then just reference `card` directly from an arrow function. This would change your code to look like:

     ```javascript
     card.interactive = true;
     card.on('pointerover', event => { card.position3d.y += 10; });
     card.on('pointerout', event => { card.position3d.y -= 10; });
     ```
  8. I figured it out. It's assigned in bundles/pixi.js/src/index.js. I believe it's assembled through lerna/rollup; the rollup config, I believe, references that index file.
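     For anyone else tracing this, the wiring can be understood as a plugin pattern. The sketch below is a simplified illustration of that pattern, not Pixi's actual source: registered plugins run `init` in the constructor and attach properties such as `ticker` to the instance.

     ```javascript
     // Simplified sketch of a plugin pattern (NOT Pixi's actual source):
     // each registered plugin's init() runs in the Application constructor
     // and can attach properties such as `ticker` or `view` to the instance.
     class Application {
       constructor(options = {}) {
         Application._plugins.forEach(plugin => plugin.init.call(this, options));
       }

       destroy() {
         // Tear down in reverse registration order.
         Application._plugins.slice().reverse().forEach(plugin => plugin.destroy.call(this));
       }
     }
     Application._plugins = [];
     Application.registerPlugin = plugin => Application._plugins.push(plugin);
     ```

     This is why the Application class itself looks "empty": the interesting properties come from the plugins it aggregates at build time.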
  9. I took a better look at the custom game loop and it makes sense. Not sure if I'll go that route just quite yet. A question I have when looking at the code: I can't seem to see where Application actually assigns properties such as Ticker. I've run a grep on the code base and it isn't obvious to me how this is done. For the most part, Application.js (in packages/app/src) looks pretty "empty". I also don't see assignment to other things such as view, etc. Where is this all done?
  10. BTW, as a heads up https://pixijs.io/examples/?v=next-interaction#/basics/basic.js doesn't load/do anything. I've tried on Chrome and Safari.
  11. I may very well have to go this route. I generally try to deviate as little as possible when using 3rd party code. I've been burned too many times doing my own thing and then running into incompatibilities. Plus my relative lack of JS/TS experience has me focusing a bit more on getting up and running versus getting things cleaner. Up to now, my scaling handling has been scattered across different places. This one is more application based. I'll dabble with the game loop a bit today. I'm still tying some pieces together and then will see if I can handle the resizing of assets dynamically. I don't expect the handling to be real-time, since that depends on the assets. I also expect the ease of doing this to depend on the actual number of assets involved. My initial stuff here is "toy" in size, so manageable.
  12. It is indeedy an Application thing. https://github.com/pixijs/pixi.js/blob/2becb1e4e119d6e03c7f1fe0e65ee9d91a5fb687/packages/app/src/ResizePlugin.js#L22 It takes the HTML Element or window. It works; I tried it. I also tried using renderer.resize(w, h), which also seems to work. For now I'll assume it correctly resizes the framebuffer rather than scaling. Once I get further along I can verify whether this hypothesis is correct.
  13. Thanks. I'll need to experiment. BTW, is the best way to resize the renderer to use `renderer.resize()`? This is what `resizeTo` eventually uses, and I assume it actually modifies the framebuffer size in use. I'm not quite used to following JS code that uses "runners" yet.
  14. When a user resizes the window, the render area can become bigger or smaller. This potentially has a visual effect: if the window started off small and was then enlarged, the textures could look bad if the render area is simply scaled to fit the canvas bounds. Going from larger to smaller is less of an issue, though one could argue it does more work than needed.

     How are most people handling this? For example, if you get a resize event and the canvas, say, doubles in size, do you simply increase the canvas size but leave the render area the same? Or do both? The latter would possibly require newer textures.

     For my setup, I have a defined game area, currently 480x640, and I support different multiples of it. The plan is to do layout at 480x640 for now (though I may use a higher-res multiple later). When the game starts, it will pick the multiple that best fits the canvas and use that. I'm trying to decide whether to choose the size of everything at start only, so that if the user changes the window size they are stuck with it, or to add support for finding the best matching supported dimensions and switching to those.

     If I decide to support re-fitting to the best multiple, it also implies I would need a way to change textures on the fly. I would imagine something like a multi-pass system where I destroy/dispose of the active textures, load the newer ones, and then apply the new textures to the existing PIXI.Sprites. Obviously any other draw elements would need to be updated as well. This seems like a lot of work, so I'm seeing what people's experience with this is and whether it's worth the effort.
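     The "best match which multiple fits the canvas" step described above can be sketched as below. The function name and the list of supported multiples are illustrative, not from any actual project.

     ```javascript
     // Illustrative sketch: pick the largest supported multiple of the 480x640
     // design size that still fits inside the current canvas.
     const DESIGN_W = 480;
     const DESIGN_H = 640;

     function bestMultiple(canvasW, canvasH, multiples = [1, 2, 3]) {
       let best = multiples[0]; // fall back to the smallest supported size
       for (const m of multiples) {
         if (DESIGN_W * m <= canvasW && DESIGN_H * m <= canvasH) {
           best = m;
         }
       }
       return best;
     }
     ```

     On a resize event you would recompute `bestMultiple(...)` and, only if the result changed, kick off the texture swap described above.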
  15. Okay, sure you provided some code, but the reason I asked for more code was to better understand how you set it up, primarily since you indicated you were getting invalid values. I think that's pretty much standard practice, and if you posted to, say, SO, I would guess someone would invariably ask you the same thing. Regardless of whether you use degrees or radians, you will face "out of bounds" numbers. At least in my code, I always 1. work in radians within the engine and 2. adjust when the value falls out of bounds. Note there are times I actually do use degrees. That could be when coming from a tool that exports the data, or when we manually tweak values during experiments; however, it gets corrected once it goes into "runtime". Also keep in mind you can use "angle" instead of rotation. If you look at the code, it just does the math to convert to radians for you. FWIW, I'm a C++ guy, but I have also used Flash and ActionScript as part of my tool chain. I always have methods for dealing with degrees or radians (explicitly named as such), but as mentioned, the game engine itself uses radians. I'd say more often than not, most stuff I've encountered uses radians.
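     The "adjust when the value falls out of bounds" step can be sketched like this (the function names are mine, not from any engine):

     ```javascript
     // Illustrative helpers: convert degrees to radians, and normalize a radian
     // angle into [0, 2*PI), the "out of bounds" adjustment mentioned above.
     const TWO_PI = Math.PI * 2;

     function degToRad(degrees) {
       return degrees * Math.PI / 180;
     }

     function normalizeRadians(radians) {
       const r = radians % TWO_PI;
       return r < 0 ? r + TWO_PI : r;
     }
     ```

     So a value like 2500 degrees normalizes to the same heading as 340 degrees once converted and wrapped.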
  16. What do you mean by "get the actual rotation"? rotation is both a getter and setter. https://github.com/pixijs/pixi.js/blob/812ff8a944e0c805b8afc16ebef5a5d6fba0c0c3/packages/display/src/DisplayObject.js#L597 It is in radians. If you want the degree variant, you can use "angle". Using 2500 should be fine, but if you want, you can always reduce it down in size. BTW, it would probably be helpful if you showed a bit more code on how you created the objects and are trying to rotate them.
  17. (re: MTB Hero) Maybe one way of handling jumps is letting the player do a little trick move when they have air time? Perhaps give some extra "bonus" for doing so, to provide an incentive (note: since you are time based, that would need some further thought on what the bonus does, whether an extra point system is added, or perhaps it adds some "speed up"). But also, if the timing isn't right on the jump, there is a "wipeout". I put that in quotes because a wipeout could be treated much like running into a barrier. I agree with totor though. I thought there would be jumping.
  18. By cutting do you mean generating new UVs? Or really generating a new texture? Since you are not fully describing things, it does make it more difficult for people to help you.
  19. What are you trying to do? When you say a texture needs to be cut into several ones, is how it's cut determined at runtime?
  20. Since you want to control the drawing of your layers, you may want to create a class which uses a container as a parent for the sprites. The reason for the class is that you could use it as the interface for controlling the layers (ie. sprites). If you know the textures are inter-related and you have not put them in the same atlas, you should. This eliminates the need for the underlying renderer to switch textures while rendering (ie. a higher likelihood, depending on what the renderer is doing, that they can be drawn in the same draw call).
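     A minimal sketch of the suggested wrapper class, under the assumption that `container` exposes a Pixi-style `addChild`; the class and method names are illustrative:

     ```javascript
     // Illustrative sketch: a class that owns a container and acts as the
     // interface for controlling layers (ie. sprites) by name.
     class LayerStack {
       constructor(container) {
         this.container = container;
         this.layers = new Map();
       }

       addLayer(name, sprite) {
         this.layers.set(name, sprite);
         this.container.addChild(sprite); // the container draws in add order
       }

       setVisible(name, visible) {
         const sprite = this.layers.get(name);
         if (sprite) sprite.visible = visible;
       }
     }
     ```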
  21. @bruno_, thanks so much for the information. It was very helpful. When you say you use "dummy files", do you mean wrapper files for Cordova, or do you create dummy files for the ones that Cordova will supply? @mattstyles, thanks as well, and thanks for the note about testing often on the device. My idealized plan is to be able to switch easily between the browser and mobile, which is why I was wondering about approach. From my early guesses, the more challenging part will be anything that has to go through the JS bridge, such as IAP, mainly because I'm using Typescript. What I think I may have to do is write those parts in Javascript, to avoid having to debug transpiled code, and use Safari developer mode to debug through the simulator. As both your approaches are more of a two-step approach, I assume then that your index.html files, as well as possibly how the app starts (since Cordova fires deviceready, which we should use to launch the game), are handled as one-offs? I did find I can use the merges directory to use custom index.html files per platform. Right now I have what may be a workable model to develop on which would allow for dev on browser/mobile from the onset.
  22. I currently have a setup where I use Webpack + Typescript for game dev. This is experimental, but it seems to be working well. I wanted to add Cordova to the mix to see how hard it would be to add support for mobile. One thing that strikes me is that Cordova also becomes a bit of a build system, which seems cumbersome for just HTML5 dev. My plan would be to do most dev in the browser, then move to mobile. One takeaway is that it does seem a bit easier to start off with Cordova, if only because it seems easier to drop a project into Cordova than to drop Cordova into your project. This really just means using the directory structure and some of the config that a Cordova project likes, and then putting my code, required packages, etc., around it. For people that are using Cordova, what is your workflow? Meaning, are you doing most dev in the browser in the directories and then doing Cordova builds as needed (ie. ignoring Cordova until needed)? When you do an HTML5 build for the browser, is it completely devoid of Cordova? Or are you including Cordova and just going through a slightly different startup routine? I suppose this leads to the question of whether you are doing only one distribution that runs on everything, or creating different distributions per platform. Also, are there any viable contenders out there to use instead of Cordova?
  23. I assume when you ask "render it as canvas" you are referring to rendering the `PIXI.Graphics` as is? In other words, something like:

     ```javascript
     const gfx = new PIXI.Graphics();
     // Do stuff to make a graphic
     gfx.endFill();
     app.stage.addChild(gfx);
     ```

     Yes, you are better off converting to a texture and then creating a sprite. The actual `_render` for a graphic does a bunch of work to display the parts of the graphic. Simple graphic shapes like rectangles should be faster to draw; just how much work is involved depends on complexity and whether or not the graphic is batchable. Sprites, on the other hand, just update vertices (presumably only if needed, although looking at the code it doesn't look like it has any dirty bits) and then render. Hmm, wondering: I'm not really a JS guy, but I've done some reading suggesting you can get some async stuff running. Has anyone dabbled with that? It would be super helpful if it is real async, since then things like texture creation could be done off the main thread. The only caveat, which I don't know if Pixi can handle, is the need for some locks. I'm more used to multi-threaded game engines where one has those features to help hide latency.
  24. Also, thinking about this some more, it seems to me your main problem really is the lines. For the rects (points), you really just need to create a rect, convert it to a texture, and then create a pool of sprites. Alternatively you could create that sprite from a texture image. I'd recommend a white rect that you tint to the color you need; you can also scale it as needed. Just create a pool of sprites, use them as you need them, and hide the ones you don't. This should give you good performance regarding the drawing of your points. The line is a bit more problematic. You probably need to define "real-time". Depending on your application, real-time isn't always real-time, meaning at times you can actually eat up more frames doing something. For example, is it still usable at 30fps or 20fps? For the line, when you zoom and scale, rather than invalidating, why not build the newer line "offline"? Then, when it's done, show that line, hide the other, and either destroy or invalidate it. So perhaps a good solution is a pool of sprites for points, and rather than clearing, creating a new line and then hiding and invalidating/destroying the old one. This is one of those ideal cases where being multi-threaded is helpful, since you could offload both the new line render and the destruction to another thread.
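     The pool-of-sprites idea above can be sketched as a small generic pool. The class and method names are mine, and the factory is caller-supplied, so this works with any sprite-like object that has a `visible` flag:

     ```javascript
     // Illustrative sketch: reuse sprites instead of recreating them. Released
     // sprites are hidden rather than destroyed, ready for the next acquire().
     class SpritePool {
       constructor(makeSprite) {
         this.makeSprite = makeSprite; // caller-supplied factory, e.g. tinted white rect
         this.free = [];
       }

       acquire() {
         const sprite = this.free.pop() || this.makeSprite();
         sprite.visible = true;
         return sprite;
       }

       release(sprite) {
         sprite.visible = false;
         this.free.push(sprite);
       }
     }
     ```

     Each frame you acquire as many point sprites as you need, position and tint them, and release the rest; nothing is allocated once the pool warms up.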
  25. I think for generating a texture from the line graphic you would do something like this. However, I will be quick to add that in my test it comes out a bit jaggy. The actual sine wave as a graphic is slightly jaggy as well, just not as bad. I also noticed clipping. You should be able to draw the two and compare to see what I mean.

     ```javascript
     const graphic = new PIXI.Graphics();
     graphic.lineStyle(2, 0xff0000, 1);
     const startX = 0, startY = 0;
     const increment = 0.1;
     graphic.moveTo(startX, startY);
     for (let x = increment; x < 100; x += increment) {
         const y = Math.sin(x) * 20;
         graphic.lineTo(startX + x * 10, startY + y);
     }
     graphic.endFill();
     let sineTex = app.renderer.generateTexture(graphic, PIXI.SCALE_MODES.LINEAR, window.devicePixelRatio);
     let lineSprite = new PIXI.Sprite(sineTex);
     ```