mattstyles

Moderators

  • Content count
    1,167
  • Joined
  • Last visited
  • Days Won
    11

mattstyles last won the day on March 4

mattstyles had the most liked content!

About mattstyles

  • Rank
    Advanced Member

Contact Methods

  • Website URL
    github.com/mattstyles
  • Twitter
    @personalurban

Profile Information

  • Gender
    Not Telling
  • Location
    UK

  1. Not on Chrome it doesn't, as there is no mouseup to track (I'm assuming this is the case everywhere). You have to disable the context menu to get a mouseup, which makes sense: once you prevent the menu, the browser no longer hijacks your mouse event.
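     A minimal sketch of that approach (the handler bodies are just illustrative):

       // Prevent the browser's context menu so the right-button mouseup isn't swallowed
       window.addEventListener('contextmenu', function (event) {
         event.preventDefault()
       })

       window.addEventListener('mouseup', function (event) {
         // button === 2 is the secondary (usually right) mouse button
         if (event.button === 2) {
           console.log('right button released')
         }
       })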
  2. Just sniff the user agent, it'll be close enough. IE went through a whole thing (it might have been early Edge) where it pretended to be a different browser, because people usually sniffed the UA rather than doing feature detection, which meant IE was getting a degraded experience even after the browser had improved to the point where it didn't need one. Older versions include an MSIE x.x token, and IE11 uses a Trident token instead, so just grab the UA in javascript and react appropriately, something like:

       if (/MSIE|Trident/.test(window.navigator.userAgent)) {
         alert('Unsupported browser')
       }

     You might want to get a bit more targeted with your regex, have a search for regexes to use. UA sniffing is a bit 'meh', but it would be easily good enough (consistent enough) for what you need. If you're set on feature detection (which is a much, much better way of doing things) then I think Modernizr is still the go-to.
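     If you do go the feature-detection route instead, a minimal hand-rolled check might look like the sketch below; the particular features tested are just examples, test whatever your game actually relies on:

       // Check for features the game needs rather than sniffing the UA string
       var supported = typeof window.Promise === 'function' &&
         typeof window.requestAnimationFrame === 'function'

       if (!supported) {
         alert('Your browser is out of date and is not supported')
       }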
  3. @jpaik123 There are now really detailed instructions on packaging apps with Cordova at their site, https://cordova.apache.org/. Failing that, there must be several hundred articles and blog posts on the subject, but, given that the Cordova docs are now really good, it's always best to stick with them as you can be reasonably sure they'll be up to date.
  4. There is no contextmenu 'up', and contextmenu works like select inputs in that it 'steals' the mouse event when it fires, so you'll get a mousedown but no mouseup. I tried shimming a mouseup by playing with focus but it was dreadfully inconsistent, usually wrong, so I'm not sure how you'd grab a right-click mouseup event.

     edit: ha ha, worked it out, you just have to prevent the default action for contextmenu and you'll get a mouseup! You'd have to assume it came from the right button though, you couldn't be 100% sure.
  5. Right click isn't a valid event in the browser; whatever shim you're using is listening for the contextmenu event, which won't fire while the left button is depressed. Without a genuine right-click event in the browser (which won't happen, as it's a reserved action and hijacking it is not encouraged) I doubt you can make it happen.

     edit: actually, I've just checked and I can listen for the contextmenu event (fired on right click) even when my left button is depressed. The only issue is that I don't think there is any way to register 'un'-clicking the right button (a mouseup), so it would still be impossible to accurately track when both buttons are depressed.
  6. No, Pixi just renders stuff really, really fast; everything else is up to you. Phaser has the concept of stages, called states, and handles moving between them. But, if you're just worried about globals, JS cannot restrict them:

       window.foo = 'bar'
       // usually written as
       var foo = 'bar'

     If it's done at the top level (i.e. not in a closure) then both of the above are equivalent in the browser and mean that the variable 'foo' is available everywhere (outside web workers, possibly, I don't know). You can't restrict this. So if you want to share state across different sections of your code, that would be one way to do it.
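     A minimal sketch of sharing state via a deliberate global (the `game` object and its fields are invented for the example):

       // One intentional global holding shared state
       window.game = {
         score: 0,
         currentLevel: 1
       }

       // ...any other part of the code loaded on the page can read and write it
       function onEnemyKilled () {
         window.game.score += 10
       }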
  7. There would be no way to consume that in an ES module style project. If you did:

       import 'MyLibrary'

     would you then just use the namespace?

       var triangle = new MyLibrary.Triangle()

     I guess that would be an option, but for it to work the library would have to molest globals when you import it (as some do, looking at you Pixi), which negates some of the point of using a module system in the first place. You could achieve this easily enough in a module world by aliasing it yourself in an entry point:

       import Library from 'MyLibrary'
       window.Library = Library

     if you really wanted to. This would prevent libraries from having to molest globals but still allow you to manually shove one on to the global scope so you don't have to import it in each file where you want it. Most larger libraries have a UMD build which does the same as the above.

     For those libraries that need it (like Phaser, which currently doesn't work with a module system) you just keep them outside your module system, which means you are now responsible for updating them etc. etc. No big deal really, but I wouldn't want to do that for a project with 20-30 top-level dependencies, and to ensure you didn't dupe dependencies you'd have to do it for all of your dependencies' dependencies too. Part of my current project has over 1000 modules in my node_modules folder (that's from only 18 top-level deps), and whilst npm now flattens where it can, some of those deps will have their own deps contained within when version conflicts occur (wow, hadn't thought of that, imagine trying to work that out manually!!), though many will be development dependencies. I'd still guess our current bundle that hits the browser encompasses potentially 100 (maybe more) external modules; without a module system I'd go so far as to say it would be so time consuming as to be practically impossible.
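     For reference, a stripped-down UMD-style wrapper looks roughly like this; real UMD builds are generated by tooling and also handle AMD, this sketch only covers commonJS plus a global fallback:

       (function (root, factory) {
         if (typeof module === 'object' && module.exports) {
           module.exports = factory() // commonJS (node, browserify, webpack)
         } else {
           root.MyLibrary = factory() // fall back to a global namespace
         }
       })(this, function () {
         return {
           Triangle: function Triangle (size) {
             this.size = size
           }
         }
       })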
  8. I'd be inclined to say that the DOM would be more than enough in this case; Pixi is more applicable to rendering lots of things that change really fast, and in your case that won't be happening. That's not to say that the structure of Pixi wouldn't be a help, and I doubt you'd have too many problems using it to create a graphic-novel sort of thing (although I still think the DOM would be easier!). No JS library can prohibit this: JS has globals which are available everywhere, so if you're writing in JS you can always use globals and access them wherever you like. Pixi is only concerned with rendering and holds no opinion about whether you should use lots of globals or a different structure. You might be thinking of Phaser, where a common pattern is to create a global Game object and some State objects, and I know we see some questions about sharing app state between distinct game states.
  9. Nope, IE doesn't support class, and I'd guess it never will as it's superseded by Edge (I'm not sure there's any IE11 development besides security/bug patches). I agree that using a transpiler like Babel is easy enough and allows you to support old browsers, but I also think your second suggestion ("Or maybe I should just show a warning that IE is not supported?") should be your preferred line. Don't support old browsers unless you really have to. All modern browsers auto-update, and things have shifted firmly towards placing the responsibility for this with the user, i.e. if a user deliberately chooses not to update their browser then they shouldn't complain about things not working for them. Like or loathe this philosophy, it is one that is gaining traction, for very good reasons. There will always be cases where this is harsh, but the ideal is that updates are seamless and don't break backwards compatibility (often), so that stuff can be developed and tested against current versions and be reasonably sure of a good shelf life. The reality might be very different in some cases, but I'd still stick by a decision not to create legacy code unless absolutely necessary.
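     For a sense of what a transpiler does with class, here is the ES2015 version and a rough ES5 equivalent (simplified, real Babel output is more thorough):

       // ES2015: fine in modern browsers, a syntax error in IE11
       class Player {
         constructor (name) {
           this.name = name
         }
         greet () {
           return 'hello ' + this.name
         }
       }

       // Roughly what a transpiler turns it into, which IE11 can run
       function PlayerES5 (name) {
         this.name = name
       }
       PlayerES5.prototype.greet = function () {
         return 'hello ' + this.name
       }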
  10. What happens for you on iOS, which doesn't support fullscreen? I scanned through the Phaser code; does it just try to make your canvas fill the viewport, i.e. you still have the Safari chrome visible?
  11. Because it's enforcing and cannot be undone, which is not a great decision for an ecosystem. If you like to use this in your own projects then more power to you, and similarly if you want to use it for libraries you consume then cool. They can be interoperable anyway, as they are now. Most large libraries export a UMD build, which sets up a global for the library, acting as a namespace for it, whilst also exporting to a module system under the hood.

     The only issue with libraries including their own dependencies as part of their namespace (which they would need to do) is code duplication. For example, if you include some lodash functions and you export a bundle for your library attached to a namespace, then you must include those dependencies (so if the consumer also uses that same dependency they'll get it twice), or enforce that your consumers also include those exact same dependencies globally in their project so your library can use them, which directly breaks encapsulation and makes your library harder to consume.

     Out of interest, if you wanted to include an http library named Request and a state-management helper utility library also called Request, both expecting to stick to a global Request namespace, how would you handle that? Drop one? Seems a shame.

     Yeah, I feel like I've been on the attack, but, as with most opinions, if your system works for you and your team then why would you change it? You'd need tangible benefits, and that needs to be weighed against the time/effort of learning a new system.

     Good article. I keep meaning to run my own tests like this, but I can't see how a browser module system is going to be ready for a good while yet. It's certainly coming, but I wonder if it'll ever be attractive enough to completely ditch a build system. On a slightly different note, gzip (and other) compression works better over 1 big file than over the same file split into 100. My current project runs at about 400 JS files or so, of which more than 250 or 300 will be bundled up for the browser, but I can't even guess how many files would be involved if you include dependencies; my guess is a lot. For a couple of hundred kB of JS (and although the total weight wouldn't change too much, depending on gzipping) the number of requests would be through the roof, so I see a good number of problems yet to be overcome before using modules natively can be a viable option.

     That's not to say module bundlers aren't superb at what they do; they're now at the stage where they provide benefits to the deliverable that hits the browser (or other environment) rather than just a development benefit. So if you did want to take the dive with ES modules there's no need to wait for browsers: solutions exist now (and have done for a couple of years) that let you get a head start, and as node chose commonJS modules (very similar to ES modules, by design) you have full access to npm and the module ecosystem, which is a huge boon.
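     To illustrate the naming point: with a module system the collision disappears at the import site, because the consumer picks the local names (the package names here are invented for the example):

       // Two packages that would both have wanted a global `Request` namespace
       import HttpRequest from 'some-http-request-lib'
       import StateRequest from 'some-state-request-lib'

       var req = new HttpRequest('/api/scores')
       var state = new StateRequest()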
  12. This sounds absolutely dreadful. Does this imply I can import an alias if desired but also just grab the namespace from a global? Wow, why make something so simple more complex than it needs to be? Why is this even a worthwhile goal? Is naming an import, usually with the name of the module, really so hard? Is one additional line, often auto-completed, really a chore? If you're worried about code bloat then some of it is recouped at the module end, where exporting is shorter than namespacing + exporting, plus, as I'll go on about, there are other advantages ES6 modules have when it comes to reducing the amount of code required.

     As an example, what does this snippet do?

       var triangle = new Shapes.Triangle(200)

     You have no idea, and, to make matters worse, you have absolutely no idea where `Shapes` comes from. How does the global variable `Shapes` get put into the code? Into the page? Where does it come from? Where do I look to find out what the Triangle constructor does? What the param means?

     The module-system way:

       import Shapes from 'shapes'
       var triangle = new Shapes.Triangle(200)

     You've still got no idea what that code does, but you'll know exactly where to look, immediately, because you know how import works and where imports import from (these are standard and would only change if a developer has explicitly changed them in the build system).

     Furthermore, I can actually do some other things with a proper module system. Consider this:

       import {Triangle} from 'shapes'
       var triangle = new Triangle(200)

     This follows object destructuring; if you've never encountered that before, it pulls `Triangle` from the `Shapes` object. That is nice, but it's simple sugar. However, there is more. With ES6 modules it is not mere destructuring: certain tooling has taken this idiom to enable 'tree-shaking' (not that I'm against tree-shaking, I just think the process should happen at a different stage, but that's a different story), or smart module inclusion. It does this by statically analysing the dependency tree (which it knows about because you've used a defined syntax to describe it) and deciding what is and isn't required for this code to run (this is new for JS but not new for programmers; many, many languages do this).

     The above paragraph is big, really big, so I'll elaborate. Imagine shapes exports 100 different shape constructors, but you only want Triangle. By analysing the dependency graph the tooling can ascertain whether you need more shapes and decide to include ONLY what you need, and it can do it deterministically because ES6 modules are static, and it can do it down the dependency tree. Wunderbar. Outside of hoping that your minification process can remove redundant code (which it cannot do as effectively, because it does not have as much information as module tooling does), this is impossible with namespaces. Compiled languages generally don't care (and for the same reason, neither does node really), but as you're serving your stuff over the internet you should.

     Just reread @dmkos' reply and realised I've just rehashed what you already said! By including imports explicitly at the top of a file it's immediately clear what the code directly relies upon (although following the dependency tree further is difficult), which is easy for new developers to the code and easy for you when you return to it after several weeks/months. It sets the scene for the code.
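     A small sketch of what makes that analysis possible (the shapes module here is hypothetical): because the import is static, tooling can see that only `Triangle` is ever used.

       // shapes.js - exports many constructors
       export function Triangle (size) { this.size = size }
       export function Square (size) { this.size = size }
       // ...and 98 more shapes

       // main.js - a static, analysable import of just one of them
       import { Triangle } from './shapes'

       var triangle = new Triangle(200)
       // A bundler that tree-shakes can drop Square and friends from the output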
  13. I think I skimmed it, read Bob's post and assumed that was running all the time. Yeah, responding to a resize event and triggering those calcs then is all golden. I've never tried scaling each frame, but I can imagine that calling `context.scale` involves Pixi resizing all renderables in the scene/context, which is bad, although it could (and probably does) have a memoization technique for reducing the work. I'd assume that is where your bad perf is coming from, though. You could run some tests by resizing all the elements with a scale just once and seeing whether removing the context.scale call helps (you'll need to make sure your elements are actually scaled up, as, obviously, larger sprites are heavier to render, so testing against small renderables isn't a representative test).
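     A rough sketch of the resize-once approach with Pixi (names are illustrative and exact API details vary between Pixi versions):

       var renderer = PIXI.autoDetectRenderer(800, 600)
       var stage = new PIXI.Container()
       document.body.appendChild(renderer.view)

       function onResize () {
         // Do the expensive scale calculation once per resize, not once per frame
         var scale = window.innerWidth / 800
         renderer.resize(window.innerWidth, window.innerHeight)
         stage.scale.set(scale, scale)
       }
       window.addEventListener('resize', onResize)
       onResize()

       function tick () {
         // The render loop just renders, no per-frame scaling maths
         renderer.render(stage)
         requestAnimationFrame(tick)
       }
       tick()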
  14. Doesn't work on mobile: http://caniuse.com/#search=full, although that table implies Chrome on Android should work.
  15. I definitely agree with @BobF: whilst lazy evaluation is great for developers, it can be bad for perf, particularly where it does not result in doing the least amount of work over a longer period of time. Where possible do these calcs once and just render each frame; obviously rendering a larger area will result in decreased perf, but at least you'll limit the number of calcs needed to perform that render each tick.