The Phaser 3 Wishlist Thread :)


rich

Well, for my two cents' worth, I think it would be worthwhile, as you said, to have a look at where all this is going.

 

Despite others wanting more Cocoon support, I suspect CocoonJS is a dead duck. The iOS 8 WebView (WKWebView) and Android's WebView in Lollipop (now Chromium-based) both bring WebGL firmly into the game.

 

http://davevoyles.azurewebsites.net/current-state-webgl-ios-8-yosemite-android-lollipop/

 

As a byproduct, the CocoonJS value proposition is effectively dead.

 

Android is always a bit slow to get its releases onto everyone's devices, but with the WebView unbundled so it can be updated separately through the Play Store, this should be less of a problem in the future.

 

In the meantime there is Crosswalk, which ships the latest Chromium WebView.

 

Another interesting thing is Visual Studio Community edition. This is Microsoft's new free dev environment, it is now a lot closer to Visual Studio Professional, and it supports HTML5 multi-platform apps through Apache Cordova.

 

http://www.visualstudio.com/en-us/explore/cordova-vs.aspx

 

http://arstechnica.com/information-technology/2014/05/visual-studio-goes-cross-platform-with-cordova-integration-from-microsoft/

 

So now you have the hands-down best development environment ever no longer being a neutered, cut-down edition, and it supports Cordova. (So far it does not support Crosswalk.)

 

In any case I would expect Cordova to keep getting bigger and bigger.

 

Put this together with the Intel XDK's free cloud-based build support for Cordova and Crosswalk, and I would say the direction is clear.

 

CocoonJS is dead; the niche it exists in is closing. Apache Cordova is the de facto standard, with Crosswalk as the stopgap.

 

Perhaps it always has been, but putting together a Grunt-driven, npm-based Cordova/Crosswalk build chain is a pain. Making it easy for developers to use (thank you, Intel and Microsoft) will ensure its adoption.

 

What that means for games - I don't know, but I smell change.

 

As for the Phaser 3.0 suggestion: seamless integration with Cordova and mainstream UI component libraries, even if it doesn't add functionality but simply ensures compatibility. In other words, future-proof it.

Link to comment
Share on other sites

Yes, I couldn't agree more. Phaser 3 is entirely about protecting against what is changing (and changing fast), both in the app space and with ES6. It's also a chance to reboot the API. There is SO MUCH I want to add right now, today, but can't because the framework has already gotten too large - P3 is our opportunity to do so in a sensible, modular fashion.

 

Work has already begun in earnest on Phaser 3. I'm still really excited about what's possible with Phaser 2 and can't wait to release 2.2 next week (so many cool new things in it!) but I'm even more excited about its future.

Link to comment
Share on other sites

Actually, here's something you might want to have a play with:

 

Inheritance patterns and legacy support.

 

 

Coming from my old C++ days, a common problem is that you have a base class which, over the course of time, you add functionality to via a whole pile of derived classes. Then along comes a need to refactor or extend some of the base code, and everything you or someone else has written now has to be regression tested, and lots of it breaks. "No one should really have been hooking directly into _private_counter, should they..."

 

The solution, although not widely used, is to put the version number into the class name, so that the old legacy class_v1 inheritance tree still works, but you can derive class_v2 from class_v1 and people can migrate in their own good time - the old legacy interfaces keep working and you are free to do what you want in class_v3 without breaking any existing interface. Patches to the class_v2 tree come along as part of the normal bugfix and release process.
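
A rough sketch of what I mean in JavaScript terms (the names here are purely illustrative, not a proposal for actual Phaser classes):

// Sprite_v1 is frozen once released; existing games keep extending it.
function Sprite_v1(x, y) {
  this.x = x;
  this.y = y;
}
Sprite_v1.prototype.update = function () { /* original behaviour */ };

// Sprite_v2 derives from Sprite_v1, so old code keeps working while new
// projects opt in to the reworked behaviour at their own pace.
function Sprite_v2(x, y) {
  Sprite_v1.call(this, x, y);
}
Sprite_v2.prototype = Object.create(Sprite_v1.prototype);
Sprite_v2.prototype.constructor = Sprite_v2;
Sprite_v2.prototype.update = function () { /* new behaviour, free to change */ };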

 

Phaser uses some of the nicest code I have read in decades - literally - and it uses a nice prototypal inheritance model, probably imposed upon it by the need to inherit from Pixi. In order to enable you to do new stuff while not breaking everyone's old stuff (that's the price you pay for success, BTW), perhaps you could consider a strategy by which Phaser 3 is released on top of Phaser 2, in the same way that Phaser inherits from Pixi. Phaser 2 would still be there - at some point no longer supported - and people could continue extending and fixing their existing code base, while people starting new projects could use the new features.

 

Now, it does not need to be version numbers - it could be code names or some sort of configuration parameter. Just some mechanism to keep old code working while freeing you up to totally re-architect if you want.

Link to comment
Share on other sites

Phaser 3 is a blank slate to be honest.

 

We're not going to be bound to any other APIs and will do away with the whole "God class" issue we've painted ourselves into.

 

I think what v2 taught me was that the ease with which a developer (especially a new one) can get things running is of paramount importance. For all its size and weight, the API is essentially bloody easy to use; it empowers devs to just get on with creating things without sweating the small stuff too much. It saves them time - and time is the most precious thing any of us have. Allowing them to save it is a powerful thing to be able to give someone. The second you involve a "build step", external dependencies, npm, browserify and all of that, you fundamentally break what made it powerful in the first place. You start to eat away at their time in bigger and bigger chunks.

 

Creating objects doesn't have to be a needlessly complicated process of creating entities and adding all kinds of components to them - a dangerous path I will avoid at all costs for Phaser 3. But at the same time we do need to re-architect the internals carefully, to allow it to carry on growing at the same rate or faster without ending up with a 2MB minified file in a year's time. I believe there are ways to achieve this and we're exploring them carefully.

 

So the chances of there being a Phaser 2 hidden in Phaser 3 are slim at best - but what matters most will be there: the actual heart and developer-first approach of Phaser. These will remain as strong as ever. In terms of timeline I'm not expecting to see Phaser 3 out until Summer 2015, so there is plenty of life left in 2.x yet.

Link to comment
Share on other sites

It would be nice to have an update method that runs in step with the render loop and an update method that runs in step with the physics loop, plus separate loops for the physics engine and the renderer, just like Unity 3D has, for example.

 

That way we could have a fixed timestep for physics, which is much better for it (read http://gafferongames.com/game-physics/fix-your-timestep/), and a variable render loop that lets the game play as smoothly as possible. Having an update method for each loop would let us do stuff that only makes sense to do after the physics engine has advanced one step, and do everything else when the graphics are ready to be drawn again.

It would also let us tailor the game loop frequencies to each game, so physics that do not need to be very accurate could run slower in those games without losing render fps, and the other way around.
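
A minimal sketch of the kind of loop I mean, following the accumulator pattern from the article above (the physicsUpdate/renderUpdate/render hooks are made-up names for illustration, not Phaser API):

var PHYSICS_STEP = 1 / 60;           // fixed physics timestep, in seconds
var accumulator = 0;
var lastTime = performance.now();

function frame(now) {
  accumulator += (now - lastTime) / 1000;
  lastTime = now;

  // Run as many fixed physics steps as needed to catch up with real time.
  while (accumulator >= PHYSICS_STEP) {
    physicsUpdate(PHYSICS_STEP);     // game code that must run once per physics step
    accumulator -= PHYSICS_STEP;
  }

  renderUpdate();                    // game code that runs once per rendered frame
  render();                          // draw as often as the display allows

  requestAnimationFrame(frame);
}

requestAnimationFrame(frame);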

Link to comment
Share on other sites

I hear the "objects as parameters" request quite often. The issue I have with it is that I don't see any real difference between having to remember magic object properties and having to remember parameters. You need to know what they are either way, and if you don't know (or don't have them in code in front of you), you need to check the docs regardless.

 

There are definitely cases where there are too many parameters though.

 

And ES6 is getting closer and closer - many features are already usable today in stable Chrome, and by early 2015 there will be many, many more!

 

Objects have other benefits too. For example, what if a function has a bunch of optional arguments?

Some.method('hello', null, null, null, 'world');

// vs.

Some.method({
  key1: 'hello',
  key2: 'world'
});

Or, in the case that you're revisiting code, the key name could save you a trip to the documentation, assuming it's intuitive. I would also argue there's less room for mix-ups, e.g. putting an argument in the wrong spot.

Link to comment
Share on other sites

Just to nail this one dead while people are still discussing it - after talks with some very smart compiler engineers at Google about the performance implications, I can confirm that there will be absolutely NO "objects as parameter containers" in Phaser 3, in anything that is even remotely hot code. So get used to learning parameters, I'm afraid, because they're here to stay. The only place where they may be permissible is in object construction, but never in object manipulation.

Link to comment
Share on other sites

Some wishes:

 

Support for stereoscopic presentation (like the View-Master) on VR displays.

A modular core and extension system that would allow a build process to omit any code never called.

 

If there is support for traditional game constructs, such as "character" or "level", I would prefer that support to be in the form of modules or extensions. There could be a side-scroller extension, an endless-runner extension, an RPG extension, etc.

Link to comment
Share on other sites

Although you've nailed the idea of passing objects as arguments dead, let me chime in on it, because I just started reading this thread.

 

Yes, there is a small performance hit in doing this, so you might not want to do it in very time-sensitive methods or those that are liable to see lots of looping. Having said that, one of the really nice things about passing objects is that it makes the API more resistant to breakage. It's a very common practice out there in the world, so it may bear some consideration.

 

Whatever the case - in the end, you're the one writing this puppy; do what you wish, and I'll be happy either way.

 

 

ADDENDUM: I just went back and re-read your post, Rich... actually you are pretty much saying what I am saying.

Link to comment
Share on other sites

To explain this (magic objects as parameters) further, here's an excerpt from a conversation @mrale and I had last week about it:

 

"I would avoid configuration objects in very hot places (especially if profile confirms that those places are hot and GCing is visible on the profile too). Allocation is cheap and sometimes can be elided entirely, but I would not rely on that in the performance critical code.

 

Another point here is that if you sometimes do X.foo({ a: 10, b: 20 }) and sometimes X.foo({b: 20, c: null}) then all the code that interprets these configuration objects will go polymorphic and you will pay for this with performance. Each of these {...} configuration objects has a well defined shape - but if they are different it's detrimental for optimizations (where monomorphic is the best)."

 

The performance hit isn't small, I'm afraid (although how much that matters depends on the game type) - it's the difference between being able to compile it at all vs. not doing so. Multiply this across all the classes in Phaser and the issue becomes significant, because I can't be inconsistent and allow only some methods to accept objects and others not.
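
To illustrate the point from the quote above (X.foo is hypothetical): as long as every call site passes objects with the same set of properties, the engine sees one object shape and the receiving code stays monomorphic; vary the shape between calls and it goes polymorphic.

// Monomorphic: every config object has the same shape {a, b}.
X.foo({ a: 10, b: 20 });
X.foo({ a: 5,  b: 7 });

// Polymorphic: differently shaped objects flow through the same code path,
// so it can no longer be specialised for a single shape.
X.foo({ a: 10, b: 20 });
X.foo({ b: 20, c: null });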

 

I do agree, it is a common JavaScript practice. But that doesn't mean it's a good one :)

 

I still think where this could be most beneficial is in object creation. I'll have a good think about it as I reckon there are ways to allow it in suitable places.

Link to comment
Share on other sites

Hi there, we've added this nifty library to our games to support named arguments: https://github.com/dtao/named-args

 

Currently we're only using it when calling #Tween.to, #Input.enableDrag and #Emitter.start.
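
For anyone curious, the basic idea is roughly this - a hand-rolled sketch of the concept rather than the actual named-args API, and the Tween.to parameter order below is from memory, so check the docs before copying it:

// Wrap a positional-argument method so it can be called with an options object.
function withNamedArgs(fn, paramNames) {
  return function (options) {
    var args = paramNames.map(function (name) { return options[name]; });
    return fn.apply(this, args);
  };
}

// Hypothetical usage against Tween.to's long signature; missing keys come
// through as undefined and should fall back to Phaser's defaults.
var tweenTo = withNamedArgs(tween.to.bind(tween),
  ['properties', 'duration', 'ease', 'autoStart', 'delay', 'repeat', 'yoyo']);

tweenTo({ properties: { x: 300 }, duration: 1000, autoStart: true });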

 

In the beginning I was a little afraid of these long-argument methods and the legibility of our code (that's why we added named-args), but I realised that later releases of Phaser have more configuration methods, so we can configure objects even after creation time (for example, #Tween.delay, #Tween.loop and so on).

 

So if you follow the Tween API for the rest of the classes, I wouldn't consider this a priority, because Phaser can offer other ways to be more declarative.

Link to comment
Share on other sites

An easy way to scale the game up WITHOUT antialiasing. This is such a pain to do right now and we have to resort to weird hacks etc.

 

Best case scenario: I would set my game to any size, e.g. 100x100, and set the scale to 2. The resulting HTML5 canvas would be 200x200. The scale would then be easily changeable in the future if I want my game rendered at any other scale. This is important for games with scaled pixel art, and it's a pain to have to export our art already pre-scaled (and not good for bandwidth/file size either!).
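
For reference, this is the kind of thing we have to hand-roll today (a sketch of the usual workaround, assuming a Phaser 2 game instance exposing game.canvas; not a proposed API): render at native size, then stretch the canvas with CSS and ask the browser for nearest-neighbour scaling.

// Display a 100x100 game at 200x200 without smoothing.
var canvas = game.canvas;
var scale = 2;

canvas.style.width  = (canvas.width  * scale) + 'px';
canvas.style.height = (canvas.height * scale) + 'px';

// Nearest-neighbour upscaling; older browsers need vendor-prefixed
// values such as '-moz-crisp-edges' instead of 'pixelated'.
canvas.style.imageRendering = 'pixelated';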

Link to comment
Share on other sites
