mattstyles

Moderators
  • Content Count

    1671
  • Joined

  • Last visited

  • Days Won

    24

mattstyles last won the day on September 12

mattstyles had the most liked content!

About mattstyles

  • Rank
    Advanced Member

Contact Methods

  • Website URL
    github.com/mattstyles
  • Twitter
    personalurban

Profile Information

  • Gender
    Not Telling
  • Location
    UK

Recent Profile Visitors

7278 profile views
  1. I typically use Parcel as my bundler, which also handles an index.html out of the box (it actually requires one), so when that 'watch' task runs it performs an incremental build based on what has changed, and the output folder will typically contain HTML, JS, CSS (if generated), images (if used directly), etc. That output folder becomes the stuff Cordova wants. I haven't fired up Cordova in a long time so I don't know if that auto-reloads things in Cordova; I never got to the stage of using it much, so I only ever fired up Cordova for a 'static' build for smoke-testing.
    As for IAP and other Cordova plugins: the Cordova bridge just exposes global variables, which it tacks on to `window` at init time so they are (typically) available _before_ your code wants to use them. For in-browser use, you need to shim or mock those exposed globals. E.g. for IAP, the plugin probably exposes a function taking a callback or returning a promise, which you can then use to determine what has happened; for in-browser use you just need to shim that function/promise with an expected result. I think that many Cordova plugins (such as the persistent storage stuff) will typically expose browser-based versions (where it makes sense) so you _could_ use those instead. Otherwise a custom HTML file with the shims attached in the head (for a dev build) would suffice, I expect, although shimming the entire API you are using could be tricky depending on the complexity of the plugin. Your workflow of devving in the browser isn't unusual though, so hopefully the plugins you use will expose browser-based shims/mocks/functionality that you can use whilst developing your product.
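    To make the shim idea concrete, here is a minimal sketch. It assumes a hypothetical IAP plugin that hangs a `store` object with a promise-returning `purchase` function off the global; your actual plugin's surface will differ, so check its docs for what you really need to mock.

    ```javascript
    // Minimal sketch of shimming a Cordova plugin global for in-browser dev.
    // The plugin name ('store') and its API shape are assumptions here.
    function installIAPShim(globalObj) {
      // Only install the shim when the real Cordova bridge is absent,
      // i.e. when running in a plain browser during development.
      if (globalObj.store) return globalObj.store;

      globalObj.store = {
        // Pretend every purchase succeeds, mimicking the async bridge.
        purchase(productId) {
          return Promise.resolve({ productId, state: 'approved' });
        }
      };
      return globalObj.store;
    }

    // During dev you would call this before your game code runs:
    const store = installIAPShim(globalThis);
    store.purchase('com.example.coins').then(receipt => {
      console.log('shimmed purchase:', receipt.state);
    });
    ```

    In a Cordova build the real plugin has already attached `store` at init time, so the shim is a no-op and your game code is none the wiser.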
  2. I agree with Bruno and generally take a two-step approach (not that I've done it much, and only to proof-of-concept stage). Use your webpack build during development (using mocks as much as possible; quite a few Cordova plugins I've used actually provide these mocks for browser-based use, or even feature-complete browser versions), then use webpack to package it up, and then use Cordova to take your built files and turn them into whatever Cordova needs to run on a device. If you don't use any newer language features you can omit the webpack build step, but you're using TS and no runtime understands it (yet), so you'll need to transpile anyway.
    On the dev-in-browser thing: yes, this is great, but make sure you regularly check on a device. If you're on a Mac then the simulator is generally good enough (always do smoke-testing on real devices though; more is better, but more is time-consuming, so find a balance). I can't remember what tooling I used to simulate Android devices on my dev machine; I remember there were a couple of options and all worked without too much hassle. Again though, test on real devices, and the more frequently the better, balanced against the time it takes.
    I'm pretty sure you can set up Cordova in some sort of watch mode; again, you can do a two-step approach here where webpack emits built files for Cordova, which is watching them and rebuilds itself when they change. I don't think there is any way to get hot reloading or even incremental builds like this though. Generally speaking, devving in the browser (where things like debugging are also easier and tooling is better for hot reloading, incremental builds, shims, mocks, etc.) will be good enough. Just don't forget the real devices, lest you risk something 'funky' slipping through (iOS Safari is the new IE6; there are some oddities you might run foul of).
  3. That's nice using particles. Looks great. Using a shader would also be an option for water simulation.
  4. Can you check what you have in your package.json to make sure npm is doing what you think it's doing? I.e. make sure it's explicitly setting 5.1.2. After that, check the package.json inside pixi.js inside your node_modules folder for the version there. After that you need to make sure your bundler is not mucking around; as you mention a service worker, it looks like you're already looking at this. _edit_: ooh, have you checked your package-lock.json file? It could well be that the 'caching' issue is in there.
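    A quick way to do that second check from a script rather than digging through folders; a small sketch, run with node from the project root (the package name is just whichever one you're debugging):

    ```javascript
    // Report the version actually installed in node_modules, which may
    // differ from the range declared in your own package.json.
    function installedVersion(pkg) {
      try {
        // Resolves against node_modules, i.e. what your bundler picks up.
        return require(`${pkg}/package.json`).version;
      } catch (err) {
        return null; // not installed (or not resolvable from this folder)
      }
    }

    console.log('pixi.js:', installedVersion('pixi.js'));
    ```

    If this prints something other than 5.1.2 then npm, not your bundler or service worker, is the culprit.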
  5. I've just tidied up speedrun. If you want a quick way to get access to a modern JS toolchain for your demos then it might well be useful for you. I've also created a quick example project (using pixi, but you can use anything) for you to have a play with, available here.

    Getting Started

    The example project shows a pretty clean way to set up a demo project. It requires you to have node (and thus npm, the node package manager) on your development machine, and you'll need git on your machine also if you want to clone the repository. With these prerequisites (and some knowledge of the command line) the following commands will get the project up and running:

    `git clone git@github.com:mattstyles/speedrun-pixi-example.git`
    `cd speedrun-pixi-example`
    `npm install`
    `npm start`

    These commands clone the repository locally, traverse into that repository, install the dependencies from npm that are required to run the project, and then run it. Running the project means starting up a local server to serve your files and creating a development environment with features like transpilation of newer javascript language features, use of ES6 modules, and hot-module reloading (HMR), as well as a few other goodies. All of these goodies are provided by parcel.

    Some more details

    A modern JS toolchain often includes some of the following features:

    * A development server to serve your files (rather than serving from the file:// protocol, which has some restrictions you probably don't want to deal with)
    * Hot-module reloading, i.e. incremental bundling of only the files that have changed, resulting in a faster feedback loop (this is subjective; not everyone, myself included, absolutely agrees that this is a good thing)
    * Access to ES6 or CommonJS module systems
    * Transpilation of newer language features that may or may not be supported (yet) in your development browser of choice

    This project uses `parcel` to do the bulk of the heavy lifting, which gives you a few additional features:

    * Very fast bundling, for optimal feedback loops from changing source code to seeing the result of those changes in the browser
    * Friendly error logging; errors are propagated to the browser rather than remaining solely on the command line
    * Automatic bundling, which allows for automatic inclusion of images and css files
    * Many pre-packed transforms, allowing use of images, css, and json, as well as language supersets like typescript or even separately transpiled languages like wasm, rust, and openGL (amongst other available transforms)

    The advantages are all subjective; there is a clear argument that this additional complexity simply is not worth it. However, access to a module system and newer language features is very attractive, and projects like `parcel` can ease the problems of writing source code using these techniques. Many of us here like to create many small demos and proofs-of-concept, and `speedrun` can help ease the pain of setting up a toolchain for such projects. Many people here also aren't comfortable with setting up a javascript toolchain to get access to things like a module system or newer language features. Speedrun hides this pain so you can get going, but it's still worth (in my view) finding out how this stuff works, and speedrun or the example project won't help here as it very deliberately abstracts this toolchain away. Thankfully there are many resources out there if you want to (and have time to) learn.
  7. Welcome to the forums @esocane If that is your first attempt, I can't wait to see what your tenth will look like, they were great! Well done! (edit: oops, misread, not necessarily your first attempt, they're great games though anyways!) I have some feedback (I played Caves and Oakwoods):

    * The map is great, I missed it in Caves.
    * Level gen felt more structured and less random in Oakwoods; I don't know if this is intentional or just coincidence. Random level gen is ok, but it gets old really quickly for players. There are some good techniques out there to help structure a level so that you can get smarter about placement of enemies and items, etc. I.e. if you have a 'room' mechanic (even if the level looks open it can still have 'rooms') then you can calculate how many rooms must be traversed from the spawn point (stairs etc.) to give you a heuristic on how 'deep' into a level a player is; stick the best goodies in there, with the harder enemies, etc.
    * Why do I have to clear a level before proceeding? Let me make the decision if I want to proceed, rather than go on a bug hunt to find the last monster left. If you really want me to hang around, make me find a static item, a key for example, in order to proceed. This is particularly annoying in your case as defeating enemies gives you nothing, which is a fine mechanic, but not when you are forced to kill them all; i.e. once I've found all the items in a level there is no incentive for me to clear enemies, as each battle only drains my health. Use the first few levels to explain to the player that they cannot return to a previous level, and then let them choose whether they want to explore fully and gather all items, or speed-run it deeper.
    * I really like the simplified armour/weapon system with upgrades, rather than introducing a load of junk items and inventory management (which can be fun, but it's sometimes nice to not have to deal with it). Cardinal Quest 2 does not (really) have inventory management either, and that works well. Maybe you could work out a way that upgrades 'look' like new weapons, to add a bit of variety without changing the upgrade mechanic. It's a nice feeling to get a new bit of kit, even if a +1 would work just as well; a new Crystal Sword of Ultimate Power feels nice, even if it's just a +1 or +2 on your current Dark Mace of Doom.
    * The 'fog', or limited visibility, worked great. 'Peeking' round corners felt good, and 'filling in diagonal walls' was great, although, on that last point, you could 'see' through diagonals. This is fine when your walls are trees (in Oakwoods) but not so great when they are genuine walls (Caves), although I didn't see it as getting in the way, it just isn't quite correct. I can't add a screenshot so I'll try to explain a bit better:

      ~ ~ ~ _
      ~ # # _
      ~ # P _
      _ _ # _
      _ _ ~ _

      ~ is not visible, _ is visible floor, # is wall, P is the player. In this case, the player can peek around the wall (right and up) but should not be able to see diagonally (bottom-left). The top-left wall should be filled; even though it's a diagonal it feels correct to fill it with visible wall. Although, maybe you should leave it blank, as if you fill it immediately the player has a clue about where floor and wall are, even though they cannot actually see it.
    * I found a way to wait by clicking the tile I'm currently on. This is a bit annoying as I was using keyboard navigation, so a keyboard shortcut (is there one? I couldn't find one) to wait would be handy. Waiting is really important in this type of game to help you get enemies where you want them.
    * You should really check the device pixel ratio to make sure you support 2x and even 3x screens. Most screens are now retina/hd. Everything is blurry for me on a 2x (retina) screen.
    * I really liked how you stripped down a 'classic' roguelike to be simpler. I reckon that even in its current form you could wrap it up for mobile devices, throw it on the app/google stores, and make a little bit of $$$ if you wanted to. I think you've done a fantastic job!
    * I liked the style; it feels like a classic roguelike and the simple 2-step animations helped, good use of palette. I don't agree with the font choice though (the comic one, Bangers); it doesn't fit with the old-school graphical style, and didn't fit with the monospace font used in some places.
    * You're logging an array to the console (in Caves at least). Not that it matters much, but you probably shouldn't be.

    I think they feel like really solid games.
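    On the device-pixel-ratio point, the usual fix is to size the canvas backing store at `devicePixelRatio` times the CSS size. A small sketch (the 320x240 size is just an example; the arithmetic is pulled into a pure function):

    ```javascript
    // Sizing a canvas for high-DPI (retina) screens.
    function hidpiSize(cssWidth, cssHeight, dpr) {
      return {
        // The backing store needs to be dpr times larger...
        canvasWidth: Math.round(cssWidth * dpr),
        canvasHeight: Math.round(cssHeight * dpr),
        // ...while CSS keeps the element at its logical size.
        cssWidth,
        cssHeight
      };
    }

    // In the browser you would apply it like this:
    if (typeof document !== 'undefined') {
      const canvas = document.createElement('canvas');
      const dpr = window.devicePixelRatio || 1;
      const size = hidpiSize(320, 240, dpr);
      canvas.width = size.canvasWidth;
      canvas.height = size.canvasHeight;
      canvas.style.width = size.cssWidth + 'px';
      canvas.style.height = size.cssHeight + 'px';
      // Scale the context so existing drawing code keeps using CSS pixels.
      canvas.getContext('2d').scale(dpr, dpr);
    }
    ```

    With the context scaled, the rest of your drawing code does not need to know about the pixel ratio at all.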
  7. https://github.com/mozilla/BrowserQuest This one is really old, but still very very good
  8. Welcome to the forums @MrRitzy This sounds like great fun, but, yes, you are correct, it is quite complicated. But I do not want to discourage you! Nothing worthwhile was ever trivial, so get cracking on it! You're going to need some javascript to make this all tick, so you'll have to make sure you have some skills there. Once you have some JS skills (it sounds like you probably already have at least some, maybe you're already very good) you're going to need to work out how to persist some data. As it's an idle game, people are going to leave the tab/browser hosting your page/s, so you're going to have to track where they were and when, so that you can drop back into the gameplay when the user returns. Local storage will probably get you where you need to go for this, so learn how to use that. If you need more than that then you'll have to get into some sort of backend solution to store data; this is considerably harder, but you may well enjoy that challenge too. Sounds like you have made a real conscious effort to reduce your scope, great work! Keep it small, get it working. Once it's working, keep building on it! And very good luck to you with this project! The community here is very active and very knowledgeable; whenever you hit a roadblock keep posting here (and other places too, we're not the only oasis) and asking questions. Then, as you skill up, make sure you answer questions from others and help them to learn and grow too. Good luck
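    The 'track where they were and when' idea can be sketched like this; the key name and save shape are made up for illustration, and in the browser you would pass `window.localStorage` as `storage`:

    ```javascript
    // How long the player was away, so offline progress can be granted.
    function elapsedSeconds(savedAtMs, nowMs) {
      // Guard against clock weirdness: never report negative elapsed time.
      return Math.max(0, Math.floor((nowMs - savedAtMs) / 1000));
    }

    // Persist the game state together with a timestamp.
    function save(state, storage, nowMs) {
      storage.setItem('idle-save', JSON.stringify({ state, savedAt: nowMs }));
    }

    // Restore the state and report how long the player was gone.
    function load(storage, nowMs) {
      const raw = storage.getItem('idle-save');
      if (!raw) return null; // first visit, nothing saved yet
      const { state, savedAt } = JSON.parse(raw);
      return { state, awaySeconds: elapsedSeconds(savedAt, nowMs) };
    }
    ```

    Call `save` periodically (and on `visibilitychange`/unload), and on startup use `awaySeconds` to work out what the idle systems produced while the tab was closed.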
  9. Rightly or wrongly, most JS applications now use a build system of some sort, which means they have access to a module system, which means that they can understand `require` (which is the CommonJS format, as used by Node; to complicate things, there is now the ES6 module spec as well, using `import` and `export` syntax; build systems will, mostly, understand both). As most JS apps use a build system, most JS libraries (such as those found on npm, which is *not* just for node, it's for JS, which includes node and client and IoT etc.) now _expect_ that build system and the module system it allows. However, many JS libraries also expose a client-only version of their library for those who do not want to use a build system. Even if you are building an entirely client-side application I would still advise setting up a build tool chain.

    Some disadvantages of a build chain:
    * Can be tricky to set up
    * Still not quite as simple as just running code in the browser

    Some advantages:
    * Many cross-browser issues disappear due to transpilation
    * You get access to a module system, helping you to organise your code base
    * Due to the module system you get easier access to external libraries by using a service such as npm to access that code (i.e. you no longer have to reinvent the wheel, and it is considerably safer than other methods of accessing third-party code)

    If you did want to go down the route of using a build system, I think the following steps would get you there, usually without too much trouble, although it might involve quite a lot of learning depending on how experienced you are:

    * Install node on your local machine; this side-installs npm, the node package manager.
    * Install a command line on your machine (if it does not already have one) and spend a few hours understanding how to use it.
    * Create a new folder for your new project.
    * From the command line run `npm init`; this sets up a new project.
    * From the command line run `npm install -g parcel-bundler`
    * Create an `index.html` file and put whatever you like in there
    * From the command line run `parcel index.html`

    At this point 1 of 2 things will happen:
    1) You will have port access issues on _some_ systems. The resolution depends on your machine; google it, there will be lots of solutions.
    2) Your local server will fire up and will serve your index.html.

    If you run into point 1, fix it, and you will be at point 2. This is good. Now cancel that running script (usually ctrl-C/cmd-C from the command line). From the command line run `parcel watch index.html`. Now you have a development build running. Change the code within index.html, switch back to your browser tab, and see those changes instantly propagated in the browser _without_ a page refresh. It is magic. Beyond installing node (and npm), these steps are outlined in more detail at https://parceljs.org/getting_started.html. Parcel is one of many bundlers you may use, but it is the easiest to get going with (and very good I might add, but you have choices if you don't like it). Note that none of this is trivial, and it's up to you whether you think the advantages (which I have barely touched on) outweigh the initial cost of setup. Also note that this isn't necessarily the 'best' way to get started; I'd advise some changes, but you can do those later. This is likely the easiest way to get going with build tooling for JS.
  10. mattstyles

    Game Data

    The answer depends on your use-case. Local storage is a great option for many use-cases. Some use-cases where it definitely is not a good solution:
    * Users playing from multiple devices/browsers (local storage lives in one browser and cannot be shared)
    * Local storage, in theory, can be nuked at any time by the browser (I've never seen or heard of it happening, but browsers make no guarantees)
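    Because of that second caveat (no guarantees from the browser), it can be worth wrapping local storage so the game keeps running even when writes fail, e.g. quota exceeded or storage disabled. A sketch with an in-memory fallback; names are illustrative, and in the browser you would pass `window.localStorage` as `backing`:

    ```javascript
    // Storage wrapper that degrades to memory when the backing store fails.
    function makeStorage(backing) {
      const memory = new Map();
      return {
        set(key, value) {
          try {
            backing.setItem(key, JSON.stringify(value));
          } catch (err) {
            // Quota exceeded or storage disabled: keep it in memory only.
            memory.set(key, JSON.stringify(value));
          }
        },
        get(key) {
          let raw = null;
          try {
            raw = backing.getItem(key);
          } catch (err) { /* ignore and check the memory fallback */ }
          if (raw == null && memory.has(key)) raw = memory.get(key);
          return raw == null ? null : JSON.parse(raw);
        }
      };
    }
    ```

    The memory fallback obviously does not survive a page reload, but it means a failed write degrades the session gracefully instead of throwing mid-game.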
  11. mattstyles

    Game Data

    What's wrong with your current system? If you answer that then you'll know what you need in a _better_ system. Without an answer to the above you're just mucking around. The options for storage that you have are reasonably limited. You can use an in-browser database (modern browsers all support IndexedDB, Edge will soon add support for v2, and WebSQL isn't completely supported), or send your game state off to a remote server to store however you like. None of these are better or worse than local storage; they're just different methods, and what works _best_ for you depends on your use case.
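    If IndexedDB sounds appealing, a minimal save/load sketch looks something like this. The database, store, and key names are illustrative; the promise wrappers are the usual pattern for turning IndexedDB's event-based requests into awaitable calls.

    ```javascript
    // Open (and, on first run, create) the database and its object store.
    function openGameDB() {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open('game', 1);
        req.onupgradeneeded = () => req.result.createObjectStore('saves');
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    // Write the whole game state under a single key.
    async function saveState(state) {
      const db = await openGameDB();
      return new Promise((resolve, reject) => {
        const tx = db.transaction('saves', 'readwrite');
        tx.objectStore('saves').put(state, 'current');
        tx.oncomplete = () => resolve();
        tx.onerror = () => reject(tx.error);
      });
    }

    // Read it back, or null if nothing has been saved yet.
    async function loadState() {
      const db = await openGameDB();
      return new Promise((resolve, reject) => {
        const req = db.transaction('saves').objectStore('saves').get('current');
        req.onsuccess = () => resolve(req.result ?? null);
        req.onerror = () => reject(req.error);
      });
    }
    ```

    Unlike local storage this can hold structured objects and much more data, but as noted above it is not inherently _better_; it is just a different trade-off.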
  12. This is a really tricky question to answer generically as there are so many variables which _could_ dictate whether your project structure works well or not so well. Personally, I'd take any answers with a pinch of salt, i.e. there is no single 'best' structure, like, not at all. Have a think about what problems you are trying to solve and how the processes and concepts you employ for organisation are going to solve them. To the above question, the answer is usually 'I do not know'. This is fine. With the above in mind it is usually better to follow this sort of process: start the project; put stuff anywhere, it does not matter at this stage; get something working; keep going. Now you have a working product and you can start to identify what organisational problems you have and think about how to solve them. Until this point you are largely guessing. If you have created several similar projects before, your guesses are probably good; if not, then they might not be. It is relatively easy to apply some structure and organisation to a project that doesn't really have one; it can be pretty tricky to change organisational structure (can be, depends on many things again; you certainly should not be afraid to change later on if your current system turns out to be not very efficient).

    There are some rules of thumb that might help you though:
    * Small files and folders are easier to manage than larger ones, i.e. small in scope, not necessarily small in lines of code.
    * Decouple things as much as possible; this makes them easier to work with, and makes organisation easier to change. Tight coupling is a nightmare, avoid it at all costs.
    * Uber-objects (and, similarly, uber-projects) are hard to manage; this is really the above concern worded differently. Divide and conquer.
    * UI and logic (rendering and smarts) are good things to separate.
    * Avoid logic duplication; if you end up writing similar logic in multiple places, consider generalising and abstracting it. Utility functions can form a huge part of your codebase and are _usually_ a sign of good organisation.
    * MVC is fine for games. As are other methodologies. Go with what you think makes most sense for you (and your team) and the project.
  13. Not a stupid question! This is the web and JS so there are, of course, multiple ways of skinning this cat. https://codepen.io/personalurban/pen/dBQQaK here is one way I knocked up. It's a canvas element on the page, with the spinner element also in the DOM. The order of the DOM dictates rendering order, but you could apply z-index via CSS if your DOM structure were different. The CSS is copy-pasted from https://projects.lukehaas.me/css-loaders/ after a brief google search for CSS spinners; there must be several thousand such sites. I've used a couple of setTimeouts, down at the bottom of the JS file, to control the loading spinner. After a set amount of time I've applied a class to fade it out, then, using a different timeout, I've removed it from the DOM. You don't need the fade if you don't want it. You could also use transition event handlers to get an event when the fade has finished, but they're unreliable cross-browser and it's easier to just know the length of the transition and use a timeout to deal with the next action you want (in this case, removing the element). Hope that helps.
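    The two-timeout approach described above can be sketched like so; the class name and durations are assumptions, and the scheduler is injectable purely to make the timing logic easy to test:

    ```javascript
    // One timer starts the CSS fade, a second removes the node once the
    // transition duration has elapsed.
    function fadeOutThenRemove(el, fadeAfterMs, fadeDurationMs, schedule = setTimeout) {
      // Start the CSS transition, e.g.
      // .fade-out { opacity: 0; transition: opacity 0.3s; }
      schedule(() => el.classList.add('fade-out'), fadeAfterMs);
      // Remove the element once we *know* the transition has finished --
      // simpler cross-browser than listening for transition events.
      schedule(() => el.remove(), fadeAfterMs + fadeDurationMs);
    }

    // Browser usage:
    // fadeOutThenRemove(document.querySelector('.spinner'), 2000, 300);
    ```

    The one thing to keep in sync by hand is the `fadeDurationMs` argument and the transition duration in your CSS.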
  14. Have a look at Pixi, it just helps manage drawing to the canvas really quickly. Usage inside React is non-trivial, but there is plenty of help out there. You might find Inler's React-Pixi module helpful, it doesn't officially support v5 of Pixi yet but I've been using it (so far) without issue. You have choices here, try searching npm/github for react-pixi and you'll come across a few implementations. If you really like getting your hands dirty there are a few projects like regl that are fun. Probably not quite what you're after though. You could just go the route of having an uncontrolled React component with the canvas renderer inside, again, try searching if you need help working this out. It's been done and written about a number of times for a few different front-end rendering libraries and with a few different canvas rendering libs. There are options other than Pixi too, try searching for canvas rendering libs. It's fun to play with the canvas API but it's quicker to let someone else create a wrapper around it, i.e. Pixi etc.