TheHermit

Members
  • Content Count
    39
  • Joined

  • Last visited

About TheHermit

  • Rank
    Advanced Member

Contact Methods

  • Website URL
    http://www.urbanhermitgames.com
  • Twitter
    urbanhermitgame

Profile Information

  • Gender
    Not Telling
  • Interests
    Physics, game development, tabletop RPGs, and Go.
  1. Well, I took a shot at this for Fight Magic Run this week. I used JSONP with CGI scripts server-side to manage the data. So basically, every time I needed to send information I issued an HTTP GET, and I issued another HTTP GET every second to retrieve state from the server (roughly like the sketch below). About 75% of the 'multiplayer' work was setting up the environment so users could actually decide who to play, and only 25% went to maintaining the actual game sessions. For even a simple board game, I ended up with maybe 10 separate server-side scripts for the various tasks - submit a move, check game status, check user status, challenge a user, cancel a challenge, send an 'I'm still online' ping, etc. This method left a lot to be desired: if I had instead put some kind of centralized layer behind the scripts to abstract the common tasks (for instance, 'relay data to user'), it would have prevented them from becoming subtly inconsistent with each other during edits, which happened a few times. I'd probably do it differently next time. For real-time I would have needed WebSockets - a server that basically sits there, accepts incoming connections, and relays whatever needs relaying. One thing I haven't been able to figure out, though, is the monetization side of multiplayer. That is to say, let's say you make such a game and it now relies on your server: when you license that to sponsors, how is that going to be received? Furthermore, suppose you don't want to end up committed to acting as the server for sponsors' player bases - I don't know how open sponsors would be to 'and you also have to run this server-side stuff and accept connections on this or that port'.
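     A minimal sketch of the polling approach, in case it's useful to anyone - the endpoint URL, the response shape, and applyServerState() are all hypothetical stand-ins for whatever your own CGI scripts expose:

         // Fire-and-forget JSONP request: inject a <script> tag and let the
         // server wrap its JSON response in the named callback.
         function jsonp(url, callback) {
           var name = 'cb' + Date.now() + Math.floor(Math.random() * 1000);
           window[name] = function (data) {
             delete window[name];
             script.parentNode.removeChild(script);
             callback(data);
           };
           var script = document.createElement('script');
           script.src = url + (url.indexOf('?') >= 0 ? '&' : '?') + 'callback=' + name;
           document.head.appendChild(script);
         }

         // Poll the server once a second for the current game state.
         setInterval(function () {
           jsonp('/cgi-bin/game_state.cgi?game=42', function (state) {
             applyServerState(state); // hypothetical game-side handler
           });
         }, 1000);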
  2. Alright, done! A 2-player topic and a 2-player team this time. Keanen Wendler-Shaw and I made a multiplayer strategy game called 'Cascade'. It really does require two (human) players, so bring a friend! There's a lobby, but I won't be there for at least 8 hours...
  3. Yes, this is all true, but I'm a masochist. Gave up on trying to do real-time, though. I'm doing a sort of strategy game with player matchups and a lobby, so hopefully I get to actually work on the game, and not just the infrastructure, before the weekend is up.
  4. I'm trying to sell an HTML5 game to the players and did get a couple of 'why would I pay for this web game when I can go on Steam and buy three games on sale instead' kinds of responses; I think it only really worked because I bundled the full game in Node-Webkit so it doesn't look like a web game. It's kind of working, but I probably would have made more money making a much shorter game to sell directly to portal-site sponsors. The players expect free, quality, etc., but the publishers probably have some awareness that if they offer too little money, developers won't bother to take the time to make polished things, since there'd be too little return on their time. I would guess that short and sweet is probably best for selling to sponsors, since their players are going to be hopping from game to game (and in some sense, that's actually better for them than someone playing one game for 10 hours - more ads served, etc.).
  5. This theme is brutal. Time to learn how to do synchronous multiplayer over HTML5!
  6. Thanks for the comments, I'd be happy to do a post-mortem later on.
  7. We've just released our first commercial game, Travelogue. It had a kind of break-neck 3-month development schedule - I basically wanted to test the waters before deciding to focus on game-making as a career, and before my current job ended and I had to start sending out applications. All three of us were basically doing this on the side of full-time jobs. Here's our trailer; the awesomely over-the-top voice-over is thanks to Whiskeyninja. There's also a browser-based demo you can play. We're selling the game both at our site and on itch.io. There's quite a bit I would have done differently in retrospect - it's a pretty niche game for a first offering, and the interface could stand to be much more polished and dynamic. The web demo has also been a source of a lot of trouble, with sound incompatibilities under IE, then the fix for those breaking Chromium, then the fix for that breaking Opera, and so on. I really need to find 'the right way' to do sound in HTML5/JS, because this has been ridiculous - doubling the download size just to include MP3 and Ogg versions for different browsers meant I could barely fit the demo onto Kongregate (something like the sketch below would at least let each browser fetch only one format). The other big risk with this project is that it's basically a JavaScript/HTML5 game, but we're trying to sell it at a $10 price point. It was pretty clear that, regardless of what the game was, we'd be fighting the 'it's a browser game, it should be free' expectation, so we packaged the game with Node-Webkit and turned it into a native application for Windows and Linux, installer and all. Even the demo being browser-based seems to trigger this reaction somewhat, though I think it's probably worth the traffic the demo generates for us. So all in all this has been a learning experience that reminds me of that Go proverb: 'lose your first 100 games as quickly as possible'.
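     A minimal sketch of that format check, using the standard canPlayType probe (the asset path is hypothetical):

         // canPlayType returns '', 'maybe', or 'probably'.
         function pickAudioExtension() {
           var probe = document.createElement('audio');
           if (!probe.canPlayType) return 'mp3'; // very old browsers
           if (probe.canPlayType('audio/ogg; codecs="vorbis"')) return 'ogg';
           return 'mp3';
         }

         var music = new Audio('sounds/theme.' + pickAudioExtension());
         music.play();

     This doesn't shrink a bundled download like a Kongregate zip, where both files still have to ship, but it does keep the browser demo from fetching formats it can't play.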
  8. You can see my games at http://www.urbanhermitgames.com/?page_id=96 - if you're interested, please contact me at thehermit@urbanhermitgames.com
  9. So for WebGL, I know there's a way to render directly to a texture for this kind of thing, so you don't have to display it. I've seen this in Three.js, for example in the glow tutorial. It's used for a lot of post-processing effects like blurs and such. You might need to write your own shaders to do it, though (sketch below).
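     Roughly, the render-to-texture part looks like this in the three.js versions of this era - treat it as a sketch, since the exact API has shifted between releases (newer versions want renderer.setRenderTarget() and target.texture):

         // Draw an offscreen scene into a render target instead of the screen.
         var target = new THREE.WebGLRenderTarget(512, 512, {
           minFilter: THREE.LinearFilter,
           magFilter: THREE.LinearFilter
         });
         renderer.render(offscreenScene, offscreenCamera, target);

         // Then use the result as an input texture for a later pass.
         var material = new THREE.MeshBasicMaterial({ map: target });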
  10. That's good to know! Though it's still a problem if things are slow by default for people. My game Heat Sink had this issue. I used one canvas buffer to store pixel data for collision (since I didn't want to do collision tests against the several thousand dust particles that accumulate during the game), another canvas buffer for picking, and another for a visual effect where the motherboard gets dusty over time (the general pattern is sketched below). I was getting panned on Kongregate and Newgrounds for the game being slow even though in my own testing it ran fine - until I tried it on Firefox. So it's almost better to test things without this fix in place, to make sure that people who don't know about it aren't going to just move on because something takes 2 minutes for them instead of 4 seconds. I hope this particular fix propagates and becomes standard in browsers quickly for that reason. Edit: If this is a Linux-only bug, then I doubt it was responsible for many of the players having trouble. Anyone know if this affects IE as well?
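     The general pattern I used, for anyone curious - the expensive call is getImageData, so grab the pixel array once when the buffer changes rather than once per query (sizes and drawing are placeholders):

         // Offscreen canvas as a collision buffer.
         var buffer = document.createElement('canvas');
         buffer.width = 800;
         buffer.height = 600;
         var bctx = buffer.getContext('2d');

         // ... draw the dust/obstacles into bctx here ...

         // One readback, then cheap array lookups for every collision test.
         var pixels = bctx.getImageData(0, 0, buffer.width, buffer.height).data;

         function isSolid(x, y) {
           // 4 bytes per pixel (RGBA); test the alpha channel.
           return pixels[(y * buffer.width + x) * 4 + 3] > 0;
         }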
  11. Does Pixi.js use WebGL? If so, you could write a shader to handle most of the collision tests on the GPU. The idea is that you assign each group of sprites that don't collide with each other (a 'category') to its own channel. Perhaps the player and their bullets are the alpha channel, enemy bullets are green and blue, and the enemies themselves are red. This would limit you to 255 enemies, 65535 enemy bullets, and 254 player bullets at once - probably enough. You then render the sprites in three passes to a hidden buffer, using a unique color index for each sprite and only modifying that sprite's particular channel; when two sprites of the same category overlap, the new value replaces the old completely. When you're done, you can scan through the pixels of the resulting buffer to see where collisions occurred - any pixel that has non-zero values in two different collision channels (see the sketch below). The values of the channels then tell you which objects collided. Oh, and make sure not to use interpolation, or you'll get really weird results. For speed, you can gracefully trade away accuracy by rendering the hidden buffer at a lower resolution than your game - every-two-pixels-perfect collision is probably not noticeably different from pixel-perfect collision, for example, but will be 4 times faster.
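     The CPU-side scan at the end would look something like this - a sketch assuming a WebGL context gl, a hidden framebuffer of size w x h already rendered as described, and a hypothetical onCollision() handler:

         // Read the hidden buffer back and look for pixels where two
         // different collision channels are simultaneously non-zero.
         var px = new Uint8Array(w * h * 4);
         gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, px);

         for (var i = 0; i < px.length; i += 4) {
           var enemy  = px[i];                       // red: enemy index
           var bullet = px[i + 1] * 256 + px[i + 2]; // green+blue: enemy bullet index
           var player = px[i + 3];                   // alpha: player and player bullets
           if (player && enemy)  onCollision('player', player, 'enemy', enemy);
           if (player && bullet) onCollision('player', player, 'enemyBullet', bullet);
         }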
  12. That was a lot of fun, but now I'm this weird mix of buzzed and tired. Here's my entry: Rebound Recon, and its page on Ludum Dare.
  13. We're talking about different effects. I'm not talking about bloom or anything like that. All I wanted was, for any given model, a texture channel that is displayed independent of illumination and added to the texture channels that are illumination-based. I know how to implement this, but I'm pointing out that the way shaders work makes that kind of simple addition annoying - you have to add it to each shader that you want to support it (the addition itself is one line; see the snippet below). So it would be useful to have a framework that naturally handles combining effects for you without needing multiple passes. Multiple passes only make sense when you're performing some non-local task on a subset of the pixels - a blur that only hits the glowy bits, a bloom shader, etc. But it's kind of silly that if I want to, e.g., take the output of a Phong shader and invert it, I have to make a full copy of the Phong shader's code and put the inversion in there (also ensuring that if the original shader is ever updated, I don't immediately benefit from that update). That's basically what I'm getting at - software that lets you treat shaders less like independent atomic things and more like modules that can be linked together at the code level.
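     Concretely, the 'simple addition' is just one extra term at the end of the lit color - something like this fragment (shader source as a JS string; emissiveMap, vUv, and litColor are assumed to exist in the surrounding shader):

         var emissiveChunk = [
           'vec3 emissive = texture2D(emissiveMap, vUv).rgb;',
           // litColor stands for whatever the Phong computation produced above:
           'gl_FragColor = vec4(litColor + emissive, 1.0);'
         ].join('\n');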
  14. That is in fact the tutorial I eventually found and followed (it was a bit out of date, too, since all of the compositing stuff has been moved into separate .js files in more recent three.js versions). Note that it doesn't just add an emission layer to the textures, though: it has to create a duplicate of the geometry, render it to a buffer, and then composite the buffer with the static model. Basically, my problem was that in order to do something like fragColor = threeJSPhongMaterial + myStuff, I really had to either re-implement the Phong material or do the geometry duplication, multiple render passes, etc. Because of the way shaders are loaded in, you can't really do a simple 'take this thing and add/multiply/whatever it without knowing what this thing actually is', so you either do the compositing trick or make a copy of the Phong material and add your incremental adjustment. My thought is that, ideally, the combined shader should be created dynamically by software that lets you compose shaders by wiring their outputs together. The model is that each 'shader plugin', if you will, takes in the usual stuff passed from a fairly thorough vertex shader - UV coordinates, textures, position data, light source coordinates and colors, etc. - as well as the outputs of other fragment-shader plugins. The software then sticks together all the code used by this particular shader, along with all the inline composition effects, and gives you a single .fs file you can use (a toy sketch of the idea is below). The shader plugins themselves would have their code exposed, so you can make code-level tweaks and new effects.
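     A toy version of what I mean, just to make it concrete - every name here is hypothetical, and a real tool would also need to manage uniforms, varyings, and so on:

         // Each 'plugin' contributes one GLSL function; the compositor chains
         // their outputs into a single fragment shader string.
         var plugins = [
           { name: 'phongLike', glsl: 'vec4 phongLike(vec4 c) { return c * lightFactor; }' },
           { name: 'invert',    glsl: 'vec4 invert(vec4 c) { return vec4(1.0 - c.rgb, c.a); }' }
         ];

         function composeFragmentShader(plugins) {
           var decls = plugins.map(function (p) { return p.glsl; }).join('\n');
           // Builds the nested call: invert(phongLike(texture2D(map, vUv)))
           var call = plugins.reduce(function (expr, p) {
             return p.name + '(' + expr + ')';
           }, 'texture2D(map, vUv)');
           return [
             'uniform sampler2D map;',
             'uniform float lightFactor;',
             'varying vec2 vUv;',
             decls,
             'void main() { gl_FragColor = ' + call + '; }'
           ].join('\n');
         }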
  15. I tend to have the same problem when it comes to shaders. three.js has a 'general purpose material' shader that has most of the usual things one might want, but it was missing the ability to have an emission map (or 'glow' map). Because of how it works, it seems like you can't just 'add a bit of code in' without finding their shader, pulling it out, and redoing it all yourself. The advice I found online was basically to use console.log to dump their shader to the console and then go grab that text (something like the sketch below). It feels like a lot of shader operations should be things you can add at the code level without intermediate buffers, and that kind of modular approach would make it a lot easier for people to get into. I could imagine that some kind of node-based shader builder, where you put together individual bits of shaders to make the final effect, would be a really valuable tool - sort of like how Blender's material editor works. Honestly, such a thing probably exists already; I just don't know what it's called.
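     For reference, the dump-and-patch workaround looks roughly like this in the three.js versions I was using - details (the uniform format, whether vUv is available) vary by version, and the emissive texture is a placeholder:

         // Pull the stock Phong shader out of THREE.ShaderLib and inspect it.
         var phong = THREE.ShaderLib.phong;
         console.log(phong.fragmentShader);

         // Splice an emissive-map lookup in just before the final brace.
         var src = phong.fragmentShader;
         var idx = src.lastIndexOf('}');
         var patched = src.slice(0, idx) +
           '  gl_FragColor.rgb += texture2D(emissiveMap, vUv).rgb;\n' +
           src.slice(idx);

         var uniforms = THREE.UniformsUtils.clone(phong.uniforms);
         uniforms.emissiveMap = { type: 't', value: myEmissiveTexture }; // placeholder texture

         var material = new THREE.ShaderMaterial({
           uniforms: uniforms,
           vertexShader: phong.vertexShader,
           fragmentShader: 'uniform sampler2D emissiveMap;\n' + patched,
           lights: true
         });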