
Socket.io client does not emit based on available memory/cpu in tab?


ForgeableSum

I've discovered an odd quirk using socket.io with Phaser. I have a simple setup: two clients on separate computers and a node.js server on one of the clients. The tick rate (time between each message) is 100ms, i.e. something perfectly reasonable. Both clients communicate with the server via socket.emit and the server sends the message back to both clients. Works great. Well, here's what's odd ...

This is for a WebGL web game, mind you, so there is a lot happening in the tab. When things really start to cook in the game and the tab's thread starts to use more system resources, the socket ceases to emit to the server. The game keeps running fine, perhaps with a drop in FPS, but no serious crash or anything like that. What's more, I can tell the socket.emit code is being executed (or at least called) every 100ms just like normal, because I put a console.log directly before it. But despite that, the server never gets the message sent from the client with socket.emit. I know this because I'm console.logging every communication that makes it to the server...
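For reference, here's a minimal sketch of the setup as I described it (the 'state' event name and the payload are placeholders, not my actual code):

// client.js -- emit every 100ms; the console.log keeps firing under load,
// but the server stops receiving.
var socket = io('http://server-host:3000'); // assumed address

setInterval(function () {
  console.log('emitting');          // still logs when the tab is under load
  socket.emit('state', gameState);  // ...yet nothing arrives at the server
}, 100);

socket.on('state', function (data) {
  // the server echoes every message back to both clients
});

// server.js -- the node.js relay
var io = require('socket.io')(3000);
io.on('connection', function (socket) {
  socket.on('state', function (data) {
    console.log('received');  // stops logging once the client tab heats up
    io.emit('state', data);   // send back to both clients
  });
});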

I can literally increase the size of the window (which sucks up more resources, because the WebGL renderer has to do more work) and the messages will cease to be received on the server when the window gets too big. Bizarre! Why would using more system resources cause socket.emit messages to not reach the server? And if it did work like that, why wouldn't I receive a message/error/warning on the client or the server? I have difficulty believing socket.emit just decides not to work based on how much system resource is available.

On the clients I'm using the latest Chrome, Phaser (pixi.js for the renderer) and Mac OS X. What the hell is going on??? Many thanks.


I've discovered that setInterval and setTimeout aren't firing when this happens. From Stack Overflow:

Quote

This probably means something is hogging the single JS thread and not letting the event queue get serviced. It's not surprising that networking calls don't work in that circumstance. There is probably a way to tell your gaming engine to allow some cycles for the event queue.

So, it seems setTimeout and setInterval literally cease to fire once Phaser sucks up enough memory and CPU, i.e. that makes socket.io (which relies on the event queue) completely incompatible with Phaser. Anyone have any ideas on how to get around this?
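To convince myself this is the mechanism, here's a tiny demo of the starvation (nothing Phaser-specific, just a synchronous loop hogging the thread, paste it into a browser console):

var last = performance.now();
setInterval(function () {
  var now = performance.now();
  console.log('tick after', Math.round(now - last), 'ms'); // ~100 when idle
  last = now;
}, 100);

// simulate one very heavy frame: block the single JS thread for ~2 seconds
setTimeout(function () {
  var end = performance.now() + 2000;
  while (performance.now() < end) { /* busy-wait; nothing else can run */ }
}, 1000);

// the next "tick after" log reports ~2000ms, not 100ms: the timer didn't
// fire while the thread was busy, it was only queued and ran late.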

 

The only thing I can think of is to use a web worker and execute the network calls on a separate thread. Of course, I would need to send messages to the web worker containing all the data I need to send to the server (approximately 2200 characters of string every 100 milliseconds). I would send those messages using a Phaser.Timer (RAF), and socket.io would need to be running in the separate thread as well... idk... Is this practical?
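Something like this is what I have in mind (file names and the 'state' event are hypothetical; I'm assuming socket.io's client can run inside a worker, since WebSocket is available there, and that the page is served from the same origin as the socket.io server):

// net-worker.js (hypothetical name) -- owns the socket on its own thread,
// so heavy rendering on the main thread can't starve its timers/callbacks.
importScripts('/socket.io/socket.io.js'); // the client script socket.io serves

var socket = io('http://server-host:3000'); // assumed server address

// relay whatever the game posts to us straight to the server
self.onmessage = function (e) {
  socket.emit('state', e.data);
};

// relay server broadcasts back to the game
socket.on('state', function (data) {
  self.postMessage(data);
});

// main.js -- hand the ~2200-character payload to the worker each tick
var net = new Worker('net-worker.js');
net.onmessage = function (e) {
  applyServerState(e.data); // placeholder for however you consume updates
};

// called from the Phaser.Timer every ~100ms:
function sendState(serialized) {
  net.postMessage(serialized);
}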


Okay, I think it's not that setTimeout ceases to work; rather, the setTimeout clock gets delayed when there is very, very heavy scripting going on. I've had this issue before.

There is a cool article from the Gmail for mobile team on how heavy load causes delays in timers firing: http://googlecode.blogspot.de/2009/07/gmail-for-mobile-html5-series-using.html

I think your best way around this would be to make a few variables, i.e. freq, lastSent and accDev.

Then do the check on the internal RAF or, better, in Phaser's update loop (which is triggered by the JS RAF but will be more true to your game; if you invoke a separate RAF you'll lose some performance there). You want a quick check of the current time against the time of the last data burst, the desired frequency, and whether you're within an acceptable deviation. If you want a freq of around 100ms, then provided your FPS doesn't drop below 10fps, you should still get a data burst roughly every 100ms or so. Where the acceptable deviation is nice is under heavy load: you send the data packet at 99ms instead of waiting for the next RAF, which may land at 160ms. This will reduce network lag when the person's WebGL is struggling.
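A rough sketch of what I mean (the numbers, the 'state' event and the exact Phaser wiring are just for illustration):

var freq = 100;    // desired ms between data bursts
var accDev = 15;   // acceptable deviation: fire a touch early rather than
                   // wait a whole extra frame under load
var lastSent = 0;

function update() { // your Phaser state's update, driven by RAF
  var now = game.time.now;
  if (now - lastSent >= freq - accDev) {
    socket.emit('state', gameState); // placeholder event/payload
    lastSent = now;
  }
}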

You could also pair accDev up with a reject module: if there has been too long a delay since the last data burst, perform some validation check to make sure no black magic is happening client side!
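A hypothetical server-side companion to that idea (validateSuspiciousUpdate is just a placeholder for whatever check you run):

var MAX_GAP = 600;   // freq plus a generous allowance; tune to taste
var lastSeen = {};

io.on('connection', function (socket) {
  socket.on('state', function (data) {
    var now = Date.now();
    var gap = now - (lastSeen[socket.id] || now);
    lastSeen[socket.id] = now;
    if (gap > MAX_GAP) {
      // too long since the last burst -- validate before trusting it
      validateSuspiciousUpdate(socket, data); // placeholder hook
      return;
    }
    io.emit('state', data);
  });
});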

Hope this helps!

Nick

