[NodeJS Networking] $5.00 VPSs vs 1 Dedicated

I first want to say thanks to @Rezoner for his insight into the networking infrastructure behind wilds.io.

He spreads game instances across different physical servers, each using one core, which works great with Node.js since a single Node process only runs on one core anyway.
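For context, that one-instance-per-core approach is only a few lines on a multi-core box too. A minimal sketch using Node's built-in cluster module (the per-worker game setup is a placeholder):

var cluster = require('cluster');
var numCores = require('os').cpus().length;

if (cluster.isMaster) {
  for (var i = 0; i < numCores; i++) {
    cluster.fork(); // one game instance per core
  }
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died, respawning');
    cluster.fork(); // crude process-level redundancy
  });
} else {
  // Each worker would start its own game loop + WebSocket server here
  console.log('game instance up in worker ' + process.pid);
}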

I was thinking: since these virtual private servers are so often oversold, would buying one beautiful, fat dedicated server be better?

Let's take a look at an E3-1231 v3 dedicated box: 4 cores, 8 threads.

Pros: 

  • Can run 4 node instances (possibly more)
  • If paying for DDoS protection, you only need to pay once
  • Less worry than making sure 4 separate physical servers stay "online"
  • Dedicated box, all resources are yours; probably performs far better networking- and CPU-wise

Cons:

  • The cheapest I found was around $79 a month
  • Can only be in one location (you could spread servers out, but at extra cost)

 

Let's take a look at buying cheap virtual private servers. In this case, we'll use the $3.49 OVH box: 

model name      : Intel Xeon E312xx (Sandy Bridge)
stepping        : 1
cpu MHz         : 2394.472
cache size      : 4096 KB

The catch: you are allocated 1 "virtual core", so single-threaded performance is all you get, and you're sharing oversold hardware with other tenants.

Pros: 

  • Dirt cheap [$3.49]
  • Can buy at least 20 of them for the price of one dedicated box [$79 / $3.49 ≈ 22]
  • @Rezoner has stated he can house around 100-140, and even up to 200, players per node using the new uWS library. He's using a DigitalOcean droplet, though, which I think has a better CPU, so let's just say 100 per node; that's around 2,000 players if you buy 20.
  • Can offer game servers at different locations. Europe, Canada, anywhere...
  • Horizontal scaling would be incredibly easy: just buy a random VPS anywhere, have it register with your central server, and boom, players are ready to join (a minimal sketch of this registration step follows the cons below).

Cons:

  • Performance
  • If buying a DDoS-protected IP, you will have to pay for each physical server (BuyVM charges $3 per box), which might end up costing more per month. You could do GRE tunneling instead, but tunneling to 20 boxes would be kind of crazy...
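On the horizontal-scaling pro above: the register-with-the-central-server step really is only a few lines. A rough sketch, assuming a hypothetical /register endpoint on the central server and a heartbeat so dead boxes can be dropped from the pool (all hostnames are placeholders):

// register.js -- runs on each game VPS
var http = require('http');

// Stub: wire this to your actual game state
function currentPlayerCount() { return 0; }

function register() {
  var payload = JSON.stringify({
    host: 'game-eu-1.example.com', // this box's public address (placeholder)
    port: 8080,
    players: currentPlayerCount()
  });
  var req = http.request({
    hostname: 'central.example.com', // your central server (placeholder)
    path: '/register',               // hypothetical endpoint
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(payload)
    }
  });
  req.on('error', function (err) { console.error('register failed: ' + err.message); });
  req.end(payload);
}

// Heartbeat every 10s; if the central server stops hearing from a box, it drops it
setInterval(register, 10000);
register();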

 

...and I cannot think of anything else right now. It seems to me that the cheap-VPS way is far better, in terms of cost-to-performance ratio as well. I guess the question is how many active players you can achieve on an E3-1231 v3 dedicated box with Node.js, using a 20 Hz game tick. What are everyone's thoughts? If I missed any pros or cons, feel free to let me know so I can add them.
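On the 20 Hz question: 20 Hz means a 50 ms budget per tick, so one way to answer it empirically is to load-test a single instance and watch for tick overruns. A minimal sketch of that measurement, with the update function standing in for a real simulation step:

var TICK_MS = 1000 / 20; // 20 Hz => 50 ms budget per tick

function update() {
  // stand-in for the real simulation step (physics, broadcasting state, etc.)
}

setInterval(function () {
  var start = Date.now();
  update();
  var elapsed = Date.now() - start;
  if (elapsed > TICK_MS) {
    // The tick blew its budget: at this player count, the core is saturated
    console.warn('tick took ' + elapsed + 'ms (budget ' + TICK_MS + 'ms)');
  }
}, TICK_MS);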


It's generally the way things are going (or have gone, depending on who you ask).

The biggest pro of large-scale distributed deployments is redundancy against machine failure (as opposed to merely redundancy against process failure). Of course, the con is that it's harder to manage a distributed system.

IMO, you need a solid deployment process to run a distributed system, preferably one that sandboxes processes: for each new server instance you have to install everything you need, so your deployment process should be automated to handle this. Docker/rkt/LXC are great for this sort of thing.

You also need to manage and respond to process/machine failures or changing network conditions. You'd do a little of this balancing yourself in either scenario, of course, but with distributed you'd have to deal with a machine going down and restart it, rather than 'merely' daemonizing a process. If your big solo box goes, you're jiggered, though; thankfully this is fairly rare (you might want to check how rare), but it will always result in service and data disruption.
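On catching a machine going down: even a crude external watchdog covers a lot. A sketch that polls a hypothetical /health endpoint on each box and flags dead ones (the IPs, port, and endpoint are placeholders; alerting/rebooting is left as a stub):

var http = require('http');

var boxes = ['10.0.0.1', '10.0.0.2', '10.0.0.3']; // placeholder game-server IPs

function checkBox(host) {
  var req = http.get({ host: host, port: 8080, path: '/health' }, function (res) {
    if (res.statusCode !== 200) console.error(host + ' unhealthy: HTTP ' + res.statusCode);
    res.resume(); // drain the response so the socket is freed
  });
  req.setTimeout(3000, function () { req.abort(); });
  req.on('error', function (err) {
    // here you'd alert yourself, or hit your provider's API to reboot the box
    console.error(host + ' looks down: ' + err.message);
  });
}

setInterval(function () { boxes.forEach(checkBox); }, 15000); // poll every 15s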

If service and data disruption is a concern to you then there is no choice, distributed is the way.


Alternatively, you could take the idea of cheap VPS even further by using Amazon's Lambda service instead. Lambda lets you run your code on Node.js without buying servers. In fact, servers are completely abstracted away, so there are no concerns about server admin, scaling, machine failures, etc. It is very cheap while your traffic is low, because you only pay for the machine time you actually use. I use Lambda on a non-game project and it works quite well. However, there could be latency issues with using Lambda for fast-paced multiplayer games, so you would want to check into that depending on your situation.


15 hours ago, mattstyles said:

It's generally the way things are going (or have gone, depending on who you ask). [...]

Your redundancy point is great.

Currently, this is what I am doing: I use nginx to load-balance 3-4 Node instances that idle thousands of WebSocket connections. This acts as a "central" server, I guess, handling login/logout notifications, private messages, chat, etc. It also acts as a manual load balancer, sending each player off to the game-instance server with the fewest connections. I keep track of all this through Redis, storing how many players are online, etc.
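For what it's worth, that "fewest connections" pick is cheap if the counts live in a Redis sorted set. A rough sketch using the node redis client (the key name and instance IDs are made up):

var redis = require('redis');
var client = redis.createClient();

// Each game instance periodically reports its player count into a sorted set:
function reportCount(instance, playerCount) {
  client.zadd('game:instances', playerCount, instance);
}

// The central server sends a joining player to the least-loaded instance.
// ZRANGE returns members ordered by score ascending, so index 0 is the emptiest box.
function pickInstance(callback) {
  client.zrange('game:instances', 0, 0, function (err, members) {
    if (err || !members.length) return callback(err || new Error('no instances registered'));
    callback(null, members[0]); // e.g. 'game-eu-1.example.com:8080'
  });
}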


 

If you see anything amiss with this, please let me know :)

 

3 hours ago, BobF said:

Alternatively, you could take the idea of cheap VPS even further by using Amazon's Lambda service instead. [...]

I might give this a try on the trial! Seems pretty cool that they can do that with Node.

Thanks for the responses, guys. I think distributed is what I will end up doing :D


I don't see anything wrong with your setup, and unless you see faults or places for improvement, you shouldn't even consider the pain of migrating to a new system.

Here are some more projects/reading if you want to learn a bit more about how large deploys handle distributed systems:

  • Kubernetes
  • Mesosphere
  • Docker Compose (Docker has, naturally, really started pushing its additional services; there's all sorts going on)
  • CoreOS (not a PaaS, you'd need the software on top, but the OS is set up for it)
  • Consul (for service discovery; it's a piece of the puzzle rather than the whole system)

Of course, you could try to roll your own monitoring across your cluster of services, but it's tricky. I'm writing my own monitoring service in Rust at the moment, which is fun and a great learning process, but rolling a learning project into a production environment requires a great deal of effort.

Generally speaking, any time you have a 'central' anything, you have a single point of failure; a real fault-tolerant setup eliminates all single points of failure. Common culprits are load balancers, backing DBs for load balancing or session storage, and putting too much onto one physical (or virtual) machine.

My own solution, which I'm slowly converging on, builds upon CoreOS. (Take the following with a pinch of salt; it's very much a work in progress.)

CoreOS has networking and distribution as its primary goal. It uses Etcd for service discovery (a distributed key/value store, similar in scope to Consul et al.), Fleetd for managing tasks (responding to failures, moving instances, creating instances, monitoring, etc.), and Flannel for networking. I plug Flannel into Docker; the latest Docker build has greatly improved its networking facilities, so I create a Docker overlay network that uses Flannel for comms, which allows each instance to discover the other instances. For example:

My load balancer (I'm only using one currently, but I will eventually expand to being able to fire up more) needs to know the IP of each connected service. Each of those services could die at any point or get moved to a different host environment, so this cannot be managed manually. Each service has a discovery service running alongside it, which Fleetd is responsible for deploying; when a service comes online, its discovery service registers the IP and host machine with Etcd, and the load balancer uses Etcd (you probably use Redis for a similar task) to learn about these connections and reconfigure itself when changes occur.
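To make that registration pattern concrete: each discovery sidecar can write its service's address under a key with a TTL and keep refreshing it, so a dead service simply ages out and the balancer reconfigures. A rough sketch against Etcd's v2 HTTP API (the key layout and addresses are made up):

var http = require('http');
var querystring = require('querystring');

// Announce this service with a 30s TTL. Refreshing every 10s keeps the key alive;
// if the process dies, the key expires and watchers drop the backend automatically.
function announce() {
  var body = querystring.stringify({ value: '10.0.0.5:8080', ttl: 30 });
  var req = http.request({
    host: '127.0.0.1', port: 2379,              // local etcd member
    path: '/v2/keys/services/game/instance-1',  // made-up key layout
    method: 'PUT',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'Content-Length': Buffer.byteLength(body)
    }
  });
  req.on('error', function (err) { console.error('etcd announce failed: ' + err.message); });
  req.end(body);
}

setInterval(announce, 10000);
announce();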

All services run inside Docker containers on the same network. Thanks to Docker and Flannel, each process on the overlay network has access to the other services; the current stable release uses the hosts file for this, while the dev branch uses something else (I forget what just now). This means that any process can find any backing services it requires without being dependent upon their location; it discovers that via the networking.

The end goal of all this (I haven't even touched on building the app/services yet! It is a big undertaking, but a replicable one) is to have a network environment that monitors and reconfigures itself. Part of the joy is being able to deploy to the network and have it automatically pick up the latest deploy and manage resources, i.e. migrate old services to new ones as they empty.

It's exciting, but a huge investment, and one that is unnecessary for smaller projects. I'm slowly writing a management service in Rust to help with all this, which is great fun, with many, many challenges.


20 hours ago, WombatTurkey said:

I might give this a try on the trial! Seems pretty cool that they can do that with Node.

I don't recall if AWS Lambda offers a trial, but they provide a fair amount of free machine time each month before charges kick in. Keep in mind that Lambda is stateless, so you'll need to store your game state using an AWS DB service or AWS ElastiCache (which supports Redis).
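To make the stateless point concrete: every invocation has to load and save state explicitly. A minimal Node 4.x-style handler sketch using DynamoDB through the aws-sdk; the GameState table, its key schema, and the scoring logic are all made up:

var AWS = require('aws-sdk');
var db = new AWS.DynamoDB.DocumentClient();

exports.handler = function (event, context, callback) {
  // Load this player's state: Lambda keeps nothing between invocations
  db.get({ TableName: 'GameState', Key: { playerId: event.playerId } }, function (err, data) {
    if (err) return callback(err);
    var state = data.Item || { playerId: event.playerId, score: 0 };
    state.score += 1; // stand-in for real game logic

    // Persist before returning, or the change is lost
    db.put({ TableName: 'GameState', Item: state }, function (err) {
      if (err) return callback(err);
      callback(null, state);
    });
  });
};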


5 hours ago, mattstyles said:

On the subject of Lambda, I'm itching to try it out

Being able to implement services without the hassle of managing servers is exciting and fun. It's pretty easy to get an idea for a new microservice running with it. I can recommend the ebook "AWS Lambda" for anyone interested in giving it a go. It was the only book on the subject at the time I read it, but there are now other books on Lambda that look good as well. The AWS documentation is okay, but the book will get you up and running more quickly and provides some valuable insights that would be harder to glean directly from the AWS documentation.
