thedupdup

Why do two websocket messages seemingly show up at the same time?


I have multiple servers in multiple locations. I need to get the frequency at which the websocket messages are received. When I connect to a low-latency server, the frequency is normal (50 - 60 ms). But on high-latency servers, the frequency is sometimes 0. I asked a similar question not too long ago, but the answer there was that the socket is buffering messages. I find this unlikely, since it only happens on high-latency servers.

Here is the code responsible for handling the websocket:

    startTime = Date.now();

    ws.onmessage = function (evt) {
        // Keep the previous payload around for comparison.
        prevData = receivedData;
        receivedData = JSON.parse(evt.data);

        // Time elapsed since the previous message arrived.
        const endTime = Date.now();
        ms = endTime - startTime;
        startTime = endTime;

        // Weighted moving average over roughly the last six intervals.
        if (msAvg == null) {
            msAvg = ms;
        }
        msAvg = Math.round(((msAvg * 5) + ms) / 6);
        id = receivedData.id;
    };

 

ms is the time between messages (what I'm calling the frequency), in milliseconds.
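A rolling average hides the pattern being asked about here; recording every interval and counting the zero-length ones makes any clustering visible. A minimal sketch (the class name and stats chosen are my own, not part of the code above):

```javascript
// Sketch: collect every inter-message interval so the whole distribution
// (zero count, min, max) can be inspected, not just a rolling average.
// Call record(Date.now()) once at the top of onmessage.
class IntervalStats {
  constructor() {
    this.last = null;
    this.intervals = [];
  }
  record(now) {
    if (this.last !== null) this.intervals.push(now - this.last);
    this.last = now;
  }
  summary() {
    const xs = this.intervals;
    return {
      count: xs.length,
      zeros: xs.filter((x) => x === 0).length, // back-to-back deliveries
      min: Math.min(...xs),
      max: Math.max(...xs),
    };
  }
}
```

If zeros is high and max is roughly a multiple of the send period, messages are being delivered in bursts rather than evenly.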

How can I get to the bottom of this issue?


Server sends message A then B. Message A gets lost along the way, client receives message B, but given how TCP works, it won't be accessible yet (head of line blocking). TCP handles resends and server sends message A again. Client receives it and supplies A and B one after another to your application.

That's one very realistic option (the longer the path, the higher the chance of packet drops). But it can be anything that causes one message to arrive sooner or later than another. Routes can change, packets can get delayed.

In general, you can't depend on the rate to be perfect.
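One way to check which side is bursting is to have the server stamp each message with its send time and compare spacing on both ends. The `sentAt`/`arrivedAt` field names and the 5 ms threshold below are assumptions for illustration, not something the code above already has:

```javascript
// Sketch: steady server-side spacing combined with bursty client-side
// spacing points at the network (retransmits, head-of-line blocking);
// bursty spacing on both sides points at the server itself.
function classifyBurst(prev, curr, thresholdMs = 5) {
  const sendDelta = curr.sentAt - prev.sentAt;         // spacing at the server
  const arriveDelta = curr.arrivedAt - prev.arrivedAt; // spacing at the client
  if (arriveDelta < thresholdMs && sendDelta >= thresholdMs) return 'network-batched';
  if (arriveDelta < thresholdMs && sendDelta < thresholdMs) return 'server-batched';
  return 'normal';
}
```

Note this needs no clock synchronization, since each delta only compares timestamps taken on the same machine.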

15 hours ago, Antriel said:

Server sends message A then B. [...] In general, you can't depend on the rate to be perfect.

That makes sense. Is there any way to disable the head-of-line blocking? I would rather lose a couple of packets than have them blocked.

EDIT: I should also note that on the highest-latency server (200 ms avg) the frequency is almost always 0.

14 hours ago, thedupdup said:

That makes sense. Is there any way to disable the head-of-line blocking?

No, this defines a key aspect of TCP.

UDP doesn't have this restriction, and thus can't guarantee message order either, but you can't use it on the web.
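Since the ordering guarantee can't be relaxed in the browser, the best you can do is detect the pattern. If the server adds a sequence number to each message (the `seq` field and the thresholds below are assumptions for illustration), a long stall followed by a near-instant in-order delivery is exactly the retransmit pattern described above. A rough sketch:

```javascript
// Sketch: find places where a long gap is immediately followed by an
// in-order back-to-back delivery, i.e. the head-of-line pattern.
function detectStallThenBurst(events, stallMs = 100, burstMs = 5) {
  // events: [{ seq, arrivedAt }] in arrival order
  const hits = [];
  for (let i = 2; i < events.length; i++) {
    const gap = events[i - 1].arrivedAt - events[i - 2].arrivedAt;
    const burst = events[i].arrivedAt - events[i - 1].arrivedAt;
    const inOrder = events[i].seq === events[i - 1].seq + 1;
    if (gap >= stallMs && burst <= burstMs && inOrder) {
      hits.push(events[i - 1].seq); // the message that was held up
    }
  }
  return hits;
}
```

Counting how often this fires on each server would show whether the 0 ms intervals really correlate with path quality.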

18 hours ago, thedupdup said:

EDIT: I should also note that on the highest latency server (200 ms avg) the frequency is almost always 0 

That is a bit weird. It shouldn't happen often, maybe 1% of messages at most, unless you're on a very bad network. This leads me to think the server has Nagle's algorithm enabled: it buffers small writes until there's enough data to send, it's usually on by default, and it's disabled by setting TCP_NODELAY. The higher latency could make the buffering more noticeable. Try looking into that.

