[SOLVED] Websocket randomly reconnects when running Mopidy through a reverse SSH tunnel

I am using Mopidy.js on a custom frontend. When I run both the frontend and Mopidy locally, I have no connection issues.

However, to run the frontend on a custom domain, I need to expose the Mopidy server. I have done this by setting up a reverse SSH tunnel to a remote server (ssh -R 6680:localhost:6680 user@remote_server). The remote server runs nginx, which proxies all requests to mopidy.domain.com to localhost:6680 (which is then forwarded through the reverse tunnel).

With this setup, the client is able to establish a websocket connection to Mopidy, but it randomly reconnects. When logging events on the client side, I see a ‘reconnectionPending’ followed by a ‘websocket:close’.

I’ve tried configuring the SSH connection to have a higher ServerAliveInterval, but to no avail. I’ve also tried setting a high proxy_read_timeout in my nginx config.

Are there known problems with such a setup?
Is there any way to enable logging from only Websocket events for debugging purposes?

I have no experience with this exact setup, but I can’t see why the SSH tunnel would have an impact. Are you able to remove the nginx proxy from the equation and test without it? I would expect the issue is there. Mopidy behind a reverse proxy does work fine, but you do need to get the nginx config correct. I think there is an example on here somewhere. I don’t think Mopidy has any websocket logging.

Edit: wouldn’t you want a lower ServerAliveInterval? Did you try enabling SSH logging?
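For a reverse tunnel, that would be something like this (the CountMax value is just an example; host and ports taken from your original post):

```shell
ssh -o ServerAliveInterval=10 -o ServerAliveCountMax=3 \
    -R 6680:localhost:6680 user@remote_server
```

With those settings the client sends a keepalive probe every 10 seconds and gives up after 3 unanswered probes, so a dead connection is detected within about 30 seconds.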

Nice catch! That might be it. A lower interval makes sense. I’ll try it out and report back.

But then you would see the same issue on any long-lived reverse tunnel, so I’m not convinced that will help. I’m actually surprised you need to touch that setting at all; you should get TCP keepalives by default.

These are the SSH logs I get when a client disconnects. It always starts with receive packet: type 96.

I’ve tried different interval values, but the clients keep reconnecting. It seemed to happen a bit less frequently when the interval was set to a low value (like 10 seconds), but this is hard to verify. I also tried the default value for the interval, but no luck.

debug3: receive packet: type 96
debug2: channel 1: rcvd eof
debug2: channel 1: output open -> drain
debug2: channel 1: obuf empty
debug2: channel 1: close_write
debug2: channel 1: output drain -> closed
debug2: channel 1: read<=0 rfd 6 len 0
debug2: channel 1: read failed
debug2: channel 1: close_read
debug2: channel 1: input open -> drain
debug2: channel 1: ibuf empty
debug2: channel 1: send eof
debug3: send packet: type 96
debug2: channel 1: input drain -> closed
debug2: channel 1: send close
debug3: send packet: type 97
debug3: channel 1: will not send data after close
debug3: receive packet: type 97
debug2: channel 1: rcvd close
debug3: channel 1: will not send data after close
debug2: channel 1: is dead
debug2: channel 1: garbage collecting
debug1: channel 1: free: ::1, nchannels 3

I have tried all the tricks I can find for making nginx play nicely with websockets, but none of them work. This is proving very hard to debug, but after going through the nginx, SSH, and Mopidy logs, it does seem like it is the clients that are dropping the connection.
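For context, the standard websocket-related nginx directives I mean look roughly like this (a sketch with placeholder values, not my exact config):

```nginx
location / {
    proxy_pass http://localhost:6680;
    # Required for the websocket upgrade handshake
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    # Keep idle websockets open well past nginx's 60 s default
    proxy_read_timeout 600s;
}
```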

I was able to debug the Mopidy websocket by adding

mopidy.http.handlers = debug

to my mopidy.conf. When the connections are dropped, the Tornado websocket’s on_close is triggered. As far as I can tell, this method is only called when a client has disconnected. Furthermore, the close code I am receiving on the client side is 1006, which signifies that the connection was closed abnormally.
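In full, the mopidy.conf fragment is (assuming the usual [loglevels] section that Mopidy uses for per-logger levels):

```ini
[loglevels]
mopidy.http.handlers = debug
```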

Here is the nginx configuration I am using https://pastebin.com/eaxu3ryy.

I think a possible solution could be to set websocket_ping_interval on the Tornado WebSocketHandler. If this is set, will a (Mopidy.js) client automatically respond with a pong when it receives a ping from the server, or do you have to implement this behavior manually? My understanding is the former.

I forked the Mopidy repo, tried setting websocket_ping_interval to 20, and set up a logging statement in the websocket’s on_pong method, but I did not get any output. I might have passed the argument in the wrong place. How would I go about setting this value?


Is this complicated setup really the best environment to debug in? Can you recreate it in a simpler environment to better target your efforts? For example, take it back to server and client on the same network, verify that’s all working as you’d fully expect it to, and then add nginx. If that’s still working, then remove nginx and have a remote client going through a tunnel. If that works, add nginx back in.

I get what you’re saying, but I suspected the problem was related to production services and settings that can be tricky to mimic locally (remote tunnel, SSL, Cloudflare, …).

I managed to find the cause of the problem, though. Cloudflare supports websockets, but for non-enterprise plans it closes any websocket after 100 seconds of inactivity, which happens frequently with Mopidy, where there can be long periods of inactivity while a song is playing or paused.

I fixed it by forking Mopidy, bumping Tornado to 4.5 (websocket_ping_interval was introduced in that version), and setting websocket_ping_interval=30.
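For anyone who hits the same "passed it in the wrong place" problem I had: in Tornado, websocket_ping_interval is an application setting, not a handler argument. A minimal sketch (the handler name here is made up, not Mopidy's actual class):

```python
import tornado.web
import tornado.websocket


class MopidyLikeHandler(tornado.websocket.WebSocketHandler):
    """Stand-in for Mopidy's websocket handler (name is made up)."""

    def on_message(self, message):
        # Echo back, just so the handler does something
        self.write_message(message)

    def on_pong(self, data):
        # Tornado calls this when the client answers a server-initiated ping
        print("pong from client")


# websocket_ping_interval must go into the Application settings; passing it
# to the handler itself has no effect, which is why my first attempt failed.
app = tornado.web.Application(
    [(r"/mopidy/ws", MopidyLikeHandler)],
    websocket_ping_interval=30,  # seconds between server-initiated pings
)
print(app.settings["websocket_ping_interval"])  # → 30
```

With this set, Tornado pings each connection every 30 seconds, which keeps the Cloudflare inactivity timer from ever reaching 100 seconds.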

Could this be an optional setting in Mopidy? I imagine this is something people running behind proxies etc. would benefit from :slight_smile:

OK, so the environment was actually even more complicated than described! Regarding the Tornado version: Mopidy already supports Tornado 4.5, so that’s just a case of updating your installed version of Tornado.

You are the first person to hit this. I don’t think people generally expose their Mopidy server on the internet. Perhaps those that do, did so before Cloudflare even supported websockets and just moved it to a different subdomain. It’s a shame nginx can’t handle this, really; it seems weird for Mopidy to have to work around an external network problem. You are welcome to create a GitHub issue and propose this feature request if you want.