Webapp using an IPCSocket head-scratcher

I have a webapp that always runs on a local machine so that local browser-enabled devices can connect to a desktop app. This (macOS) approach uses an IPCSocket and has always given speedy responses, which is expected.

Suddenly my webapp’s responses slowed way down, even though no significant code changes had been made. So I started poking around and used System.DebugLog to confirm that the desktop app is still responding quickly (via IPC), but there is a significant lag (sometimes up to 14 seconds) before the data reaches the browser clients.
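(For the curious, the check was nothing fancier than timestamped System.DebugLog calls around the reply. Roughly, as a sketch with made-up names, in the desktop app’s IPCSocket DataAvailable event:)

[code]
' Logs how long the desktop app takes to build its reply, so any
' remaining lag must be happening after the data leaves the IPC socket.
Sub DataAvailable()
  Dim started As Double = Microseconds
  Dim request As String = Me.ReadAll

  Dim reply As String = BuildReply(request) ' made-up helper
  Me.Write(reply)

  System.DebugLog("Reply built in " + Str((Microseconds - started) / 1000) + " ms")
End Sub
[/code]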
Then I noticed that I had been experimenting with a VPN for my internet access. I’ve been using the built-in macOS VPN client to connect at certain times (i.e. not the PureVPN app, which would route all traffic through the VPN all the time).

Guess what?! When the VPN is connected, my IPC socket slows way down, and if I pull up the Network preferences panel, the connection meter shows data going through that path whenever a (local!) browser window makes a request of the desktop app. PureVPN is a reputable service, so this is probably expected behavior, but I certainly wasn’t expecting it. The behavior is consistently repeatable.

So apparently the traffic between the browser and the webapp is being routed through the VPN? I’m afraid I don’t know how to analyze this. Any observations/clarifications would be appreciated.

Hm. Maybe it’s not the IPC socket that’s slowing down. Checking now…

Comparing clock times: the IPC socket takes an average of 22 ticks to receive a response (approx. 1 KB of data) with the VPN not connected. After one round of testing (the lag seems to vary each time I connect to the VPN), that time goes up to roughly 250 ticks.
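(For units: a tick is 1/60 of a second, so 22 ticks is about 0.37 s and 250 ticks is over 4 s. The measurement itself is just a Ticks diff, something like this sketch with made-up names:)

[code]
' Webapp side: stamp the request, diff on the response.
RequestStarted = Ticks         ' made-up property on the webapp
ipc.Write(requestData)         ' ipc is the connected IPCSocket

' ...then in the IPCSocket's DataAvailable event:
System.DebugLog("Round trip: " + Str(Ticks - RequestStarted) + " ticks")
[/code]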

I guess it would be expected behavior for the request from the browser to get to the webapp via a VPN round-trip (even though the server is the same machine). But why would the IPC socket slow down too?

OK, not a big conversation starter, so I’ll wrap this up by saying it was a good learning experience. Since IPCSockets seem to behave like TCPSockets (even on macOS), I have switched over to UDP communication. This gives faster responses and, so far, none of the downsides of the UDP protocol. However, I still see a delay when a VPN server is connected, which I didn’t expect.

Before, using an IPCSocket: average response time ~22 ticks with the VPN not connected and ~250 ticks with it connected.

Now, using a UDPSocket: average response time ~4 ticks with the VPN not connected and ~24 ticks with it connected (it varies).
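For anyone following along, the UDP setup is minimal. Something like this sketch (port numbers and names are made up), with everything aimed at the loopback address:

[code]
' Webapp side: bind a local port, then send to the desktop app's
' port on the loopback address.
Dim udp As New UDPSocket
udp.Port = 51001              ' webapp's local port
udp.Connect                   ' binds the socket

Dim d As New Datagram
d.Address = "127.0.0.1"       ' never leaves the machine
d.Port = 51000                ' desktop app's port
d.Data = requestData
udp.Write(d)

' Desktop app side: bind a UDPSocket to 51000 and, in its
' DataAvailable event, call Me.Read to get the Datagram.
[/code]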

Keep in mind that UDP packets are not guaranteed to arrive at their destination.

I did think about this yesterday, though. I’ve seen similar behavior on my machine when a VPN connection is active:

  • Make sure "Forward All Traffic" is turned OFF.
  • Make sure you have DNS servers defined for the VPN connection.

My instinct is that this is a routing issue, though. It’s not a “round-trip” so much as the socket trying to make a connection through the network, failing, and then trying something else. A bug report in Feedback (with a sample) might help us figure out whether this is a framework bug or an OS issue.

Thanks Greg. By “Forward all traffic” I think you mean the “Send all traffic over VPN” checkbox, in the VPN’s Network settings. …which was switched on.
And regarding UDP, I’m hoping that I can rely on all packets arriving successfully since it’s staying within the local host.

[quote=316611:@Tod Nixon]Thanks Greg. By “Forward all traffic” I think you mean the “Send all traffic over VPN” checkbox, in the VPN’s Network settings. …which was switched on.
And regarding UDP, I’m hoping that I can rely on all packets arriving successfully since it’s staying within the local host.[/quote]
“Send all traffic” may be part of the issue. You should try it with that off (and make sure the rest of your stuff still works too).

Regarding UDP, all I’m saying is that delivery isn’t guaranteed. UDP is connectionless: packets are sent out and it’s up to the clients to pull them out of the ether. If you truly need a 100% reliable socket, you should be using TCP or IPC.

Indeed, “Send all traffic” seems to have been the culprit: now (still using UDP) I can’t detect any difference in response times with the VPN on versus off. And I’m tempted to stick with UDP since it is so much faster. Regarding UDP’s weaknesses: my understanding is that in addition to the possibility of completely dropped packets, a packet might arrive incomplete or corrupted. Do you think it’s worth implementing a checksum on each packet?
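Something like this is what I have in mind: a rough sketch (made-up names) that prepends an MD5 digest to each payload so the receiver can reject anything that arrives damaged:

[code]
' MD5 returns a 16-byte binary digest; EncodeHex turns it into a
' fixed 32-character prefix in front of the payload.
Function WrapPacket(payload As String) As String
  Return EncodeHex(MD5(payload)) + payload
End Function

Function UnwrapPacket(packet As String, ByRef payload As String) As Boolean
  If Len(packet) < 32 Then Return False      ' too short to be valid
  Dim digest As String = Left(packet, 32)
  payload = Mid(packet, 33)
  Return digest = EncodeHex(MD5(payload))    ' True only if intact
End Function
[/code]

(Though from what I’ve read, UDP itself typically carries its own 16-bit checksum, and on the loopback interface packets never touch real hardware, so this may be belt-and-suspenders.)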