I have a console app using 4-5 Xojo.Net.HTTPSockets to perform some work on a CentOS 7 x64 system. The console app is a 32-bit binary.
When one of the sockets (they all run identical code) starts performing many requests back-to-back (though never more than one at the same time), it starts to throw exceptions. The most common are “Error performing TLS handshake: An unexpected TLS packet was received.”, “2 - Error resolving ‘url’: Name or service not known”, “6 - Peer failed to perform TLS handshake”, and “8 - Message Corrupt.”
I’ve already checked with both my host and the destination host, both claim they are not blocking/throttling connections. The local firewall is not configured to throttle outbound connections.
Anybody have any idea what is going on? I’ve checked every log on the system, I can’t find any clues as to what is getting in the way.
Are you calling New Xojo.Net.HTTPSocket for each request, or reusing the same one from within an event on the Xojo.Net.HTTPSocket?
I recall that coming up as an issue elsewhere on the forum.
I’m never going to be able to find the post, but it came up in a thread somewhere here that re-using a socket from one of its events was causing a problem.
I’m recoding now to try creating a new socket for each request.
Use Xojo.Core.Timer.CallLater to trigger processing the next request.
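Something like this, roughly (a sketch only; `mQueue`, `mData`, and `StartNextRequest` are illustrative names, not from anyone's actual project). The point is that the handler never calls Send on the socket from inside its own event; it defers to the next pass through the event loop:

```
' Handler attached with AddHandler, so the first parameter is the sender.
Sub PageReceivedHandler(sender As Xojo.Net.HTTPSocket, url As Text, status As Integer, content As Xojo.Core.MemoryBlock)
  mData = content
  ' Don't call sender.Send here - the socket hasn't finished tearing down
  ' the current transaction yet. Defer with CallLater instead.
  Xojo.Core.Timer.CallLater(0, AddressOf StartNextRequest)
End Sub

Sub StartNextRequest()
  If mQueue.Ubound >= 0 Then
    Dim nextURL As Text = mQueue(0)
    mQueue.Remove(0)
    mSocket.Send("GET", nextURL)
  End If
End Sub
```

A delay of 0 ms is enough; CallLater just needs to push the call out of the current event.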
In this case, I’m using a semi-synchronous design. The socket is a property of a thread, attached with AddHandler. The thread starts the request and suspends itself. When the socket receives its data, the data is stored in a property and the thread is resumed. So with this design, requests aren’t chained directly from a socket event.
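For anyone following along, the pattern I'm describing looks roughly like this (Xojo classes are normally built in the IDE, so this is written out linearly for illustration, and the names are mine):

```
' Thread subclass owning its own socket; the Run method blocks via
' Suspend until the AddHandler-attached event resumes it.
Class WorkerThread Inherits Thread

  Private mSocket As Xojo.Net.HTTPSocket
  Private mResponse As Xojo.Core.MemoryBlock

  Sub Run()
    mSocket = New Xojo.Net.HTTPSocket
    AddHandler mSocket.PageReceived, AddressOf PageReceivedHandler
    mSocket.Send("GET", "https://example.com/work") ' placeholder URL
    Me.Suspend ' sleep until the handler resumes us
    ' mResponse now holds the body; process it here, synchronously.
  End Sub

  Sub PageReceivedHandler(sender As Xojo.Net.HTTPSocket, url As Text, status As Integer, content As Xojo.Core.MemoryBlock)
    mResponse = content ' hand the data back to the thread
    Me.Resume           ' wake the suspended thread
  End Sub

End Class
```

The handler only stashes the data and resumes the thread, so no new request is ever started from inside a socket event.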
However, the app has been running for hours since the recode without a single error of any kind. I’ll need more time to conclude that the one-socket-per-request design actually solved the problem, but so far, so good.
I can tell you that in the PageReceived event, it’s too early to start another request. If you do what Wayne suggested and use CallLater to start the next request, you’ll have much better luck.
I don’t think that’s what is happening here. The socket throwing the errors isn’t even the one doing the bulk of the work.
Nope, one socket per request did not solve the issue.
Overnight update: the errors are now consistently “Message Corrupt.”