Multiple Simultaneous TCPSocket Writes

Hi, I am seeing a weird outcome and I'm not sure if it's supposed to be this way.
In my method, with code like the below:

TcpSocket1.Write "SomeData1"
// some processing
TcpSocket1.Write "SomeData2"

-Server Receive

Not sure why SomeData2 is not received, but if I put a small delay in between, I do get SomeData2 at the end.
Does TcpSocket.Write require some cooldown time before firing another one?

This is just a snippet; in my real code, multiple threads are sending writes to the server.

Xojo and the operating system both have several levels of write buffers for TCP sockets. I would start by adding a Flush call after each call to Write to make sure that your data is actually being sent immediately.

It's also worth noting that, because of those buffers and other internal details, you can't rely on receiving an entire message in one piece. You will need to build messages up in your own buffer (just a string should do) and use separators of some kind (something simple like EndOfLine may be fine) to indicate when a message is complete, then split on the separator in your receiver so you only ever process complete messages. "Most of the time" this won't actually seem to matter, but it very likely will eventually, especially if your messages are large (though perhaps even if they aren't).
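To illustrate that separator approach in runnable form, here is a minimal Python sketch - not Xojo code - with illustrative names (`on_data_available` stands in for the DataAvailable event, and `b"\n"` stands in for EndOfLine):

```python
# Separator-based framing: accumulate chunks, split out complete messages.
# Names are illustrative; b"\n" stands in for Xojo's EndOfLine.
SEPARATOR = b"\n"
buffer = b""

def on_data_available(chunk: bytes) -> list:
    """Append the new chunk, then return any complete messages."""
    global buffer
    buffer += chunk
    *complete, buffer = buffer.split(SEPARATOR)  # last piece is the partial tail
    return complete

# A single message may arrive split across several chunks:
out1 = on_data_available(b"hel")      # nothing complete yet
out2 = on_data_available(b"lo\nwor")  # completes "hello"
out3 = on_data_available(b"ld\n")     # completes "world"
```

The key point is that the handler never assumes a chunk boundary is a message boundary; it only ever emits what ends in a separator.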

Well, I do have delimiters for the start and for end-of-line, but the issue is this.

Doing this in the client:

TcpSocket1.Write "SomeData1"
TcpSocket1.Flush // even with this
TcpSocket1.Write "SomeData2"
TcpSocket1.Write "SomeData3"
TcpSocket1.Write "SomeData4"
TcpSocket1.Write "SomeData5"

the server only receives the first message.
From the second one onwards, the messages simply don't even exist… in my case.

The data may not come in all at once.
The receiver may get it in arbitrary chunks across multiple DataAvailable events. There is no assurance that in a single DataAvailable you get exactly and only what was sent from the other end - it can arrive chunk by chunk - but it won't be out of order. So you need to reassemble the chunks for your protocol and then pull out whole messages.

Hey Theizu,

My suggestion specifically was to do a Flush after every write, not just after the first (which seems to already be flushed without issue).

TcpSocket1.Write "SomeData1"
TcpSocket1.Flush
TcpSocket1.Write "SomeData2"
TcpSocket1.Flush
TcpSocket1.Write "SomeData3"
TcpSocket1.Flush
TcpSocket1.Write "SomeData4"
TcpSocket1.Flush
TcpSocket1.Write "SomeData5"
TcpSocket1.Flush

Flush is a bad idea and may block your app for seconds.

Put a length in front of your packets, so you can decide in the DataAvailable handler whether you got all the data.
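A minimal sketch of that length-prefix idea, in Python rather than Xojo (the 4-byte big-endian header and the helper names are my own choices, not anything from the thread):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix the payload with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(payload)) + payload

def unframe(buffer: bytes):
    """Extract every complete message; return (messages, leftover_bytes)."""
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack(">I", buffer[:4])
        if len(buffer) < 4 + length:
            break  # header arrived, but the body is still incomplete
        messages.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return messages, buffer

stream = frame(b"SomeData1") + frame(b"SomeData2")
msgs1, rest1 = unframe(stream[:15])          # deliberately cut mid-message
msgs2, rest2 = unframe(rest1 + stream[15:])  # resume once the rest arrives
```

Because the header states exactly how many bytes to wait for, the receiver never has to guess where a message ends, even when it arrives split across DataAvailable events.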

The data is either going to be big enough to flush automatically, or a Flush is needed; there are no two ways about it. Writing of this nature should always be on a separate thread anyway, so any blocking would not be an issue. In any case, this was just meant as a debugging suggestion.

His issue is that he isn’t actually getting the data. A length header is an excellent suggestion (probably better than separators, but I like to keep my data streams readable if I can which is why I suggested that) once the data can actually be transmitted correctly in the first place.

What I’ve actually done in the past for IPC that handled several thousand writes per second was to use JSON packets, so that I know a message boundary any time I see the sequence }{ (which is not valid in JSON). This worked very well, but would require some checking to ensure for instance that the sequence wasn’t inside a string. The benefit of this approach is that the stream is nicely readable and it’s very obvious when a message ends and the next begins. This application was also written in C and this separator was much easier to parse than some alternatives.
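For comparison, here is a hedged Python sketch of the same "stream of JSON objects" idea - not the original C implementation from the post - which sidesteps scanning for `}{` entirely by letting the JSON decoder report where each object ends:

```python
import json

def pull_json_messages(buffer: str):
    """Decode every complete JSON object at the front of the buffer."""
    decoder = json.JSONDecoder()
    messages = []
    buffer = buffer.lstrip()
    while buffer:
        try:
            obj, end = decoder.raw_decode(buffer)
        except json.JSONDecodeError:
            break  # the next object hasn't fully arrived yet
        messages.append(obj)
        buffer = buffer[end:].lstrip()  # skip past the object just decoded
    return messages, buffer

msgs, rest = pull_json_messages('{"id": 1}{"id": 2}{"id"')
```

This avoids the "is `}{` inside a string?" check mentioned above, since the decoder tracks string boundaries for you; the trade-off is that it only works where a full JSON parser is available.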

But anyway, the issue is that the example or test data just isn’t long enough to trigger a flush, and won’t ever trigger a flush until the socket is closed, or you call Flush on the socket. This is just how sockets work, there is absolutely no way around this if the data isn’t large enough.
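As an aside: Xojo's write buffer is not the only batching layer - at the OS level, Nagle's algorithm can also hold back small TCP writes. That is a separate mechanism from the application-level buffer discussed here, but if you ever need to turn it off it is a standard per-socket option; in Python, for example:

```python
import socket

# Disable Nagle's algorithm so small writes leave the OS immediately.
# Note this addresses OS-level batching only; an application-level
# buffer (like Xojo's, emptied via Flush) sits above it.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
```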

I found out that if I put a delay of 0.3 s between the TcpSocket.Write calls,
my server receives all the messages A B C D E:

tcpsocket.write A
// delay 0.3s
tcpsocket.write B
// delay 0.3s
tcpsocket.write C
// delay 0.3s
tcpsocket.write D
// delay 0.3s
tcpsocket.write E


but doing this, the server only receives A:

tcpsocket.write A
tcpsocket.write B
tcpsocket.write C
tcpsocket.write D
tcpsocket.write E

So the problem statement is: if you rapidly spam TcpSocket.Write, only one of the writes gets sent, and the others simply go MIA.

Could it be that TcpSocket has some cooldown before it can send again?
I placed loggers and breakpoints on the Write calls, and they are hit and executed, but on the server side nothing is received except the first message.

It was suggested to me in these Forums not to use flush() in a thread (if I recall correctly), so I wrote my own method:

[code]// newFlush: replacement for Flush(). Checks error conditions, and uses Poll.

If (me.connstateOK = False) Then Return False // Error this request

While (True)

  me.Poll() // Get latest status from the socket

  If (me.IsConnected = False) Then
    // Here, do any logging you desire.
    Return False
  End If

  If (me.LastErrorCode > 0) Then
    // Here, do any logging you desire.
    Return False
  End If

  If (me.BytesLeftToSend <= 0) Then Return True

Wend[/code]
I’ve not had any issues since (can’t even remember what they were, now). I always use newFlush() after every write() to a socket.

What happens with

 tcpsocket.write A
 tcpsocket.write B
 tcpsocket.write C
 tcpsocket.write D
 tcpsocket.write E

How do you read the data?

ServerSocket will create an individual TCPSocket for each connection.
I am reading via TCPSocket.ReadAll in DataAvailable. Not sure if it's because of the multi-threaded sockets.

There should be no need for delays of any kind. In the linked zip file is a small "listener" and a "sender". The sender sends strings of 50-some bytes plus an EndOfLine, and does so 10 times in a tight loop. The listening end just receives everything and echoes it to the text area as rapidly as you can press the button.

If your listener does a bunch of work in the DataAvailable event, you can significantly affect performance and, with a busy socket, could miss data. This is why I said earlier that DataAvailable should just pull the data out and stuff it into a buffer that some other thread pulls "complete messages" from, so DataAvailable can return as rapidly as possible.
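That "drain fast, parse elsewhere" pattern might look like this in Python (a sketch with illustrative names: `on_data_available` stands in for the DataAvailable event, and an empty chunk is used as a close sentinel):

```python
import queue
import threading

SEPARATOR = b"\n"
raw_chunks = queue.Queue()         # filled by the receive handler, as fast as possible
complete_messages = queue.Queue()  # drained by the rest of the application

def on_data_available(chunk: bytes) -> None:
    """Keep the receive handler trivial: just hand the bytes off."""
    raw_chunks.put(chunk)

def assembler() -> None:
    """Worker thread: rebuild complete messages from raw chunks."""
    buffer = b""
    while True:
        chunk = raw_chunks.get()
        if chunk == b"":  # empty chunk used here as a "connection closed" sentinel
            break
        buffer += chunk
        *done, buffer = buffer.split(SEPARATOR)  # keep the partial tail
        for msg in done:
            complete_messages.put(msg)

worker = threading.Thread(target=assembler)
worker.start()
for piece in (b"SomeDa", b"ta1\nSomeData2\nSome", b""):
    on_data_available(piece)
worker.join()
results = [complete_messages.get(), complete_messages.get()]
```

The receive handler does nothing but enqueue, so it returns immediately no matter how busy the socket is; all the framing work happens on the worker thread.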