I believe the issue was related to TCPSocket and ServerSocket. When a TCPSocket is handed out by a ServerSocket, its DataAvailable event only delivers 8192 bytes of data at a time, so it takes many more event cycles to receive all the data, which substantially lowers the throughput. According to others in that thread, it does not happen when a TCPSocket is used outside of a ServerSocket.
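For anyone following along, this is roughly where that behaviour shows up on the receiving side (a minimal sketch using the classic Xojo API, not code from the thread; the buffering step is left as a comment):

' ServerSocket1.AddSocket event handler - must return the TCPSocket
' (or TCPSocket subclass) that will service the new connection
Function AddSocket() As TCPSocket
  Return New TCPSocket
End Function

' DataAvailable event handler on the socket returned above
Sub DataAvailable()
  ' When the socket was created by a ServerSocket, this reportedly fires
  ' with at most 8192 bytes available per event (<https://xojo.com/issue/41046>)
  Dim chunk As String = Me.ReadAll
  ' ... append chunk to a buffer or BinaryStream here
End Sub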
@shao sean - you need Xojo Pro to see the post, sorry
@Phillip Zedalis - joy, another problem. I seem to be a magnet for them at the moment: <https://xojo.com/issue/41046>. I might actually go a whole day without finding a bug/problem one of these days; I look forward to that day.
I’ve just done another test; all I am doing now is the following, and it’s still slow. Would TCPSocket also be limited to 8192 bytes?
Julian, I think the problem is that the write buffer gets filled all at once. Please try sending “small chunks”. For example, start with 8 MB. In the SendProgress event, write the next chunk, sized by the “bytesSent” parameter of that event, until bs.EOF. You can download one of my sample files that I posted in the thread above to see how it’s done.
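For anyone who doesn’t want to download the sample, the idea is roughly this (a sketch based on Carsten’s description, not his actual code; bs is assumed to be a BinaryStream opened on the file being sent, and TCPSocket1 is the connected socket):

' Kick things off by writing the first 8 MB into the send queue
TCPSocket1.Write(bs.Read(8 * 1024 * 1024))

' TCPSocket1.SendProgress event handler
Sub SendProgress(bytesSent As Integer, bytesLeft As Integer)
  ' Top the send queue back up with roughly the amount that was just sent
  If Not bs.EOF Then
    Me.Write(bs.Read(bytesSent))
  End If
End Sub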
Hi Carsten. Thanks again for the info. I tried the examples posted on the feedback case. I also tried your client/server code and was only getting about 8.8 MB/s when sending a 102400000-byte (100 MB) file.
[BOOST] 102400000 bytes received in 11675.55 ms 12501 package(s)
I’m scratching my head as to why it might be thought that filling the TCP send queue would slow things down. It’s not like it’s spamming the network with packets; it still has to wait for the last packet to be ACKed before it continues anyway, so surely if the queue is full it will have plenty of data to send when it can. All that queuing more data only when SendProgress fires accomplishes is to insert another bit of complexity and additional calls into the pipeline, slowing things down (ever so marginally).
If you send 8 MB off the bat, then send a further 65536 bytes on every call to SendProgress, you will always have about 8 MB in the TCPSocket’s send queue. What is the difference between having 8 MB in the send queue and the whole file, other than the memory overhead of holding that data somewhere?
The feeling I’m getting in my bones at the moment is that the framework isn’t sending the data fast enough, possibly yielding too much time. I don’t know at the moment; that is just a guess.
After a lengthy session with Wireshark, the only difference I can see at the moment is that the “calculated window size” of the faster transfer (the left set on the graph above) is 65536 (64 KB), while the “calculated window size” of the slower connection is 1048576 (1 MB).
Faster connection (xojo server with alternate client written in different language):
Window size: 256
Window size scaling factor: 256
Calculated window size: 65536
Slower connection (xojo server with xojo client):
Window size: 32768
Window size scaling factor: 32
Calculated window size: 1048576
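For anyone reading the capture: Wireshark’s “Calculated window size” is just the raw Window size field multiplied by the scaling factor negotiated in the handshake, so 256 × 256 = 65536 bytes for the faster connection and 32768 × 32 = 1048576 bytes for the slower one.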
This is all that is needed to achieve max throughput.
This code is nicer, though, as you have the ability to yield in the loop if needed:
While Not bs.EOF
  TCPSocket1.Write(bs.Read(65536)) ' make sure this is big enough so the queue doesn't empty or the transfer slows
  TCPSocket1.Flush
Wend
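If you do want to yield, one option (a sketch of my own wrapping, not code from the thread; it assumes the Thread subclass can reach bs and TCPSocket1, e.g. via properties) is to move the loop into a Thread, which yields cooperatively at the loop boundary and can also sleep explicitly:

' Run event handler of a Thread subclass that owns bs and TCPSocket1
Sub Run()
  While Not bs.EOF
    TCPSocket1.Write(bs.Read(65536))
    TCPSocket1.Flush
    Me.Sleep(1) ' optional: hand a little more time back to the main thread
  Wend
End Sub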
No special trickery is needed at the server end (in the v3 example above from Carsten, you can run it with “Boost” off)
The only problem that remains is the 8192-byte limit that is coded into ServerSocket (<https://xojo.com/issue/41046>). I hope someone can look into it soon, as it’s been known about since October 2015.