Transfer speed of sockets in xojo?

Has anyone managed to get more than about 50 Mbps (7MB/s) sent locally via sockets?

I’m currently playing with the code from the example Communication->Internet->Web Server with the corrected header code that I found somewhere.

I have just changed the Content-Type to application/zip and hosted a large zip file in the www folder the example creates.

There’s nothing complicated about the example; it’s not looping or polling anything, it just sits there writing out to the TCPSocket with Self.Write.

My PC hardware is up to spec, I get max rates (1 Gbps / 125MB/s) from IIS locally hosting the same file as well as similar code in other languages.

Does anyone have any ideas on what the problem could be or how to speed it up?

Please take a look here:

[quote=315671:@Carsten Belling]Please take a look here:[/quote]

Doesn’t look like it is a public thread…

I believe the issue was related to TCPSocket and ServerSocket. When instantiated by ServerSocket, the TCPSocket DataAvailable event only returns 8192 bytes of data, so it takes many more event cycles to get all the data, substantially lowering the throughput. According to others in that thread, it does not happen when a TCPSocket is used outside of ServerSocket.
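For context, the receive side of such a server typically just drains the socket in DataAvailable, roughly like this (a sketch, not code from the thread; `outStream` is an assumed BinaryStream property):

[code]Sub DataAvailable() Handles DataAvailable
  ' ReadAll drains whatever the socket has buffered. Per the report above,
  ' for a TCPSocket handed out by ServerSocket this reportedly returns at
  ' most 8192 bytes per event, so the event has to fire far more often.
  outStream.Write(Me.ReadAll)
End Sub[/code]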

[quote=315671:@Carsten Belling]Please take a look here:[/quote]

Thanks @Carsten Belling

@shao sean - you need Xojo Pro to see the post, sorry

@Phillip Zedalis - joy, another problem. I seem to be a magnet for them at the moment: feedback://showreport?report_id=41046. I might actually go a whole day without finding a bug/problem one of these days… I look forward to that day :wink:

I’ve just done another test. All I am doing now is the following, and it’s still slow. Could TCPSocket itself be limited to 8192 bytes?

[code]Sub Connected() Handles Connected

Dim bs As BinaryStream

Dim f As New FolderItem("C:\Users\Julian\Desktop\Atomic Web Server\WWW\") ' point this to a large zip

bs = BinaryStream.Open(f, False)

TCPSocket1.Write "HTTP/1.1 200 OK" + Chr(13) + Chr(10)
TCPSocket1.Write "Content-Type: application/zip" + Chr(13) + Chr(10)
TCPSocket1.Write "Content-Length: " + Str(bs.Length) + Chr(13) + Chr(10)
TCPSocket1.Write "Connection: close" + Chr(13) + Chr(10) + Chr(13) + Chr(10)

TCPSocket1.Write( bs.Read( bs.Length ) )

End Sub[/code]

I guess I better install Wireshark and take a look =\

Julian, I think the problem is that the write buffer gets filled all at once. Please try sending small chunks. For example, start with 8 MB; then, in the SendProgress event, write the next chunk with the size given by that event’s “bytesSent” parameter, until bs.EOF. You can download one of my sample files that I posted in the thread above to see how it’s done.
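A minimal sketch of the chunked approach Carsten describes, assuming `bs` and `f` are properties of the window rather than locals (the 8 MB priming size comes from his post; everything else is illustrative):

[code]Sub Connected() Handles Connected
  bs = BinaryStream.Open(f, False)
  ' Prime the socket's send queue with the first 8 MB.
  TCPSocket1.Write(bs.Read(8 * 1024 * 1024))
End Sub

Sub SendProgress(bytesSent As Integer, bytesLeft As Integer) Handles SendProgress
  ' Top the queue back up by roughly the amount just sent.
  If Not bs.EOF Then
    TCPSocket1.Write(bs.Read(bytesSent))
  End If
End Sub[/code]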

Hi Carsten. Thanks again for the info. I tried the examples posted in the Feedback case. I also tried your client/server code and was only getting about 8.8 MB/s when sending a 102400000-byte (100 MB) file.

[BOOST] 102400000 bytes received in 11675.55 ms 12501 package(s)

I’m scratching my head as to why filling the TCP send queue would be thought to slow things down. It’s not as if it’s spamming the network with packets; the stack has to wait for the last segment to be ACKed before it continues anyway, so surely if the queue is full it simply has plenty of data ready to send whenever it can. All that queuing more data only when SendProgress fires accomplishes is inserting another bit of complexity and additional calls into the pipeline, slowing things down (ever so marginally).

If you send 8 MB off the bat, then send a further 65536 bytes on every call to SendProgress, you will always have 8 MB in the TCPSocket’s queue. What is the difference between having 8 MB in the send queue and the whole file, other than the memory overhead of holding that data somewhere?

The feeling I’m getting in my bones at the moment is that the framework isn’t sending the data fast enough, possibly yielding too much time, I don’t know at the moment, that is just a guess.

@Carsten Belling: can you post your example again in this thread?

This is the link Carsten posted


And this one:

@Carsten Belling, would you mind running that v3 you just posted and letting me know how long it takes you to xfer 100MB ?

Closer and closer I get to working out this problem.

I wrote an alternate client to your v3 one posted above in another language and got the following result.

So the problem lies in the client sending data to the server, as the server can quite happily receive at almost 1 Gbps.

I’ll investigate further.

1,000,000,000 bytes received in 9784.21 ms 9280 package(s)

Which is capping out my gigabit lan (I’m sending this between two different machines)

After a lengthy session with Wireshark, the only difference I can see at the moment is the “calculated window size”: 65536 (64 KB) for the faster transfer (the left set on the graph above) versus 1048576 (1 MB) for the slower connection.

Faster connection (xojo server with alternate client written in different language):
Window size: 256
Window size scaling factor: 256
Calculated window size: 65536

Slower connection (xojo server with xojo client):
Window size: 32768
Window size scaling factor: 32
Calculated window size: 1048576

See here for more information.

@JulianS - in your example above… what happens if you change this:

tcpsocket1.Write( bs.Read( bs.Length ) )

To this:

[code]While Not bs.EOF
  TCPSocket1.Write( bs.Read( 65535 ) )
Wend[/code]

I’ve found that writing in smaller “chunks” can make a huge difference in speed.

Exactly the same speed Greg, approx 7MB/s.

Even if I set it to 10240 instead of 65535, it’s the same speed.

Well, well, well, what have we here:

Both server and client in xojo!!

It’s capping out my LAN (minus the overhead of traffic already on the LAN).

I just have to do a few tests and I’ll post my findings.

:slight_smile: :slight_smile: :slight_smile:

I was checking out some debug info on BytesLeftToSend and I noticed that the bytes were building up, so the data was being queued somewhere.

I then had a brainwave, why not flush the queue?

From my example in Post #5 all you need to add is the .Flush

[code]TCPSocket1.Write( bs.Read( bs.Length ) )
TCPSocket1.Flush[/code]

This is all that is needed to achieve max throughput.

This code is nicer though as you have the ability to yield in the loop if needed.

[code]While Not bs.EOF
  TCPSocket1.Write( bs.Read( 65535 ) ) ' make sure this is big enough so the queue doesn't empty or the xfer slows
  TCPSocket1.Flush
Wend[/code]

No special trickery is needed at the server end (in Carsten’s v3 example above, you can run it with “Boost” off).
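Putting the findings together, the whole fast-path Connected handler looks roughly like this (a sketch assembled from the snippets above; the path and the 65535-byte chunk size are illustrative, tune the chunk so the queue never drains):

[code]Sub Connected() Handles Connected
  Dim f As New FolderItem("C:\large.zip") ' illustrative path
  Dim bs As BinaryStream = BinaryStream.Open(f, False)

  TCPSocket1.Write "HTTP/1.1 200 OK" + Chr(13) + Chr(10)
  TCPSocket1.Write "Content-Type: application/zip" + Chr(13) + Chr(10)
  TCPSocket1.Write "Content-Length: " + Str(bs.Length) + Chr(13) + Chr(10)
  TCPSocket1.Write "Connection: close" + Chr(13) + Chr(10) + Chr(13) + Chr(10)

  While Not bs.EOF
    TCPSocket1.Write( bs.Read( 65535 ) ) ' keep the queue fed
    TCPSocket1.Flush                     ' push the buffer out immediately
  Wend

  bs.Close
End Sub[/code]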

The only problem that remains is the 8192-byte limit coded into ServerSocket (feedback://showreport?report_id=41046). I hope someone can look into it soon, as it’s been known about since October 2015 :frowning: