TCPSocket Transfer Slows Down.

I have a client app that uses a TCPSocket to write a string of commands plus Base64-encoded zip file data to a server app listening with a ServerSocket. The transfer speed is fine to begin with, but as the transfer goes on it gets slower and slower, dropping to about 160 down to 80 KB/s for the last half of the transfer. I've tested this with different network devices and it's always exactly the same. If I run both the client and the server on my development machine (loopback) it works fine. Any advice would be appreciated. Thanks… :confused:

[code]
' Client write
dim f as FolderItem = GetFolderItem(strPath) ' Zip file, about 27 MB on average
dim t as TextInputStream
dim data as string

if f <> nil then
  t = TextInputStream.Open(f)

  ' Read the file in chunks and Base64-encode each one.
  ' 65532 is a multiple of 3, so the encoded chunks concatenate into one valid Base64 stream.
  while not t.EOF
    data = data + EncodeBase64(t.Read(65532))
  wend
  t.Close

  sck.Write(command1 + RS + command2 + RS + data + RS + EOT)
end if[/code]

[code]
' Server DataAvailable
' Check the incoming buffer for the end-of-transfer marker,
' then append everything that has arrived so far.
intEOT = InStr(me.Lookahead, EOT)

strData = strData + me.Read(me.BytesAvailable)

if intEOT > 0 then
  ' Transfer complete...
  ' Do stuff...
end if[/code]

Instead of reading into one large string and then writing it all at once, why not put a Sck.write right in the while loop? Depending on the size of the file, you may be overloading the buffers somewhat.
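
Something along these lines, just a sketch reusing your variable names (sck, strPath, command1, command2, RS, EOT), not tested: it streams the file through a BinaryStream and writes each chunk as soon as it is read, instead of building one large string first.

[code]
' Hypothetical chunked-write version of the client code above.
dim f as FolderItem = GetFolderItem(strPath)
dim bs as BinaryStream

if f <> nil then
  bs = BinaryStream.Open(f, False) ' open read-only

  ' Send the command header first, then the file in chunks.
  sck.Write(command1 + RS + command2 + RS)

  ' 65532 is a multiple of 3, so the Base64-encoded chunks
  ' concatenate into one stream that decodes cleanly on the server.
  while not bs.EOF
    sck.Write(EncodeBase64(bs.Read(65532)))
  wend
  bs.Close

  sck.Write(RS + EOT)
end if[/code]

Keep in mind the socket still queues everything in its outgoing buffer, so for very large files you may also want to pace the writes (see further down the thread).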

I did read that solution elsewhere on the forum and I gave it some thought.

Are you suggesting that I use a BinaryStream, similar to this?

[code]
while not bs.EOF
  sck.Write(bs.Read(65535))
wend[/code]

Thanks for any advice.

[quote=337555:@Geoff Haynes]I did read that solution elsewhere on the forum and I gave it some thought.

Are you suggesting that I use a BinaryStream, similar to this?

[code]
while not bs.EOF
  sck.Write(bs.Read(65535))
wend[/code]

Thanks for any advice.[/quote]
Yes, and play around with the size of the chunks you send. As I recall, we chose 51200 bytes for Xojo Cloud uploads because it gave the best overall upload performance.

Thanks again @Greg O’Lone. It turned out that the hardware was simply unable to process the data coming into the buffer fast enough, and after about 20 MB it would just get slower and slower. I solved it by sending 10 MB chunks at a time and waiting for each chunk to complete before sending the next one.
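
Roughly what I mean, as a sketch rather than my exact code (the bs property and the SendNextChunk and kChunkSize names are just for illustration): the sending socket subclass keeps the BinaryStream open and only queues the next chunk from its SendComplete event, so no more than one chunk sits in the outgoing buffer at a time.

[code]
' bs As BinaryStream is a property of the TCPSocket subclass

Sub SendNextChunk()
  Const kChunkSize = 10485760 ' 10 MB per chunk; tune as needed

  if bs = nil then return

  if bs.EOF then
    ' Whole file sent; finish the transfer.
    bs.Close
    bs = nil
    me.Write(RS + EOT)
  else
    me.Write(bs.Read(kChunkSize))
  end if
End Sub

Sub SendComplete(userAborted as Boolean) ' TCPSocket event
  if not userAborted then
    SendNextChunk ' previous chunk has left the buffer, queue the next one
  end if
End Sub[/code]

The first chunk gets queued right after the command header is written. This sketch writes raw bytes; if the chunks are still Base64-encoded, keep the chunk size a multiple of 3 so the pieces decode as one stream.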

10 MB chunks seem excessive. I would think the same strategy with 256 KB chunks should be sufficient. If not, it would be interesting to know what kind of hardware you are working with.

Actually, after working with it some more today, I found that the main problem was on the server side. By adding a new delimiter I was able to read and remove only the newly arrived sections of the BinaryStream data from the buffer, and that solved my problem. I'm guessing that as the buffer filled up, the program was forced to scan more and more data to find the part it had just received, and that was overloading the buffer/processor.

This example kind of explains how I fixed it in DataAvailable:
https://forum.xojo.com/3902-tcpsocket-question/1
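
Something along these lines (a sketch, not the exact code; strData is a property on the server socket, and searchFrom is just an illustrative name):

[code]
' Server DataAvailable (sketch)
' Remember where the old data ends so only the newly appended part
' is searched for the end-of-transfer marker.
dim newData as string = me.Read(me.BytesAvailable)
dim searchFrom as integer = strData.Len - EOT.Len + 2 ' small overlap in case EOT spans two reads
if searchFrom < 1 then searchFrom = 1

strData = strData + newData

dim intEOT as integer = InStr(searchFrom, strData, EOT)
if intEOT > 0 then
  dim payload as string = strData.Left(intEOT - 1) ' commands, delimiters and Base64 data
  strData = ""
  ' Transfer complete... do stuff with payload...
end if[/code]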