IPCSocket Speed

I have a main app and a helper app with an open IPCSocket between them. The main app opens a file as a BinaryStream and reads it in 256 KB chunks, writing each chunk to the IPCSocket in a loop. Using a timer before and after, I know the read side is acceptably quick, around 150 MB/s for my hard drive setup.
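For reference, a minimal sketch of the sender loop described above (classic Xojo API). The names `ipc`, `f`, and `kChunkSize` are illustrative, not from the original post:

```xojo
Const kChunkSize = 262144  ' 256 KB per chunk, as in the post

Dim f As FolderItem = GetFolderItem("source.bin")      ' placeholder file
Dim bs As BinaryStream = BinaryStream.Open(f, False)   ' open read-only

Dim start As Double = Microseconds

While Not bs.EOF
  ' Read one chunk from disk and queue it on the IPCSocket.
  ipc.Write(bs.Read(kChunkSize))
Wend
bs.Close

' Elapsed seconds; bytes / secs gives the ~150 MB/s figure measured here.
Dim secs As Double = (Microseconds - start) / 1000000.0
```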

The issue is the helper app (a windowed app; I don’t have a console license). A BinaryStream is opened on connection, and DataAvailable triggers it to write to the created file. I’ve seen no speed difference between Read and ReadAll, and I don’t have any additional overhead from flushing the binary stream. Following the Xojo docs, I do have a timer set to poll the socket and trigger DataAvailable, with a Period of 1. Using timers, I know I’m not pushing more than 8 MB/s, which is a huge throughput loss.
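A sketch of the receiver side as described, again with illustrative names (`ipc` is the IPCSocket, `outStream` the BinaryStream created on connection). One thing worth noting: each DataAvailable delivery is typically much smaller than the sender's 256 KB chunk, which by itself can drag effective throughput well below the sender's rate:

```xojo
' Timer.Action handler, Period = 1, forcing the socket to be serviced:
Sub Action()
  ipc.Poll  ' fires DataAvailable if bytes are waiting
End Sub

' IPCSocket.DataAvailable event handler:
Sub DataAvailable()
  outStream.Write(Me.ReadAll)  ' drain everything queued so far
End Sub
```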

Is this sort of poor performance inherent to IPCSocket?

Generally, the goal is to have the main app read a file once and have helper apps write it to multiple locations. I’ve tried a threaded app, but each additional helper thread halves throughput, since Xojo threads yield to each other.

[quote=137750:@Angelo Lorenzo]I have a main app and a helper app with an open IPCSocket between them. The main app opens a file as a BinaryStream and reads it in 256 KB chunks, writing each chunk to the IPCSocket in a loop. Using a timer before and after, I know the read side is acceptably quick, around 150 MB/s for my hard drive setup.

The issue is the helper app (a windowed app; I don’t have a console license). A BinaryStream is opened on connection, and DataAvailable triggers it to write to the created file. I’ve seen no speed difference between Read and ReadAll, and I don’t have any additional overhead from flushing the binary stream. Following the Xojo docs, I do have a timer set to poll the socket and trigger DataAvailable, with a Period of 1. Using timers, I know I’m not pushing more than 8 MB/s, which is a huge throughput loss.

Is this sort of poor performance inherent to IPCSocket?

Generally, the goal is to have the main app read a file once and have helper apps write it to multiple locations. I’ve tried a threaded app, but each additional helper thread halves throughput, since Xojo threads yield to each other.[/quote]

Why not write the data to a file and then pick it up from the helper?

Michael, this would be a file-transfer application: read once, write many, so having each helper app read from the source and then write offers little if any advantage. The idea of IPC doesn’t really fit my needs, but I just found it so strange that there was such a decrease in reading from the socket buffer.

Since this was source to a single destination, I tried the app with TCPSocket, and locally I can get anywhere from 60–90 MB/s, but I think the last bit of speed loss is due to TCP overhead.
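For comparison, the local TCPSocket variant only changes the transport setup; the address and port below are placeholders, and the chunked write loop stays the same as with the IPCSocket:

```xojo
Dim tcp As New TCPSocket
tcp.Address = "127.0.0.1"  ' loopback, so no real network in the path
tcp.Port = 9000            ' arbitrary placeholder port
tcp.Connect

' Once the Connected event fires, stream chunks exactly as before:
'   tcp.Write(bs.Read(kChunkSize))
```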

Maybe I’ll look into FileMapping and shared memory from Monkeybread (MBS). I think the ideal method would be a shared memory buffer.

IPCSocket also has some horrible memory leak behavior under Cocoa: <https://xojo.com/issue/34107>

I’ve run into the same set of issues, and am pretty happy so far with FileMapping from MBS (though, to be fair, I do use an IPCSocket connection from the server to the client to tell it the name of the file mapping object).
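A sketch of just the name handshake described here, using only the IPCSocket; the MBS FileMapping calls themselves are deliberately omitted, since the exact plugin API isn’t shown in the thread, and the naming scheme is a placeholder:

```xojo
' Server side, after creating the shared-memory mapping under a unique name:
Dim mappingName As String = "com.example.transfer." + Str(Ticks)  ' placeholder scheme
ipc.Write(mappingName + Chr(0))  ' null-terminate so the client knows where it ends

' Client side, in IPCSocket.DataAvailable (assumes the whole name
' arrives in one delivery, which short messages usually do):
Dim data As String = Me.ReadAll
Dim nullPos As Integer = InStr(data, Chr(0))
Dim name As String = Left(data, nullPos - 1)
' ...then open the MBS file mapping by that name and read the payload from it.
```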

[quote=141845:@Michael Diehr]IPCSocket also has some horrible memory leak behavior under Cocoa: <https://xojo.com/issue/34107>
[/quote]
@Michael Diehr, I can’t find this Feedback case; do you know if it was ever resolved? I’m seeing some silent quits in my IPC helper app.

I think that case is not public.

It was reported on a beta version, so Alpha and Beta testers can view it. I’ll ask that it be made public.