I am not very knowledgeable on networking stuff (but I am working on learning!), so please forgive me if this is a stupid question…
From what I gather, most code that uses server sockets and helper apps is written so that when a request comes into the main app, it sees what the request is and then passes some information/command to the helper, which does its thing. The helper then has to send the result back (so, serializing it) to the main app (be it through a file, IPC, or a socket connection), which then sends the result back to the client… so basically the result data has to be transferred twice: once from the helper to the main app, and then from the socket handling the connection to the client…
Is it possible when a request is received to hand off the actual connection to a helper so it can directly send the response to the client?
Using shared memory could be a way to avoid copying the data, but I don’t have MBS and don’t have the knowledge to set something like that up… never mind doing it Xplatform!!!
BTW, as long as Xojo only uses cooperative threads on a single processor, I think such shared memory capabilities should be built into the framework… but I don’t think we will ever see that.
I have several apps that use helper apps and pipe data back and forth either through a real pipe like a TCP socket or a chunk of shared memory. What you want to do as far as handing things off depends entirely on where the input to the helpers is coming from and where it ultimately needs to go.
As far as shared memory is concerned, I’ve used that only for background apps that are processing really large amounts of data. In my case they were HD images from web cams. But that also takes a lot of memory to have locked and unable to be moved around by the system. And I still needed commands in a queue over the socket to tell the watchers of that image that the next image was ready and waiting for them to read out of shared memory. It will also only work as long as the sender and receiver are on the same machine.
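Not Xojo, but the basic pattern described above (a shared block for the bulk data, plus a small "ready" command in a queue so watchers know when to read) can be sketched in Python. The names and sizes here are purely illustrative, and a `queue.Queue` stands in for the command socket:

```python
from multiprocessing import shared_memory
import queue
import threading

# Shared block for the bulk data (a real app would size this for an HD frame).
shm = shared_memory.SharedMemory(create=True, size=1024)
cmds = queue.Queue()  # stands in for the command queue over the socket

def producer():
    frame = b"HD-frame-bytes"          # pretend this is a webcam image
    shm.buf[:len(frame)] = frame       # write the bulk data into shared memory
    cmds.put(("ready", len(frame)))    # small command: "next image is waiting"

def consumer(out):
    tag, n = cmds.get()                # wait for the "ready" command
    out.append(bytes(shm.buf[:n]))     # read the image out of shared memory

out = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(out,))
t1.start(); t2.start()
t1.join(); t2.join()

print(out[0].decode())
shm.close()
shm.unlink()
```

Note that only the tiny "ready" message travels through the queue; the heavy payload never gets copied through a pipe, which is the whole point of the approach.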
I originally implemented that shared memory approach because there was a bug, or possibly just a limitation, with older versions of Xojo where a server socket read was limited to 8k, so you had to get a LOT of DataAvailable events to receive an HD image, and each one required a trip through the event loop, so it was very slow. This has been fixed for a while now, and my next version of that software will eliminate the shared memory solution, as it isn’t noticeably faster than just piping the data through a TCP socket but is very much more complicated.
If you have clear network access between the client and the helper app, then the main app can just set up the communication between the two directly. You can’t pass off a socket, but you can tell the helper app to open another listening socket and then also tell the client to connect on that socket to continue the conversation. That eliminates the middle man after the initial connection is made. This works great as long as the two processes are running where they can reach each other without having to worry about people’s cable modems and NAT passthroughs and such, and doesn’t work at all if you are working in such a distributed environment or if the firewall policies won’t allow you to open a range of ports for that sort of communication. The main app would be strictly for load balancing between the helper apps in that case.
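That hand-off (helper opens its own listening socket, main app tells the client the new port, client reconnects directly) can be sketched in Python. This is a toy version with threads standing in for the three processes, and the messages are made up:

```python
import socket
import threading

def run_helper(ready):
    """Helper opens its own listening socket on an ephemeral port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))            # port 0 = let the OS pick a free port
    srv.listen(1)
    ready["port"] = srv.getsockname()[1]  # report the port to the "main app"
    ready["event"].set()
    conn, _ = srv.accept()                # client now connects directly to us
    conn.sendall(b"response straight from the helper")
    conn.close()
    srv.close()

# "Main app": starts the helper and learns which port it is listening on.
ready = {"event": threading.Event()}
t = threading.Thread(target=run_helper, args=(ready,))
t.start()
ready["event"].wait()

# "Client": told by the main app to reconnect on the helper's port.
c = socket.create_connection(("127.0.0.1", ready["port"]))
data = c.recv(1024)
c.close()
t.join()
print(data.decode())
```

After the redirect, the response bytes travel helper-to-client once, instead of helper-to-main and then main-to-client.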
Again though, like shared memory, that only makes a difference if you’re sending a HUGE amount of data back and forth. Just establishing the connection is a very slow process; once you have the connection in place, sending data around is very fast. So if you’re not sending hundreds of KB or even MB back and forth, it’s not going to be any faster to do all that work than just to forward it through the main app.
In fact, one use I was thinking of would routinely need to transfer about a megabyte of data back and forth…
But it sounds like it’s not worth worrying about up front… (I like to design for/consider worst-case scenarios up front if I can)
There are also IPC sockets, which let you create a network-like socket between apps on the same machine. There are two downsides that I’m aware of with IPC.
- It appears that IPC is disk-based, so that every message is written to disk.
- The App Store / Sandboxing rules do allow for IPC & shared memory, but there are limitations which can be frustrating at first. If you choose to go this way, let me know and I’ll help out as best I can.
It’s totally possible to do shared memory on macOS without the use of a plugin (and it’s accepted by App Store rules); however, as I don’t do Windows, I can’t help on the Windows side.
At work we are in a mixed Mac/PC environment, so I don’t do anything that is not X-platform.
Pretty sure this is not true - I think that IPC sockets are just TCP sockets bound to localhost internally.
When I tested this several years ago, while trying to find a solution that was compatible with the Mac App Store, I suspected it was. I used a console-based disk activity tracker, and it showed activity every time a message was sent, which just added fuel to my belief.
Why, for the love of god, can I not recall the name of that tool? I had to use it against an App Store reviewer once who claimed that my app was doing things to files that it wasn’t. No, not the time I was rejected for using Apple’s own imaging APIs, which use atomic saving.
As to IPC sockets … I don’t know if this still true, but I found this post by Aaron Ballman from 2006 on the old forum:
[quote]IPC on OS X and Linux is just using a unix domain socket, so that would be very straight-forward. On Windows, in newer versions of RB (I think RB2005r3 or greater), it’s just a TCPSocket bound to the localhost. However, we convert the path into a port in an undocumented manner. So, on Windows, you can do some tricks to discover which port number the other TCPSocket should be using, but there’s no promise that a future version of the product won’t change the algorithm.
And from here:
[quote]UNIX domain sockets use the file system as the address name space. This means you can use UNIX file permissions to control access to communicate with them.[/quote]
So on Mac and Linux, if I understand that, it looks like it uses files, or at least some part of the file system API, but on Windows it’s not using files for sure…
That last link is worth a read.
[quote]The Linux kernel per se does not persist any internal data or application data. You can use all kernel features without having a disk mounted at all.
You have to differentiate between filesystems and disks. A filesystem can be completely virtual; it can reside in memory, or on the network.
Some POSIX operations use paths as unique identifiers, including UNIX domain sockets. The path is only there as an identifier. You can place it in a tmpfs, for example, to avoid any disk usage. On a modern Linux system, /tmp/ is typically mounted on a tmpfs.
Note that even if your socket end-point lies within a filesystem residing on a disk, the disk usage is still negligible. As the path is only used to identify/find the socket itself, none of the actual data is ever written to the disk. And the kernel will also cache the path in memory.[/quote]
It looks like Unix domain sockets never write their data to the disk even though they use the file system.
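That "path is only an identifier" behavior is easy to see from Python on a Unix system: the socket file gets created at the path you bind, ordinary file permissions apply to it, yet the file itself reports a size of zero no matter how much data flows through the connection. The path and message here are made up for the demo:

```python
import os
import socket
import stat
import tempfile
import threading

# The path is only an address; the socket "file" holds no data.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)               # creates the socket file at `path`
os.chmod(path, 0o600)        # ordinary UNIX file permissions gate access
srv.listen(1)

def serve():
    conn, _ = srv.accept()
    conn.sendall(b"hello over a unix domain socket")
    conn.close()

t = threading.Thread(target=serve)
t.start()

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(path)
msg = cli.recv(1024)
cli.close()
t.join()
srv.close()

print(msg.decode())
print("is a socket file:", stat.S_ISSOCK(os.stat(path).st_mode))
print("bytes stored in it:", os.stat(path).st_size)
os.unlink(path)              # clean up the leftover socket file
```

The `st_size` of the socket file stays 0, which matches the quoted explanation: none of the actual traffic is ever written to disk.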
Just two things I’d like to mention.
- I could very well be wrong, and the disk monitoring tool I was using was providing false positives based upon the socket ‘grazing’ the file used for IPC.
- Linux != macOS; there are some really cool features in Linux that I’d love to see Apple adopt, but given my impression of their current direction, I don’t expect it. tmpfs is one thing I’d really love to see: it’s a filesystem that lives only in memory, and would make IPC and such so much faster.
I’m guessing it just writes a file name on disk… but if you don’t have an SSD, with a decent-sized payload you might be able to determine that just by looking at the time sent and the time received, I would think, given the potential difference in speed depending on whether the data was saved to and read from disk or not.
IPCSocket works similarly to a regular TCPSocket, but instead of using an IP address and port number, a local file path is used, leading to a so-called socket file. The same path must be used on both ends of the connection, and it should preferably be a unique file location inside a temporary folder that’s not usually visible to the user. The file might remain in existence even after closing the connection, so you should delete any leftover files from previous connections when you make a new connection.
The time between the message being sent and the DataAvailable being fired can be as low as 350 microseconds but it varies considerably depending on the type of your application (GUI or Console), what is currently happening on your system, accesses to your hard drive…
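You can get a rough feel for that kind of latency yourself. This isn’t Xojo’s IPCSocket, just a connected pair of local sockets in Python, and the measurement only captures the raw send/receive cost, not a GUI event loop, so treat the number as a lower bound:

```python
import socket
import time

# A connected pair of local sockets stands in for the IPC connection.
a, b = socket.socketpair()

t0 = time.perf_counter()
a.sendall(b"ping")
reply = b.recv(4)            # wait for the 4 bytes to arrive
elapsed_us = (time.perf_counter() - t0) * 1_000_000

print(f"local send+recv took about {elapsed_us:.0f} microseconds")
a.close()
b.close()
```

On an idle machine this tends to land in the tens of microseconds, which is consistent with the point above: once the connection exists, moving data locally is very fast, and the event-loop trip dominates.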
On Windows the IPC socket is a TCP socket with an algorithm to determine the port. @Joe Ranieri had a posting on this years ago - not sure about other OS’s though.
[quote=444681:@Ivan Tellez]IPCSocket works similarly to a regular TCPSocket, but instead of using an IP address and port number, a local file path is used, leading to a so-called socket file. The same path must be used on both ends of the connection, and it should preferably be a unique file location inside a temporary folder that’s not usually visible to the user. The file might remain in existence even after closing the connection, so you should delete any leftover files from previous connections when you make a new connection.[/quote]
I am sure that is how it is on macOS, as that describes a Unix domain socket… As Aaron said, that is not the mechanism used on Windows (Aaron was a Xojo engineer, and their Windows guy, before).
Which is exactly what Aaron said back in 2006!