The implementation of UDPSocket in Xojo is event-driven and therefore asynchronous: I can construct a socket, bind it to a port, and send some data out to the network all in one chunk of code, but any incoming data on that socket must be handled in the socket's DataAvailable event.
This is all well and good if it's OK to receive the response asynchronously, but all too often I find myself in this sort of scenario:
// Need to make a decision about what to do next, based on what I hear
// back from the other end of a socket. Hmm... I sent it, but I don't know
// the response yet, due to the latency of the socket. So I'll mangle
// myUDPSocket with some sort of dataReceived flag, as well as a
// lastDataReceived string or somesuch.

while myUDPSocket.dataReceived = false
  // in myUDPSocket.dataAvailable() I'll set dataReceived = true and
  // stuff the data into lastDataReceived
wend

myUDPSocket.dataReceived = false // set it back to false so I can re-use this socket at a later time if needed

if myUDPSocket.lastDataReceived = "YAY" then
  // do something nice
else
  // do something evil
end if

myUDPSocket.lastDataReceived = "" // clear out the last data received so no garbage is left over if I re-use the socket
Is this considered the “right” way to do things? I’m concerned about:
a) What if nothing ever comes back from the socket? Do I have to have a timer or poll count or something in the socket to force me out of my while loop?
b) How does such a tight while loop affect performance (especially in a multi-threaded environment?) Do I need to yield?
c) It’s always been ambiguous to me: are sockets run in their own threads? I know they persist as long as they are connected - even if the handle to the socket is set to nil, the DataAvailable event can still fire (especially with TCP sockets).
May I suggest that you look at a finite state machine implementation?
I know it sounds abstract, but I think once you understand them you’ll come to stop thinking in terms of sync or async at all. Finite State Machines
Note that this is doing a simple EHLO to my personal mail server, so don’t abuse it too badly.
Before you open the project, find a way on your OS to see how many threads each app is running - I use Activity Monitor on the Mac.
Open the project, run it, and mash the ok button a few dozen times. Note the threads climbing. Each TCPSocket runs completely inside its own thread. The sockets will connect to my mail server, say EHLO, get a response from the server, then wait 10 seconds and disconnect. As they disconnect, the threads die, so after 10 seconds the thread count should start to drop.
I just banged this up in response to your comments - it’s not particularly clean or well-documented… but it illustrates how to do it.
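For anyone who can’t open the project, the heart of the approach is just a state property that the socket’s events advance - no code ever blocks waiting for a response. A stripped-down sketch (the state constants, `mState`, and `mReply` are names I’m inventing here, not the actual project code):

```xojo
// DataAvailable event of a TCPSocket subclass. Each time data arrives,
// we act based on the current state and then move to the next state.
Sub DataAvailable()
  Dim incoming As String = Me.ReadAll

  Select Case mState
  Case kWaitingForBanner
    If Left(incoming, 3) = "220" Then // SMTP greeting code
      Me.Write("EHLO example.com" + EndOfLine.Windows)
      mState = kWaitingForEhloReply
    End If
  Case kWaitingForEhloReply
    mReply = incoming
    mState = kDone // a Timer elsewhere could disconnect 10 seconds later
  End Select
End Sub
```

The point is that "send" and "handle the reply" live in different methods, and the state property is what ties them together.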
UDP is not guaranteed to be delivered in either direction, which is why it’s often used in places where getting every send & receive is not critical.
If it’s critical or important that you DO get the response to something you send, you should use TCP sockets.
Thanks @Norman Palardy - I’m very familiar with the differences between TCP and UDP, and I have very good reasons for being forced to use UDP in this context. I guess my original question really is this:
The way I’ve set things up in my original example (that of forcing the UDP socket into different states based on whether it is ready to send, sending, or waiting for a response) - bad or good? Safe or not?
True, that is procedural, but I’m faking the behavior of a state machine.
Again, the problem is that I want to use a generic UDP socket, opened when the app starts, to handle all kinds of information exchange. A true FSM would send a datagram out in one place and wait for the socket’s dataAvailable event to fire, handling the incoming data there - effectively breaking a single method into one method that sends data, plus a series of methods called to handle a response based on the datagram that arrives in dataAvailable.
Unfortunately, this is a large project (> 50,000 LOC) and re-architecting it to work as a true FSM would be more work than it’s worth.
My original question still stands: What are the dangers of doing it the way I’ve proposed in my first post? Is there a more elegant way to approach Synchronous-feeling UDP socket interaction, where I can simply do something like this:
(I can subclass a UDPSocket as SpiffySocket and implement sendSynchronously as in my first post to make this syntax possible - I just want to know if having tight loops like this potentially running in multiple threads will cause me grief down the road.)
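To be concrete, here is roughly what I have in mind (`SpiffySocket`, `mDataReceived`, and `mLastDataReceived` are names invented for illustration; the DataAvailable event handler sets the flag and stores the payload):

```xojo
// Hypothetical method on SpiffySocket, a UDPSocket subclass.
// Sends a datagram and spins until DataAvailable has stored a reply.
Function SendSynchronously(addr As String, port As Integer, data As String) As String
  mDataReceived = False
  mLastDataReceived = ""

  Dim d As New Datagram
  d.Address = addr
  d.Port = port
  d.Data = data
  Me.Write(d)

  While Not mDataReceived
    Me.Poll // pump the socket so DataAvailable can fire
    // a real version also needs a timeout and a yield here -
    // see the replies below about Microseconds and yielding
  Wend

  mDataReceived = False
  Dim result As String = mLastDataReceived
  mLastDataReceived = ""
  Return result
End Function
```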
Could you subclass Thread, instantiate a UDPSocket (hooking up the events with AddressOf), and implement a simple send/receive finite state machine? (Something like expect does.)
You would then be able to implement a blocking read in it.
All the complexities - the Threads/Timers needed to send and receive asynchronously, the current states - are hidden inside.
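Something along these lines, perhaps (all names are invented; I’m using AddHandler with WeakAddressOf to wire the socket’s event to a method of the Thread subclass):

```xojo
// Hypothetical UDPWorker, a Thread subclass that owns its own UDPSocket.
Sub Run()
  mSocket = New UDPSocket
  AddHandler mSocket.DataAvailable, WeakAddressOf Me.HandleDataAvailable
  mSocket.Port = 0 // let the OS pick a local port
  mSocket.Connect

  Dim d As New Datagram
  d.Address = mTargetAddress
  d.Port = mTargetPort
  d.Data = mRequest
  mSocket.Write(d)

  // trivial two-state machine: kSent -> kGotReply
  While mState = kSent
    mSocket.Poll
    Me.Sleep(10) // sleep this thread 10 ms; other threads keep running
  Wend
End Sub

Sub HandleDataAvailable(sender As UDPSocket)
  Dim d As Datagram = sender.Read
  mReply = d.Data
  mState = kGotReply
End Sub
```

The caller would block on the worker (or check its state) while the socket machinery stays hidden inside the class.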
Keep in mind UDP packet loss and packet reordering (after send 1, send 2 you can receive reply 2, then reply 1).
[quote=34094:@Kimball Larsen]a) What if nothing ever comes back from the socket? Do I have to have a timer or poll count or something in the socket to force me out of my while loop?[/quote]
Do the timing in the method itself. Save Microseconds to a local variable and check how much time has elapsed inside your loop.
[quote]b) How does such a tight while loop affect performance (especially in a multi-threaded environment)? Do I need to yield?[/quote]
Yes. Yield every X microseconds. Adjust X to balance throughput and responsiveness.
[quote]c) It’s always been ambiguous to me: Are sockets run in their own threads? I know they persist as long as they are connected, even if the handle to the socket is set to nil - the DataAvailable event can still fire (especially with TCP sockets).[/quote]
No. They are run in the main thread, or in whatever thread calls Poll. But since all Xojo threads are cooperative, you need to yield time to the other threads periodically.
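Putting the timing, yielding, and Poll advice together, the wait loop might look something like this (the 2-second timeout and the `mDataReceived` flag are arbitrary choices for illustration):

```xojo
// Sketch: wait up to 2 seconds for DataAvailable to set mDataReceived,
// yielding cooperatively so other Xojo threads get time to run.
Dim deadline As Double = Microseconds + 2000000.0 // 2 seconds from now
While Not mySocket.mDataReceived
  If Microseconds > deadline Then
    // nothing came back - bail out instead of spinning forever
    Exit While
  End If
  mySocket.Poll // fires DataAvailable on this thread if data arrived
  App.YieldToNextThread // cooperative yield to other threads
Wend
```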