Under the SSLSocket documentation:

Calling Read, ReadAll, or Lookahead may not fetch all of the data in the internal buffer. This is because SSL needs to read data in blocks (due to the cryptography), and it may not have a complete block in the buffer. For example, there may be 700 bytes available in the buffer, but SSL can only decrypt 512 bytes because the remainder is an incomplete block. In this case, some data may remain stagnant in the buffer. When more data comes in, the DataAvailable event handler is called. If no more DataAvailable events fire, then upon disconnection an additional DataAvailable event will be issued so you can pick up any stagnant data that SSL can give back.
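The arithmetic behind the documentation's 700-byte example can be sketched as follows. This is a hypothetical Python illustration, not the actual SSLSocket internals; the 512-byte block size and the `decryptable_bytes` helper are assumptions taken from the example above.

```python
# Illustration only: why block ciphers can leave a partial trailing
# block "stagnant" in the buffer until more bytes (or a final flush
# on disconnect) arrive.

BLOCK_SIZE = 512  # assumed cipher block size, per the example above

def decryptable_bytes(buffered: int, block_size: int = BLOCK_SIZE) -> int:
    """Bytes that can be decrypted right now: whole blocks only."""
    return (buffered // block_size) * block_size

# 700 bytes buffered, but only one complete 512-byte block is usable;
# the remaining 188 bytes wait until the rest of their block arrives.
print(decryptable_bytes(700))        # -> 512
print(700 - decryptable_bytes(700))  # -> 188
```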

Is this still relevant in the current version of the socket? I was extremely nervous about using SSLSocket in my applications because of this, but I have since tested it with various amounts of data sent via the remote .write method… and the local DataAvailable handler (with ReadAll) seems to ALWAYS deliver what was transmitted. I've tried small writes (5-6 bytes) and larger writes (700-2000 bytes), and they've all come in perfectly. I know that under the TCP protocol you're not guaranteed a complete packet on transmission… so I'm writing a handler to verify that I did, indeed, get a complete 'message' before processing it. I just want to know whether this is now handled internally somehow, or whether it's something I need to watch for and handle myself. Thanks…
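The "verify I got a complete message" handler mentioned above is commonly done with length-prefixed framing. The sketch below is a hypothetical Python illustration (not Xojo, and not the poster's actual handler): it assumes each message is prefixed with a 4-byte big-endian length, accumulates whatever the transport delivers, and only hands back complete messages.

```python
import struct

class MessageFramer:
    """Accumulates raw socket reads and yields only complete messages.

    Assumption: every message on the wire is prefixed with a 4-byte
    big-endian length. Partial data simply waits in the buffer.
    """

    def __init__(self) -> None:
        self._buffer = b""

    def feed(self, data: bytes) -> list:
        """Add newly received bytes; return any complete messages."""
        self._buffer += data
        messages = []
        while len(self._buffer) >= 4:
            (length,) = struct.unpack(">I", self._buffer[:4])
            if len(self._buffer) < 4 + length:
                break  # incomplete message: wait for more data
            messages.append(self._buffer[4:4 + length])
            self._buffer = self._buffer[4 + length:]
        return messages

def frame(payload: bytes) -> bytes:
    """Prefix a payload with its 4-byte length for sending."""
    return struct.pack(">I", len(payload)) + payload

# Simulate the wire data arriving in arbitrary small chunks
# (e.g. 8 bytes at a time), as TCP is allowed to do.
framer = MessageFramer()
wire = frame(b"hello") + frame(b"world")
received = []
for i in range(0, len(wire), 8):
    received.extend(framer.feed(wire[i:i + 8]))
print(received)  # -> [b'hello', b'world']
```

Because the framer doesn't care how the bytes are chunked, it handles both the "everything arrives at once" behavior observed in testing and the worst case of tiny fragmented reads.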

There are some SSLSocket problems:

In general it works, and the events fire correctly.

In particular, I once saw data arrive in small 8-byte chunks, so it took a lot of events before all the data had arrived.

Sorry for the late reply, and I appreciate the feedback, Christian. I did some testing against the problems you described, and most (if not all) seem fixed. I repeatedly opened and closed SSLSocket instances on a ServerSocket, processed data, and closed them over a few hours while keeping an eye on memory usage and the report logs.

The only issue I had was during the initial test: I had threads running on those instances that I didn't stop and kill before the session closed (I figured that would be handled automatically). This closed the instances within the ServerSocket but obviously left them in memory, as before long the memory usage started going up along with the CPU usage… which pointed to the threads not being torn down. To solve this, I manually killed the thread and set its instance to Nil in the SSLSocket Error event handler (if the error was 102). I re-ran the tests, and everything worked smoothly.

After 1000 open-and-close SSLSocket sessions, my application was still using 3.2 MB of memory, and CPU usage never spiked above 1%. I'm happy and confident with the results…