My application is crashing in debug mode (developing on Macintosh) without giving any error message. I just get the system message “MyApp.debug quit unexpectedly.”
The system log report shows this:
[quote]Application Specific Information:
*** error for object 0x193bf30: pointer being freed was not allocated
I haven’t used any #Pragma directives to disable any kind of checking.
My application is processing an input data file. When I use a small data file (50 lines), it works fine. When I use a larger one (100 lines), System.DebugLog shows that it processes the entire file, but then it crashes after processing the last line. The processing of the input data is very processor intensive, using a recursive routine, and I suspect that at the end it may be having problems when it tries to release memory. Any ideas what I should be looking for?
Download link for full crash report?
I ran this under Xojo 2016r3 and 2018r3. It crashes in both versions. The above link is for the 2016r3 crash report. I can post a 2018r3 crash report if it helps.
Not too much to see. My favourite type of crash: the §$%& autorelease pools.
Do you do anything with text?
5 com.apple.UIFoundation 0x91a6e50f -[NSParagraphStyle dealloc] + 89
6 com.apple.UIFoundation 0x91a6e49f -[NSParagraphStyle release]
Or with Thumbnails?
com.apple.FinderKit 0x0e3020c6 TThumbnailExtractorThread::Main() + 580
Yes, those are common.
Somewhere in Apple framework, Xojo framework or plugin, an object is released once too often.
This causes a crash when the pool is drained.
So you can try to reproduce this reliably and maybe see which Xojo code triggers the crash.
NSParagraphStyle is the key here, and that’s used for styled text in the macOS APIs.
I’m not using any styled text. However, the mention of text got me thinking about a diagnostic logging routine that I have in the program. As the main routine runs, it calls a printlog routine frequently to output various information during the run. This all gets appended to a text string and is finally output to a text area on a secondary window. This was never a problem when I manually ran one line of data at a time; the log data would get cleared at the start of each run. But this week I added a batch processing routine that takes input from a file and runs it.

I didn’t think the logging would be a problem, because the string should get cleared after each input line is processed. However, after further consideration, I decided to completely disable the logging while in batch mode. I thought that fixed it, because it was then able to get through the whole 200-line input file without crashing. However, when I tried a 400-line file it started to crash again. I figured that there must still be text accumulating in the log string, so I added code to clear the string after each input line was processed. That helped, and I was able to process a 1000-line input file up to line 789 before it crashed again.

That’s where I’m at now. I think there’s a second string in there that may also be accumulating text that I’ll have to deal with. In any event, it appears that I’m headed in the right direction. So, thanks everyone for the suggestions.
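In case it helps anyone else, the fix amounts to clearing the accumulated log string between records. A minimal sketch of the idea — the property and method names here are placeholders, not my actual code:

```xojo
' Sketch only - LogText, BatchMode, RunOneRecord etc. are illustrative names.
Sub ProcessBatchLine(rec As String)
  LogText = ""                          ' clear the accumulated log string first
  RunOneRecord(rec)                     ' calls the printlog routine internally
  If Not BatchMode Then
    LogWindow.LogArea.Text = LogText    ' only update the TextArea when not batching
  End If
End Sub
```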
After more work removing extraneous text logging calls, I’ve managed to get batch mode to process input files with as many as 10,000 records. However, the program still crashes when the main batch routine exits. I placed a MsgBox call at the very end of the batch routine, so that it hangs there until I click the OK button. This gives the program the opportunity to complete any asynchronous operations such as writing the last data to the output file and closing it. When I click the OK button, the routine exits and the program crashes, but at least it manages to process all of the data and output the results before the crash.
I’ve been using the Activity Monitor to look at memory usage. As my program runs, it starts at around 70 MB and then gradually increases to over 200 MB by the time the batch run finishes. There’s clearly some kind of memory leak. The program also becomes progressively slower as it proceeds through the input file.
As a rather ugly workaround, I’ve now split the batch processing routine into 3 parts. The first is called when the user clicks the Batch button on the main window. It reads various parameter data, opens the input data file, and opens the output report file. The second routine is placed in the action event of a timer. It suspends the timer, reads the next 20 records from the input file, processes the data (by calling a recursive processing routine), writes the results to the output file, and re-enables the timer. The third routine is called when the end of file is reached on the input file. It stops the timer, and closes all files. Since the main processing is done in the timer action event, it returns to the top level of the main thread after every 20 records are processed, and the runtime system can clean up memory immediately rather than leaving it to the very end of processing where it’s clearly unable to deal with it.
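For anyone who wants to try the same workaround, here is roughly what the three parts look like. This is a sketch, not my actual code — all the names (BatchTimer, ProcessRecord, and so on) are illustrative:

```xojo
' Part 1: Batch button Action event - set up and start the timer.
Sub BatchButtonAction()
  InputFile = TextInputStream.Open(InputItem)       ' open the input data file
  OutputFile = TextOutputStream.Create(OutputItem)  ' open the output report file
  BatchTimer.Period = 50
  BatchTimer.Mode = Timer.ModeMultiple              ' start processing
End Sub

' Part 2: Timer Action event - process the next chunk of records.
Sub BatchTimerAction()
  BatchTimer.Mode = Timer.ModeOff                   ' suspend while working
  For i As Integer = 1 To 20
    If InputFile.EOF Then
      FinishBatch                                   ' Part 3
      Return
    End If
    Dim rec As String = InputFile.ReadLine
    OutputFile.WriteLine(ProcessRecord(rec))        ' the recursive processing routine
  Next
  BatchTimer.Mode = Timer.ModeMultiple              ' re-enable; control returns to the
End Sub                                             ' event loop, so memory gets cleaned up

' Part 3: end of input reached - stop the timer and close everything.
Sub FinishBatch()
  BatchTimer.Mode = Timer.ModeOff
  InputFile.Close
  OutputFile.Close
End Sub
```

The key point is that returning to the event loop after each chunk is what gives the runtime a chance to release memory.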
Running with the timer workaround, there are no more crashes, and the memory usage never goes above about 75 MB. The timer’s period is set to 50 ms, so it only adds 50 ms for every 20 input records processed, which is insignificant because each group of 20 records takes a minimum of 10 seconds to process.
It’s not pretty, but it works, and at least the program is usable as I continue trying to track down the exact cause of the leak.
Some thoughts, but I don’t know how it will apply to what you’re doing.
I noticed that when I was working with video (via Apple’s API), I was able to improve performance by managing the AutoReleasePool myself. My first attempt basically didn’t drain the pool until all the frames were processed (we’re talking thousands in some cases). This of course not only increased memory usage during processing, but also meant that upon completion there was a horrendous delay while it released all that memory.
In the end I settled on flushing the pool every few seconds (when it also updated the progress bar). I still gained a performance improvement, while reducing the overall memory usage, and I no longer had to wait at the end of the process.
It sounds very similar to what you’re experiencing, in the sense that a long process uses more and more memory and then there’s a problem at the end, and doing the work in chunks reduces memory usage and avoids the problem at the end. Because my entire interaction with the frames of a video was done using declares, it made sense to take control; however, in your case I don’t think experimenting with a manual AutoReleasePool would help, as I expect you’d just be fighting Xojo’s implementation.
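For reference, managing the pool manually from Xojo means declaring into the Objective-C runtime. It looks roughly like this — an untested sketch of the general pattern, and as I said, it may well conflict with Xojo’s own pool handling:

```xojo
' Sketch: create and later drain an NSAutoreleasePool via the Obj-C runtime.
Declare Function objc_getClass Lib "/usr/lib/libobjc.dylib" (name As CString) As Ptr
Declare Function sel_registerName Lib "/usr/lib/libobjc.dylib" (name As CString) As Ptr
Declare Function objc_msgSend Lib "/usr/lib/libobjc.dylib" (obj As Ptr, sel As Ptr) As Ptr

' pool = [[NSAutoreleasePool alloc] init]
Dim pool As Ptr = objc_msgSend( _
  objc_msgSend(objc_getClass("NSAutoreleasePool"), sel_registerName("alloc")), _
  sel_registerName("init"))

' ... process one chunk of frames here ...

' [pool drain] - releases everything autoreleased since init
Call objc_msgSend(pool, sel_registerName("drain"))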
Thanks. I was wondering if I could manage the autorelease pool myself, and assumed that it would be necessary to use declares. It appears that the timer approach does work, and since this isn’t going to be a commercial product, I won’t worry about it. In fact, the batch runs are really only for the purpose of getting some statistical performance data. Once that’s done, I likely won’t need the batch function anymore.
As I write this, I’m doing more testing, comparing a 2018r3 build to a 2016r3 build. The 2016r3 build runs faster, but seemed to have worse memory problems, before the timer fix.
Would be nice if an app could call the garbage collector routine as can be done in some other languages.
I have done some testing lately again with memory consumption. How did you measure the memory?
a) Memory in Activity Monitor and memory in Instruments aren’t measured the same way. In Activity Monitor you get the total memory, while in Instruments you get the used memory. The Leaks instrument can show you whether you’re leaking memory, and still the total memory in Activity Monitor gets higher.
b) I don’t know how other apps handle memory fragmentation. Safari, Mail etc. run on my Mac for weeks and don’t show anything like what I see in Xojo apps, where the total memory only goes up and up and up.
I’ve never used Instruments. I was using the Memory tab in Activity Monitor; I don’t know the exact meaning of the number that is displayed. All I did was watch the memory value of my application as it ran.

Since the routine that processes the input is heavily recursive and creates several arrays, I would expect memory consumption to increase as this routine is called and as it calls itself recursively. However, I would then expect all of the memory allocated during this call to be released after the routine returns to the top level of the program, and this should happen frequently. But clearly, the memory was never being released. In one instance, I saw the memory balloon to over 5 GB before the program crashed, but I have never been able to recreate this situation.

I have seen differences in memory consumption according to which processing options I select in my app. I plan to examine these more closely to see how they differ. I know, for example, that one of the options that appears to increase memory consumption uses quite a bit more regex processing than the other options.
I don’t know how other apps handle memory allocation/fragmentation either. However, I have created Xojo apps that run for weeks without causing any problems. Since they haven’t caused any problems, I’ve never monitored their memory use. I’ll have to fire one up and see what it does.