Speed up compiling

Since “upgrading” my machine to macOS 10.14, command-r takes a while with my current project (which I’ll admit is rather large).

The first 5 seconds, I get the wheel of death.
The next 15 seconds, the progress bar moves along, indicating it's doing something.
The last 20 seconds, nothing happens; in Activity Monitor, Xojo is consuming 30% of my CPU. I assume that it's writing all the stuff to disk and the bottleneck here is APFS and a 2012 SSD.

Are there any hints, tips or tricks I can do to speed this up?

I always ask, but I'm sure you don't…
Have you got iCloud syncing turned on?

If "the bottleneck here is APFS and a 2012 SSD", then would moving things onto an external (newer) SSD using HFS help?

I usually go read Twitter while compiling.
Progress… :wink:

Same happens here on my new MacBook Pro 16.
It’s a feature I guess.

I also wish the loading of projects were much faster. It takes minutes before my biggest project is loaded and ready to work with. This wasn't the case with older Xojo versions.

We are in the same boat: the iMac (SSD, i5, 8 GB) is doable, the MacBook Air (SSD, i5, 4 GB) is slow, slow at compiling, and Windows (AMD Ryzen 5, 64 GB, NVMe drive) is terrible at compiling.

It looks like something changed. I'm talking about projects of different sizes. 2019R2 was faster (it feels like it) than 2019R2.1.

SSDs should be fast if the drive has at least 20% of its space left. On the Mac you might want to check that TRIM is enabled; be cautious if enabling it manually.
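For what it's worth, you can check (and, on 10.10.4 or later, manually enable) TRIM from Terminal. This is just a sketch; trimforce comes with Apple's own stern warning, so back up before running it:

```
# SATA SSDs: look for "TRIM Support: Yes" in the output
system_profiler SPSerialATADataType | grep -i trim

# NVMe drives report it here instead
system_profiler SPNVMeDataType | grep -i trim

# Apple's built-in switch for third-party SSDs (10.10.4 and later);
# it asks for confirmation and reboots the Mac when it finishes
sudo trimforce enable
```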

Which format? How large is the project?

On my really slow MacBook Air compiling is even slower.

[quote=466405:@Jeff Tullin]If ‘the bottleneck here is APFS and a 2012 SSD’ then would moving things onto an external (newer) SSD using HFS help?

[/quote]
I'm afraid you can't on 10.14; only 10.13 allowed this.

[quote=466404:@Sam Rowlands]Since “upgrading” my machine to macOS 10.14, command-r takes a while with my current project (which I’ll admit is rather large).

The first 5 seconds, I get the wheel of death.
The next 15 seconds, the progress bar moves along, indicating it's doing something.
The last 20 seconds, nothing happens; in Activity Monitor, Xojo is consuming 30% of my CPU. I assume that it's writing all the stuff to disk and the bottleneck here is APFS and a 2012 SSD.

Are there any hints, tips or tricks I can do to speed this up?[/quote]
Just out of curiosity, what Optimization Level is your project set to?

[quote=466404:@Sam Rowlands]
Are there any hints, tips or tricks I can do to speed this up?[/quote]

Learn to like tea, coffee, beer, soda and when this happens go get one :slight_smile:

Is the SSD Apple or third-party? I had a situation where I had upgraded three 2012 Mac minis with SSDs, and they ran great, but a few years later the lack of TRIM support had caused all 3 to slow to a crawl (write speeds had dropped under 50 MB/s). Blackmagic Disk Speed Test on the Mac App Store is a great tool to use for diagnosis.

I force-enabled TRIM and things got much better.
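If you don't want to install anything, a crude dd write test in Terminal gives a ballpark number to compare against that 50 MB/s figure (a rough sanity check, not a substitute for Blackmagic):

```
# Write ~1 GiB of zeros to the internal drive and note the throughput
# dd prints when it finishes, then clean up the test file
dd if=/dev/zero of=~/ddtest.tmp bs=1m count=1024
rm ~/ddtest.tmp
```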

If this is an Apple stock SSD, then your situation is weird - normally when I do a big compile, the last steps are using all CPU cores (I think the actual compilation is multithreaded).

Also, check your Build step scripts - anything slow there?

Apple stock SSDs from 2009-2012 were Toshiba drives that were surely very robust, but they were outrageously slow.
Especially at the price Apple sold them for.

A few years ago I swapped out the Fusion Drive of a 2012 Mac mini I have for a Western Digital SSD. I wasn't too impressed with the performance, so I installed Trim Enabler. That gave it a noticeable boost.

We do a lot of deep I/O testing with APFS vs. HFS+, and not only is APFS just slower than HFS+, the more you access the disk in a given process stack, the slower things get. This is really bad for backup operations, where the principal task is possibly reading the entire filesystem. We've seen things slow to under 10 MiB/sec coming off the APFS-formatted NVMe drive in a Mac Pro canister under 10.15.1 after 45 minutes of continuous I/O, even with Full Disk Access enabled for our process. The same machine under 10.13.6 and HFS+ stays consistent at over 310 MiB/sec for the entire operation.

We've provided full I/O numbers to Apple, but they're so locked in on APFS and turning your Mac into an iOS device that they seem to be simply ignoring what we've submitted.
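If anyone wants to watch for that kind of drop-off themselves, a very rough approximation of a sustained-read test can be done with a loop in Terminal (nothing like our real test suite, and the volume path is a placeholder):

```
# Create a 1 GiB test file on the volume you want to measure
dd if=/dev/zero of=/Volumes/TestVolume/io_test.bin bs=1m count=1024

# Read it back repeatedly; dd reports throughput after each pass,
# so you can see whether the numbers sag as the I/O keeps going
for i in $(seq 1 20); do
  sudo purge   # drop the filesystem cache so each pass actually hits the disk
  dd if=/Volumes/TestVolume/io_test.bin of=/dev/null bs=1m
done

rm /Volumes/TestVolume/io_test.bin
```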

The odd thing about this is that TRIM support has been “supposedly” included within macOS since 10.10 …

Only for Apple-brand SSDs; the others are ignored. Trim Enabler allows you to TRIM other brands of SSDs.

I believe this was the case for me. I remember downloading some utility at the time I swapped out the drive, and it flagged the new SSD as not having TRIM enabled, which led me to Trim Enabler.

While you can't move the OS, you can move your working environment. All of our Macs use either a Thunderbolt or eSATA four-drive array in stripe mode with Seagate Constellation or Exos drives formatted as HFS+. That works just fine and is much faster (even though they are not SSD/NVMe drives) than editing and compiling on the internal APFS volumes under 10.14 and 10.15.

We create a symlink from a "Developer" folder in our home folder to a real Developer folder on the HFS+ striped (RAID 0) array volume. Works great. Here's the Thunderbolt I/O I see on my rMBP to 4 Exos 10TB drives:
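For anyone wanting to replicate the folder layout, it's roughly this in Terminal. The volume name and paths are placeholders for whatever your array is called, not our exact setup:

```
# Assumes the external HFS+ array is mounted at /Volumes/DevArray (placeholder)
# Move the existing folder over, then point a symlink at it
mv ~/Developer /Volumes/DevArray/Developer
ln -s /Volumes/DevArray/Developer ~/Developer

# Check that the link resolves where you expect
ls -l ~/Developer
```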

I reach 460 (write) and 510 (read) on my now old MacBook Pro 17" late 2011, with El Capitan (so HFS+) on a 2 TB Crucial SSD.
Not bad either? :wink:

For comparison, I formatted one of our stand-by units with the same model disks and chassis (ProMedia) as APFS and tested it on that same rMBP. I've never seen it this skewed; usually it's around a 30% difference, but here's the difference after 50 runs:

To clarify, I go from my rMBP to a Sonnet Echo Express IIID with the OWC PCIe to eSATA HBA into the eSATA port of the MediaSonic ProBox 4-drive chassis:
MediaSonic ProBox ProRAID Chassis

[quote=466405:@Jeff Tullin]I always ask, but I’m sure you don’t…
Have you got iCloud synching turned on?[/quote]
Nope, I have none of that ■■■■ switched on (or in the case of DropBox and others, installed).

I do have an external SSD (which I use for video storage); I don't recall its speed test results (but it's HFS+).

Yup, a mobile phone feature that's replaced a professional function, again.

I’ll double check that.[quote=466410:@Beatrix Willius]Which format? How large is the project?[/quote]
Binary format, and I don’t know how large the project is, but it has a ton of externals. Maybe I’ll try bringing in as many as I can.

Default.

Tea for me, I have become teetotal in my old age.

I know the machine is old, but I've been waiting for Apple to make a professional laptop (again). Holding off on the 16" for the moment, to see what hardware bugs are shaking out; broken speakers and an unresponsive Touch Bar so far (why is there a consumer-oriented function replacing a professional input device on a pro-level machine?).

Thanks, I’ll check it out.

Sounds like fun, man, I wish someone would fire Cook and replace him with someone who actually cares about the company and the products they make.

@Greg O’Lone it's possible to make a memory-backed DMG file, a RAM disk! I wonder what it would involve to trick Xojo into using a RAM disk for compiling (intermediates and final binary). This should give a tremendous speed boost and hopefully avoid the utterly terrible APFS during the compile phase.

Is Xojo multithreaded when compiling?

[quote=466488:@Sam Rowlands]@Greg O’Lone it's possible to make a memory-backed DMG file, a RAM disk! I wonder what it would involve to trick Xojo into using a RAM disk for compiling (intermediates and final binary). This should give a tremendous speed boost and hopefully avoid the utterly terrible APFS during the compile phase.
[/quote]
It’s something we’ve thought about but it hasn’t gone much further than that. FWIW, a minute is tiny compared to some projects I’ve seen.
[quote=466488:@Sam Rowlands]Is Xojo multithreaded when compiling?[/quote]
We spawn off up to 16 console apps at a time. They’re named HoudiniAssistant.
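If you're curious, you can watch those helpers appear during a build from Terminal (assuming the process name matches exactly):

```
# List any compile helpers currently running, with their PIDs
pgrep -lf HoudiniAssistant
```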

https://blog.macsales.com/46348-how-to-create-and-use-a-ram-disk-with-your-mac-warnings-included/
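Creating one from Terminal usually looks something like the sketch below. The 4 GB size, the volume name, and the idea of symlinking Xojo's build folder onto it are all assumptions to experiment with, not a tested workflow:

```
# Allocate a 4 GB RAM-backed device (ram:// sizes are in 512-byte blocks,
# so 4 GB / 512 = 8388608) and format it as HFS+ named "RAMDisk"
diskutil erasevolume HFS+ "RAMDisk" $(hdiutil attach -nomount ram://8388608)

# One untested way to point Xojo's output at it: replace the project's build
# folder with a symlink onto the RAM disk. The "Builds - ..." name is a guess
# at Xojo's convention -- adjust to whatever your project actually creates.
mv "Builds - MyProject.xojo_binary_project" /Volumes/RAMDisk/
ln -s "/Volumes/RAMDisk/Builds - MyProject.xojo_binary_project" "Builds - MyProject.xojo_binary_project"

# Everything on the RAM disk vanishes at eject or reboot, so copy any build
# you want to keep back to real storage.
```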