Cutting Compile Time

I am looking for input on a hardware upgrade with a firm budget of around $2,000. Our main application, which is the core of our business, currently takes an agonizing hour and a half to compile on our old Intel Mac mini. We are forced to run this build in excess of 10 times a week. My goal is to find the absolute best machine within this budget to slash this time as aggressively as possible.

We analyzed the entire build process on a newer M4 Mac mini with 32GB of RAM. Even on this more modern machine, the compile still took over 30 minutes. Using top to monitor the process, we uncovered very specific performance bottlenecks. The build consists of two distinct phases:

  1. The Parallel Crush: The first phase is a massively parallel process that completely saturated all available Performance-Cores with 4 HoudiniAssis compiler processes running at 100%. This ~20 minute phase also put the system under extreme memory pressure, forcing over 15GB of RAM into a compressed state.

  2. The Sequential Grind: The second phase was an unexpectedly long, single-threaded process (linking?). For over 10 minutes, a single HoudiniAssis process pegged one CPU core at 100% while the rest of the multi-core CPU sat nearly idle. This shows our build is limited by both parallel processing power and single-core speed.
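
A rough back-of-the-envelope check on those numbers (an estimate from top output, not a measurement): Amdahl's law says extra cores only speed up the ~20-minute parallel phase, while the ~10-minute single-threaded phase stays fixed.

    % Amdahl-style estimate: extra cores help only the parallel phase.
    % T(n) = T_serial + T_parallel * (4/n), scaling from the 4 P-cores observed:
    %   T(4)  = 10 + 20*(4/4)  = 30 min   (what we measured)
    %   T(8)  = 10 + 20*(4/8)  = 20 min   (~1.5x overall)
    %   T(10) = 10 + 20*(4/10) = 18 min   (~1.7x overall)
    S(n) = \frac{T_{\mathrm{serial}} + T_{\mathrm{parallel}}}{T_{\mathrm{serial}} + T_{\mathrm{parallel}} \cdot \frac{4}{n}}

In other words, no core count gets us under the roughly 10-minute serial floor, which is why single-core speed matters too.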

This analysis leads me to a hardware dilemma between two setups at the ~$2,000 price point.

  • Option A: Maxed-out M4 Mac mini (~$1,899): Configured with an M4 Pro chip (est. 8 Performance-Cores) and upgraded to 32GB of RAM, which our memory-pressure measurements make necessary.
  • Option B: Base M4 Mac Studio ($1,999): Comes standard with the M4 Max chip (est. 10 Performance-Cores) and 36GB of RAM.

My core question is this: given the evidence, which machine is the smarter investment? The Mac Studio offers a 25% increase in Performance-Cores to attack that brutal parallel phase, plus slightly more RAM. But perhaps more importantly, it has a superior thermal design.

How much of a factor is throttling on a Mac mini during a 30+ minute compile compared to a Mac Studio, which is built to sustain peak performance?

Should we spend a couple hundred bucks more for a better return, or back down to the smaller machine and get nearly the same result?

Honestly - would it be possible to break up your app into components that could be compiled separately? That’s a crazy amount of compile time that must really be wasting a lot of developer time. A key way many people work is a develop/run/test/fix loop, but that only works if compile time doesn’t make running your code a barrier.

Aside from that… since this is a web project, have you considered getting a beefy PC for your budget and using that to compile? It’s likely that you could get more RAM/CPU for your budget, and you could fine-tune the specs to best serve your needs (no need for a fancy graphics card or fast storage, for example).

2 Likes

IMO $2,000 isn’t enough to solve your problem. We code on 2019 Intel Xeon Mac Pros with 16-core CPUs and 128GB RAM. We code our main app in Parallels on Windows, and it takes about 20 seconds to compile about 1,000,000 lines of code. Xojo performs very well on those machines as well. If you want a drastic improvement, I’d look at more CPU and RAM on a Mac Mx machine; you’ll be glad five years from now when it still performs extremely well. I also think Eric has a good point about breaking up your app. Just my 2 cents.

1 Like

So the “brutal parallel phase,” as you put it, is the compile phase, where each of the individual units gets compiled into machine code. Parts that can be compiled independently are done in parallel. Not every piece takes the same amount of time, and you’ll often have one remaining piece after all the others are done that takes an extra bit of time to finish.

Once compiling is done, things are passed over to the linker for final assembly. This runs as a single process, in whatever order the pieces need to be in.

The linker is inherently sequential because steps like these must be done in order:

  • Symbol resolution
  • Relocation
  • Ordering sections and resolving dependencies
  • Writing to the output file
  • Final relocation fixes

In short, while some parallelism exists in modern linkers like lld, the nature of linking imposes limits. Full multicore usage is not realistic due to how tightly coupled the linking steps are.

So… the more cores you have, the faster the compile will go, but not the linking. Newer chips are usually inherently faster than their predecessors, but not always. At least one of the M4 chips was actually slightly slower than its predecessor in the same machine in some benchmark tests.

I am curious though… are you using the Moderate or Aggressive compile option? 30 minutes sounds like a long time for a web app.

4 Likes

My thoughts:

  • It could be the compiler optimization settings, and the use of Variants.
  • Source code should be on a local hard disk/SSD.
  • I would save in the text project format (.xojo_project).
  • Copy images with a build step and load them on demand (this was mentioned in another thread).
  • Avoid copy/paste duplication and use classes for business logic.
  • Anti-virus software could slow down or delay the build.

After considering all the data and community feedback, we have decided to move forward with a custom-configured Mac Studio as our dedicated build machine. We believe this option gives us the best shot at drastically reducing our 90-minute compile times.

The specific configuration we are testing is:

  • Apple M4 Max with a 16-core CPU and 40-core GPU
  • 64GB unified memory

This build directly targets our two main bottlenecks: the 16-core CPU (12 Performance-Cores) should significantly accelerate the “Parallel Crush” phase, and the 64GB of RAM will completely eliminate the memory pressure we observed. We will report back with a detailed analysis of the new compile times once the machine is up and running.

A quick note on the final cost: yes, we went over our initial budget. The final configuration with the memory upgrade and tax brought the total to just under $3,000.

3 Likes

I like the PC option, but am clueless on hardware builds.

I think MarkusR has some good points as well. Great choice on the computer.

Throwing hardware at the problem is likely to get a 2x to 4x speedup.

But it also feels like something is wrong - I have some pretty complex Xojo apps, and they never take anywhere near that long to compile.

Can you say more about your setup? Details about the following would be helpful:

  • Xojo Version
  • Project format (text, binary)
  • Where is your source code? (local SSD? network drive?)
  • Is the slowdown seen only when you Build? Or does it also happen during incremental compilation when you Run?
  • What targets are you building?
  • What Build / Optimization Level are you using?
  • Is there anything weird about your project?

4 Likes

Thanks, Mike. We appreciate you taking the time to offer suggestions.

To provide a bit more context, we’re a fairly large and seasoned team of nine full-time developers, and we’ve already thoroughly addressed the foundational aspects you’ve mentioned regarding Xojo versions, project formats, and source code locations. We also frequently collaborate with other Xojo community members on specific aspects of our projects, so we’re quite familiar with best practices and common pitfalls.

Our core applications (about 15 of them) are very substantial Xojo Web API 2 projects, deployed across multiple large DigitalOcean droplets with various .htaccess configurations. We typically run half a dozen instances of an app on a single web server. Given their scale, and with Bob Keeney no longer building large applications for clients, we suspect we might indeed be working with one of the largest Xojo codebases currently in active development. Almost certainly the largest web project.

The specific challenge we’re wrestling with right now is pushing the performance limits of our Xojo web app compilation. The ‘aggressive compile’ option, while beneficial for runtime performance, significantly increased our compile times: from 5 minutes to 1.5 hours on our old Intel Mac mini build machine, and still around 30 minutes even on a basic M4 Mac mini.

Therefore, our current focus is purely on optimizing the hardware to reduce these critical build times as much as possible, given the bottlenecks we’ve identified. We’re looking for insights specifically on the M4 Mac mini vs. M4 Mac Studio thermal performance and core utilization for sustained, intensive compilation tasks.

Having said all that, we will soon have the new Mac Studio in the office and will report back with our results.

2 Likes

Had your team seen this thread? It may point to a possible reason.

while beneficial for runtime performance, significantly increased our compile times: from 5 minutes to 1.5 hours on our old Intel Mac mini build machine, and still around 30 minutes even on a basic M4 Mac mini.

OK, so for a release build, the better PC saves time with that option enabled.

My pain threshold is a few minutes of compile time.

1 Like

Let me ask this question: is the problem that it takes 30 minutes to compile, or that someone needs to sit and wait for it? If it’s the latter, do you have a CI/CD system that builds all of this for you?

2 Likes

Our current build times are a significant bottleneck; just three or four morning builds push us well into the lunch hour, even when they run unattended. This makes me wonder: rather than a direct hardware purchase, could a cloud solution be more effective, allowing us to simultaneously spin up five or six build environments? I’ve started researching MacStadium’s Orka – what are your thoughts on its suitability, or other alternatives?

Re this thread: Array() function using Pairs causes slow Aggressive Compile

The title is dated and misleading - the actual core problem is the combination of:

  • the Aggressive compile option
  • any constant-to-variant conversions, such as
    Var v As Variant = "Foobar"
  • note that the Xojo Pair class uses Variants internally (see the sketch below)
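
A minimal sketch of the pattern and a typed alternative (variable names are illustrative; the slowdown claim comes from the linked thread, not my own measurements):

    // Reported to slow Aggressive compiles: a string constant assigned
    // straight into a Variant forces a constant-to-Variant conversion.
    Var v As Variant = "Foobar"

    // Typed alternative: keep the declared type explicit, and convert
    // only where a Variant is genuinely required.
    Var s As String = "Foobar"
    Var p As Pair = "key" : s  // caution: Pair still stores Variants internally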

@Underwriters_Technologies : If I were you I would do this:

  • buy the beefiest hardware you can (it is almost always cheaper than engineer time, and pays off in other ways)
  • stop using Aggressive compilation for the daily edit/compile/test development phase
  • review your code for unnecessary use of Variants
  • use Aggressive compilation only for Release builds. This is easy to do using Xojo Build Scripts and/or the IDE Communicator (see the sketch after this list)
  • as @Greg_O suggests, look into a CI/CD system for release builds
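
As a minimal sketch of that Build Script idea, an IDE script along these lines could flip the optimization level just for release builds. BuildApp and PropertyValue are documented IDE-scripting calls, but the exact property key for the optimization level and the BuildApp target code are assumptions here; verify both against the IDE Scripting documentation.

    // Hypothetical release-build script: turn Aggressive on, build, restore.
    Dim savedLevel As String = PropertyValue("App.OptimizationLevel") // key is an assumption
    PropertyValue("App.OptimizationLevel") = "Aggressive"

    Dim builtPath As String = BuildApp(16) // assumed target code for 64-bit Linux; verify in the docs

    PropertyValue("App.OptimizationLevel") = savedLevel // keep day-to-day builds fast
    If builtPath = "" Then
      Print("Build failed")
    End If
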
3 Likes

Since they’re web apps, the platform you use to build them shouldn’t matter unless you’ve got some Windows or macOS native deliverable that you need to codesign. If you don’t, I suggest building on Linux VMs in the cloud. They’re significantly cheaper and in some ways faster than Mac and Windows because of the file system. If you haven’t seen it, the IDE on Linux opens nearly instantaneously, which means that you can open and close the IDE for each build if you want to be sure to get a nice clean build.

And to answer the next question… yes, it is technically possible to build with Xojo in a headless Linux server environment. You can’t use the IDE, except maybe to license it, but you can certainly build if you’re using build automation. I can try to put something together if anyone’s interested in seeing how it’s done.

8 Likes

And our large project was done in Web 1. We never attempted to port it to API 2 because of the number of WebContainers it used (IIRC WebContainers were not in the initial Web 2 portfolio), and the client really didn’t want to spend the money to rewrite the application for Web 2. AFAIK that project is still in use, though I doubt anyone is actively maintaining it.

1 Like

@Underwriters_Technologies

I work in a code base of ~400K lines of code (which was getting awfully close to 1M lines a couple of years ago). Around 225K of that is for the main Web2 application.

The largest “chunk” of code is a Web 2 project, but there’s also a lot of code in external libraries (shared business logic & classes). The rest of the code is a Desktop project for certain admin tasks better suited to a full Desktop UI, and then a “slew” of services (migration, monitoring, retention) which are console apps.

FWIW - my compile times and web app performance are much better with Web 2, but… I had to do a LOT of re-engineering of things before I got there.

Git + GitHub, plus pushing as much of the business rules/logic as possible to external libraries (so they could be shared across the Web1 and Web2 projects while things transitioned), were crucial to getting to where I’m at today.

On an M4 Pro Mac mini, my main project normally builds in around 35 seconds. (vs. 5+ minutes when I was using 2019r3.1 on a very fast Intel iMac)

This is a very active, full-featured web application which often has 30-70 concurrent users/connections actively using a database-driven UI with session management, transaction management, and real-time calculation using user-managed formulas (via XojoScript), and it is normally performing great**.

** We had a bigger hiccup with Xojo’s MySQL plugin, which kept us on the last 2024 release, but it should be fixed in an upcoming release.

Hope this is some useful information for you,
Anthony

5 Likes

Interesting. I’ll dig into building on Linux VMs. Thanks, Greg. I’d like to see what you come up with.

Buy used!!! (but within warranty). After the first day, your brand new machine would be used, too… Use the price difference to buy the next bigger model.