I am looking for input on a hardware upgrade with a firm budget of around $2,000. Our main application, which is the core of our business, currently takes an agonizing hour and a half to compile on our old Intel Mac mini. We are forced to run this build in excess of 10 times a week. My goal is to find the absolute best machine within this budget to slash this time as aggressively as possible.
We analyzed the entire build process on a newer M4 Mac mini with 32GB of RAM. Even on this more modern machine, the compile still took over 30 minutes. Using top to monitor the process, we uncovered very specific performance bottlenecks. The build consists of two distinct phases:
The Parallel Crush: The first phase is a massively parallel process that completely saturated all available Performance-Cores with 4 HoudiniAssis compiler processes running at 100%. This ~20 minute phase also put the system under extreme memory pressure, forcing over 15GB of RAM into a compressed state.
The Sequential Grind: The second phase was an unexpectedly long, single-threaded process (linking?). For over 10 minutes, a single HoudiniAssis process pegged one CPU core at 100% while the rest of the multi-core CPU sat nearly idle. This shows our build is limited by both parallel processing power and single-core speed.
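For anyone who wants to reproduce this kind of phase analysis, here is a rough Python sketch of how the sampling could be done. The process name HoudiniAssis and the 50% "busy" threshold come from our observations above; the `ps` column layout is standard POSIX, and a real run would loop with a sleep and log each sample.

```python
# Rough sketch: classify build phases by counting busy compiler processes.
# The process name "HoudiniAssis" is what we observed in top; the 50% CPU
# threshold is an assumption, not a magic number.
import subprocess

def busy_compiler_count(ps_output: str, name: str = "HoudiniAssis",
                        busy_threshold: float = 50.0) -> int:
    """Count processes matching `name` whose %CPU exceeds the threshold."""
    count = 0
    for line in ps_output.splitlines():
        parts = line.split(None, 2)          # pid, %cpu, command
        if len(parts) == 3 and name in parts[2]:
            if float(parts[1]) > busy_threshold:
                count += 1
    return count

def classify_phase(busy: int) -> str:
    """>1 busy compiler = parallel compile; exactly 1 = sequential link."""
    if busy > 1:
        return "parallel"
    return "sequential" if busy == 1 else "idle"

if __name__ == "__main__":
    # One sample; wrap in a loop with time.sleep() to log a whole build.
    out = subprocess.run(["ps", "-axo", "pid,%cpu,comm"],
                         capture_output=True, text=True).stdout
    print(classify_phase(busy_compiler_count(out)))
```

Logging one sample per second over the whole build gives a clean timeline of where the parallel phase ends and the single-threaded grind begins.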
This analysis leads me to a hardware dilemma between two setups at the ~$2,000 price point.
Option A: Maxed-out M4 Mac mini (~$1,899): Configured with an M4 Pro chip (est. 8 Performance-Cores) and upgraded to a necessary 32GB of RAM.
Option B: Base M4 Mac Studio ($1,999): Comes standard with the M4 Max chip (est. 10 Performance-Cores) and 36GB of RAM.
My core question is this: given the evidence, which machine is the smarter investment? The Mac Studio offers a 25% increase in Performance-Cores to attack that brutal parallel phase, plus slightly more RAM. But perhaps more importantly, it has a superior thermal design.
How much of a factor is throttling on a Mac mini during a 30+ minute compile compared to a Mac Studio, which is built to sustain peak performance?
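To put rough numbers on the dilemma, here is a back-of-envelope Amdahl's-law sketch using the phase times observed above. It assumes the ~20-minute parallel phase scales with Performance-Core count while the ~10-minute link phase does not, and it ignores throttling and memory pressure entirely, so treat it as a lower bound on the difference, not a prediction.

```python
# Back-of-envelope Amdahl's-law estimate. Assumes the parallel compile
# phase scales linearly with Performance-Core count and the sequential
# link phase is fixed; thermals and memory pressure are ignored.
def estimated_build_minutes(parallel_min: float, serial_min: float,
                            baseline_cores: int, new_cores: int) -> float:
    return parallel_min * baseline_cores / new_cores + serial_min

# Measured on the base M4 Mac mini: ~20 min with 4 busy compilers,
# then ~10 min of single-threaded linking.
mini_pro = estimated_build_minutes(20, 10, 4, 8)     # M4 Pro, est. 8 P-cores
studio_max = estimated_build_minutes(20, 10, 4, 10)  # M4 Max, est. 10 P-cores
print(f"M4 Pro:  ~{mini_pro:.0f} min")   # ~20 min
print(f"M4 Max:  ~{studio_max:.0f} min") # ~18 min
```

Note how the fixed sequential phase dilutes the Studio's 25% core advantage to roughly a 10% difference in total wall-clock time, which is why the thermal question matters so much.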
Should we spend the extra couple hundred bucks for a better return, or back down to the smaller machine and accept a nearly-as-good result?
Honestly, would it be possible to break your app into components that could be compiled separately? That's a crazy amount of compile time that must be wasting a lot of developer hours. A key workflow many people rely on is develop, test-run, fix, but that only works if running your code isn't a barrier due to compile time.
Aside from that… since this is a web project, have you considered getting a beefy PC for your budget and using that to compile? It's likely that you could get more RAM/CPU for your money, and you could fine-tune the specs to best serve your needs (no need for a fancy graphics card or fast storage, for example).
IMO $2,000 isn't enough to solve your problem. We code on 2019 Intel Xeon Mac Pros with 16-core CPUs and 128GB RAM. We build our main app in Parallels on Windows, and it takes about 20 seconds to compile roughly 1,000,000 lines of code. Xojo performs very well on those machines as well. If you want something drastic, I'd look at more CPU and RAM on an Apple Silicon Mac; you'll be glad five years from now when it still performs extremely well. I also think Eric has a good point about breaking up your app. Just my 2 cents.
So the "brutal parallel phase", as you put it, is the compile phase where each of the individual units gets compiled into machine code. Parts that can be are done in parallel. Not every piece takes the same amount of time, and you'll often have one remaining piece, after all the others are done, that takes an extra bit of time to finish.
Once compiling is done, things are passed over to the linker for final assembly. This has to be done on a single processor in whatever order they need to be in. Generally:
The linker is inherently sequential for the most part, because steps like these must be done in order:
Symbol resolution
Relocation
Ordering sections and resolving dependencies
Writing to the output file
Final relocation fixes
In short, while some parallelism exists in modern linkers like lld, the nature of linking imposes limits. Full multicore usage is not realistic due to how tightly coupled the linking steps are.
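To make the coupling concrete, here is a toy Python sketch of the steps above. The object-file contents and names are invented for illustration; the point is that each stage consumes the complete output of the previous one, so the pipeline cannot simply be fanned out across cores.

```python
# Toy illustration of why linking is sequential: each stage depends on
# the complete result of the one before it. Object files are invented.
objects = {
    "main.o": {"defines": ["main"], "needs": ["helper", "log"]},
    "util.o": {"defines": ["helper"], "needs": ["log"]},
    "log.o":  {"defines": ["log"], "needs": []},
}

def link(objs):
    # 1) Symbol resolution: needs the *global* symbol table, so every
    #    object must be scanned before any reference can be resolved.
    symbols = {}
    for name, obj in objs.items():
        for sym in obj["defines"]:
            symbols[sym] = name
    unresolved = [s for o in objs.values() for s in o["needs"]
                  if s not in symbols]
    if unresolved:
        raise RuntimeError(f"undefined symbols: {unresolved}")
    # 2) Layout/relocation: each section's address depends on the sizes
    #    of everything placed before it -- inherently ordered.
    addresses, cursor = {}, 0x1000
    for name in objs:                 # fixed placement order
        addresses[name] = cursor
        cursor += 0x100               # pretend every object is 0x100 bytes
    # 3) Output writing: one file, written front to back.
    return symbols, addresses

symbols, addresses = link(objects)
print(symbols["main"], hex(addresses["log.o"]))
```

Real linkers like lld do parallelize pieces of steps 1 and 2 internally, but the stage boundaries themselves stay strictly ordered, which is exactly the single-core grind showing up in top.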
So… the more cores you have, the faster the compile will go, but not the linking. Newer chips are usually faster than their predecessors, but not always: at least one of the M4 chips was actually slightly slower than its predecessor in the same machine in some MacBench tests.
I am curious though… are you using the Medium or Aggressive compile option? 30 minutes sounds like a long time for a web app.
My thoughts:
It could be the compiler optimization settings, and heavy use of Variant.
The source code should be on a local hard disk/SSD.
I would save in the text project format (.xojo_project).
Images should be copied in a build step and loaded on demand (this was mentioned in another thread).
Avoid copy/paste duplication and use classes for business logic.
Anti-virus software could slow down or delay the build.
After considering all the data and community feedback, we have decided to move forward with a custom-configured Mac Studio as our dedicated build machine. We believe this option gives us the best shot at drastically reducing our 90-minute compile times.
The specific configuration we are testing is:
Apple M4 Max with a 16-core CPU and 40-core GPU
64GB unified memory
This build directly targets our two main bottlenecks: the 16-core CPU should significantly accelerate the "Parallel Crush" phase, and the 64GB of RAM will completely eliminate the memory pressure we observed. We will report back with a detailed analysis of the new compile times once the machine is up and running.
A quick note on the final cost: yes, we went over our initial budget. The final configuration with the memory upgrade and tax brought the total to just under $3,000.
Thanks, Mike. We appreciate you taking the time to offer suggestions.
To provide a bit more context, we're a fairly large and seasoned team of nine full-time developers, and we've already thoroughly addressed the foundational aspects you've mentioned regarding Xojo versions, project formats, and source code locations. We also frequently collaborate with other Xojo community members on specific aspects of our projects, so we're quite familiar with best practices and common pitfalls.
Our core applications (about 15 of them) are very substantial Xojo Web API 2 projects, deployed across multiple large DigitalOcean droplets with various .htaccess configurations. We typically run half a dozen instances of an app on one web server. Given their scale, and with Bob Keeney no longer building large applications for clients, we suspect we might indeed be working with one of the largest Xojo codebases currently in active development, and almost certainly the largest web project.
The specific challenge we're wrestling with right now is pushing the performance limits of our Xojo web app compilation. The "aggressive compile" option, while beneficial for runtime performance, significantly increased our compile times: from 5 minutes to 1.5 hours on our old Intel Mac mini build machine, and still around 30 minutes even on a basic M4 Mac mini.
Therefore, our current focus is purely on optimizing the hardware to reduce these critical build times as much as possible, given the bottlenecks we've identified. We're looking for insights specifically on the M4 Mac mini vs. M4 Mac Studio thermal performance and core utilization for sustained, intensive compilation tasks.
Having said all that, we will soon have the new Mac Studio in the office and will report back with our results.
Had your team seen this thread? It may be a possible reason.
while beneficial for runtime performance, significantly increased our compile times: from 5 minutes to 1.5 hours on our old Intel Mac mini build machine, and still around 30 minutes even on a basic M4 Mac mini.
OK, so for a release build, the better machine saves time with the aggressive option enabled.
Let me ask this question: is the problem that it takes 30 minutes to compile, or that someone needs to sit and wait for it? If it's the latter, do you have a CI/CD system that builds all of this for you?
Our current build times are a significant bottleneck; just three or four morning builds push us well into the lunch hour, even if they run unattended. This makes me wonder: rather than a direct hardware purchase, could a cloud solution be more effective, allowing us to spin up five or six build environments simultaneously? I've started researching MacStadium's Orka. What are your thoughts on its suitability, or on other alternatives?
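A quick sketch of why parallel build environments attack a different problem than a faster single machine. The 30-minute build time and the size of a morning batch are taken from the numbers above; the assumption that every build takes the same time and that VM spin-up is free is mine.

```python
# Rough sketch: wall-clock time to clear a batch of builds with N
# identical build environments. Assumes every build takes the same time
# and ignores VM spin-up and artifact-transfer overhead.
import math

def batch_wall_clock_minutes(builds: int, build_minutes: float,
                             workers: int) -> float:
    waves = math.ceil(builds / workers)   # builds run in waves of `workers`
    return waves * build_minutes

# Four 30-minute morning builds:
print(batch_wall_clock_minutes(4, 30, 1))  # one machine -> 120 min
print(batch_wall_clock_minutes(4, 30, 4))  # four cloud workers -> 30 min
```

A faster single machine shortens each build; parallel environments shorten the batch, but only if the builds are independent of one another.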
Since they're web apps, the platform you use to build them shouldn't matter unless you've got some Windows or macOS native deliverable that you need to codesign. If so, I suggest building on Linux VMs in the cloud. They're significantly cheaper and in some ways faster than Mac and Windows because of the file system. If you haven't seen it, the IDE on Linux opens nearly instantaneously, which means that you can open and close the IDE for each build if you want to be sure to get a nice clean build.
And to answer the next question… yes, it is technically possible to build with Xojo in a headless Linux server environment. You can't use the IDE, except maybe to license it, but you can certainly build if you're using build automation. I can try to put something together if anyone's interested in seeing how it's done.
And our large project was done in Web 1. We never attempted to port it to API 2 because of the number of WebContainers it used (IIRC, WebContainers were not in the initial Web 2 feature set). And the client really didn't want to spend the money to rewrite the application for Web 2. AFAIK that project is still in use, though I doubt anyone is actively maintaining it.
I work in a code base of ~400K lines of code (which was getting awfully close to 1M lines a couple of years ago). Around 225K of that is for the main Web 2 application.
The largest "chunk" of code is a Web 2 project, but there's also a lot of code in external libraries (shared business logic and classes). The rest of the code is a Desktop project for certain admin tasks better suited to a full desktop UI, and then a "slew" of services (migration, monitoring, retention) which are console apps.
FWIW, my compile times and web app performance are much better with Web 2, but… I had to do a LOT of re-engineering of things before I got there.
Git + GitHub, plus pushing as much of the business rules/logic as possible into external libraries (so they could be shared across the Web 1 and Web 2 projects while things transitioned), was crucial to getting where I'm at today.
On an M4 Pro Mac mini, my main project normally builds in around 35 seconds. (vs. 5+ minutes when I was using 2019r3.1 on a very fast Intel iMac)
This is a very active, full-featured web application which often has 30-70 concurrent users/connections actively using a database-driven UI with session management, transaction management, and real-time calculation using user-managed formulas (via XojoScript), and it is normally performing great**.
** We had a bigger hiccup with Xojo's MySQL plugin which kept us on the last 2024 release, but it should be fixed in an upcoming release.
Hope this is some useful information for you,
Anthony
Buy used (but within warranty)! After the first day, your brand-new machine would be used too… Use the price difference to buy the next bigger model.