I posted about this a couple of years ago, but what we're doing and how we're doing it have changed since then. We are nearing completion of the mechanical and electronic side of a motion picture film scanner we're building in-house, and I need to get started on the software in the next couple of weeks. The scanner is designed to handle multiple types of film, which come in different sizes and shapes. The scanner's maximum resolution is 14k x 10k, though most scanning will be done at lower resolutions (more like 9k). Worst case, we're talking about roughly 300MB/image (monochrome 16-bit: 14500px x 10000px x 2 bytes), and we'll need to process about three frames per second - the max speed of the camera we're using.
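Just to sanity-check the data rate, here's a quick back-of-envelope in Python using the figures above:

```python
# Worst-case throughput: 14500 x 10000 px, 16-bit monochrome,
# at the camera's max of three frames per second.
width, height = 14500, 10000
bytes_per_px = 2  # 16-bit mono

frame_mb = width * height * bytes_per_px / 1e6
sustained_mb_s = frame_mb * 3  # 3 fps

print(f"{frame_mb:.0f} MB/frame, {sustained_mb_s:.0f} MB/s sustained")
# prints "290 MB/frame, 870 MB/s sustained"
```

So the RAM disk and whatever eventually writes to the SAN both need to sustain on the order of 900MB/s at full speed.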
The hardware control is handled by a purpose-built microcontroller we communicate with via serial commands. The software on the workstation handles all of the coordination (telling the machine when to advance, which color light to enable, when to snap a frame, etc.). It also handles the image processing: frame alignment using the film perforations as a reference, merging three mono frames into one color image, color space conversions, inversion of negative images, correction of color casts, and writing the final file to disk.
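For what it's worth, the merge and inversion steps are simple enough that a NumPy sketch captures the idea. This is purely illustrative - the function names and the fixed 16-bit white point are my assumptions, not the actual pipeline:

```python
import numpy as np

def merge_channels(r, g, b):
    """Stack three 16-bit mono exposures into one (H, W, 3) RGB image."""
    return np.stack([r, g, b], axis=-1)

def invert_negative(img):
    """Invert a 16-bit negative by subtracting from full scale."""
    return np.uint16(65535) - img
```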
My current thinking, based on past threads I've posted here, is that we're going to build a bunch of small, very specific command-line apps that each do their thing and report back to a master application (at first probably another CLI, eventually a GUI). To keep things moving along, we'll create a large RAM disk and read/write intermediate files there until the final output file is ready to be written to our SAN.
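The master-plus-small-tools pattern might look something like this (tool names and paths here are hypothetical placeholders, not real utilities):

```python
import subprocess

def run_step(cmd):
    """Run one worker CLI and return its stdout, raising on failure."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"{cmd[0]} failed: {result.stderr.strip()}")
    return result.stdout

# e.g. run_step(["align_frame", "/ramdisk/frame_0001_r.tif"])
```

One nice property of this design is that each step can be tested and timed in isolation from the command line before the master app ever exists.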
OK, so all that said, my thinking is that I'll probably need a PC with a high clock speed and tons of RAM. A massive number of cores probably isn't necessary; instead, each color channel might be processed on its own core. I do something like this now with a tool I built in Xojo that creates digital packages to the Library of Congress BagIt standard, where we might be dealing with upwards of 300,000 files at a time. It's significantly faster than any other BagIt tool I've used, and it scales well on 4-6 cores until it starts to saturate the older hardware's I/O bandwidth.
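The one-process-per-channel idea is roughly this shape (the per-channel transform below is a placeholder, not real alignment or color work):

```python
from multiprocessing import Pool

def process_channel(item):
    """Placeholder for per-channel work (alignment, cast correction, ...)."""
    name, pixels = item
    return name, [p * 2 for p in pixels]  # stand-in transform

def process_frame(channels):
    """Farm each color channel out to its own worker process."""
    with Pool(processes=3) as pool:
        return dict(pool.map(process_channel, channels.items()))
```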
We will probably use some of the MBS plugins (ImageMagick or GraphicsMagick) for the image processing; otherwise everything is off-the-shelf Xojo.
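If the merge step ends up shelling out to the ImageMagick command line instead of going through the MBS plugins, three grayscale exposures can be stacked into one RGB file with `-combine`. This helper just builds the command (file names are placeholders):

```python
def imagemagick_merge_cmd(r_path, g_path, b_path, out_path):
    """Build the ImageMagick call that stacks three grayscale images
    into one RGB image via -combine (execute with subprocess.run)."""
    return ["convert", r_path, g_path, b_path, "-combine", out_path]
```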
At the moment, I’m thinking:
Intel i9-10980XE (18-core, 3.0GHz)
256GB DDR4-3200 RAM
M.2 NVMe drive for the OS
We're not planning to do any GPU processing, so we'll just re-use an old GPU we have here for basic video out. We don't need any storage in the machine beyond the OS drive; all files will be written to our SAN over a 10GbE network (which tops out around 1.25GB/s, so it should cover the ~900MB/s worst case, though without much headroom).
So my question here is: should I be looking at these faster i9 CPUs for their clock speed, or at more of a workstation-class machine with Xeons? The thing with the Xeons is that they generally have slower clock speeds and excel at multi-core work, which I don't think we're going to need much of. In our day-to-day use of DaVinci Resolve (color correction software that, depending on the task, is bound by either clock speed or core count), we have found that an 8-core i7 with a faster clock was sometimes quicker than our 14-core Xeon machines, even on the same motherboard with similar specs.
EDIT: I should also add that I'm not opposed to doing this on a Mac; however, our frame grabber (which has Mac drivers) requires PCIe Gen 3. The new Mac Pro units, in the rack-mount config we'd need, start at $6500, which is just crazy. If there's a way to do this with a less expensive machine and an expansion chassis, I'm open to that as well.