Reading in Raw photographic files

I am wondering if there is a method or plugin available to allow one to read in raw photographic images. I do photography and use the raw image format on both my Nikon and Sony cameras. While I can convert these using Photoshop and Lightroom I would like to write a program in Xojo to do processing of raw files. I welcome any comments or suggestions on how I might do this in Xojo.


  1. The MBS plugins might be able to do it via CoreImage.
  2. ImageMagick / GraphicsMagick (MBS also has a plugin to help here).
  3. dcraw

I have the MBS plugins and will look into this. Thanks Kevin.

Have you tried opening a RAW image from a Xojo application? RAW support is baked into the OS, and you’ll get a regular image object back, converted how Apple sees fit.

Now, to manipulate that RAW data in all its glory: that, sir, is an entirely different beast, and a discussion for the next few weeks!

That is exactly what I want to do. So I want the raw “RGB” or “CMY” data matrix without having it modified by the OS. What I seek is to create something similar to the Raw front end that Adobe Photoshop provides, but I want to apply various transformations on the data prior to creating a Tiff or Psd file for subsequent manipulation.

Okay. So the only option you have really is via Core Image at this point. But there are some things I’d like to point out.

  1. You have limited control over Apple’s RAW processor.
  2. You can inject CIKL (Core Image Kernel Language) processing into Apple’s RAW processor on macOS 10.4 ~ 10.15. However, for macOS 10.14 and later you’ll need to use MSL (the Metal Shading Language) via a Metal library. This processing is meant to be done in a 16-bit environment, so I would assume the RAW processor generates at least a 16-bit result rather than an 8-bit one.
  3. For our use we wanted the extended dynamic range that RAW offers, but Apple does not expose it: all values are clamped to 0.0 ~ 1.0.
  4. Some RAW files have a tone curve applied and some don’t; there’s no published list, so it’s a case of building one yourself.
  5. To provide a live preview you need to use OpenGL (macOS 10.4 ~ 10.13), or Metal if you target macOS 10.14 or newer. While Metal is “supported” on macOS 10.11 ~ 10.13, it’s machine dependent; 10.14 or later is the only reliable way to target Metal.

By default all the pixels are given to you as 32-bit float RGB. The processing color space is linear on macOS 10.4 ~ 10.13, but a form of extended sRGB on 10.14 or newer. What I find weird (and it may be because I don’t understand something) is that you still need to apply an sRGB tone curve when editing for the results to more closely match other industry-standard applications.


I was aware of some of the issues you mentioned, but had no idea Apple got into the middle of all this to the extent you outline. What I don’t like is that the processing and behavior are OS dependent. I am, however, willing to limit the app to 10.14 and 10.15, and based on your comments they both use Metal.

You say all values are clamped to the range 0.0 - 1.0. Fine by me, but are you also saying that this is an 8-bit representation? If so, that is a killer for me, as the key advantage of working in RAW is the improved dynamic range. I need at least 16 bits for the transformations I have in mind.

Yeah, me neither, especially as CIKL can be created dynamically, which makes it a heck of a lot easier to debug and tweak. MSL has to be compiled via Xcode into a Metal library, then loaded by your application. My advice is to use CIKL for the time being; once your filter works as expected, then consider converting it to a Metal shader.

I have petitioned Apple to not go through with this change as it means image processing is going to get a lot harder, and that’s without having to deal with the plethora of bugs in the Core Image framework.

I can’t say for sure at this time. I can do some digging; I assumed it merely because Apple says that you can inject a filter into the 16-bit workflow.

The actual data processing is done via floats, and is either a 32-Bit float or a 16-Bit half float. You can specify the precision when you create the context to draw the image into.

Core Image is easy to get into, but it’s hard to get right.

If you didn’t want to do RAW processing, I’d actually advise against Core Image and suggest looking at the Einhugur image processing library. It’s x-plat and way more stable than Core Image.

Some notes.

  • In 2015 Core Image was able to process 100-megapixel images with ease, until it was replaced with the iPhone version. Since 10.13, many customers struggle with 50-megapixel images, as it seems Apple’s tiling engine is broken. I’ve been working on my own tiling engine for a while now; it’s still far from being correct or performant.
  • When using a Gaussian blur via Metal, make sure that you clamp all values to within 0.0 ~ 1.0; otherwise you’ll get black parts of the image, or even a solid black image.
  • Do not use pow on negative values when processing with Metal; it results in a corrupted CIImage.
  • Apple’s area filters have been broken for several OS versions now, including their recently added “CIAreaMinMax”.
  • Apple’s histogram functions may simply not work, or may provide incorrect information for some channels on some machines.
  • Trying to read pixel values outside of a shader requires a render and a transfer from VRAM to regular RAM, which is slow. If you need some statistics from an image to use during processing, design your solution to compute them once (say, when you first open the image).
  • Don’t bother with Apple’s reference documentation for Core Image; it’s way out of date. Use this one instead, or even the Microsoft documentation.

You have convinced me to try Einhugur, or see if MBS has what I need. Core Image seems to be so badly broken that it will not be worth the effort to find workarounds. I need to work on very large images, 100-300 MP.

I appreciate your detailed analysis Sam. You have saved me a lot of time!


Edit: To be honest, at this point in the game I would strongly suggest avoiding Apple’s APIs. If you can do what you want using alternatives, it may make it easier to go x-plat (which is where I’m now stuck). Going forwards, I think you may want to consider where your target audience is: are they going to be using iPads or iPad-like devices, or stonking desktop machines with fast performance and loads of RAM? Right now AMD CPUs and NVIDIA GPUs are smoking Apple’s Mac Pro in photo & video editing.

Which is to say, that’s unrealistic using Core Image. I would have to pull the image from VRAM into RAM to do what I seek, and I now realize that is asking for trouble.

Forgot one that I’ve just run into again: distance( float, float ) is broken with Metal; you have to use abs( float - float ) instead.

Ah yeah, the GPU is wickedly fast at manipulating pixels, but the limited memory on most GPUs causes way too many bottlenecks, especially as Apple now likes to use triple buffering, which consumes a chunk of GPU memory before you even get started.

In general if you’re processing an optimal sized preview (for the screen) it’s fine. The major problem comes from processing the full sized image.

Core Image is plenty fast enough for iPhone camera images, though! After all, Apple has been promoting the iPhone as a “Professional” camera for some time now. Except for panoramic images.