Image Resizing Techniques

In my app I want to give users the option to resize an iOSImage to a range of pre-set sizes, e.g. 640x480, 1024x768 or 1440x1080, before the image is submitted to our web service.

I’ve had a look at using the Core Image filters and, in particular, the LanczosScaleTransform filter, but it’s, um, a bit finicky. And then I came across this blog post: Image Resizing Techniques

So that got me thinking (always good, huh?) - is there another/better way to resize images in Xojo?

I have a private method in the QR module of iOSKit that does image resizing. When I get back to my computer later today I can rip it out and make its scaling customizable (currently it only does non-interpolated scaling).

Sounds good to me. :slight_smile:

Ok, so you will need to add several things for this to work (it needs an enum and 6 structs). First the enum:

[code]Public Enum InterpolationQuality
  Default = 0
  None = 1
  Low = 2
  Medium = 4
  High = 3
End Enum[/code]

Now the 6 structs (3 for 32bit and 3 for 64bit):

[code]//32 bit structs
Structure CGPoint32
  x as single
  y as single
End Structure

Structure CGSize32
  w as single
  h as single
End Structure

Structure CGRect32
  origin as CGPoint32
  rsize as CGSize32
End Structure

//64 bit structs
Structure CGPoint64
  x as double
  y as double
End Structure

Structure CGSize64
  w as double
  h as double
End Structure

Structure CGRect64
  origin as CGPoint64
  rsize as CGSize64
End Structure[/code]

You will also need the following constants:

[code]Const UIKitLib = "UIKit.framework"
Const CoreGraphicsLib = "CoreGraphics.framework"[/code]

Now for the actual scaling method:

[code]Function Scale(extends img as iOSImage, scaleFactor as Double, mode as InterpolationQuality = InterpolationQuality.Default) As iOSImage
  dim UIImageRef as ptr = img.Handle

  // Ask the UIImage for its current size
  #if Target32Bit
    declare function size lib UIKitLib selector "size" (obj_id as ptr) as CGSize32
    dim sz as CGSize32 = size(UIImageRef)
    dim newSize as CGSize32
  #Elseif Target64Bit
    declare function size lib UIKitLib selector "size" (obj_id as ptr) as CGSize64
    dim sz as CGSize64 = size(UIImageRef)
    dim newSize as CGSize64
  #Endif

  newSize.w = sz.w * scaleFactor
  newSize.h = sz.h * scaleFactor

  // Begin an offscreen bitmap context at the new size
  #if Target32Bit
    declare sub UIGraphicsBeginImageContext lib UIKitLib (mSize as CGSize32)
  #Elseif Target64Bit
    declare sub UIGraphicsBeginImageContext lib UIKitLib (mSize as CGSize64)
  #Endif
  UIGraphicsBeginImageContext(newSize)

  declare function UIGraphicsGetCurrentContext lib UIKitLib () as ptr
  dim CGContextRef as ptr = UIGraphicsGetCurrentContext

  // Apply the requested interpolation quality to the context
  declare sub CGContextSetInterpolationQuality lib CoreGraphicsLib (context as ptr, quality as InterpolationQuality)
  CGContextSetInterpolationQuality(CGContextRef, mode)

  // Draw the image into the context, scaled to fill the new rect
  #if Target32Bit
    declare sub drawInRect lib UIKitLib selector "drawInRect:" (obj_id as ptr, rect as CGRect32)
    dim r as CGRect32
  #Elseif Target64Bit
    declare sub drawInRect lib UIKitLib selector "drawInRect:" (obj_id as ptr, rect as CGRect64)
    dim r as CGRect64
  #Endif

  r.origin.x = 0
  r.origin.y = 0
  r.rsize.w = newSize.w
  r.rsize.h = newSize.h

  drawInRect(UIImageRef, r)

  // Grab the scaled image out of the context, then clean up
  declare function UIGraphicsGetImageFromCurrentImageContext lib UIKitLib () as ptr
  dim newUIImage as Ptr = UIGraphicsGetImageFromCurrentImageContext

  declare sub UIGraphicsEndImageContext lib UIKitLib ()
  UIGraphicsEndImageContext

  Return iOSImage.FromHandle(newUIImage)
End Function
[/code]
Note that this does not alter the original image but instead returns a new copy which is scaled.
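
For example, to hit one of the preset sizes from the first post you can derive the scale factor from a target width (a quick sketch; img stands for whatever iOSImage you loaded, and Width is the built-in iOSImage property):

[code]// Usage sketch: scale to a preset width (e.g. 1024 for a 1024x768
// target), preserving the aspect ratio. img is your source iOSImage.
dim targetWidth as Double = 1024
dim factor as Double = targetWidth / img.Width
dim resized as iOSImage = img.Scale(factor, InterpolationQuality.High)[/code]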

This has been added to iOSKit and will be included in the next update (hopefully I can push the update tomorrow since it is mostly fixes) if you don’t want to try to put it together yourself.

I’m happy to add this (or attempt to add this). Which module is it going to be in, in the updated iOSKit? That way, if I put it in the same place, it will be neatly updated, instead of duplicated, when you push an update.

I put it in the Extensions module (all of the methods which start as “extends x as y” are there).

Great. That’s all working very well. Thank you! I renamed the method “reScale” because there is an existing iOSImage.Scale property.

Do you know what InterpolationQuality.Default does, as opposed to .Low, .Medium or .High? My resized file sizes are very small, which is good, but they’re a bit on the grainy/noisy side too. :slight_smile:

Oops, I noticed that after copy/pasting but forgot to change the method header in the post. I changed the name to ScaleImage. I’m not exactly clear on what the different qualities really mean, but from the Apple docs:

[code]• kCGInterpolationDefault: The default level of quality.
• kCGInterpolationNone: No interpolation.
• kCGInterpolationLow: A low level of interpolation quality. This setting may speed up image rendering.
• kCGInterpolationMedium: A medium level of interpolation quality. This setting is slower than the low setting but faster than the high setting.
• kCGInterpolationHigh: A high level of interpolation quality. This setting may slow down image rendering.[/code]
To keep crisp edges in the QRCode module I used kCGInterpolationNone, which basically means it does a proportional scale without trying to “anti-alias” the entire picture. I have a feeling that using None would probably look weird for images which aren’t very structured like a QR code, though. You should probably try the different modes to see which you like best in your app.
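
If it helps, a quick way to choose is to run the same image through each mode and compare the results by eye (a rough sketch; img stands for the iOSImage you’re resizing, using the renamed ScaleImage method):

[code]// Compare the interpolation modes on the same image (sketch).
dim modes() as InterpolationQuality
modes.Append(InterpolationQuality.None)
modes.Append(InterpolationQuality.Low)
modes.Append(InterpolationQuality.Medium)
modes.Append(InterpolationQuality.High)

for each q as InterpolationQuality in modes
  dim test as iOSImage = img.ScaleImage(0.5, q)
  // display or save each result to judge sharpness vs. graininess
next[/code]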

Looking at the image sizes is interesting. If I take a photo on my iPhone 6, the raw 3264 x 2448 image is (according to Image Capture) 3 MB. However that same image, when saved on the iPhone as a JPEG using your UIImageJPEGRepresentation code, becomes only 390 KB. That’s before any of this resizing is applied.

You’re obviously doing some work with image files too. Does that dramatic size reduction bother you? I remember on desktop being able to specify a compression factor when creating a JPEG, but I don’t seem to have a way to do that here?

Well, according to Wikipedia, JPEG can achieve a compression ratio of 10:1 or greater, so 3 MB down to 390 KB doesn’t seem too far-fetched. That said, in the app where I’m using the UIImageJPEGRepresentation code I don’t see a size decrease. All of the images saved are between 5.1 and 5.9 MB on my iPhone 6 at the full raw size. I wonder if anything else could be going on here?
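
As for specifying the quality when creating the JPEG: the underlying UIImageJPEGRepresentation function takes a compressionQuality argument (0.0 = smallest file, 1.0 = best quality), so a declare along these lines should expose it (an untested sketch; it reuses the UIKitLib constant from above, and img is an iOSImage):

[code]// Sketch: call UIImageJPEGRepresentation with an explicit
// compression quality. CGFloat is Single on 32-bit, Double on 64-bit.
#if Target32Bit
  declare function UIImageJPEGRepresentation lib UIKitLib (image as ptr, quality as Single) as ptr
#Elseif Target64Bit
  declare function UIImageJPEGRepresentation lib UIKitLib (image as ptr, quality as Double) as ptr
#Endif

// Returns a ptr to an NSData holding the JPEG bytes
dim jpegData as ptr = UIImageJPEGRepresentation(img.Handle, 0.9)[/code]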

OK so I found the issue.

Once I get the image from the camera, if I save it out to a file using UIImageJPEGRepresentation as you do, then the file size on the device is around 5 MB (depending on the image).

However I need to send it to our web service, so I use the Foundation.NSData.DataMB method to get a memory block. That returns a memory block of around 200 KB to 300 KB, depending on the image. The resulting file uploaded to our web service is exactly the same size as the memory block.

Does the image look the same when you receive it? If you aren’t getting the entire image or are losing a lot of information, I’ll probably have to see if there is another way to convert it to data using different declares.

Also, does the rotation problem persist when the image arrives at your web service, or do the declares I gave you earlier fix that?

The image looks the same, but it’s much smaller in file size. It’s the complete image. The rotation problem is gone. The declares you originally gave were working all along, but we had a problem with our web app not “respecting” EXIF when it re-displayed the images.

So in summary, it seems that UIImageJPEGRepresentation works, but that the WriteToFile method saves out much more data than the DataMB method returns. Does that make sense?

Yes, that makes sense. It’s interesting that so much more data is saved with WriteToFile than when the data is placed into a memory block. The DataMB computed property reads all of the bytes reported by the underlying NSData object… Anyway, since it works I think you should be all set? The smaller size may even be better, since it means less data usage for the user and much faster upload times.
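
If you want to double-check where the bytes go missing, one sanity test is to ask the NSData object for its length directly, before any conversion (a sketch; FoundationLib and nsDataRef are assumptions: the framework constant and a Ptr to the NSData returned by UIImageJPEGRepresentation):

[code]// Sanity check (sketch): query the NSData byte count directly.
Const FoundationLib = "Foundation.framework"
declare function NSDataLength lib FoundationLib selector "length" (obj_id as ptr) as UInteger

dim byteCount as UInteger = NSDataLength(nsDataRef)
// byteCount should match both the WriteToFile size and the DataMB size[/code]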

Yes, I guess that was my question - am I all set or am I “short-changing” my users in terms of their image quality?

So if the image when saved out is 5 MB, including when saved out as a JPEG on the device itself, but it’s only a 300 KB memory block, is it somehow of materially lower quality? I have a preference in the app for image size and the options are “Actual Size”, “Small”, “Medium” and “Large”. I’m using your handy resize code to create the last three (roughly as sketched below) but, really, given that “Actual Size” gives you a 300 KB JPEG, there’s no need to bother with resizing. But I was kind of hoping that “Actual Size” would enable a high-end user who doesn’t care about bandwidth to upload a 5 MB image to our service if that was his/her intention.
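
The mapping I’m using looks roughly like this (a sketch; sizePref and photo are my own names, and the factor values are placeholders):

[code]// Map the size preference to a scale factor (placeholder values).
dim factor as Double = 1.0 // "Actual Size": no rescale

select case sizePref // String property holding the preference
case "Small"
  factor = 0.25
case "Medium"
  factor = 0.5
case "Large"
  factor = 0.75
end select

if factor < 1.0 then
  photo = photo.ScaleImage(factor, InterpolationQuality.Medium)
end if[/code]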

Also, when I take a photo on my iPhone and drag it out of there using Image Capture, the 5 MB photo on my iPhone is copied to a 5 MB JPEG file on my desktop. So it just doesn’t make a whole lot of sense that the 300 KB memory block is equivalent.

Yeah, I’m confused by that part as well, and I don’t think that the 300 KB can be equivalent. I’ll have to do some more searching on NSData tomorrow to see if I did something wrong, but I tried to do it just like the docs say and MacOSLib does it, and that produces the 300 KB memory block.

And in all of this, there’s every chance that it’s my bug… Get some sleep. :slight_smile:

If I convert the camera.originalimage to a MutableMemoryBlock using UIImageJPEGRepresentation in the camera.PictureTaken event, I get a 5 MB memory block.

However I don’t do that. I have a myPhoto class with an image property and a POST method. So in camera.PictureTaken I set myPhoto.Image to camera.originalimage and then call myPhoto.POST. In that method, when I convert the image property to a MutableMemoryBlock using UIImageJPEGRepresentation, I get a 300 KB (or smaller) memory block.

So what is the difference between these two approaches that yields such a different result?
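
To make the two paths concrete, here’s roughly what they look like (a sketch with hypothetical names; JPEGToMemoryBlock stands in for the UIImageJPEGRepresentation wrapper):

[code]// Path 1: convert straight away in the event - yields a ~5 MB block
Sub PictureTaken_Direct(pic as iOSImage)
  dim direct as MutableMemoryBlock = JPEGToMemoryBlock(pic)
End Sub

// Path 2: store the image on a property, convert later in POST -
// yields a ~300 KB block
Sub PictureTaken_Deferred(pic as iOSImage)
  myPhoto.Image = pic
  myPhoto.POST // POST calls JPEGToMemoryBlock(self.Image) internally
End Sub[/code]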