Image Resizing Techniques

I am not sure about the real meaning of the compression settings you mention. In Photoshop, saving a JPEG at the highest quality gives you a level of detail that is supposed to be virtually indistinguishable from the original, while the file size is often only a fraction of the uncompressed data, and it looks like a compressionQuality setting of 1.0 corresponds to exactly that.
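
Something like this little sketch should show the size difference; the iOSImage.ToData(format, quality) call and its Formats enum are my assumption of the iOS framework API, so verify the exact signature against the docs:

```
' Compare the same image saved as JPEG at quality 1.0 and as PNG.
' iOSImage.ToData(format, quality) is assumed here; check the docs.
Dim jpegData As MemoryBlock = img.ToData(iOSImage.Formats.JPEG, 1.0)
Dim pngData As MemoryBlock = img.ToData(iOSImage.Formats.PNG)

' Even at quality 1.0 the JPEG is usually far smaller than the PNG,
' because PNG is lossless while JPEG at quality 1.0 is still lossy.
Dim jpegSize As Integer = jpegData.Size
Dim pngSize As Integer = pngData.Size
```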
On a Mac I would use a TIFFRepresentation, but UIImage knows only PNG besides JPEG. Have you tried saving an image uncompressed and as JPEG, aligning the two exactly on two Photoshop layers, and combining them with a Difference blend? You should then be able to spot any differences between them. If there are none, I would say you are on the safe side and could save bandwidth by sending them as JPEG. If you do find artifacts, you could always use the CGImageRep to get a TIFF (which can be compressed too, preferably with a lossless method).

Thanks Ulrich. Yeah, I’ve taken my own thread off topic :). Compression/resizing is sorted thanks to Jason King, but I’ve got an issue saving an uncompressed image: the memory block is a different size depending on where in my app I generate it. Since Jason does this the same way in his own app (and I know he outputs his JPEG image specifically in the camera.PictureTaken event), I thought he might be able to pinpoint what I’m doing wrong. :slight_smile:

Glad to read that, especially because my idea was nonsense – Xcode’s docs are still a bit buggy, and searching for CGImage brought me to NSImageRep … which is OS X of course. :wink:
My last idea about the extremely different data sizes (I have none for your actual problem) would be to examine both data blocks for colorspace, alpha channel, and possibly embedded preview images. I am not sure exactly how a JPEG file is structured; it is basically just a container for a variety of data.
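
From the little I know, a JPEG file is a sequence of marker segments that you can walk with a few lines; the UInt8Value accessor below is an assumption, so adapt it to whatever your MemoryBlock actually offers:

```
' Walk the marker segments of a JPEG held in a MemoryBlock called data.
Dim i As Integer = 2 ' skip the FF D8 start-of-image marker
Dim marker, length As Integer
While i + 3 < data.Size
  If data.UInt8Value(i) <> &hFF Then Exit While ' lost sync: not a marker
  marker = data.UInt8Value(i + 1)
  If marker = &hDA Then Exit While ' start of scan; pixel data follows
  ' Each segment stores its length big-endian, including the 2 length bytes.
  length = data.UInt8Value(i + 2) * 256 + data.UInt8Value(i + 3)
  ' Markers &hE0 to &hEF are APP segments (JFIF/EXIF metadata, often with
  ' an embedded preview thumbnail); &hC0 (baseline SOF) carries dimensions
  ' and the component count (1 = grayscale, 3 = YCbCr). JPEG has no alpha.
  i = i + 2 + length
Wend
```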

It’s weird that the result is so different between the two methods. If you really want the “full size” memory block posted, could you make the image property a MemoryBlock instead, set it in the PictureTaken event, and then post that? I see no reason why there should be a difference, though.
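
Something like this is what I mean; the property name and the event signature here are just placeholders based on your description:

```
' In your class, add a property to hold the raw bytes:
'   PictureData As MemoryBlock

' In the camera's PictureTaken event handler, convert straight away
' while you still have the original image:
Sub PictureTaken(image As iOSImage)
  PictureData = image.ToData(iOSImage.Formats.JPEG, 1.0)
End Sub
```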

Thanks Jason. I’ll give that a try. It is very odd. I thought that perhaps the camera.originalimage property, which is just an iOSImage, might have something “in it” which, when copied to another iOSImage, is somehow lost or truncated. But the image looks complete, so that doesn’t seem likely.

I know I probably shouldn’t care so much, but I know we’ll get users complaining that their images are being reduced when they haven’t chosen any reduction level…

Keep in mind that the image quality of JPEG images is all about perception. JPEG compresses images in 8×8-pixel blocks, and the quality setting determines how aggressively the detail in each block is quantized away: higher quality means less quantization. If you have a program like Photoshop around, try loading the original image and putting the compressed version in a layer above it. Then set the layer’s blend mode to Difference. You’ll probably need to increase the contrast a lot, but it’ll show you the places where the JPEG algorithm made small changes to make the image more compressible.

Right, but I think he said that the size of the memory block representation of the image changes depending on which method he creates the memory block in. If he uses the image directly in the PictureTaken event he gets a 5 MB memory block, but if he assigns the picture to a property and then creates the memory block in a different method he gets only a 300 KB memory block. The difference is what is confusing, since he is using the exact same code in both places.

Yes, that’s right Jason. My original question about image resizing is making this thread confusing. I don’t have a resizing issue anymore. It’s now about how to take the iOSImage returned by the camera class and convert it to a memory block of the same size outside of the camera.PictureTaken event. I might make up a little demo project just as a sanity check.
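
The demo would be along these lines, comparing the size of the data converted inside the event with the size converted later from a stored property (DirectSize, LaterSize and StoredImage are placeholder properties):

```
' Inside camera.PictureTaken: convert right away and also keep the image.
Sub PictureTaken(image As iOSImage)
  Dim direct As MemoryBlock = image.ToData(iOSImage.Formats.JPEG, 1.0)
  DirectSize = direct.Size ' roughly 5 MB in my tests
  StoredImage = image      ' iOSImage property, kept for the later test
End Sub

' Later, from a button or another method: convert the stored image
' with the exact same call.
Sub ConvertLater()
  Dim later As MemoryBlock = StoredImage.ToData(iOSImage.Formats.JPEG, 1.0)
  LaterSize = later.Size   ' only about 300 KB, with identical code
End Sub
```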

@Jason King, creating a MemoryBlock property for my class and setting it in the camera.PictureTaken event has solved the problem. I still don’t really understand why it makes any difference where the MemoryBlock is created; all I can think is that the iOSImage is somehow different by then. It burns me not to know, but at least it’s working. :slight_smile: