CIFilter vs ImagePlay: is CIFilter slow?

Hi,

I finally got CIFilter working, but I'm wondering about its speed compared with
ImagePlay. Does anyone know where the overhead might be?

It seems the more iterations I run, the higher the calculation time for the filter.
What am I doing wrong?

https://www.dropbox.com/s/vobp7eo0ysxiawb/CIFilter2.xojo_binary_project?dl=0

Hi Rob,

I am not sure why you are iterating over the images. If you use only one pass, CIFilter is faster.
You are building a new filter on each loop iteration and, in particular, creating a new context each time. Apple says a context is very expensive to create. Have you tried caching it? Or, even better, cache the whole filter and just change its properties?
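Roughly what that caching could look like with the CIFilter wrapper class from the linked project (just a sketch, untested; mBlurFilter is an assumed window property and currentScreenImage an assumed ptr to the captured CIImage):

[code]
' Build the filter once and keep it in a property instead of inside the loop.
If mBlurFilter Is Nil Then
  mBlurFilter = New CIFilter("CIGaussianBlur")
  mBlurFilter.setValue(8.0, "inputRadius")
End If

' Each pass, only the input changes.
mBlurFilter.setValue(currentScreenImage, "inputImage")
[/code]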

That's right, but I need to run the filter several times, because I capture the screen behind
a window and blur it while the window is moving. See my posts here:
https://forum.xojo.com/20693-blurring-background-of-window-canvas/p1#p175211

[quote]
You are building a new filter on each loop iteration and, in particular, creating a new context each time. Apple says a context is very expensive to create. Have you tried caching it? Or, even better, cache the whole filter and just change its properties?[/quote]

I also think the CGImage -> CIImage conversion and back is expensive, but I need to pass in the current screen content behind the moving window. For one iteration it's fine, of course.

I got them to equal speed, and CIFilter is really superior in terms of quality.
See code here:
https://www.dropbox.com/s/x3oemslerfql6jy/CIFilter3.xojo_binary_project?dl=0

Thanks for your lead, Uli.

When caching the CIContext, make sure you compare the underlying CGContext; if the user moves the window, you need to recreate the context (otherwise it crashes).
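In code, that check might look something like this (a sketch, not from the projects above; mContext, mCGContextHandle and CreateCIContextFromCGContext are placeholders for however you hold and build the cached context):

[code]
' Reuse the cached CIContext only while the canvas still draws into the same CGContext.
Dim cg As Integer = g.Handle(Graphics.HandleTypeCGContextRef)
If mContext = Nil Or cg <> mCGContextHandle Then
  mContext = CreateCIContextFromCGContext(cg)  ' hypothetical helper; rebuild after a window move
  mCGContextHandle = cg
End If
[/code]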

Tomorrow, I’ll take a look at your code (if I can find a few minutes), but your declares are very different from what I use.

Cleaned up the code and made it as Apple suggests:
iOS = contextWithOptions:
OS X = contextWithCGContext:options: or NSGraphicsContext.currentContext's CIContext

https://www.dropbox.com/s/i965hp9qz004x5o/CIFilter4.xojo_binary_project?dl=0
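For anyone comparing, the two creation routes as raw declares look roughly like this (a sketch; QuartzLib is the same library constant used in the other snippets here, NSClassFromString the project's helper, and g a Graphics object, e.g. in a Paint event):

[code]
declare function contextWithOptions lib QuartzLib selector "contextWithOptions:" (cls as ptr, options as ptr) as ptr
declare function contextWithCGContextOptions lib QuartzLib selector "contextWithCGContext:options:" (cls as ptr, cgContext as integer, options as ptr) as ptr

' iOS-style: let Core Image manage its own destination.
dim ctx1 as ptr = contextWithOptions(NSClassFromString("CIContext"), nil)

' OS X-style: render straight into an existing CGContext (here the canvas' graphics handle).
dim ctx2 as ptr = contextWithCGContextOptions(NSClassFromString("CIContext"), g.Handle(Graphics.HandleTypeCGContextRef), nil)
[/code]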

Will look into that.

Edit:
No, everything is fine as it should be.

I’d be interested to know where you got the information on how to use Core Image.

The first thing I notice is that you’re rasterizing the CIImage to a CGImage and then drawing that CGImage to a Xojo picture, then drawing the Xojo picture to a canvas. You have 3 conversions going on there.

Instead, you should create a CIContext from the canvas and draw the CIImage into that CIContext; then there is only one conversion. That's why I mentioned caching the context reference and double-checking the CGBitmapContext.
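Something along these lines, with the declare style used in this thread (a sketch, untested; mContext is the cached CIContext ptr, filter is the project's CIFilter wrapper, and CGRect/CGMakeRect come from the example project):

[code]
declare sub drawImageInRectFromRect lib QuartzLib selector "drawImage:inRect:fromRect:" (id as ptr, image as ptr, inRect as CGRect, fromRect as CGRect)

' In Canvas.Paint: render the filter output straight into the CIContext that was
' created from g's CGContext -- no intermediate CGImage or Xojo Picture.
dim r as CGRect = CGMakeRect(0, 0, me.Width, me.Height)
drawImageInRectFromRect(mContext, filter.outputImage, r, r)
[/code]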

The other thing is that you’re combining the CIFilter and CIContext into one object. This may work for one CIFilter, but will cause serious slowdown when working with multiple filters.

Instead, keep the CI objects separate. This way you can chain the filters together and then pass the result to a CIContext when you want to render the image.
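Chaining could then look roughly like this with the wrapper class from the example project (a sketch; the filter names and keys are standard Core Image ones, sourceImage an assumed ptr to the input CIImage):

[code]
' First filter: blur the captured image.
dim blur as new CIFilter("CIGaussianBlur")
blur.setValue(sourceImage, "inputImage")
blur.setValue(8.0, "inputRadius")

' Second filter: takes the first one's output directly as its input.
' Nothing is rendered until the end of the chain reaches a CIContext.
dim desat as new CIFilter("CIColorControls")
desat.setValue(blur.outputImage, "inputImage")
desat.setValue(0.0, "inputSaturation")

' Render desat.outputImage once, e.g. with drawImage:inRect:fromRect: as above.
[/code]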

[quote=175496:@Sam Rowlands]I'd be interested to know where you got the information on how to use Core Image. […][/quote]

Hi Sam,

you're right about the conversions. I could save one or two steps depending on what you do with the result, but if you only work with Xojo
pictures and want to manipulate them further, I guess there's no other way. Of course, if you draw directly into a canvas, you can create a context from it; that could be a possible solution. I also added a method "createCIImage(p as ptr)" in case you already have a ptr from a system function and don't need Picture.CopyOSHandle.

True. I only did that for the example and only use one filter.

Why? Because I am not using it correctly?

Not because you're using it incorrectly (Apple gives lots of advice, some of which is contradictory), but because it's very different from how I've been using Core Image.

I've read the Apple documentation and tried to work my way through it with you folks. :slight_smile:
https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_tasks/ci_tasks.html

Huh… I never noticed that it could be done that way; I've always created a CIContext from a CGContext. Both functions were added in 10.4, so one's not newer than t'other.

The only difference is that creating a CIContext from a CGContext allows you to choose the renderer and whether or not to apply color profile calibration. For on-screen stuff you'll never want to use software rendering, but when saving images (especially on older hardware) you're less likely to crash the GPU if you default to software rendering.
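A rough sketch of what passing that option looks like with declares (untested; FoundationLib is assumed to be a library constant like QuartzLib, and the literal key string stands in for the exported constant kCIContextUseSoftwareRenderer):

[code]
declare function numberWithBool lib FoundationLib selector "numberWithBool:" (cls as ptr, value as Boolean) as ptr
declare function dictionaryWithObjectForKey lib FoundationLib selector "dictionaryWithObject:forKey:" (cls as ptr, obj as ptr, key as CFStringRef) as ptr
declare function contextWithCGContextOptions lib QuartzLib selector "contextWithCGContext:options:" (cls as ptr, cgContext as integer, options as ptr) as ptr

' Options dictionary asking Core Image for the software renderer.
dim useSoftware as ptr = numberWithBool(NSClassFromString("NSNumber"), true)
dim opts as ptr = dictionaryWithObjectForKey(NSClassFromString("NSDictionary"), useSoftware, "kCIContextUseSoftwareRenderer")

dim ctx as ptr = contextWithCGContextOptions(NSClassFromString("CIContext"), g.Handle(Graphics.HandleTypeCGContextRef), opts)
[/code]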

Hi Sam,
Hi Uli,

Do you know how to get a masked outputImage from a filter? I need to mask it because I use a
CIGaussianGradient.

[code]
dim filter as new CIFilter("CIGaussianGradient")

declare function initWithColor lib QuartzLib selector "initWithColor:" (id as ptr, c as ptr) as ptr
declare function vectorWithXY lib QuartzLib selector "vectorWithX:Y:" (id as ptr, x as single, y as single) as ptr

const xSize = 50

' Center and radius of the gradient.
dim ciVectorRef as ptr = vectorWithXY(NSClassFromString("CIVector"), xSize/2, xSize/2)
filter.setValue(ciVectorRef, "inputCenter")
filter.setValue(xSize/2, "inputRadius")

' Inner (white) and outer (black) colors of the gradient.
dim cic0 as ptr = initWithColor(allocate("CIColor"), new NSColor(&cffffff))
dim cic1 as ptr = initWithColor(allocate("CIColor"), new NSColor(&c000000))
filter.setValue(cic0, "inputColor0")
filter.setValue(cic1, "inputColor1")

dim bg as Picture = filter.outputAsPicture(CGMakeRect(0, 0, xSize, xSize), false)
[/code]

outputAsPicture:

[code]
declare function createCGImage lib QuartzLib selector "createCGImage:fromRect:" (id as ptr, img as ptr, r as CGRect) as ptr
declare sub CGContextDrawImage lib CarbonLib (context as integer, rect as CGRect, image as Ptr)

' Render the filter's output into a CGImage, then draw that into a new Xojo Picture.
dim result as ptr = self.outputImage
if result <> nil then
  dim r as CGRect = newSize
  dim cgRef as ptr = createCGImage(self.ciCntx, result, newSize)
  if cgRef <> nil then
    dim d as Picture
    if alpha then d = new Picture(r.w, r.h, 32) else d = new Picture(r.w, r.h)
    CGContextDrawImage(d.Graphics.Handle(Graphics.HandleTypeCGContextRef), r, cgRef)
    release(cgRef)  ' release the CGImageRef created above
    return d
  end if
end if[/code]

The first thing that comes to mind is CALayer again. It has a mask property; you could set the CGImageRef you received from the filter as the contents of a layer and assign that layer as the mask. In theory; I have not tested it yet.
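With declares, that idea could look roughly like this (untested; targetLayer is whatever layer you want clipped, cgRef the CGImageRef you got from createCGImage:fromRect:, and CGMakeRect the project's helper):

[code]
declare function CALayerNew lib QuartzLib selector "layer" (cls as ptr) as ptr
declare sub setContents lib QuartzLib selector "setContents:" (id as ptr, value as ptr)
declare sub setFrame lib QuartzLib selector "setFrame:" (id as ptr, frame as CGRect)
declare sub setMask lib QuartzLib selector "setMask:" (id as ptr, maskLayer as ptr)

' Wrap the filter's CGImageRef in its own layer and use that layer as the mask.
dim maskLayer as ptr = CALayerNew(NSClassFromString("CALayer"))
setContents(maskLayer, cgRef)
setFrame(maskLayer, CGMakeRect(0, 0, xSize, xSize))
setMask(targetLayer, maskLayer)
[/code]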

You need to use a couple of filters:

CIMaskToAlpha and then CISourceAtopCompositing

Documentation can be found here:
https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CIMaskToAlpha
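With the CIFilter wrapper class from the example projects, the chain could look roughly like this (a sketch, untested; gradientFilter is the CIGaussianGradient filter from above and blurFilter holds the image you want masked):

[code]
' Turn the gradient into an alpha mask.
dim toAlpha as new CIFilter("CIMaskToAlpha")
toAlpha.setValue(gradientFilter.outputImage, "inputImage")

' Composite the image atop the mask so the result is clipped to the gradient's alpha.
dim comp as new CIFilter("CISourceAtopCompositing")
comp.setValue(blurFilter.outputImage, "inputImage")
comp.setValue(toAlpha.outputImage, "inputBackgroundImage")

' Render comp.outputImage with your CIContext (e.g. via outputAsPicture) as before.
[/code]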

I've uploaded a simple example project again (blur filter).

https://www.dropbox.com/s/h7b7w5qvrigwsvg/CIFilter_example.zip?dl=0