I am not sure why you are iterating over the images. If you use only one round trip, CIFilter is faster.
You are building a new filter on each loop iteration and, above all, creating a new context each time. Apple says a context is very expensive to create. Have you tried caching it? Or, even better, caching the whole filter and just changing its properties?
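Roughly what I mean, as an untested sketch (mCachedFilter is a hypothetical window property, QuartzLib is whatever constant your declares already use, and CIGaussianBlur is just an example filter):
[code]
// mCachedFilter as ptr   (hypothetical property, keeps the filter across paint events)

declare function NSClassFromString lib "Foundation" (name as CFStringRef) as ptr
declare function filterWithName lib QuartzLib selector "filterWithName:" (cls as ptr, name as CFStringRef) as ptr
declare function retain lib "Foundation" selector "retain" (id as ptr) as ptr
declare sub setValueForKey lib QuartzLib selector "setValue:forKey:" (id as ptr, value as ptr, key as CFStringRef)
declare function numberWithDouble lib "Foundation" selector "numberWithDouble:" (cls as ptr, value as double) as ptr

// Create (and retain) the filter only once ...
if mCachedFilter = nil then
  mCachedFilter = retain(filterWithName(NSClassFromString("CIFilter"), "CIGaussianBlur"))
end if

// ... and inside the loop only update its parameters
setValueForKey(mCachedFilter, numberWithDouble(NSClassFromString("NSNumber"), 4.0), "inputRadius")
[/code]
The same idea applies to the CIContext: create it once and only replace it when the underlying CGContext changes.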
[quote]
You are building a new filter on each loop iteration and, above all, creating a new context each time. Apple says a context is very expensive to create. Have you tried caching it? Or, even better, caching the whole filter and just changing its properties?[/quote]
I also think the CGImage -> CIContext conversion and back is expensive, but I need to pass the current screen content while behind the moving window. For one iteration it's fine, of course.
When caching the CIContext, make sure you compare the underlying CGContext; if the user moves the window you need to recreate the context (otherwise it crashes).
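Something like this check is what I mean (untested sketch; mCachedCIContext and mCachedCGContext are hypothetical properties, g is the Graphics you are painting into, and retain keeps the autoreleased context alive between events):
[code]
// mCachedCIContext as ptr, mCachedCGContext as integer   (hypothetical properties)

declare function NSClassFromString lib "Foundation" (name as CFStringRef) as ptr
declare function contextWithCGContextOptions lib QuartzLib selector "contextWithCGContext:options:" (cls as ptr, cgContext as integer, options as ptr) as ptr
declare function retain lib "Foundation" selector "retain" (id as ptr) as ptr

dim currentCG as integer = g.Handle(Graphics.HandleTypeCGContextRef)

// The CGContext backing the canvas can change (window moved, resized, ...),
// and drawing through a stale CIContext crashes, so rebuild when it differs.
if mCachedCIContext = nil or currentCG <> mCachedCGContext then
  mCachedCIContext = retain(contextWithCGContextOptions(NSClassFromString("CIContext"), currentCG, nil))
  mCachedCGContext = currentCG
end if
[/code]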
Cleaned up the code and made it as Apple suggests:
iOS = contextWithOptions:
OS X = contextWithCGContext:options: or [[NSGraphicsContext currentContext] CIContext]
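For reference, the creation paths as plain declares (untested sketch; g is the Graphics of a Paint event, and the "AppKit"/QuartzLib library names are assumptions based on what my project already uses):
[code]
declare function NSClassFromString lib "Foundation" (name as CFStringRef) as ptr

// iOS variant, listed only for completeness (not called here)
declare function contextWithOptions lib QuartzLib selector "contextWithOptions:" (cls as ptr, options as ptr) as ptr

// OS X: wrap an existing CGContext
declare function contextWithCGContextOptions lib QuartzLib selector "contextWithCGContext:options:" (cls as ptr, cgContext as integer, options as ptr) as ptr

// OS X: borrow the CIContext of the current NSGraphicsContext (only valid while drawing)
declare function currentContext lib "AppKit" selector "currentContext" (cls as ptr) as ptr
declare function getCIContext lib "AppKit" selector "CIContext" (id as ptr) as ptr

dim ciClass as ptr = NSClassFromString("CIContext")
dim fromCG as ptr = contextWithCGContextOptions(ciClass, g.Handle(Graphics.HandleTypeCGContextRef), nil)
dim fromNSGraphics as ptr = getCIContext(currentContext(NSClassFromString("NSGraphicsContext")))
[/code]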
I’d be interested to know where you got the information on how to use Core Image.
The first thing I notice is that you’re rasterizing the CIImage to a CGImage and then drawing that CGImage to a Xojo picture, then drawing the Xojo picture to a canvas. You have 3 conversions going on there.
Instead, you should be creating a CIContext from the canvas and then drawing the CIImage into that CIContext; there is then only 1 conversion. Hence why I mentioned caching the context reference and double-checking the CGBitmapContext.
The other thing is that you’re combining the CIFilter and CIContext into one object. This may work for one CIFilter, but will cause serious slowdown when working with multiple filters.
Instead, keep the CI objects separate. This way you can chain the filters together, and then pass to a CIContext when you want to render the image.
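Chained, it looks roughly like this (untested sketch; sourceImage, destRect, srcRect and canvasCIContext are assumed to exist already, and the two filter names are just examples):
[code]
declare function NSClassFromString lib "Foundation" (name as CFStringRef) as ptr
declare function filterWithName lib QuartzLib selector "filterWithName:" (cls as ptr, name as CFStringRef) as ptr
declare sub setValueForKey lib QuartzLib selector "setValue:forKey:" (id as ptr, value as ptr, key as CFStringRef)
declare function outputImage lib QuartzLib selector "outputImage" (id as ptr) as ptr
declare sub drawImageInRectFromRect lib QuartzLib selector "drawImage:inRect:fromRect:" (id as ptr, image as ptr, dest as CGRect, src as CGRect)

dim filterClass as ptr = NSClassFromString("CIFilter")

// The first filter takes the source CIImage ...
dim sepia as ptr = filterWithName(filterClass, "CISepiaTone")
setValueForKey(sepia, sourceImage, "inputImage")

// ... the second filter takes the first one's outputImage; nothing is rendered yet
dim blur as ptr = filterWithName(filterClass, "CIGaussianBlur")
setValueForKey(blur, outputImage(sepia), "inputImage")

// Only this call actually renders, straight into the canvas' (cached) CIContext
drawImageInRectFromRect(canvasCIContext, outputImage(blur), destRect, srcRect)
[/code]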
[quote=175496:@Sam Rowlands]I’d be interested to know where you got the information on how to use Core Image.
The first thing I notice is that you’re rasterizing the CIImage to a CGImage and then drawing that CGImage to a Xojo picture, then drawing the Xojo picture to a canvas. You have 3 conversions going on there.
Instead, you should be creating a CIContext from the canvas and then drawing the CIImage into that CIContext; there is then only 1 conversion. Hence why I mentioned caching the context reference and double-checking the CGBitmapContext.
The other thing is that you’re combining the CIFilter and CIContext into one object. This may work for one CIFilter, but will cause serious slowdown when working with multiple filters.
Instead, keep the CI objects separate. This way you can chain the filters together, and then pass to a CIContext when you want to render the image.[/quote]
Hi Sam,
you're right about the conversions. I could save one or two steps depending on what you do with the result, but if you only work with Xojo pictures and want to manipulate them further, I guess there's no other way. Of course, if you draw directly into a canvas you can create a context from it; that could be a possible solution. I also added a method "createCIImage(p as ptr)" in case you already have a ptr from a system function and don't need picture.CopyOSHandle.
True. I only did that for the example and only use one filter.
Not because you're using it incorrectly (Apple gives lots of advice, some of which is contradictory), but because it's very different from how I've been using Core Image.
Huh… I never noticed that it could be done that way; I've always created a CIContext from a CGContext. Both functions were added in 10.4, so one's not newer than t'other.
The only difference is that creating a CIContext from a CGContext allows you to choose the renderer and whether or not to apply color profile calibration. For on-screen stuff you'll never want to use software rendering, but saving images (especially on older hardware) is less likely to crash the GPU if you default to software rendering.
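If it helps, this is roughly how the renderer is chosen via the options dictionary (untested sketch; I am assuming the option key's string value matches its constant name kCIContextUseSoftwareRenderer, and cgHandle is the CGContext you already have):
[code]
declare function NSClassFromString lib "Foundation" (name as CFStringRef) as ptr
declare function numberWithBool lib "Foundation" selector "numberWithBool:" (cls as ptr, value as Boolean) as ptr
declare function dictionaryWithObjectForKey lib "Foundation" selector "dictionaryWithObject:forKey:" (cls as ptr, obj as ptr, key as CFStringRef) as ptr
declare function contextWithCGContextOptions lib QuartzLib selector "contextWithCGContext:options:" (cls as ptr, cgContext as integer, options as ptr) as ptr

// Force the software renderer, e.g. when exporting large images
dim useSoftware as ptr = numberWithBool(NSClassFromString("NSNumber"), true)
dim options as ptr = dictionaryWithObjectForKey(NSClassFromString("NSDictionary"), useSoftware, "kCIContextUseSoftwareRenderer")
dim exportContext as ptr = contextWithCGContextOptions(NSClassFromString("CIContext"), cgHandle, options)

// For on-screen drawing simply pass nil as the options and let the GPU render
[/code]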
Do you know how to get a masked outputImage from a filter? I need to mask it as I use a GaussianGradient.
[code]
dim filter as new CIFilter("CIGaussianGradient")

declare function initWithColor lib QuartzLib selector "initWithColor:" (id as ptr, c as ptr) as ptr
declare function vectorWithXY lib QuartzLib selector "vectorWithX:Y:" (id as ptr, x as single, y as single) as ptr

const xSize = 50

// Center and radius of the gradient
dim ciVectorRef as ptr = vectorWithXY(NSClassFromString("CIVector"), xSize/2, xSize/2)
filter.setValue(ciVectorRef, "inputCenter")
filter.setValue(xSize/2, "inputRadius")

// White in the center fading to black at the edge
dim cic0 as ptr = initWithColor(allocate("CIColor"), new NSColor(&cFFFFFF))
dim cic1 as ptr = initWithColor(allocate("CIColor"), new NSColor(&c000000))
filter.setValue(cic0, "inputColor0")
filter.setValue(cic1, "inputColor1")

dim bg as Picture = filter.outputAsPicture(CGMakeRect(0, 0, xSize, xSize), false)
[/code]
outputAsPicture:
[code]
// outputAsPicture(newSize as CGRect, alpha as Boolean) as Picture
declare function createCGImage lib QuartzLib selector "createCGImage:fromRect:" (id as ptr, img as ptr, r as CGRect) as ptr
declare sub CGContextDrawImage lib CarbonLib (context as integer, rect as CGRect, image as ptr)

dim result as ptr = self.outputImage
if result <> nil then
  dim r as CGRect = newSize

  // Rasterize the CIImage into a CGImage via the filter's CIContext
  dim cgRef as ptr = createCGImage(self.ciCntx, result, newSize)
  if cgRef <> nil then
    // Draw the CGImage into a new Xojo picture of the requested size
    dim d as Picture
    if alpha then d = new Picture(r.w, r.h, 32) else d = new Picture(r.w, r.h)
    CGContextDrawImage d.Graphics.Handle(Graphics.HandleTypeCGContextRef), r, cgRef
    release(cgRef)
    return d
  end if
end if
[/code]
The first thing that comes to mind is CALayer again. It has a mask property; wrap the CGImageRef you received from the filter in a layer (as that layer's contents) and assign it as the mask. In theory, I have not tested it yet.
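A rough, untested sketch of that idea (CALayer's mask wants a layer rather than a bare CGImage, so the image goes into the mask layer's contents; hostLayer, layerBounds and cgRef are assumed to exist already, and "QuartzCore" is where CALayer lives):
[code]
declare function NSClassFromString lib "Foundation" (name as CFStringRef) as ptr
declare function newLayer lib "QuartzCore" selector "layer" (cls as ptr) as ptr
declare sub setContents lib "QuartzCore" selector "setContents:" (id as ptr, contents as ptr)
declare sub setFrame lib "QuartzCore" selector "setFrame:" (id as ptr, frame as CGRect)
declare sub setMask lib "QuartzCore" selector "setMask:" (id as ptr, maskLayer as ptr)

// Wrap the CGImageRef from the filter in its own layer ...
dim maskLayer as ptr = newLayer(NSClassFromString("CALayer"))
setContents(maskLayer, cgRef)        // cgRef: the CGImageRef from createCGImage:fromRect:
setFrame(maskLayer, layerBounds)     // layerBounds: CGRect covering the host layer

// ... and use that layer's alpha channel to mask the host layer
setMask(hostLayer, maskLayer)
[/code]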