Is this even possible?
I have an application that processes video using Apple's APIs. With HD video I can achieve about 150–200 FPS (it drops significantly for 4K), but Activity Monitor shows the app using only about 130–190% of my CPU, which leaves roughly 75% of it doing nothing. My guess is that it's maxing out one or two cores and leaving the other six idle.
There are several steps in processing a video:
- Reading each frame from the video (the first time through, the HDD makes this the slowest part).
- Applying effects to each frame (I'm using Core Image for this, so it's mostly GPU processing, but it does fall back to the CPU for some operations that Core Image simply can't do on the GPU).
- Writing the frames back out (sometimes I end up waiting on the writer).
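For reference, my current per-frame loop is effectively sequential: each stage waits for the previous one to finish. A language-agnostic sketch of it in Python (`read_frame`, `apply_effects`, and `write_frame` are hypothetical stand-ins for the AVFoundation/Core Image calls, not real APIs):

```python
def process_video(read_frame, apply_effects, write_frame):
    """Sequential baseline: read, process, and write never overlap,
    so one core ends up doing all the coordination work."""
    while True:
        frame = read_frame()              # stage 1: disk I/O
        if frame is None:                 # None signals end of video
            break
        processed = apply_effects(frame)  # stage 2: GPU + some CPU
        write_frame(processed)            # stage 3: encode + disk I/O
```

While one frame is being processed, nothing is being read or written, which matches the CPU usage I'm seeing.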
The most time-consuming part is processing the frames, and since that's GPU-based I'm not sure I can speed it up significantly. So what I'm hoping is that I can separate the three tasks:
- The reader loads frames and pushes them onto a queue called "In".
- The processor watches the "In" queue; whenever it's non-empty, it pulls a frame, processes it, and pushes the result onto an "Out" queue.
- The writer watches the "Out" queue and, you've guessed it, writes the frames to the movie file, removing them from the queue as it goes.
This way the processor spends no time reading or writing, and therefore more time actually processing frames.
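The reader/processor/writer split above is the classic producer–consumer pipeline, and the same idea applies whatever the platform (on macOS it would map onto three `DispatchQueue`s or threads). A minimal sketch in Python, with bounded queues so the reader can't run arbitrarily far ahead of the writer and a sentinel value to signal end-of-stream; the frame source, effect function, and output sink are placeholders for the real AVAssetReader/Core Image/AVAssetWriter work:

```python
import queue
import threading

SENTINEL = object()  # marks end-of-stream

def run_pipeline(frames, apply_effects, out):
    # maxsize bounds memory use: a full queue blocks the producer
    in_q = queue.Queue(maxsize=8)
    out_q = queue.Queue(maxsize=8)

    def reader():
        for frame in frames:                 # stage 1: would be AVAssetReader
            in_q.put(frame)
        in_q.put(SENTINEL)

    def processor():
        while True:
            frame = in_q.get()               # blocks until a frame arrives
            if frame is SENTINEL:
                out_q.put(SENTINEL)
                break
            out_q.put(apply_effects(frame))  # stage 2: would be Core Image

    def writer():
        while True:
            frame = out_q.get()
            if frame is SENTINEL:
                break
            out.append(frame)                # stage 3: would be AVAssetWriter

    threads = [threading.Thread(target=t) for t in (reader, processor, writer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

The bounded `maxsize` gives back-pressure: if the writer falls behind, the "Out" queue fills and the processor blocks, which in turn fills the "In" queue and pauses the reader, so memory stays flat instead of the reader buffering the whole video.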