Move a block of text using AVFoundationMBS


My goal is to draw some text that moves upward, like the scrolling end credits of a movie.

The “Add text to video” example helped me understand some of how all this works, but I still can’t figure out how to go further with my problem.

Since the text must scroll live in the video, I’m looking at the AVVideoCompositionCoreAnimationToolMBS, AVMutableVideoCompositionLayerInstructionMBS, AVMutableVideoCompositionInstructionMBS and AVMutableVideoCompositionMBS classes so the renderer would do the move, but I can’t find a property for moving text while rendering.

While digging through the documentation, I eventually found the Translate method of the CGAffineTransformMBS class. The only way I found to attach a CGAffineTransformMBS to any of the classes listed above is
AVMutableVideoCompositionLayerInstructionMBS.setTransform, but this moves the whole movie (minus the text) instead of only the text (black borders are added). Here’s the code I tried, which I added right before the “// 4.3 - Add instructions” comment:

Var cga As New CGAffineTransformMBS
cga = cga.Translate(100, 100) 'for testing, move by 100x100

Var cmt As New CMTimeMBS(3000, 600) 'arbitrary time (3000/600 = 5 s)

'then applied to the example's layer instruction, roughly like this
'(the variable name depends on the example):
'layerInstruction.setTransform(cga, cmt)


I now realise it’s being sent to the wrong layer, but I can’t find another way.

Once I find how to target the correct layer, my idea would be to add CGAffineTransformMBS objects every tenth of a second. I know that’s a lot of objects and it’s not guaranteed to work, but the documentation and examples don’t cover my needs.
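For what it’s worth, the native layer-instruction API can interpolate a transform over a time range, which would replace the many discrete transforms — though, as noted above, it still applies to the whole video track, not to a text layer. A sketch in Swift terms (the `layerInstruction` name, the distance and the time values are placeholders, not from the original example):

```swift
import AVFoundation

let layerInstruction = AVMutableVideoCompositionLayerInstruction()
let start = CGAffineTransform.identity
let end = CGAffineTransform(translationX: 0, y: -500) // move up 500 points
let range = CMTimeRange(start: .zero,
                        duration: CMTime(value: 3000, timescale: 600)) // 5 s

// One ramp instead of a transform every tenth of a second: the renderer
// interpolates between the two transforms across the whole time range.
layerInstruction.setTransformRamp(fromStart: start, toEnd: end, timeRange: range)
```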

Help would be welcome, please.

If my question was not clear, please tell me.

Sorry, I don’t have a direct answer for this.

For that example, I translated sample code from C to Xojo to get it working.

Reading this tutorial, I think we would need some classes like CABasicAnimation added to the plugin (or you could do it yourself with declares).
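To illustrate what those declares (or plugin classes) would need to drive, here is a sketch of the Core Animation approach in Swift terms: a CATextLayer animated with a CABasicAnimation, attached to the video composition through AVVideoCompositionCoreAnimationTool. The `videoSize`, durations and layer frames are placeholder values, not from the original example:

```swift
import AVFoundation
import QuartzCore

let videoSize = CGSize(width: 1920, height: 1080) // placeholder

// Text layer that will hold the credits, starting below/off screen.
let textLayer = CATextLayer()
textLayer.string = "Ending credits…"
textLayer.fontSize = 48
textLayer.frame = CGRect(x: 0, y: -200, width: videoSize.width, height: 200)

// Layer tree: the video renders into videoLayer; textLayer sits on top.
let parentLayer = CALayer()
let videoLayer = CALayer()
parentLayer.frame = CGRect(origin: .zero, size: videoSize)
videoLayer.frame = parentLayer.frame
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(textLayer)

// One CABasicAnimation replaces the many per-tenth-of-a-second transforms:
// the renderer interpolates the position for every frame.
let scroll = CABasicAnimation(keyPath: "position.y")
scroll.fromValue = -100
scroll.toValue = videoSize.height + 100
scroll.beginTime = AVCoreAnimationBeginTimeAtZero // animate from t = 0 of the video
scroll.duration = 10
scroll.isRemovedOnCompletion = false
textLayer.add(scroll, forKey: "scrollCredits")

// Attach the layer tree to the video composition so the export renders it.
let videoComposition = AVMutableVideoComposition()
videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
    postProcessingAsVideoLayer: videoLayer, in: parentLayer)
```

Note the `AVCoreAnimationBeginTimeAtZero` constant: a plain `beginTime` of 0 would be resolved against the current host time rather than the start of the movie.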

But it may be easier for you to just render all the frame images in Xojo and join them together into a video.

Thanks for your reply.

Actually, I tried your last suggestion (extracting all the frames, drawing onto them and writing them back to another file). That alone worked, but then I wanted to add the audio tracks back.
So I looked at the examples, but couldn’t adapt them to my app (the closest I got was NSExceptions).
I was also confused by the fact that adding tracks after the writer has started appears to be unsupported (but I can’t write several tracks at once when one is built picture by picture and the others are just tracks to clone).
But I can probably dive deeper here.

My other concern with writing modified pictures frame by frame is metadata. For example, what if the frame rate changes in the middle of the movie? And does converting an NSImage to a Xojo picture change the picture’s colour profile? In short, I feel I could lose data by converting each frame through a Xojo function (though perhaps that’s a non-issue).

These are concerns I hope someone more experienced than me can answer, so I can choose the best approach (picture by picture, or jumping to declares).

The technique I’ve come to use is to extract each frame of the video, modify the pictures, save them into a temporary file’s video track, close the temporary file, and then merge the temporary file’s video track with the original file’s audio tracks.
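For anyone landing here later, the final merge step can be sketched in Swift terms with AVMutableComposition: take the video track from the temporary (frame-by-frame) file, clone every audio track from the original file, and export the result. The function name, URLs and export preset are placeholders, not from the original project:

```swift
import AVFoundation

// Merge the processed video file's video track with the original file's
// audio tracks into a single output movie.
func mergeVideoAndAudio(processedVideoURL: URL, originalURL: URL, outputURL: URL) throws {
    let videoAsset = AVURLAsset(url: processedVideoURL)
    let audioAsset = AVURLAsset(url: originalURL)

    let composition = AVMutableComposition()
    let fullRange = CMTimeRange(start: .zero, duration: videoAsset.duration)

    // Copy the processed video track.
    if let srcVideo = videoAsset.tracks(withMediaType: .video).first,
       let dstVideo = composition.addMutableTrack(withMediaType: .video,
                                                  preferredTrackID: kCMPersistentTrackID_Invalid) {
        try dstVideo.insertTimeRange(fullRange, of: srcVideo, at: .zero)
    }

    // Clone every audio track from the original file.
    for srcAudio in audioAsset.tracks(withMediaType: .audio) {
        if let dstAudio = composition.addMutableTrack(withMediaType: .audio,
                                                      preferredTrackID: kCMPersistentTrackID_Invalid) {
            try dstAudio.insertTimeRange(fullRange, of: srcAudio, at: .zero)
        }
    }

    // Passthrough avoids re-encoding the already-written tracks.
    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetPassthrough) else { return }
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.exportAsynchronously { /* check export.status / export.error here */ }
}
```

Because both tracks are merely referenced by the composition and passed through, this step is fast and lossless; only the frame-by-frame rewrite of the video itself involves re-encoding.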

Seems to fit my needs.
Thanks, Christian.