Passing a “lot” of pictures between processes?

Hello,

Currently, one of my apps is stuck in Xojo 2013, the last version to offer the EditableMovie class. Of course, I’m losing a lot of improvements by not using the current (2019) version.
So, I’m planning to redo most of my app in Xojo 2019 and beyond. Since I must stay with Xojo 2013 to grab the source pictures from the EditableMovie class, I’ll have to build two apps: one (2013) collects the frames from the EditableMovie object, and the other (2019) processes each frame.

I’m hesitant as to the best way to synchronise both apps, passing each picture from the 2013 app to the 2019 one. I’ve considered these solutions:
• Convert each picture to a string, send it using IPC and rebuild it as a picture in the other app. This will be time-consuming and slow down the whole process, I guess.
• Save each frame to a picture file in a temporary folder (e.g. “0001.PNG”, “0002.PNG”, …, “1234.PNG”). The second app would then, asynchronously, process each file. Here, I’m not sure I won’t lose quality by saving and re-loading to/from files. And synchronising both apps won’t be trivial either.
• Have some sort of shared memory between both apps. Each picture obtained in app 1 would be stored in a region of RAM that app 2 can access to get the corresponding picture. However, I don’t know whether that’s doable.
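For what it’s worth, the file-based option could look something like this in app 1 (just a sketch; `frame` and `frameIndex` are whatever the grabbing loop provides, and the “.ready” marker file is my own convention so app 2 only reads frames that have been completely written). Since PNG is lossless, the save/reload round trip should not cost any quality:

[code]// Sketch of the file-based option (the ".ready" marker is just my idea):
// app 1 writes each frame as a numbered PNG, then an empty marker file,
// so app 2 only processes frames that are completely on disk.
Dim folder As FolderItem = SpecialFolder.Temporary.Child("frames")
If Not folder.Exists Then folder.CreateAsFolder

Dim name As String = Format(frameIndex, "0000") + ".png"
frame.Save(folder.Child(name), Picture.SaveAsPNG) // PNG is lossless

// write the empty marker last, once the PNG is fully written
Dim marker As BinaryStream = _
  BinaryStream.Create(folder.Child(name + ".ready"), True)
marker.Close
[/code]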

I don’t have many more ideas than these. Advice would be welcome.

Requested, but it is still in the “Needs Review” bin: <https://xojo.com/issue/59548>

Please check the FileMappingMBS class for shared memory in the MBS Xojo Plugins.

Check the NamedMutexMBS class to synchronise access from the two applications.

Thank you. I’m guessing there’s a current way to achieve what I’m after, though.

[quote=490764:@Christian Schmitz]Please check FileMappingMBS class for shared memory in MBS Xojo Plugins.

Check NamedMutexMBS class to synchronize access from two application.[/quote]
Interesting; that may do it. I’ll check…
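If I read the idea correctly, the writer side of the protocol would be roughly the following. This is only a sketch: I’ve left out the MBS-specific setup calls, and I’m assuming `shared` is a MemoryBlock backed by the file mapping and `mutex` a NamedMutexMBS with `Lock`/`Unlock` methods, both opened under the same name by the two apps (the real names are in the MBS documentation and examples):

[code]// Protocol sketch only; the FileMappingMBS/NamedMutexMBS setup is omitted,
// and Lock/Unlock are assumed method names (check the MBS docs).
Sub PublishFrame(shared As MemoryBlock, mutex As NamedMutexMBS, png As String)
  mutex.Lock
  // layout: byte 0 = "frame ready" flag, bytes 1-4 = length, 5+ = PNG data
  shared.UInt32Value(1) = png.LenB
  shared.StringValue(5, png.LenB) = png
  shared.Byte(0) = 1 // set the flag last, once the data is complete
  mutex.Unlock
End Sub
[/code]

App 2 would then take the mutex, check the flag, copy the bytes out, clear the flag and release the mutex.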

If I could get rid of the EditableMovie class altogether, I would stop using Xojo 2013 entirely and have a single application; that would be easier.
I understand QuickTime is near its end, but it still works; therefore, I’m considering switching to AVFoundation, which your plugin provides. But I’m worried about the transition, since my code currently works in Xojo 2013 (with the bugs/“improvements” of that version).

Currently, my app gets every frame of a movie using a QTFrameExtractorMBS object, transforms the pictures and saves them to another movie using a QTPictureMovieTrackMBS. It then copies the other tracks (e.g. sound tracks) from the source to the destination, taking possible FPS changes between frames into account. How hard/different would it be to move that to AVFoundation?

Thank you.

Did you check our AVFoundation examples?
I think we have examples to convert a movie to pictures and back.

[quote=490769:@Christian Schmitz]Did you check our AVFoundation examples?
I think we have examples to make movie to pictures and back.[/quote]
Actually, not yet. The last time I tried an AV class (a replacement for some QuickTime functions), I wanted streaming; it was so different from what I knew of QuickTime that I couldn’t achieve it, and I dismissed Apple’s newer AV approach as nonsense.

My current QT functions kind of work, though I’d like to rewrite my app. I’m afraid of spending too much time learning AVFoundation, since I have deadlines for the rewrite. That’s why I’m wondering whether the move is worth it.

I’ll check the examples; thank you.

Yes it is.

AVFoundation is incredible; it is far more capable than QuickTime, but it’s also far more complex. Ironically, the classes for grabbing a frame in AVFoundation are absolute garbage (they’re slow, far too resource-hungry and just downright bad).
The best approach I found for grabbing a frame at a certain time is actually to extract the frame myself, rather than use the convenience classes.

It will take you time to get up and running with AVFoundation. Unfortunately, I am not in a position to be of more assistance at the moment, but later in the year I might be able to help more.

Thank you very much.

My current tests with the MBS examples are “promising” (I can’t say I understand all the classes needed “just” to retrieve frames yet, but it is working…).
The current implementation of my app needs to be redone (I started somewhat wrong and it’s now hardly maintainable). Reason enough to start a new version, keeping the old one around in the meantime, just in case.

For the moment, I think I’m missing a concept.
In the “Extract video frames” example project (inside the MBS plugin), there’s this block:

[code]// setup track output
// ask for uncompressed 32-bit ARGB pixel buffers
dim d as new Dictionary
d.Value(CVPixelBufferMBS.kCVPixelBufferPixelFormatTypeKey) = CVPixelBufferMBS.kCVPixelFormatType_32ARGB

// attach an output for the video track to the asset reader (ar,
// an AVAssetReaderMBS created earlier in the example)
dim arto as new AVAssetReaderTrackOutputMBS(VideoTrack, d)
ar.addOutput arto
[/code]

So we have to use a track output to retrieve frames?

Depends on what you want.
The track output is there to deliver all frames when playing the video.

The AVAssetImageGeneratorMBS class is better for getting just a few frames.

Looks like even this definition is confusing me. Can’t we just get each picture from a video track without going through an additional “output” track?

I need to process all frames, actually.
Thank you.

The output works asynchronously on a background thread.
When the picture is finished, a Xojo event is triggered.
To get 30 fps, we need background threads to do the work.
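The shape is roughly this (illustrative only: the real subclass and event name are in the “Extract video frames” example project, not the ones below):

[code]// Illustrative only: in a subclass of AVAssetReaderTrackOutputMBS.
// The background thread raises the event for each finished picture,
// so keep the handler short and hand the picture to your own queue.
Sub FrameReceived(p as Picture) // assumed event name
  Frames.Append p // just queue the picture; do the heavy work elsewhere
End Sub
[/code]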

If I get it right, the “output” track stays internal to the framework, and when a new picture is available it is delivered through it; is that it?
Thanks.