Apple M1, Rosetta, Neural Engine & ML accelerators

I’m not even clear on the difference in function and capabilities between the Neural Engine and the machine learning accelerators, so any info is welcome, but the concept got me thinking.

Is it possible that an emulation layer like Rosetta uses these parts of the hardware to speed up certain aspects of the translation process? For example, could the ML accelerators be used to mimic specific x86 functions?

And in general, could one use this AI functionality for more ‘generic’ administrative tasks too? E.g. if I were to write an accounting app, could these specific parts of the M1 be of any use to me at all? Or are they purely for fancy / abstract pattern recognition tricks?

Well, my understanding is that the Neural Engine, besides AR stuff etc., is basically used (on the iThings) to optimize performance and memory handling by learning what the user is doing with the device. For instance, apps used regularly will open faster. So it will probably help Rosetta 2 by having the necessary translation ready sooner for those apps you run regularly.

But I don’t see how the Neural Engine will help with something completely missing, like the x86 architecture. Parallels and co. will have to “emulate” this if they want to bring Windows or any other OS that depends on the x86 architecture up and running.

You can use the Neural Engine via the CoreML classes in the MBS Xojo Plugins.
See MLModelMBS.
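
For reference, MLModelMBS wraps Apple’s native MLModel class, as the name suggests. Here is a minimal sketch of the underlying Core ML flow in Swift; the file name “model.mlmodel” and the feature name “input” are placeholders for whatever your trained model defines:

```swift
import CoreML
import Foundation

// Core ML flow: compile the .mlmodel, load it, run a prediction.
let sourceURL = URL(fileURLWithPath: "model.mlmodel")
let compiledURL = try MLModel.compileModel(at: sourceURL)   // produces a .mlmodelc bundle
let model = try MLModel(contentsOf: compiledURL)

// Inputs are passed as named features; outputs come back the same way.
let input = try MLDictionaryFeatureProvider(dictionary: ["input": 42.0])
let output = try model.prediction(from: input)
print(output.featureNames)
```

Core ML decides at runtime whether to run the model on the CPU, the GPU, or the Neural Engine.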


That’s most likely a good thing, and it’s good that we already have a plugin, but I still wouldn’t know what to use it for. Spam recognition? Picture recognition? Predicting the favorite path where a user wants to store files? Predicting when they prefer dark mode?

I understand it as follows: you present questions to the user for interaction, and in the future the system will predict the right answer? I’m missing the real-world examples for using this plugin. But it is a serious question, no cynicism!

CoreML Example

Maybe like here? Let the computer tell you what is in a picture.

or this blog entry:
Use Machine Learning for detecting porn images
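
Both examples boil down to the same pattern: hand an image to a pre-trained model and read back labels with confidences. A minimal Swift sketch using Vision’s built-in classifier (macOS 10.15+/iOS 13+; the 0.3 cut-off is an arbitrary choice), and the MBS plugins expose these frameworks to Xojo:

```swift
import Vision
import Foundation

// "Tell me what is in this picture" with Vision's bundled classifier;
// no training needed, and it runs on the Neural Engine where available.
func classify(imageURL: URL) throws -> [(label: String, confidence: Float)] {
    let request = VNClassifyImageRequest()
    try VNImageRequestHandler(url: imageURL).perform([request])
    let observations = request.results as? [VNClassificationObservation] ?? []
    return observations
        .filter { $0.confidence > 0.3 }               // drop low-confidence labels
        .map { ($0.identifier, $0.confidence) }
}
```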


Finally, a real-life business case; now I’m all in :-).

I understand it for pictures (that’s basically what we’re helping Google with by answering all those captcha questions), but I would still struggle with all the other examples not involving pictures. I’m sure there are more cases … but using them seems to be beyond me.

Where do you get that idea from???

As the captcha answers must already be known for the captcha to work, that is incorrect.

Well, text-based algorithms are available too, e.g. to detect the language or the mood of a person, maybe even to extract key elements like a date and a location.
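
Apple ships exactly these text tasks with the OS. A minimal Swift sketch using the NaturalLanguage framework for language and sentiment, plus Foundation’s NSDataDetector for dates (the sample sentence is made up; sentiment scoring needs macOS 10.15+/iOS 13+):

```swift
import Foundation
import NaturalLanguage

let text = "I really enjoyed the meeting in Berlin on 3 December 2020."

// 1. Detect the language
let recognizer = NLLanguageRecognizer()
recognizer.processString(text)
print(recognizer.dominantLanguage?.rawValue ?? "unknown")   // "en"

// 2. Mood: a sentiment score from -1.0 (negative) to +1.0 (positive)
let tagger = NLTagger(tagSchemes: [.sentimentScore])
tagger.string = text
let (sentiment, _) = tagger.tag(at: text.startIndex,
                                unit: .paragraph,
                                scheme: .sentimentScore)
print(sentiment?.rawValue ?? "no score")

// 3. Extract key elements such as dates
let detector = try NSDataDetector(types: NSTextCheckingResult.CheckingType.date.rawValue)
for match in detector.matches(in: text, range: NSRange(text.startIndex..., in: text)) {
    if let date = match.date { print("found date:", date) }
}
```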

Another use case for the high-performance neural-network hardware in the A14/M1 chips is improvements to things like the Vision framework for doing fast, real-time analysis of video (sports, fitness monitoring, etc.). This is not about the ML side of training object “models”, but about the raw power to run analysis based on already-trained “models”. What can be done with the A14 (and presumably the M1) is astounding on consumer-priced devices like the new iPhones and iPads, and presumably now M1 machines as well.

The new Vision framework’s ability to do body/hand pose detection, moving-object detection, etc. is pretty amazing at this consumer price point.
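
To make that concrete, a minimal Swift sketch of the body-pose detection mentioned above (macOS 11+/iOS 14+). A single still image is shown; for video you would run the same request on each frame. The joint choice and the 0.3 confidence threshold are arbitrary:

```swift
import Vision
import Foundation

func reportLeftWrist(in imageURL: URL) throws {
    let request = VNDetectHumanBodyPoseRequest()
    try VNImageRequestHandler(url: imageURL).perform([request])
    for case let body as VNHumanBodyPoseObservation in request.results ?? [] {
        // Each detected body yields named joints at normalized (0...1) positions
        let joints = try body.recognizedPoints(.all)
        if let wrist = joints[.leftWrist], wrist.confidence > 0.3 {
            print("left wrist at \(wrist.location)")
        }
    }
}
```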

Not necessarily. Let’s say the captcha needs 3 pictures showing cars, and Google knows which 3 really show cars; that’s enough to let you in. But they might show a fourth picture where they are not sure what it shows. If enough people keep marking this fourth picture, then it is most likely a car as well, and they get further information from us for free.

You might want to browse more deeply into what ML is with the info here:


I’ve read somewhere that that was the main idea behind captchas in the first place: to improve the detection…

@Markus_Winter @Arthur_van_den_Boogaart

Indeed, there are tons of articles about this: https://www.techradar.com/news/captcha-if-you-can-how-youve-been-training-ai-for-years-without-realising-it

@Tim_Jones My emphasis was on: “I’m missing the real-world examples for using this plugin” ;-). I personally see few business cases where my customers would train what is a porn pic and what isn’t :-).

For instance, could this plugin help us in fine-tuning the layout of an app dynamically per user? A few ideas:

  • You have a list of preferences for the user: could we sort that list so that the preferences the user changes often appear at the top?
  • We have tabs in our app. The content of those tabs doesn’t follow a certain logic; it is just what we built for the user. But one user uses the tab at the end of the list all the time. Could the app learn that and change the order of the tabs?

I know that we can solve the above examples with simple counting of accesses (see the sketch below), but they are just examples. I’m still missing how to use this machine learning in real-life Xojo apps.
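
For contrast, that counting baseline really is only a few lines. A sketch in Swift (the logic ports directly to Xojo; all names are hypothetical):

```swift
// Track how often each tab is opened and order tabs by that count.
struct TabUsage {
    private var counts: [String: Int] = [:]

    // Call whenever the user opens a tab
    mutating func recordOpen(of tab: String) {
        counts[tab, default: 0] += 1
    }

    // Most-used tabs first
    func ordered(_ tabs: [String]) -> [String] {
        tabs.sorted { counts[$0, default: 0] > counts[$1, default: 0] }
    }
}

var usage = TabUsage()
usage.recordOpen(of: "Reports")
usage.recordOpen(of: "Reports")
usage.recordOpen(of: "Invoices")
print(usage.ordered(["Invoices", "Reports", "Settings"]))
// ["Reports", "Invoices", "Settings"]
```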

Me too. That’s why I was referencing ‘abstract’ pattern recognition. On the other hand, maybe I can use it to develop a ‘creative’ accounting option. :wink:


Ah - I was a bit too quick on the draw, then.


There is a very nice introduction to ML on the Apple site:

Apple Core ML

And btw, I like that someone (Christian) is already taking care of this. I have several “real life” problems, ranging from predicting stocks to making personal decisions.

I think the biggest problem we have is to get sufficient input data to train that thing.