tl;dr: When an application tries to use AppleScript, the OS produces a “Do you allow this operation?” dialog. That in itself is fine, but it’s not easy for a user to change their mind, so a user clicking “Don’t Allow” is effectively “Don’t Allow” for life.
Worse, I’ve seen reports that for some developers this leads to applications crashing, and for others it disables features, with no indication to the user as to why. Even Xcode uses AppleScript, and accidentally clicking “Don’t Allow” breaks some Xcode features.
My suggestion is to look for APIs to replace AppleScript. There are additional APIs that can help with this; they’re documented in the article and in the header files (not in Apple’s documentation).
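As a sketch of the kind of header-only API meant here: `AEDeterminePermissionToAutomateTarget` (declared in `AppleEvents.h`, macOS 10.14+) lets you ask up front whether your app may send Apple events to a target, instead of discovering a stale “Don’t Allow” mid-feature. The bundle ID and the error handling below are illustrative, not a complete implementation.

```swift
import AppKit

// Minimal sketch (macOS 10.14+): query the Automation consent state for a
// target app without necessarily showing the dialog.
func automationStatus(forBundleID bundleID: String) -> OSStatus {
    let target = NSAppleEventDescriptor(bundleIdentifier: bundleID)
    guard let addressDesc = target.aeDesc else {
        return -600  // procNotFound: no address descriptor available
    }
    // typeWildCard twice means "any event class / any event ID".
    // Pass true for the last argument to show the consent dialog if the
    // user hasn't decided yet; false just reports the current state.
    return AEDeterminePermissionToAutomateTarget(
        addressDesc,
        AEEventClass(typeWildCard),
        AEEventID(typeWildCard),
        false)
}

switch automationStatus(forBundleID: "com.apple.finder") {
case 0:     print("Automation allowed")                      // noErr
case -1743: print("User previously clicked “Don’t Allow”")   // errAEEventNotPermitted
case -600:  print("Target app is not running")               // procNotFound
default:    print("Not yet determined")
}
```

Checking with the ask-user flag set to false is non-intrusive, so an app can degrade gracefully (disable the menu item, explain why) rather than crash or silently lose a feature.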
More nanny mode; unnecessary babysitting. If our app has been approved for the MAS, does this go away? Since Apple has already vetted the app’s operations before approving it, wouldn’t they have flagged any dangerous AppleScript in the app?
When do we start to see things like -
The application has tried to display a dialog window. Do you wish to allow this?
The application has updated the mouse pointer location. Do you wish to allow this?
The application has called the POSIX function sprintf. Do you wish to allow this?
This is turning into the old Windows 95 joke -
“The user has taken a breath. Windows must restart to enable the change.”
This has become one of my beefs with the Apple App Store. In theory, sandboxing isn’t actually needed, because who better to vet an application against known vulnerabilities than the OS developer themselves?
Instead, what we have is an overprotective App Store and zero protection outside. It should be t’other way round if you ask me.
You’re all assuming that all developers are honest, which is unfortunately not true.
Imagine the trouble there would be if an app disguised as a restaurant tip calculator could send all of your contacts, calendar items, reminders and photos wherever and whenever it wanted. Someone on the receiving end would have a very good idea of your lifestyle based on your photos, where you live because of your contacts, and when you’re away because of your calendar.
Apple is leaving it up to the user to make informed choices about what sensitive information is given out, both to protect their users’ privacy and to protect themselves legally if there’s ever a lawsuit.
And it would be very easy to hide illegal tasks while your App is in Review…
I am sure the bolded part is the most important here. Apple tries to protect the user; I truly believe in this. But they know they can’t protect the user from data leaks, and that’s why they must protect Apple.
I also believe that as more and more “non-digital-experienced” people get access to the internet, system developers are forced to create “smart” mechanisms to protect the users.
This is exactly what Sam and I are talking about - such a malicious app should DEFINITELY be caught by the MAS reviewer. They refuse apps for the simplest of things, but your statement posits that they can’t catch a malicious effort in a tested application.
How? They flagged one of mine BECAUSE it phoned home - a simple ping test to see if the app had network connectivity. And the API checker catches things that even Xcode misses in the compile and linker warnings.
In many ways. Give the app a legitimate reason to access the internet, so that the review team doesn’t reject it for accessing the network without a real need. Don’t do the “evil things” within the first x weeks, for example. Encrypt every communication. Apps like “Little Snitch” will still be able to catch the communication, but if a user asks why your app tries to connect to your site, you can say the app does it for service, support, whatever sounds “nice”…
And this is just a very simple approach. If you put more energy and imagination into such tasks, you can trick nearly every system. The hard part is getting your app onto the user’s system with the needed rights, but the App Store helps you get around this.
But this is all just theory, and I hope Apple continues trying to help the user catch and prevent such apps.
It’s the execution of the system that history has proved a failure; just ask Microsoft how it worked out for them with Vista. Heck, even Apple made a joke about it. Here we are almost a decade later, and they’re making the same mistake.
The biggest problem with this is that users get trained by the excessive dialogs to simply click “OK” or “Allow” every time one pops up. Especially when developers also pop up a dialog that reads “hey, my app needs to connect to the internet to check available underpant sizes”, and then you get an Apple one which says something similar.
IMHO, there should be a way for a user to see what permissions an application has, and to have fine-grained control over those permissions. It’s a developer’s nightmare, because for nearly everything you do, you have to check whether you can do it before you do it.
May I present what I believe to be a better solution.
When an application is code-signed, all the executable code is verified (like they do when you submit to the Mac App Store). The signing process builds a list of the functions your application uses that could be used to compromise the user’s “Private” data. This list is then written into the code signature.
When an application is launched, the OS checks its cache to see whether that application has already been run and whether the previous list of privacy options matches this application; if not, it displays the following dialog.
In this dialog the user is presented with all the options at once; they can pick and choose, deny this time, or deny for life. And because this is part of the OS, it also adds a menu item to the application menu, allowing the user to change these options easily and at will, rather than hiding them in System Preferences (which is disconnected in usability, IMHO).
This way, when you first open “Tip Calculator”, you immediately see everything that it’s asking permission to do. Much cleaner than having to click through 6 different dialogs and potentially choosing the wrong option, just to get rid of them. And of course, if you do change your mind, we’ve illustrated to you how to get back here, to change the settings.
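To make the proposal concrete, here is a hypothetical sketch of the launch-time check: compare the capability list baked into the code signature against what the OS last approved, and surface only the differences in the single dialog. Every type and capability name here is invented for illustration; nothing like this exists in the OS today.

```swift
import Foundation

// Hypothetical: the capability list the signing step wrote into the signature.
struct SignedManifest {
    let bundleID: String
    let capabilities: Set<String>   // e.g. "contacts", "photos", "network"
}

// Returns the capabilities that still need consent: anything in the signed
// manifest that the cached, previously approved list doesn't already cover.
func capabilitiesNeedingConsent(_ manifest: SignedManifest,
                                approvedCache: [String: Set<String>]) -> Set<String> {
    let previouslyApproved = approvedCache[manifest.bundleID] ?? []
    return manifest.capabilities.subtracting(previouslyApproved)
}

// First launch of "Tip Calculator" after an update added two capabilities;
// only the new items would go into the one consolidated dialog.
let manifest = SignedManifest(bundleID: "com.example.TipCalculator",
                              capabilities: ["contacts", "photos", "network"])
let cache = ["com.example.TipCalculator": Set(["network"])]
print(capabilitiesNeedingConsent(manifest, approvedCache: cache))
// two items: "contacts" and "photos" (set order unspecified)
```

An unchanged signature hits the cache and shows nothing, which is what would keep this scheme from nagging on every launch.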
I think what Greg is implying is that it’s not a case of them being unable to catch anything malicious; it’s a case of the legal team saying it should be done in a way such that (a) Apple doesn’t have to, and (b) if something gets through, it’s not Apple’s responsibility or, importantly, their fault, and therefore Apple cannot be held liable for distributing a malicious application, which could lead to potential loss of life or damage.
As you and I pointed out there is a reason as to why Microsoft dropped this technique.
Of course, pushing the onus of deciding whether an AppleScript is allowed to execute onto the end user is pretty terrible. It basically means that Apple admits defeat and just doesn’t want to be held responsible. I’d be happy to submit my source code to Apple if that solved the problem. I wonder if maybe they could just allow the scripts to execute if the scripts were external files, so that they could look at the code.