Hi. Does anyone know if there is a way (I’m assuming via declares) to ascertain whether the user’s computer has the Apple Speech Engine actively running/listening? Or is the engine always running?
In relation to this, do most Speech Recognition/Dictation apps (like MacSpeech) use the built-in Apple engine, or are there various speech engines available that get installed with each respective app?
Ultimately I want to be able to determine on my app’s launch whether speech recognition is running and if not tell the user to enable it.
Right. So you’re saying that if it’s 10.8 then it’s available but that doesn’t necessarily mean it’s active and listening? Does the user on 10.8 have to manually enable it via System Preferences or does an app accessing it need to enable it?
If so, then there must be an “active/listening” state and a “non-active/sleeping” state…perhaps the equivalent of the SystemParametersInfo SPI_GETSPEECHRECOGNITION check on Windows, which reports whether the Windows engine is listening. That’s what I need to understand better and determine.
And before 10.8, was there an equivalent OS X Speech Engine? If there was, was it Carbon only? And if so, could it still be accessed via Carbon Declares on a Cocoa target build?
I’m not sure if this is possible, but with NSSpeechRecognizer you can block other applications from recognizing text while yours is running. It will also automatically enable speech recognition on the computer when it is created, so you shouldn’t need to worry about whether the user already has speech recognition enabled.
It would be good to know whether there is a foolproof way to determine whether a speech recognition system (OS X or third party) is actively “listening” for commands…so if anyone knows if this is possible then please advise.
Jason, thanks again for the clarification and all your help!
My understanding is that the NSSpeechSynthesizer and NSSpeechRecognizer Cocoa classes were introduced in OS X 10.3.
If you’re designing an app to use NSSpeechRecognizer, then it’s irrelevant whether it’s running already, or whether another app is also already using it.
You have to ask the NSSpeechRecognizer object to start and stop listening for certain words or phrases.
I’ve put a link to Apple’s developer documentation below, which explains the process. Although you may not understand the Objective-C language, I’m sure you’ll understand the requirements.
Thanks Mark. I’m not actually writing an app that uses these functions directly; rather, I want a feature within my own app that helps a third-party app communicate with my app, because I have a canvas-based text field that speech recognition does not recognize the way it does regular text area controls.
So on my app’s launch I would ideally like to determine whether a speech recognition engine is currently active/listening so I can then enable my “helper” feature…and if it is not active then I can alert the user to enable speech recognition in order to communicate with my app.
This may sound confusing, but the whole point is not to lead a new user of my app to think that I have “integrated” speech recognition, when I just have a feature that assists it. Otherwise a new user may launch the “speech recognition” helper feature and expect it to just start working for them…but it can’t…it needs to be the middle man between two apps.
I kind of understand, but not fully.
You see, the speech recognition system isn’t simply “running” or “not running” system-wide.
A particular application can request the feature with the aforementioned Cocoa classes, but a different application can never know this because of the sandboxing rules.
So I personally cannot see a way for your own app to know what another app is doing, unless the other app is scriptable and also reveals its speech recognition status as a scriptable property.
But because I don’t know of a way, that does not mean it isn’t possible.
Sorry I could not be more help.
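To illustrate the scriptable-property route Mark mentions, here is a sketch of what the querying side could look like, shelling out to `osascript`. Everything specific here is an assumption: the app name (`SpeechHelper`) and its `listening` property are hypothetical stand-ins for whatever property the third-party app actually publishes in its scripting dictionary.

```python
import subprocess
from typing import Optional

def query_scriptable_listening(app_name: str = "SpeechHelper") -> Optional[bool]:
    """Ask a (hypothetical) scriptable app for its speech-recognition status.

    Returns True/False if the app answered, or None if osascript is not
    available, the app is not scriptable, or it has no such property.
    """
    # 'listening' is a made-up property name for illustration only
    script = f'tell application "{app_name}" to get listening'
    try:
        result = subprocess.run(
            ["osascript", "-e", script],
            capture_output=True, text=True, timeout=5,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return None  # not on a Mac, or the app never answered
    if result.returncode != 0:
        return None  # app not scriptable, or no such property
    return result.stdout.strip() == "true"
```

The three-valued return matters: “I don’t know” (None) is different from “not listening” (False), so the caller can fall back to prompting the user instead of showing a wrong state.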
As far as I can tell from the Apple docs, Mark is correct and you will not be able to “listen in” on the speech recognition used by another application.
Although you will need to talk to Norman for confirmation, using the TextInputCanvas plugin from Xojo may allow your canvas-based text field to be scriptable like a standard textfield, because in the eyes of OS X it is just like a standard textfield, only with added canvas capabilities. Good luck figuring this out.
Denise, did you ever get a usable answer to your original question?
I have the same question: namely, how can I tell if the speech commands recognizer is actively listening;
that is, is the Microphone window showing? I’m using OS X 10.11.2. I am able to toggle Listening ON/OFF using an AppleScript, but one needs to know the actual state before toggling.
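One hedged heuristic for the “know the actual state before toggling” problem: when the speakable-commands engine is active, OS X runs a helper process for it. On the systems I’ve looked at that process appears as SpeechRecognitionServer, but treat that name as an assumption and confirm it in Activity Monitor on your own OS version before relying on it. A sketch that checks for it with `pgrep`:

```python
import subprocess

def speech_server_running(process_name: str = "SpeechRecognitionServer") -> bool:
    """Heuristic: report whether the named speech-recognition helper
    process is currently running (assumed to track the Listening state)."""
    # pgrep -x matches the exact process name; exit code 0 means at least
    # one matching process was found
    result = subprocess.run(["pgrep", "-x", process_name],
                            capture_output=True)
    return result.returncode == 0
```

If this reports inactive, you could then fire your AppleScript toggle; if it reports active, skip the toggle so you don’t accidentally turn Listening off.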
A word of warning with Carbon: I have several 32-bit Cocoa applications with some serious bugs (in El Capitan), and after several months of communication I was told unofficially that anything other than 64-bit Cocoa should be considered deprecated and avoided.
Does anyone have a working example of Xojo speech recognition with Sierra and Xcode 8? None of the suggestions here compiles error-free. I am using Xojo Release 3. I’ve already been searching for weeks.