Today I played around a bit with speech recognition on OS X. One of the things I did was create a simple bridge between Adobe AIR and the NSSpeechRecognizer API on OS X. This API allows you to set a predefined list of commands and listen for those spoken commands.
The API is quite simple, and so is the native extension API. After including the ANE file, you create an instance of the bridge:
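A minimal sketch of this step. The class and package names here are illustrative assumptions, not the actual API shipped with the ANE:

```actionscript
// Hypothetical package/class name for the ANE bridge.
import com.example.speech.SpeechRecognizerBridge;

var recognizer:SpeechRecognizerBridge = new SpeechRecognizerBridge();
```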
After that, you add a list of valid commands:
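Something along these lines, assuming the bridge exposes a `setCommands()` method taking an array of phrases (the method name is my assumption):

```actionscript
// The commands NSSpeechRecognizer should listen for.
var commands:Array = ["play", "pause", "stop", "next"];

// Hand the command list to the native recognizer.
recognizer.setCommands(commands);
```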
You add an event handler that is triggered when a command is recognized:
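Assuming the ANE dispatches the `CommandRecognizedEvent` mentioned below, and that its event-type constant is named `COMMAND_RECOGNIZED` (the constant name is an assumption), registration would look like this:

```actionscript
// Listen for recognized commands coming back from the native side.
recognizer.addEventListener(CommandRecognizedEvent.COMMAND_RECOGNIZED, onCommandRecognized);
```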
And you start the recognizer:
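Presumably a single call, here named `startListening()` for illustration (the real method name may differ):

```actionscript
// Opens the OS X speech recognition widget and starts listening.
recognizer.startListening();
```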
This opens the OS X speech recognition widget alongside your AIR application. The event handler is triggered when the bridge recognizes one of the commands. The CommandRecognizedEvent object will contain the command that was recognized:
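A sketch of the handler, assuming the event exposes the recognized phrase as a `command` property (the property name is my assumption):

```actionscript
private function onCommandRecognized(event:CommandRecognizedEvent):void
{
    // event.command holds the phrase NSSpeechRecognizer matched.
    trace("Recognized command: " + event.command);
}
```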
Note that this built-in speech recognition engine is quite sensitive to background noise and only recognizes commands spoken in US English.