Question

In Mavericks, Apple introduced "Enhanced Dictation" -- the ability to transcribe speech to text locally, in offline mode. Unfortunately, they also introduced another feature: while dictation is active, all sound output is muted. A bit of digging revealed that the "muted" sound is still being played. For example, Audio Hijack captures the sound as it would normally be played and saves it to a file.

I'm making an application that requires sound output during dictation (I'm assuming the user is wearing headphones). It does not look like the volume settings are being changed: querying the master volume level on the headphone device shows that it is the same before and during dictation, and the sound volume indicator in the menu bar does not change either. As far as the rest of the system is concerned, the sound is playing.

I'm a CoreAudio noob. I can do basic things with recording and playback, but not much more. Is it possible to get the "muted" sound back? Is there a switch, flag, or feature in CoreAudio that would let the sound from my application reach the headphones while dictation is active?


Solution

For people who stumble onto this page: I did eventually find an answer. You can disable audio ducking by setting the following user defaults:

defaults write com.apple.SpeechRecognitionCore AllowAudioDucking -bool NO
defaults write com.apple.speech.recognition.AppleSpeechRecognition.prefs DictationIMAllowAudioDucking -bool NO
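To sanity-check that the keys were actually written, or to undo the change later, the standard `defaults read` and `defaults delete` subcommands can be used. A minimal sketch (macOS only; the domains and keys are taken from the commands above, and dictation may need to be toggled off and on before the change takes effect):

```shell
#!/bin/sh
# Disable audio ducking during dictation.
defaults write com.apple.SpeechRecognitionCore AllowAudioDucking -bool NO
defaults write com.apple.speech.recognition.AppleSpeechRecognition.prefs DictationIMAllowAudioDucking -bool NO

# Confirm the values were written (a boolean NO reads back as 0).
defaults read com.apple.SpeechRecognitionCore AllowAudioDucking
defaults read com.apple.speech.recognition.AppleSpeechRecognition.prefs DictationIMAllowAudioDucking

# To restore the stock ducking behavior, remove the keys entirely:
# defaults delete com.apple.SpeechRecognitionCore AllowAudioDucking
# defaults delete com.apple.speech.recognition.AppleSpeechRecognition.prefs DictationIMAllowAudioDucking
```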

See the detailed explanation on YouTube.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow