Question

I am trying to write an application using the Microsoft in-process speech recognition engine. The application sometimes uses the dictation grammar and sometimes SRGS grammars. As you would expect, I have no problems when using SRGS.

Even though I use one of the best available microphones (a Sennheiser ME3 with an Andrea USB sound card), the recognition results are far from acceptable. My application operates in a specific domain, so certain words and phrases are much more likely to be spoken by a user of the system. My question is: is there any way to use the dictation grammar while also telling the recognizer about the important words in the application's domain? In effect, I want to partially modify the speech recognizer's language model, but only for a list of words and phrases supplied by the developer.


Solution

There are a few options.

  1. If you have a set of unusual words, you can add them using the ISpLexicon interface (or via the Windows Speech Recognition Speech Dictionary); see the first sketch after this list.
  2. Dictation recognition improves dramatically with context. You should call SetDictationContext whenever you update your recognition (and whenever the user moves the caret); see the second sketch after this list.
  3. Finally, you can use the Dictation Resource Kit to define a new dictation grammar. Only do this as a last resort, as it's a very complex process.
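
For option 1, here is a minimal native-SAPI sketch (assuming COM is already initialized) that adds one domain word to the user lexicon. The word "metformin" and its phoneme string are made-up placeholders; substitute your own domain terms.

```cpp
#include <windows.h>
#include <atlbase.h>
#include <sapi.h>
#include <sphelper.h>  // SpCreatePhoneConverter helper from the SAPI SDK

// Adds one domain-specific word to the user lexicon so the dictation
// engine can recognize it. "metformin" is a placeholder domain term;
// the phoneme string uses the SAPI en-US phone set.
HRESULT AddDomainWord()
{
    CComPtr<ISpLexicon> cpLexicon;
    HRESULT hr = cpLexicon.CoCreateInstance(CLSID_SpLexicon);
    if (FAILED(hr)) return hr;

    // Convert a space-separated phoneme string into SAPI phone IDs.
    CComPtr<ISpPhoneConverter> cpPhones;
    LANGID langid = MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_US);
    hr = SpCreatePhoneConverter(langid, NULL, NULL, &cpPhones);
    if (FAILED(hr)) return hr;

    SPPHONEID phoneIds[64] = {0};
    hr = cpPhones->PhoneToId(L"m eh t f ao r m ih n", phoneIds);
    if (FAILED(hr)) return hr;

    // Passing NULL instead of phoneIds also works and lets the engine
    // derive a pronunciation from its letter-to-sound rules.
    return cpLexicon->AddPronunciation(L"metformin", langid,
                                       SPPS_Noun, phoneIds);
}
```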
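
For option 2, the context mechanism goes by different names in different Microsoft speech APIs; the native SAPI call I can vouch for is ISpRecoContext::SetAdaptationData, which feeds recent text to the recognizer's language model. A hedged sketch of that approach, assuming you already hold an ISpRecoContext on your in-process recognizer:

```cpp
#include <windows.h>
#include <sapi.h>
#include <cwchar>

// Feeds the text surrounding the caret to the recognizer so its language
// model can adapt. pContext is an ISpRecoContext you have already created
// on the in-process recognizer; pszSurroundingText is whatever text
// currently sits around the insertion point.
HRESULT UpdateDictationContext(ISpRecoContext *pContext,
                               LPCWSTR pszSurroundingText)
{
    if (!pContext || !pszSurroundingText) return E_POINTER;

    // SetAdaptationData takes the adaptation text and its length
    // in characters.
    return pContext->SetAdaptationData(
        pszSurroundingText,
        static_cast<ULONG>(wcslen(pszSurroundingText)));
}
```

Call this whenever the text around the caret changes, so the language model always reflects what the user is currently dictating.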