The offline voice capabilities of Jelly Bean are handled internally by the Google Search application. No changes have been made to either the RecognizerIntent or SpeechRecognizer API.
This isn't ideal for what you want to achieve, as a dependency on a closed-source application that isn't cross-platform will throw a spanner in the works. Regardless, a simple offline = true parameter is nowhere to be seen, so you'll end up having to coerce this behaviour. I have requested such a parameter, by the way!
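For reference, the standard (network-backed) recognition flow looks like the sketch below. Recognition is delegated to whichever recognizer is installed (in practice, Google Search), and there is no documented extra to force offline-only operation. The REQUEST_SPEECH constant is just an arbitrary request code of my choosing:

```
// Inside an Activity. Kicks off the stock recognition UI.
private static final int REQUEST_SPEECH = 100; // arbitrary request code

private void startRecognition() {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now");
    startActivityForResult(intent, REQUEST_SPEECH);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK) {
        ArrayList<String> results =
                data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
        // results.get(0) is the most confident transcription
    }
}
```

The SpeechRecognizer class offers the same functionality without the UI, but it routes to the same underlying service, so the dependency problem is identical.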
Google handles its wake-up phrase with a dedicated processor core, but it looks unlikely that the manufacturers intend to expose this functionality to anyone other than OEMs.
That leaves alternative recognition providers offering RESTful services, such as iSpeech, AT&T and Nuance. But again, you'll be murdering the battery and using significant data if you take this approach, not to mention the audio conflicts that occur on the Android platform.
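The RESTful approach boils down to shipping every utterance over the network, which is exactly why it hurts battery and data. A minimal sketch, assuming a hypothetical endpoint: the URL, auth header and audio format below are placeholders, as iSpeech, AT&T and Nuance each define their own.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of a cloud recognition round-trip: POST raw audio, read back
// the transcription. Endpoint and auth scheme are illustrative only.
public class RestRecognizer {

    public static String recognize(String endpoint, byte[] audio, String apiKey)
            throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "audio/l16; rate=16000");
        conn.setRequestProperty("Authorization", "Bearer " + apiKey);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(audio); // every utterance travels over the network
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) sb.append(line);
            return sb.toString(); // provider-specific JSON payload
        }
    }
}
```

On top of the radio cost, you'd still need to hold the microphone continuously for a wake-up phrase, which is where the audio-focus conflicts bite.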
Finally, you end up with Sphinx. At present, I consider it the only viable way to lower resource usage, but it doesn't get around the audio conflict issues. I've been working on getting it running within my application for a long time, but I still have major issues with false positives that have stopped me from including it in production.
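For what it's worth, the false-positive rate in PocketSphinx's keyword spotting is largely governed by the detection threshold, which you can tune per phrase in a keyword list file (the phrases and values below are illustrative, not a working configuration). Larger thresholds such as 1e-5 fire more easily; smaller ones such as 1e-50 demand a much closer acoustic match:

```
oh mighty computer /1e-40/
wake up /1e-20/
```

In my experience, tuning this is very much trial and error per phrase and per acoustic model, and I haven't yet found values I'd trust in production.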
It is probably your only option until Google, processor manufacturers and OEMs work out how to offer such functionality without every application installed on the device wanting a piece of the action, which is inevitable.
I'm not sure this response actually provides an answer; it's more that it rules some options out!
Good luck
EDIT: In the wearables space, such products will have access to the dedicated cores; at the very least, they need to make sure they do and use a processor with such capabilities. From my interactions with companies developing such tech, they often overlook this or are unaware of its necessity.