OK, I finally managed to make sense of Apple's APIs:
- `[NSLocale currentLocale]`: does NOT return the language picked by the user in Settings > General > International; it returns the region code selected on that same screen.
- `[NSLocale preferredLanguages]`: this list DOES give the device language; it is the first string in the list.
- `[[NSBundle mainBundle] preferredLocalizations]`: returns the localization resolved by the application. I guess this is what `NSLocalizedString` uses. It only has one object in my case, but I wonder in which cases it can have more than one.
- `[AVSpeechSynthesisVoice currentLanguageCode]`: returns the system's predefined language code, i.e. the language of the default voice determined by the OS.
- `[AVSpeechSynthesisVoice voiceWithLanguage:]`: this factory method needs a complete language code, with language AND region (e.g. passing @"en" to it will return a nil object; it needs @"en-US", @"en-GB", ...).
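To see the differences side by side, here is a minimal sketch that just logs each value; the values in the comments are my assumptions for a device set to English (United Kingdom), not captured output:

    #import <AVFoundation/AVFoundation.h>

    // Log what each API returns, to compare them side by side
    NSLog(@"currentLocale:          %@", [[NSLocale currentLocale] localeIdentifier]);                  // e.g. "en_GB" (region, not language)
    NSLog(@"preferredLanguages:     %@", [[NSLocale preferredLanguages] firstObject]);                  // e.g. "en" (device language)
    NSLog(@"preferredLocalizations: %@", [[[NSBundle mainBundle] preferredLocalizations] firstObject]); // e.g. "en" (what the app resolved)
    NSLog(@"currentLanguageCode:    %@", [AVSpeechSynthesisVoice currentLanguageCode]);                 // e.g. "en-GB" (default voice language)

    // voiceWithLanguage: needs language AND region
    AVSpeechSynthesisVoice *bare = [AVSpeechSynthesisVoice voiceWithLanguage:@"en"];    // nil
    AVSpeechSynthesisVoice *full = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-GB"]; // a valid voice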
So this is what my final code looks like:
    #import <AVFoundation/AVFoundation.h>

    // Current default voice language (language & region), as resolved by the OS
    NSString *voiceLangCode = [AVSpeechSynthesisVoice currentLanguageCode];
    // Localization actually resolved by the application
    NSString *defaultAppLang = [[[NSBundle mainBundle] preferredLocalizations] firstObject];

    // A nil voice makes the utterance fall back to the default system voice
    AVSpeechSynthesisVoice *voice = nil;

    // Is the default voice language compatible with our application language?
    if ([voiceLangCode rangeOfString:defaultAppLang].location == NSNotFound) {
        // If not, select a voice from the application language
        NSString *pickedVoiceLang = nil;
        if ([defaultAppLang isEqualToString:@"en"]) {
            pickedVoiceLang = @"en-US";
        } else {
            pickedVoiceLang = @"fr-FR";
        }
        voice = [AVSpeechSynthesisVoice voiceWithLanguage:pickedVoiceLang];
    }

    AVSpeechUtterance *mySpeech = [[AVSpeechUtterance alloc] initWithString:NSLocalizedString(@"MY_SPEECH_LOCALIZED_KEY", nil)];
    mySpeech.voice = voice;
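To actually hear the utterance, you still need to hand it to an AVSpeechSynthesizer. A minimal sketch; the synthesizer should be kept alive (e.g. in a property) until speech finishes:

    // Keep a strong reference to the synthesizer (e.g. in a property),
    // otherwise it can be deallocated before it finishes speaking
    AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
    [synthesizer speakUtterance:mySpeech];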
This way, a user from New Zealand, Australia, Great Britain, or Canada will get the voice that best matches their usual settings.
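If you want to match the user's region even more precisely (e.g. hand an Australian user an en-AU voice when one is installed), you could enumerate the installed voices instead of hard-coding en-US; a sketch under that assumption:

    // Pick an installed voice matching the app language, preferring
    // the exact regional variant reported by the OS
    NSString *appLang = [[[NSBundle mainBundle] preferredLocalizations] firstObject]; // e.g. "en"
    NSString *preferredCode = [AVSpeechSynthesisVoice currentLanguageCode];           // e.g. "en-AU"

    AVSpeechSynthesisVoice *bestVoice = nil;
    for (AVSpeechSynthesisVoice *candidate in [AVSpeechSynthesisVoice speechVoices]) {
        if (![candidate.language hasPrefix:appLang]) {
            continue; // different language entirely
        }
        bestVoice = candidate; // any regional variant of the app language
        if ([candidate.language isEqualToString:preferredCode]) {
            break; // exact region match, stop looking
        }
    }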