Question

Why does this code:

char a[10]; 
wchar_t w[10] = L"ä"; // German a Umlaut
int e = wcstombs(a, w, 10);

return e == -1?

I am using Oracle Solaris Studio 10 on Solaris 11. The locale is Latin-1, which contains the German Umlauts. All docs I have found indicate (to me) that the conversion should succeed.

If I do this:

char a[10] = "ä"; // German a Umlaut
wchar_t w[10];
int e = mbstowcs(w, a, 10);
e = wcstombs(a, w, 10);

there is no error, but the result is wrong. (Some variant of upper A.)

I also tried wstostr with similar result.


Solution

1) Verify that the correct value is getting into the wchar_t array. The compiler producing the wide string literal has to convert L"ä" from the source code encoding to the wide execution character set.

2) Verify that the program's locale is correct. You can do this with printf("%s\n", setlocale(LC_ALL, NULL));

I suspect that the problem is 1), because on my machine I get the expected output even when the program's locale isn't set correctly. To avoid problems with the source code encoding you can escape non-ASCII characters, e.g. L"\x00E4".

#include <cstdio>
#include <cstdlib>
#include <clocale>

int main () {
  std::printf("%s\n", std::setlocale(LC_ALL, NULL));   // prints "C"

  char a[10];
  wchar_t w[10] = L"\x00E4"; // German a Umlaut
  std::printf("0x%04x\n", (unsigned)w[0]);             // prints "0x00e4"

  std::setlocale(LC_ALL, "");
  std::printf("%s\n", std::setlocale(LC_ALL, NULL));   // prints something indicating the encoding is ISO 8859-1
  int e = std::wcstombs(a, w, 10);
  std::printf("%i 0x%02x\n", e, (unsigned char)a[0]);  // prints "1 0xe4"
}



Character Sets in C and C++ Programs

In your source code you can use any character from the 'source character set', which is a superset of the 'basic source character set'. The compiler will convert characters in string and character literals from the source character set into the execution character set (or wide execution character set for wide string and character literals).

The issue is that the source character set is implementation dependent. Typically the compiler simply has to know what encoding you use for the source code and then it will accept any characters from that encoding. GCC has command line arguments for setting the source encoding, Visual Studio will assume that the source is in the user's codepage unless it detects one of the so-called Unicode signatures for UTF-8 or UTF-16, and Clang currently always uses UTF-8.

Once the compiler is using the right source character set for your code it will then produce string and character literals in the 'execution character set'. The execution character set is another superset of the basic source character set, and is also implementation dependent. GCC takes a command line argument to set the execution character set, VS uses the user's locale, and Clang uses UTF-8.

Because the source character set is implementation dependent, the portable way to write characters outside the basic set is either to use hex escapes to specify the numeric values to be used in execution directly, or (unless you're limited to C89/90) to use universal character names (UCNs), which are converted to the execution character set (or the wide execution character set when used in wide string and character literals). UCNs look like \uNNNN or \UNNNNNNNN and name the character from the Unicode character set with code point value NNNN or NNNNNNNN. (Note that C99 and C++11 prohibit surrogate code points; if you want a character from outside the BMP, just write the character's value directly using \U.)
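
For example, a minimal sketch comparing the two portable spellings; the printed values assume a system whose wide execution character set is Unicode- or Latin-1-compatible, which is the common case but not guaranteed:

#include <cstdio>

int main () {
  // Both literals name U+00E4 (ä): \x gives the numeric value to use
  // directly, \u names the Unicode code point, which the compiler
  // converts to the wide execution character set.
  wchar_t hex[10] = L"\x00E4";
  wchar_t ucn[10] = L"\u00E4";
  std::printf("0x%04x 0x%04x\n", (unsigned)hex[0], (unsigned)ucn[0]);  // typically "0x00e4 0x00e4"
}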

The source and execution character sets are determined at compile time and do not change based on the locale of the system running the program. That is, the locale in effect when the program runs may use an encoding that does not match the execution character set. The wide execution character set, however, should correspond to the wide character encoding used by the locales the implementation supports.
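
A small sketch of that distinction: the byte in the literal below is fixed when the program is compiled, and calling setlocale() afterwards does not change it; the locale only affects how conversion functions such as mbstowcs and wcstombs interpret that byte at run time.

#include <cstdio>
#include <clocale>

int main () {
  const char *s = "\xE4";            // one byte, chosen at compile time
  std::setlocale(LC_ALL, "");        // run-time locale change
  // Still the same byte; whether it is a valid character now depends on
  // whether the current locale treats 0xE4 as Latin-1, UTF-8, etc.
  std::printf("0x%02x\n", (unsigned char)s[0]);  // prints "0xe4"
}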


Solaris Studio's behavior

Oracle's compiler for Solaris has very simple behavior. For narrow string and character literals no particular source encoding is specified, bytes from the source code are simply used directly as the execution literal. This effectively means that the execution character set is the same as the encoding of the source files. For wide character literals the source bytes are converted using the system locale. This means that you have to save the source file using the locale encoding in order to get correct wide literals.

I suspect that your source code is being saved in an encoding other than the one specified by the locale, so your compiler is failing to produce the correct wide string literal from L"ä". Your editor might be using UTF-8, for example. You can check with the following program.

#include <cstdio>

int main () {
  wchar_t w[10] = L"ä"; // German a Umlaut
  std::printf("0x%04x 0x%04x\n", (unsigned)w[0], (unsigned)w[1]);
}

Since wcstombs can correctly convert the wide character 0x00E4 to the Latin-1 encoding of 'ä', you want the above to display 0x00e4 0x0000. If the source code encoding is UTF-8 you should instead see 0x00c3 0x00a4.
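
The narrow side of the same check is a small sketch like the one below: since this compiler copies the source bytes of a narrow literal directly, a UTF-8 source file should show the two bytes 0xc3 0xa4, while a Latin-1 source file should show 0xe4 followed by 0x00.

#include <cstdio>

int main () {
  char a[10] = "ä"; // German a Umlaut
  std::printf("0x%02x 0x%02x\n", (unsigned char)a[0], (unsigned char)a[1]);
}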

OTHER TIPS

You may have to set the locale so that the C library understands German. Specifically, it is the LC_CTYPE category (the ctype facet) that matters.

Try this:

setlocale( LC_ALL, ".1252" );

or specifically this:

setlocale( LC_CTYPE, ".1252" );

You may have to search for a better codepage than ".1252". Good luck.

The codepage examples above are Windows-specific. On Unix-like systems, try a locale name such as "de_DE" instead.
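
A minimal sketch of that suggestion. The locale name here is an assumption and differs between systems ("de_DE", "de_DE.ISO8859-1", "de_DE.UTF-8", or a Windows-style name); check the output of "locale -a" to see what is actually installed.

#include <cstdio>
#include <cstdlib>
#include <clocale>

int main () {
  // Hypothetical locale name; replace with one your system provides.
  if (std::setlocale(LC_CTYPE, "de_DE") == NULL) {
    std::printf("locale not available\n");
    return 1;
  }
  char a[10];
  wchar_t w[10] = L"\x00E4"; // German a Umlaut
  int e = std::wcstombs(a, w, sizeof a);
  std::printf("%i 0x%02x\n", e, (unsigned char)a[0]);
}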

Licensed under: CC-BY-SA with attribution