Question

I'm trying to output non-ASCII characters to the Windows command prompt (CMD), but it isn't working. I didn't write the code below myself; I stitched two pieces together. The code is supposed to convert a code point to UTF-8 and then from UTF-8 to UTF-16 so that it will display correctly on Windows. Here's the code:

// codecvt::in example
#include <iostream>       // std::wcout
#include <locale>         // std::locale, std::codecvt, std::use_facet
#include <string>         // std::wstring
#include <cstring>        // std::strlen
#include <cwchar>         // std::mbstate_t

void GetUnicodeChar(unsigned int code, char chars[5]) {
    if (code <= 0x7F) {
        chars[0] = (char)(code & 0x7F); chars[1] = '\0';
    } else if (code <= 0x7FF) {
        // one continuation byte
        chars[1] = (char)(0x80 | (code & 0x3F)); code >>= 6;
        chars[0] = (char)(0xC0 | (code & 0x1F)); chars[2] = '\0';
    } else if (code <= 0xFFFF) {
        // two continuation bytes
        chars[2] = (char)(0x80 | (code & 0x3F)); code >>= 6;
        chars[1] = (char)(0x80 | (code & 0x3F)); code >>= 6;
        chars[0] = (char)(0xE0 | (code & 0x0F)); chars[3] = '\0';
    } else if (code <= 0x10FFFF) {
        // three continuation bytes
        chars[3] = (char)(0x80 | (code & 0x3F)); code >>= 6;
        chars[2] = (char)(0x80 | (code & 0x3F)); code >>= 6;
        chars[1] = (char)(0x80 | (code & 0x3F)); code >>= 6;
        chars[0] = (char)(0xF0 | (code & 0x07)); chars[4] = '\0';
    } else {
        // out of range: emit U+FFFD, the Unicode replacement character (EF BF BD)
        chars[0] = (char)0xEF; chars[1] = (char)0xBF; chars[2] = (char)0xBD;
        chars[3] = '\0';
    }
}

int main ()
{
  typedef std::codecvt<wchar_t,char,std::mbstate_t> facet_type;

  std::locale mylocale;

  const facet_type& myfacet = std::use_facet<facet_type>(mylocale);

  char mystr[5];
  GetUnicodeChar(225, mystr);                // U+00E1 (á) -> UTF-8 bytes C3 A1

  // prepare objects to be filled by codecvt::in :
  wchar_t pwstr[sizeof(mystr)];              // the destination buffer (one wchar_t per input byte is always enough)
  std::mbstate_t mystate = std::mbstate_t(); // the shift state object
  const char* pc;                            // from_next
  wchar_t* pwc;                              // to_next

  // translate characters (only the bytes actually written, not the whole buffer):
  facet_type::result myresult = myfacet.in (mystate,
      mystr, mystr + std::strlen(mystr), pc,
      pwstr, pwstr + sizeof(pwstr) / sizeof(*pwstr), pwc);

  if ( myresult == facet_type::ok )
  {
    *pwc = L'\0';                            // in() does not write a terminator
    std::wcout << L"Translation successful: ";
    std::wcout << pwstr << std::endl;
  }
  return 0;
}

The problem is that when I pass the number 225 (the decimal code point of the Unicode character á) to the GetUnicodeChar function, the output on OS X is correct: it displays the letter á. On Windows, however, it displays ├í. I thought Windows used UTF-16 internally, which is why I expected this to work, but it doesn't.
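For reference, a quick hex dump (reusing GetUnicodeChar from the listing above) would confirm that the encoder itself emits the right bytes, so the problem lies in how the console interprets them:

// assumes GetUnicodeChar from the listing above
#include <cstdio>

int main()
{
    char buf[5];
    GetUnicodeChar(225, buf);                     // U+00E1 (á)
    for (const char* p = buf; *p != '\0'; ++p)
        std::printf("%02X ", (unsigned char)*p);  // prints: C3 A1
    return 0;
}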


Solution

The ├í you are seeing is your (correct) UTF-8 output, the bytes 0xC3 0xA1, being rendered in the console's OEM code page: in CP437/CP850, 0xC3 is ├ and 0xA1 is í. The console is never told that the stream is Unicode. You'll need to set _O_U16TEXT mode first:

_setmode(_fileno(stdout), _O_U16TEXT);
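Putting it together, a minimal standalone sketch of the fix might look like this; the _setmode call and the <io.h>/<fcntl.h> headers are Microsoft-specific, so they are guarded for Windows here:

#include <iostream>   // std::wcout
#include <cstdio>     // stdout, _fileno
#ifdef _WIN32
#include <io.h>       // _setmode
#include <fcntl.h>    // _O_U16TEXT
#endif

int main()
{
#ifdef _WIN32
    // Tell the CRT that stdout carries UTF-16 wide characters, so the
    // console no longer interprets the output in its OEM code page.
    _setmode(_fileno(stdout), _O_U16TEXT);
#endif
    std::wcout << L"\u00E1" << std::endl;   // should print: á
    return 0;
}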

More info in an old blog entry by Michael Kaplan: http://www.siao2.com/2008/03/18/8306597.aspx
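One caveat worth knowing: once a stream is in _O_U16TEXT mode, write to it only with wide-character functions (std::wcout, wprintf); mixing in narrow output such as printf trips an assertion in debug builds of the CRT.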

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow