Question

I have written a custom std::basic_streambuf and std::basic_ostream because I want an output stream that I can get a JNI string from in a manner similar to how you can call std::ostringstream::str(). These classes are quite simple.

namespace myns {

class jni_utf16_streambuf : public std::basic_streambuf<char16_t>
{
    JNIEnv * d_env;
    std::vector<char16_t> d_buf;
    virtual int_type overflow(int_type);

public:
    jni_utf16_streambuf(JNIEnv *);
    jstring jstr() const;
};

typedef std::basic_ostream<char16_t, std::char_traits<char16_t>> utf16_ostream;

class jni_utf16_ostream : public utf16_ostream
{
    jni_utf16_streambuf d_buf;

public:
    jni_utf16_ostream(JNIEnv *);
    jstring jstr() const;
};

// ...

} // namespace myns
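
The question does not show the definition of jstr(), but for context, a minimal sketch might hand the buffered code units straight to JNI's NewString (this is an assumption, not the author's code; the cast assumes jchar and char16_t share the same 16-bit representation):

namespace myns {

jstring jni_utf16_streambuf::jstr() const
{
    // pbase()/pptr() delimit the characters written so far; NewString
    // takes UTF-16 code units plus a length, so no conversion is needed.
    return d_env->NewString(
        reinterpret_cast<const jchar *>(pbase()),
        static_cast<jsize>(pptr() - pbase()));
}

} // namespace myns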

In addition, I have made four overloads of operator<<, all in the same namespace:

namespace myns {

// ...

utf16_ostream& operator<<(utf16_ostream&, jstring) throw(std::bad_cast);

utf16_ostream& operator<<(utf16_ostream&, const char *);

utf16_ostream& operator<<(utf16_ostream&, const jni_utf16_string_region&);

jni_utf16_ostream& operator<<(jni_utf16_ostream&, jstring);

// ...

} // namespace myns

The implementation of jni_utf16_streambuf::overflow(int_type) is trivial. It just doubles the buffer width, puts the requested character, and sets the base, put, and end pointers correctly. It is tested and I am quite sure it works.
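
For reference, a minimal sketch of such an overflow() is below. It is not the author's actual code; the doubling policy and the fallback initial size are assumptions.

myns::jni_utf16_streambuf::int_type
myns::jni_utf16_streambuf::overflow(int_type ch)
{
    if (traits_type::eq_int_type(ch, traits_type::eof()))
        return traits_type::not_eof(ch);

    // Remember how many characters are already in the put area, grow the
    // buffer, then re-establish the base, put, and end pointers.
    auto used = pptr() - pbase();
    d_buf.resize(d_buf.empty() ? 16 : d_buf.size() * 2);
    setp(d_buf.data(), d_buf.data() + d_buf.size());
    pbump(static_cast<int>(used));

    // Store the character that triggered the overflow.
    *pptr() = traits_type::to_char_type(ch);
    pbump(1);
    return ch;
}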

The jni_utf16_ostream works fine when inserting Unicode characters. For example, this works and results in the stream containing "hello, world":

myns::jni_utf16_ostream o(env);
o << u"hello, wor" << u'l' << u'd';

My problem is that as soon as I try to insert an integer value, the stream's badbit gets set. For example:

myns::jni_utf16_ostream o(env);
if (o.bad()) throw "bad bit before"; // does not throw
int32_t x(5);
o << x;
if (o.bad()) throw "bad bit after"; // throws :(

I don't understand why this is happening! Is there some other method on std::basic_streambuf that I need to implement?


Solution

It looks like the answer is that char16_t support is only partly implemented in GCC 4.8. The library headers don't install the locale facets needed to format numbers. Here is what the Boost.Locale project says about it:

GNU GCC 4.5/C++0x Status

The GNU C++ compiler provides decent support for C++0x characters; however:

The standard library does not install any std::locale facets for this support, so any attempt to format numbers using char16_t or char32_t streams will simply fail. The standard library is also missing the specializations of the required char16_t/char32_t locale facets, so the "std" backend cannot be built because essential symbols are missing, and the codecvt facet cannot be created either.

Visual Studio 2010 (MSVC10)/C++0x Status

MSVC provides all the required facets; however:

The standard library does not export the std::locale::id instantiations for these facets in its DLL, so it is not usable with the /MD and /MDd compiler flags and requires static linking of the runtime library. Also, char16_t and char32_t are not distinct types but aliases of unsigned short and unsigned int, which contradicts the C++0x requirements, making it impossible to write char16_t/char32_t to a stream and causing multiple faults.
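
One way to confirm this diagnosis from code is sketched below (not part of the original answer; can_format_numbers is a hypothetical helper). Formatted numeric output goes through the std::num_put<char16_t> facet, and when that facet is absent from the stream's locale the inserter fails and sets badbit, which is exactly the symptom above.

#include <locale>

bool can_format_numbers(const myns::utf16_ostream& os)
{
    // Returns false on GCC 4.8's libstdc++, where the char16_t facets
    // are never installed into any locale.
    return std::has_facet<std::num_put<char16_t>>(os.getloc());
}

A common workaround is to format numbers into a char or wchar_t stream (or via Boost.Locale) and convert the result to UTF-16 before inserting it into the jni_utf16_ostream.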

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow