I think the first optimization you could make here would be to have your first call to MultiByteToWideChar start with a buffer instead of a null pointer. Because you specified CP_UTF8, MultiByteToWideChar must walk over the whole string to determine the required length. If the vast majority of your strings fall under some known length, you might consider optimistically allocating a buffer of that size on the stack, and only falling back to dynamic allocation when that fails. That is, move the first branch of your if/else block outside of the if/else.
You might also save some time by calculating the length of the source string once and passing it in explicitly -- that way MultiByteToWideChar doesn't have to do a strlen
every time you call it.
That said, if the rest of your project is C#, you should use the .NET BCL classes designed to do this rather than maintaining a side-by-side C++/CLI assembly for the sole purpose of converting strings. That's what System.Text.Encoding is for.
I doubt any kind of caching data structure you could use here is going to make any significant difference.
Oh, and don't ignore the result of MultiByteToWideChar -- not only should you never cast anything to void, you've got undefined behavior in the event MultiByteToWideChar fails, because you'd be reading from a buffer the call never wrote.