Is it possible to convert a string containing "high" Unicode chars to an array of decimal values derived from the UTF-32 ("real") codes?

Stack Overflow: https://stackoverflow.com/questions/14679920

Question

Please look at this script, which operates on a (theoretically possible) string:

<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title></title>
<script src="jquery.js"></script>
<script>
    $(function () {
        $("#click").click(function () {
            var txt = $('#high-unicode').text();
            var codes = '';
            for (var i = 0; i < txt.length; i++) {
                if (i > 0) codes += ',';
                codes += txt.charCodeAt(i);
            }
            alert(codes);
        });
    });
</script>
</head>
<body>
<span id="click">click</span><br />
<span id="high-unicode">&#x1D465;<!-- mathematical italic small x -->&#xF31E0;<!-- some char from Supplementary Private Use Area-A -->A<!-- char A -->&#x108171;<!-- some char from Supplementary Private Use Area-B --></span>
</body>
</html>

Instead of "55349,56421,56204,56800,65,56288,56689", is it possible to get "119909,995808,65,1081713"? I've read more-utf-32-aware-javascript-string as well as "Q: What's the algorithm to convert from UTF-16 to character codes?" and "Q: Isn't there a simpler way to do this?" from unicode.org/faq/utf_bom, but I'm not sure how to apply that information.

Solution

It looks like you have to decode surrogate pairs manually. For example:

function decodeUnicode(str) {
    var r = [], i = 0;
    while (i < str.length) {
        var chr = str.charCodeAt(i++);
        if (chr >= 0xD800 && chr <= 0xDBFF) {
            // lead (high) surrogate: combine it with the next code unit,
            // the trail (low) surrogate, using
            //   code point = 0x10000 + (high - 0xD800) * 0x400 + (low - 0xDC00)
            var low = str.charCodeAt(i++);
            r.push(0x10000 + ((chr - 0xD800) << 10) + (low - 0xDC00));
        } else {
            // ordinary BMP character: the code unit is the code point
            r.push(chr);
        }
    }
    return r;
}
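
For instance, applied to the question's string (written here as a \u-escaped literal for illustration, rather than read from the DOM), it returns the desired code points:

var txt = "\uD835\uDC65\uDB8C\uDDE0A\uDBE0\uDD71"; // same characters as in #high-unicode
alert(decodeUnicode(txt).join(',')); // "119909,995808,65,1081713"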

Complete code: http://jsfiddle.net/twQWU/
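
For what it's worth, if ES2015 string methods are available (they arrived after this question was asked), the same result can be obtained without decoding surrogate pairs by hand, for example with Array.from and String.prototype.codePointAt. A minimal sketch, reusing the question's #high-unicode element:

var txt = $('#high-unicode').text();
// Array.from iterates the string by code points, so each element is a whole
// character; codePointAt(0) then returns its numeric value.
var codes = Array.from(txt, function (ch) { return ch.codePointAt(0); });
alert(codes.join(',')); // "119909,995808,65,1081713"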

Licensed under: CC-BY-SA with attribution