Question

I'm using SimpleDB for my application. Everything works well except that each attribute is limited to 1024 bytes, so for long strings I have to chop them into chunks and save them.

My problem is that sometimes my string contains Unicode characters (Chinese, Japanese, Greek), and Perl's substr() counts characters, not bytes.

I tried the use bytes pragma for byte semantics, and later substr(encode_utf8($str), $start, $length), but neither helped.

Any help would be appreciated.


Solution

UTF-8 was engineered so that character boundaries are easy to detect. To split the string into chunks of valid UTF-8, you can simply use the following:

use Encode qw( encode_utf8 );

my $utf8 = encode_utf8($text);

# Chunks of at most 1024 bytes that never stop right before a UTF-8
# continuation byte, i.e. never split in the middle of a character.
my @utf8_chunks = $utf8 =~ /\G(.{1,1024})(?![\x80-\xBF])/sg;

Then either

# The saving code expects bytes.
store($_) for @utf8_chunks;

or

# The saving code expects decoded text.
store(decode_utf8($_)) for @utf8_chunks;

Demonstration:

$ perl -e'
    use Encode qw( encode_utf8 );

    # This character encodes to three bytes using UTF-8.
    my $text = "\N{U+2660}" x 342;

    my $utf8 = encode_utf8($text);
    my @utf8_chunks = $utf8 =~ /\G(.{1,1024})(?![\x80-\xBF])/sg;

    CORE::say(length($_)) for @utf8_chunks;
'
1023
3

OTHER TIPS

substr operates on bytes unless the string has the UTF-8 flag set. So this gives you the first 1024 bytes of the UTF-8 encoding of $str:

substr encode_utf8($str), 0, 1024;

although it will not necessarily split the string on a character boundary. To discard a partial character at the end of a chunk, you can use:

$str = decode_utf8($str, Encode::FB_QUIET);
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow