Calculating the UTF-8 length of a Java String without actually encoding it
Solution 1:
Here's an implementation based on the UTF-8 specification: code points up to U+007F take 1 byte, up to U+07FF take 2 bytes, the rest of the BMP takes 3 bytes, and supplementary code points (represented in Java as surrogate pairs) take 4 bytes:
public class Utf8LenCounter {
  public static int length(CharSequence sequence) {
    int count = 0;
    for (int i = 0, len = sequence.length(); i < len; i++) {
      char ch = sequence.charAt(i);
      if (ch <= 0x7F) {
        count++;        // ASCII: 1 byte
      } else if (ch <= 0x7FF) {
        count += 2;     // U+0080..U+07FF: 2 bytes
      } else if (Character.isHighSurrogate(ch)) {
        count += 4;     // surrogate pair (supplementary code point): 4 bytes
        ++i;            // skip the low surrogate
      } else {
        count += 3;     // remaining BMP code points: 3 bytes
      }
    }
    return count;
  }
}
This implementation is not tolerant of malformed strings (for example, unpaired surrogates), so for such input its result can differ from what String.getBytes would produce.
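For illustration, here's a small, hypothetical demo (the class name Utf8LenDemo is made up for this sketch) showing that the counter matches String.getBytes for well-formed input but diverges on an unpaired surrogate, assuming getBytes replaces the unmappable char with a single '?':

import java.nio.charset.StandardCharsets;

public class Utf8LenDemo {
  public static void main(String[] args) {
    // Well-formed input: both approaches agree (1 + 2 + 3 + 4 = 10 bytes).
    String ok = "a\u00E9\u4E2D\uD83D\uDE00";
    System.out.println(Utf8LenCounter.length(ok));                   // 10
    System.out.println(ok.getBytes(StandardCharsets.UTF_8).length);  // 10

    // Malformed input (lone high surrogate): the counter assumes a valid pair
    // and adds 4, while getBytes substitutes a one-byte replacement character.
    String bad = "\uD800";
    System.out.println(Utf8LenCounter.length(bad));                  // 4
    System.out.println(bad.getBytes(StandardCharsets.UTF_8).length); // 1
  }
}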
Here's a JUnit 4 test for verification:
import java.nio.charset.Charset;

import org.junit.Assert;
import org.junit.Test;

public class LenCounterTest {
  @Test public void testUtf8Len() {
    Charset utf8 = Charset.forName("UTF-8");
    AllCodepointsIterator iterator = new AllCodepointsIterator();
    while (iterator.hasNext()) {
      String test = new String(Character.toChars(iterator.next()));
      Assert.assertEquals(test.getBytes(utf8).length,
                          Utf8LenCounter.length(test));
    }
  }

  // Iterates over every defined Unicode code point, skipping the surrogate range.
  private static class AllCodepointsIterator {
    private static final int MAX = 0x10FFFF; // see http://unicode.org/glossary/
    private static final int SURROGATE_FIRST = 0xD800;
    private static final int SURROGATE_LAST = 0xDFFF;
    private int codepoint = 0;

    public boolean hasNext() { return codepoint < MAX; }

    public int next() {
      int ret = codepoint;
      codepoint = next(codepoint);
      return ret;
    }

    private int next(int codepoint) {
      while (codepoint++ < MAX) {
        if (codepoint == SURROGATE_FIRST) { codepoint = SURROGATE_LAST + 1; }
        if (!Character.isDefined(codepoint)) { continue; }
        return codepoint;
      }
      return MAX;
    }
  }
}
Solution 2:
Using Guava's Utf8:
Utf8.encodedLength("some really long string")
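A minimal, self-contained sketch, assuming Guava is on the classpath (Utf8 lives in com.google.common.base; the wrapper class GuavaUtf8LenDemo is just for this sketch). As I recall, encodedLength also rejects unpaired surrogates with an IllegalArgumentException rather than counting them:

import com.google.common.base.Utf8;

public class GuavaUtf8LenDemo {
  public static void main(String[] args) {
    // Computes the UTF-8 byte length without allocating an encoded byte[].
    System.out.println(Utf8.encodedLength("some really long string")); // 23 (all ASCII)
  }
}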