Why isn't everything we do in Unicode?

Given that Unicode has been around for 18 years, why are there still apps that don't have Unicode support? Even my own experiences with Unicode on some operating systems have been painful, to say the least. As Joel Spolsky pointed out in 2003, it's not that hard. So what's the deal? Why can't we get it together?


Solution 1:

Start with a few questions

How often...

  • do you need to write an application that deals with something other than ASCII?
  • do you need to write a multi-language application?
  • do you write an application that has to be multi-language from its first version?
  • have you heard that Unicode is used to represent non-ASCII characters?
  • have you read that Unicode is a charset? That Unicode is an encoding?
  • do you see people confusing UTF-8-encoded bytestrings and Unicode data?

Do you know the difference between a collation and an encoding?

Where did you first hear of Unicode?

  • At school? (Really?)
  • At work?
  • On a trendy blog?

Have you ever, in your younger days, moved source files from a system in locale A to a system in locale B, fixed a typo on system B, saved the files, b0rked all the non-ASCII comments and... ended up wasting a lot of time trying to understand what happened? (Did your editor mix things up? The compiler? The system? The...?)

Did you end up deciding that never again would you comment your code using non-ASCII characters?

Have a look at what's being done elsewhere

Python

Did I mention on SO that I love Python? No? Well I love Python.

But until Python 3.0, its Unicode support sucked. There were all those rookie programmers, who at that point barely knew how to write a loop, getting UnicodeDecodeError and UnicodeEncodeError out of nowhere as soon as they tried to deal with non-ASCII characters. They basically got traumatized for life by the Unicode monster, and I know a lot of very efficient, experienced Python coders who are still frightened today by the idea of having to deal with Unicode data.
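
A minimal sketch of the kind of surprise those beginners ran into (the strings and variable names are purely illustrative):

    # -*- coding: utf-8 -*-
    # Python 2 sketch: 'str' holds raw bytes, 'unicode' holds text. Mixing
    # the two triggers an implicit decode with the 'ascii' codec.
    menu = 'caf\xc3\xa9'       # UTF-8 bytes, e.g. read from a file
    label = u'Menu: '
    print label + menu         # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3...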

And with Python 3, there is a clear separation between Unicode and bytestrings, but... look at how much trouble it is to port an application from Python 2.x to Python 3.x if you previously didn't care much about that separation, or if you don't really understand what Unicode is.
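
In Python 3 that separation looks roughly like this (a sketch, nothing more):

    # Python 3: text (str) and bytes are distinct types; you encode and
    # decode explicitly at the boundaries of your program.
    text = 'café'
    data = text.encode('utf-8')      # b'caf\xc3\xa9'

    print(len(text), len(data))      # 4 5 -- characters vs. bytes
    print(data.decode('utf-8'))      # café

    # text + data                    # TypeError: mixing the two is an error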

Databases, PHP

Do you know a popular commercial website that stores its international text as Unicode?

You will (perhaps) be surprised to learn that the Wikipedia backend does not store its data using the database's Unicode support. All text is encoded as UTF-8 and stored as binary data in the database.
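
A sketch of that pattern, with sqlite3 standing in for the real backend (the table and column names are made up for the example):

    import sqlite3

    # The application encodes text to UTF-8 bytes and stores it in a plain
    # binary column; the database never sees a "Unicode" type.
    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE page (title BLOB)')
    conn.execute('INSERT INTO page VALUES (?)', ('日本語'.encode('utf-8'),))

    raw = conn.execute('SELECT title FROM page').fetchone()[0]
    print(raw)                   # b'\xe6\x97\xa5\xe6\x9c\xac\xe8\xaa\x9e'
    print(raw.decode('utf-8'))   # 日本語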

One key issue here is how to sort text data if you store it as Unicode codepoints. Enter Unicode collations, which define a sorting order on Unicode codepoints. But proper support for collations in databases is missing or still under active development. (There are probably a lot of performance issues, too. -- IANADBA) Also, there is no widely accepted standard for collations yet: for some languages, people don't agree on how words/letters/word groups should be sorted.
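
To see why raw codepoint order is not enough, here is a sketch (it assumes a German UTF-8 locale is installed on the machine):

    import locale

    words = ['zebra', 'Ärger', 'apple', 'étude']

    # Sorting by raw codepoints pushes accented letters past 'z'.
    print(sorted(words))
    # ['apple', 'zebra', 'Ärger', 'étude']

    # A locale-aware collation orders the same words the way a reader expects.
    locale.setlocale(locale.LC_COLLATE, 'de_DE.UTF-8')
    print(sorted(words, key=locale.strxfrm))
    # ['apple', 'Ärger', 'étude', 'zebra']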

Have you heard of Unicode normalization? (Basically, you should convert your Unicode data to a canonical representation before storing or comparing it.) It is of course critical for database storage and for local comparisons. But PHP, for example, has only provided support for normalization since 5.2.4, which came out in August 2007.
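
A quick illustration of why normalization matters (Python's unicodedata module here, just as an example):

    import unicodedata

    # The same visible text can arrive as two different codepoint sequences:
    composed   = 'caf\u00e9'      # 'é' as one precomposed codepoint
    decomposed = 'cafe\u0301'     # 'e' followed by a combining acute accent

    print(composed == decomposed)                      # False
    print(unicodedata.normalize('NFC', composed) ==
          unicodedata.normalize('NFC', decomposed))    # True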

And in fact, PHP does not completely support Unicode yet. We will have to wait for PHP 6 to get Unicode-compatible functions everywhere.

So, why isn't everything we do in Unicode?

  1. Some people don't need Unicode.
  2. Some people don't care.
  3. Some people don't understand that they will need Unicode support later.
  4. Some people don't understand Unicode.
  5. For some others, Unicode is a bit like accessibility for webapps: you start without it, and plan to add support later.
  6. A lot of popular libraries/languages/applications lack proper, complete Unicode support, not to mention the collation and normalization issues. And until every item in your development stack completely supports Unicode, you can't write a clean Unicode application.

The Internet clearly helps spread the Unicode trend, and that's a good thing. Initiatives like Python 3's breaking changes help educate people about the issue. But we will have to wait patiently a while longer to see Unicode everywhere and new programmers instinctively using Unicode instead of plain strings where it matters.

As an anecdote: because FedEx apparently does not support international addresses, the Google Summer of Code '09 students were all asked by Google to provide an ASCII-only name and address for shipping. If you think that most business actors understand the stakes behind Unicode support, you are simply wrong. FedEx does not understand, and its clients do not really care. Yet.

Solution 2:

  • Many product developers don't consider that their apps might be used in Asia or other regions where Unicode support is a requirement.
  • Converting existing apps to Unicode is expensive and usually driven by sales opportunities.
  • Many companies have products maintained on legacy systems and migrating to Unicode means a totally new development platform.
  • You'd be surprised how many developers don't understand the full implications of Unicode in a multi-language environment. It's not just a case of using wide strings (see the sketch after this list).
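
For instance, a sketch of two problems that a wide string type alone doesn't solve (Python used purely as an illustration):

    # One user-perceived character can span several codepoints, and case
    # rules are language-dependent; "wide characters" fix neither problem.
    flag = '\U0001F1EB\U0001F1F7'     # two regional-indicator codepoints, one visible flag
    print(len(flag))                   # 2

    print('straße'.upper())            # STRASSE -- the string even changes length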

Bottom line - cost.

Solution 3:

Probably because people are used to ASCII and a lot of programming is done by native English speakers.

IMO, it's a function of collective habit, rather than conscious choice.

Solution 4:

The widespread availability of development tools for working with Unicode may be a more recent event than you suppose. Until just a few years ago, working with Unicode was a painful task of converting between character formats and dealing with incomplete or buggy implementations. You say it's not that hard, and as the tools improve that is becoming more true, but there are a lot of ways to trip up unless the details are hidden from you by good languages and libraries. Hell, just cutting and pasting Unicode characters was a questionable proposition only a few years back. Developer education also took some time, and you still see people making a ton of really basic mistakes.

The Unicode standard probably weighs ten pounds in print. Even just an overview of it would have to discuss the subtle distinctions between characters, glyphs, codepoints, and so on. Now think about ASCII: it's 128 characters. I can explain the entire thing to someone who knows binary in about 5 minutes.

I believe that almost all software should be written with full Unicode support these days, but it's been a long road to achieving a truly international character set with encoding to suit a variety of purposes, and it's not over just yet.

Solution 5:

Laziness, ignorance.