How to Output Unicode Strings on the Windows Console

Solution 1:

The general strategy I/we use in most (cross-platform) applications/projects is: we just use UTF-8 (I mean the real standard) everywhere. We use std::string as the container and interpret everything as UTF-8. We handle all file I/O the same way, i.e. we expect UTF-8 and save UTF-8. If we get a string from somewhere and know that it is not UTF-8, we convert it to UTF-8.

The most common place where we run into the Windows UTF-16 ("WinUTF16") world is filenames. So whenever we handle a filename, we convert the UTF-8 string to UTF-16 before calling the Windows API, and we convert the other way when we enumerate the files in a directory.
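A minimal sketch of what those two boundary conversions can look like (the helper names utf8_to_wide and wide_to_utf8 are placeholders, not our real API), using the Win32 MultiByteToWideChar and WideCharToMultiByte calls:

```cpp
#include <string>
#include <windows.h>

// Convert UTF-8 (used everywhere internally) to the UTF-16 the wide Windows API expects.
std::wstring utf8_to_wide(const std::string& s) {
    if (s.empty()) return std::wstring();
    int len = ::MultiByteToWideChar(CP_UTF8, 0, s.data(), (int)s.size(), nullptr, 0);
    std::wstring w(len, L'\0');
    ::MultiByteToWideChar(CP_UTF8, 0, s.data(), (int)s.size(), &w[0], len);
    return w;
}

// Convert UTF-16 coming back from the Windows API into UTF-8.
std::string wide_to_utf8(const std::wstring& w) {
    if (w.empty()) return std::string();
    int len = ::WideCharToMultiByte(CP_UTF8, 0, w.data(), (int)w.size(),
                                    nullptr, 0, nullptr, nullptr);
    std::string s(len, '\0');
    ::WideCharToMultiByte(CP_UTF8, 0, w.data(), (int)w.size(),
                          &s[0], len, nullptr, nullptr);
    return s;
}

// Usage at the filename boundary (illustrative):
//   std::wstring path = utf8_to_wide(utf8_path);
//   HANDLE h = ::CreateFileW(path.c_str(), GENERIC_READ, FILE_SHARE_READ,
//                            nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
```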

The console isn't really used in our Windows build (there, all console output is redirected into a file). Since we have UTF-8 everywhere, our console output is also UTF-8, which is fine for most modern systems. The Windows console log file therefore contains UTF-8 as well, and most text editors on Windows can read it without problems.

If we used the Windows console more, and if we cared a lot that all special characters are displayed correctly, we would perhaps write an automatic pipe handler installed between the stdout file descriptor and the real stdout, one that uses WriteConsoleW as you have suggested (if there really is no easier way).

If you wonder how to realize such an automatic pipe handler: we have already implemented such a thing for all POSIX-like systems. The code probably doesn't work on Windows as-is, but I think it should be possible to port it. Our current pipe handler is similar to what tee does, i.e. if you do a cout << "Hello" << endl, it is printed both on stdout and into a log file. Look at the code if you are interested in how this is done.
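If you want a rough idea without digging through that code, here is a stripped-down sketch of the tee idea in plain C++ (not our actual handler): a streambuf that forwards everything to both the real stdout and a log file.

```cpp
#include <fstream>
#include <iostream>
#include <streambuf>

// Minimal "tee" streambuf: every character written to it is forwarded to two
// underlying buffers, e.g. the real stdout and a log file.
class tee_streambuf : public std::streambuf {
public:
    tee_streambuf(std::streambuf* a, std::streambuf* b) : a_(a), b_(b) {}

protected:
    int overflow(int c) override {
        if (c == traits_type::eof())
            return traits_type::not_eof(c);
        const char ch = static_cast<char>(c);
        const bool ok = a_->sputc(ch) != traits_type::eof()
                     && b_->sputc(ch) != traits_type::eof();
        return ok ? c : traits_type::eof();
    }
    int sync() override {
        return (a_->pubsync() == 0 && b_->pubsync() == 0) ? 0 : -1;
    }

private:
    std::streambuf* a_;
    std::streambuf* b_;
};

int main() {
    std::ofstream log("console.log", std::ios::binary);  // log file stays plain UTF-8 bytes
    tee_streambuf tee(std::cout.rdbuf(), log.rdbuf());
    std::streambuf* old = std::cout.rdbuf(&tee);          // install the handler on cout

    std::cout << "Hello" << std::endl;                    // shows up on stdout and in console.log

    std::cout.rdbuf(old);                                 // restore before 'tee' goes out of scope
}
```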

Solution 2:

Several points:

  1. One important difference between the Windows WriteConsoleW API and printf is that WriteConsoleW treats the console as a GUI rather than as a text stream. For example, if you use it and redirect the output through a pipe, you will not capture anything (see the sketch after this list).
  2. I would never say that code pages are obsolete. Maybe Windows developers would like them to be, but they never will be. The whole world, apart from the Windows API, uses byte-oriented streams to represent data: XML, HTML, HTTP, Unix, etc. all use encodings, and the most popular and most powerful one is UTF-8. So you may use wide strings internally, but in the external world you'll need something else.

    Even when you print wcout << L"Hello World" << endl, it is converted under the hood into a byte-oriented stream, and on most systems (other than Windows) into UTF-8.

  3. In my personal opinion, Microsoft made a mistake when it changed its API everywhere to wide characters instead of supporting UTF-8 everywhere. Of course you may argue about that. But in fact you have to separate text streams from byte-oriented streams and convert between them.
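A small sketch of point 1 (the write_out helper is made up for illustration): write through WriteConsoleW only when stdout really is a console, and fall back to UTF-8 bytes via WriteFile when it has been redirected, since bytes are the only thing a pipe or file can capture.

```cpp
#include <string>
#include <windows.h>

// Hypothetical helper: send UTF-16 text to stdout, whatever stdout happens to be.
void write_out(const std::wstring& text) {
    if (text.empty()) return;
    HANDLE out = ::GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD mode = 0;
    if (::GetConsoleMode(out, &mode)) {
        // Real console: WriteConsoleW talks to the console directly, bypassing any byte stream.
        DWORD written = 0;
        ::WriteConsoleW(out, text.data(), (DWORD)text.size(), &written, nullptr);
    } else {
        // Redirected to a pipe or file: fall back to a byte-oriented stream, here UTF-8.
        int len = ::WideCharToMultiByte(CP_UTF8, 0, text.data(), (int)text.size(),
                                        nullptr, 0, nullptr, nullptr);
        std::string utf8(len, '\0');
        ::WideCharToMultiByte(CP_UTF8, 0, text.data(), (int)text.size(),
                              &utf8[0], len, nullptr, nullptr);
        DWORD written = 0;
        ::WriteFile(out, utf8.data(), (DWORD)utf8.size(), &written, nullptr);
    }
}
```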

Solution 3:

To answer your first question, you can output Unicode strings to the Windows console using _setmode. Specific details regarding this can be found on Michael Kaplan's blog. By default, the console is not Unicode (UCS-2/UTF-16). It works in an Ansi (locale/code page) manner and must specifically be configured to use Unicode.

Also, you have to change the console font, as the default font only supports Ansi characters. There are some minor exceptions here, such as zero-extended ASCII characters, but printing actual Unicode characters requires the use of _setmode.
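A minimal example of the _setmode approach (assuming the Visual C++ CRT; note that once the stream is in _O_U16TEXT mode you must stick to wide output functions such as wprintf or wcout, otherwise the CRT will assert):

```cpp
#include <fcntl.h>
#include <io.h>
#include <stdio.h>

int main() {
    // Switch stdout to UTF-16 mode. From here on, only wide output
    // functions may be used on this stream.
    _setmode(_fileno(stdout), _O_U16TEXT);
    wprintf(L"Ελληνικά, 日本語, Русский\n");  // actual Unicode characters
    return 0;
}
```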

"In Windows, everything is UTF-16. Regardless of whether you're dealing with the kernel, the graphics subsystem, the filesystem or whatever, you're passing UTF-16 strings. There are no locales or charsets in the Unix sense."

This is not completely true. While the underlying core of Windows does use Unicode, a huge amount of interoperability comes into play that lets Windows interact with a wide variety of software.

Consider Notepad (yes, Notepad is far from a core component, but it gets my point across). Notepad can read files that contain Ansi (your current code page), Unicode or UTF-8 text. You might consider Notepad to be a Unicode application, but that is not entirely accurate.

A better example is drivers. Drivers can be written as either Unicode or Ansi; it really depends on the nature of the interface. To further this point, Microsoft provides the StrSafe library, which was written specifically with kernel-mode drivers in mind and which includes both Unicode and Ansi versions of its routines. Whether a driver is Ansi or Unicode, the Windows kernel must interact with it correctly, regardless of which form it takes.
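As a small illustration of that duality (a sketch, not driver code), StrSafe exposes explicit Ansi and Unicode variants of each routine side by side, plus generic macros that resolve based on the build's character set:

```cpp
#include <windows.h>
#include <strsafe.h>

int main() {
    // StrSafe ships both Ansi (…A) and Unicode (…W) variants of every routine;
    // the generic StringCchCopy macro picks one based on whether UNICODE is defined.
    char    narrow[64];
    wchar_t wide[64];
    StringCchCopyA(narrow, ARRAYSIZE(narrow), "Ansi string");
    StringCchCopyW(wide,   ARRAYSIZE(wide),   L"Unicode string");
    return 0;
}
```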

The further away you get from the core of Windows, the more interoperability comes into play. This includes code pages and locales. You have to remember that not all software is written with Unicode in mind. Visual C++ 2010 still has the ability to build using Ansi, Multi-Byte or Unicode. This includes the use of code pages and locales, which are part of the C/C++ standard.
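For instance, the TCHAR/_T mapping is what lets the same source build as Ansi, Multi-Byte or Unicode; this sketch assumes a Visual C++ project where the chosen character set decides what it compiles to:

```cpp
#include <windows.h>
#include <tchar.h>

// TCHAR and _T() expand to wchar_t and L"..." in a Unicode build, and to
// char and "..." in an Ansi or Multi-Byte build; the MessageBox macro
// likewise resolves to MessageBoxW or MessageBoxA.
int _tmain() {
    TCHAR msg[] = _T("Same source, different character set per build setting");
    ::MessageBox(nullptr, msg, _T("Example"), MB_OK);
    return 0;
}
```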

"However, I think this represents a design flaw in the Windows API."

The following two articles discuss this fairly well:

  • Conventional wisdom is retarded, aka What the @#%&* is _O_U16TEXT?
  • Header files are not retarded, aka What the @#%&* is _O_WTEXT?

"So my questions are: What to do in this situation? And why isn't this problem solved even in Microsoft's own libraries? Both the .NET Framework and the C and C++ libraries seem to adhere to the obsolete codepage model. How would you design the Windows API or an application framework to circumvent this issue?"

On this point, I think you are looking at Windows in hindsight. Unicode did not come first, ASCII did. After ASCII came code pages. After code pages came DBCS. After DBCS came MBCS (and eventually UTF-8). After UTF-8 came Unicode (UTF-16/UCS-2).

Each of these technologies was incorporated into the Windows OS over the years, each building on the last without breaking the others. Software was written with each of them in mind. While it may not always seem like it, Microsoft puts a huge amount of effort into not breaking software it didn't write. Even now, you can write new software that takes advantage of any of these technologies and it will work.

The real answer here is "compatibility". Microsoft still uses these technologies, and so do many other companies. There is an untold number of programs, components and libraries which have not been (and never will be) updated to use Unicode. Even as newer technologies such as .NET arise, the older technologies must stick around, at the very least for interoperability.

For example, say you have a DLL that you need to interact with from .NET, but this DLL was written using Ansi (a single-byte code page, localized). To make it worse, you don't have the source for the DLL. The only answer here is to use those obsolete features.