6. Using Unicode with Libtabula

6.1. A Short History of Unicode

...with a focus on relevance to Libtabula

In the old days, computer operating systems dealt only with 8-bit character sets. That allows just 256 possible characters, which isn’t enough to cover even the modern Western languages combined. Add in all the other languages of the world plus the various symbols people use in writing, and you have a real mess!

Since no standards body held sway over things like international character encoding in the early days of computing, many different character sets were invented. These character sets weren’t even standardized between operating systems, so heaven help you if you needed to move localized Greek text on a DOS box to a Russian Macintosh! The only way we got any international communication done at all was to build standards on top of the common 7-bit ASCII subset. Either people used approximations like a plain “c” instead of the French “ç”, or they invented things like HTML entities (“&ccedil;” in this case) to encode these additional characters using only 7-bit ASCII.

Unicode solves this problem. It encodes every character used for writing in the world, using up to 4 bytes per character. The characters covering the most economically valuable cases fit in 2 bytes each, so many Unicode-aware programs support only this subset, storing characters as 2-byte values rather than paying for 4-byte characters just to cover every rare case. This subset of Unicode is called the Basic Multilingual Plane, or BMP.

Unfortunately, Unicode was invented about two decades too late for Unix and C. Those decades of legacy created an immense inertia preventing a widespread move away from 8-bit characters. MySQL and C++ come out of these older traditions, and so they share the same practical limitations. Libtabula currently doesn’t have any code in it for Unicode conversions; it just passes data along unchanged from the underlying DBMS C API, so you still need to be aware of these underlying issues.

During the development of the Plan 9 operating system (a kind of successor to Unix), Ken Thompson invented the UTF-8 encoding. UTF-8 is a superset of 7-bit ASCII and is compatible with C strings, since it doesn’t use 0 bytes anywhere, as multi-byte Unicode encodings such as UTF-16 do. As a result, many programs that deal in text will cope with UTF-8 data even though they have no explicit support for UTF-8. Thus, when explicit support for Unicode was added in MySQL v4.1, the developers chose to make UTF-8 the native encoding, to preserve backward compatibility with programs that had no Unicode support.

6.2. Unicode on Unixy Systems

Linux and Unix have system-wide UTF-8 support these days. If your operating system is of 2001 or newer vintage, it probably has such support.

On such a system, the terminal I/O code understands UTF-8 encoded data, so your program doesn’t require any special code to correctly display a UTF-8 string. If you aren’t sure whether your system supports UTF-8 natively, just run the simple1 example: if the first item has two high-ASCII characters in place of the “ü” in “Nürnberger Brats”, you know it’s not handling UTF-8.

If your Unix doesn’t support UTF-8 natively, it likely doesn’t support any form of Unicode at all, for the historical reasons I gave above. Therefore, you will have to convert the UTF-8 data to the local 8-bit character set. The standard Unix function iconv() can help here. If your system doesn’t have the iconv() facility, there is a free implementation available from the GNU Project. Another library worth checking out is IBM’s ICU, but it is rather heavyweight; if you just need basic conversions, iconv() should suffice.

6.3. Unicode on Windows

Each Windows API function that takes a string actually comes in two versions. One version supports only 1-byte “ANSI” characters[15], so its name ends in 'A'. The other takes “wide” characters, so its name ends in 'W'. The first Unicode-aware versions of Windows used a 2-byte-per-character subset of Unicode called UCS-2 for these wide strings. Since Windows XP, Windows has used the UTF-16 encoding natively instead; it also takes 2 bytes per character for typical Western text, but unlike UCS-2, it can extend to up to 4 bytes per character to encode the rarer Unicode characters. MessageBox(), for instance, is actually a macro, not a real function: if you define the UNICODE macro when building your program, the MessageBox() macro evaluates to MessageBoxW(); otherwise, to MessageBoxA().

Most open source DBMSes these days use the UTF-8 Unicode encoding by preference but Windows uses UTF-16 instead. Because of that, you probably need to convert data when passing text between Libtabula and the Windows API. Since there’s no point in trying for portability — no other OS I’m aware of natively uses UTF-16 — you might as well use platform-specific functions to do this translation. Libtabula ships with two Visual C++ specific examples showing how to do this in a GUI program.[16]

How you handle Unicode data depends on whether you’re using the native Windows API, or the newer .NET API. First, the native case:

// Convert a C string in UTF-8 format to UTF-16 format.
void ToUCS2(LPTSTR pcOut, int nOutLen, const char* kpcIn)
{
    MultiByteToWideChar(CP_UTF8, 0, kpcIn, -1, pcOut, nOutLen);
}

// Convert a UTF-16 string to a C string in UTF-8 format.
void ToUTF8(char* pcOut, int nOutLen, LPCWSTR kpcIn)
{
    WideCharToMultiByte(CP_UTF8, 0, kpcIn, -1, pcOut, nOutLen, 0, 0);
}

These functions leave out some important error checking, so see examples/vstudio/mfc/mfc_dlg.cpp for the complete version.

If you’re building a .NET application (perhaps because you’re using Windows Forms), it’s better to use the .NET libraries for this:

// Convert a C string in UTF-8 format to a .NET String in UTF-16 format.
String^ ToUCS2(const char* utf8)
{
    return gcnew String(utf8, 0, strlen(utf8), System::Text::Encoding::UTF8);
}

// Convert a .NET String in UTF-16 format to a C string in UTF-8 format.
System::Void ToUTF8(char* pcOut, int nOutLen, String^ sIn)
{
    array<Byte>^ bytes = System::Text::Encoding::UTF8->GetBytes(sIn);
    nOutLen = Math::Min(nOutLen - 1, bytes->Length);
    System::Runtime::InteropServices::Marshal::Copy(bytes, 0,
        IntPtr(pcOut), nOutLen);
    pcOut[nOutLen] = '\0';
}

Unlike the native API versions, these examples are complete, since the .NET platform handles a lot of things behind the scenes for us. We don’t need any error-checking code for such simple routines.

All of this assumes you’re using Windows NT or one of its direct descendants: Windows 2000, Windows XP, Windows Vista, Windows 7, or any “Server” variant of Windows. Windows 95 and its descendants (98, ME, and CE) do not support Unicode. They still have the 'W' APIs for compatibility, but they just smash the data down to 8-bit and call the 'A' version for you.

6.4. For More Information

The Unicode FAQs page has copious information on this complex topic.

For Unix- and UTF-8-specific matters, the UTF-8 and Unicode FAQ for Unix/Linux is a quicker way to find basic information.

[15] A superset of ASCII

[16] The console examples don’t bother with such conversions since console programs are relatively rare on Windows. If you want them to run correctly on Windows, you can run them under the Cygwin MinTTY terminal, as it is natively UTF-8 aware. You don’t need to build the console programs as native Cygwin programs for this to work, just run the ones you built under VC++ in the Cygwin terminal.