UTF-8 (8-bit UCS/Unicode Transformation Format) is a variable-length character encoding for Unicode. It is able to represent any character in the Unicode standard, yet the initial encoding of byte codes and character assignments for UTF-8 is backwards compatible with ASCII. For these reasons, it is steadily becoming the preferred encoding for e-mail, web pages, and other places where characters are stored or streamed.
The Internet Engineering Task Force (IETF) requires all Internet protocols to identify the encoding used for character data, and the supported character encodings must include UTF-8. The Internet Mail Consortium (IMC) recommends that all email programs be able to display and create mail using UTF-8.
By early 1992 the search was on for a good byte-stream encoding of multi-byte character sets. The draft ISO 10646 standard contained a non-required annex called UTF that provided a byte-stream encoding of its 32-bit characters. This encoding was not satisfactory on performance grounds, but did introduce the notion that bytes in the ASCII range of 0–127 represent themselves in UTF, thereby providing backward compatibility.
In July 1992 the X/Open committee XoJIG was looking for a better encoding. Dave Prosser of Unix System Laboratories submitted a proposal for one that had faster implementation characteristics and introduced the improvement that 7-bit ASCII characters would only represent themselves; all multibyte sequences would include only 8-bit characters, i.e., those where the high bit was set.
In August 1992 this proposal was circulated by an IBM X/Open representative to interested parties. Ken Thompson of the Plan 9 operating system group at Bell Laboratories then made a crucial modification to the encoding to allow it to be self-synchronizing, meaning that it was not necessary to read from the beginning of the string in order to find character boundaries. Thompson's design was outlined on September 2, 1992, on a placemat in a New Jersey diner with Rob Pike. Over the following days, Pike and Thompson implemented it and updated Plan 9 to use it throughout, and then communicated their success back to X/Open.
The bits of a Unicode character are distributed into the lower bit positions inside the UTF-8 bytes, with the lowest bit going into the last bit of the last byte:
|Code point range||Byte 1||Byte 2||Byte 3||Byte 4|
|U+0000–U+007F||0xxxxxxx|| || || |
|U+0080–U+07FF||110xxxxx||10xxxxxx|| || |
|U+0800–U+FFFF||1110xxxx||10xxxxxx||10xxxxxx|| |
|U+10000–U+10FFFF||11110xxx||10xxxxxx||10xxxxxx||10xxxxxx|
So the first 128 characters (US-ASCII) need one byte. The next 1920 characters need two bytes to encode. This includes Latin letters with diacritics and characters from Greek, Cyrillic, Coptic, Armenian, Hebrew, Arabic, Syriac and Thaana alphabets. Three bytes are needed for the rest of the Basic Multilingual Plane (which contains virtually all characters in common use). Four bytes are needed for characters in the other planes of Unicode, which are rarely used in practice.
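The layout above translates directly into bit operations. The following is a minimal illustrative sketch (in Python; not part of the article or of any standard) of encoding a single code point according to this pattern:

```python
def encode_utf8(cp: int) -> bytes:
    """Encode one Unicode scalar value as UTF-8, following the
    bit-distribution pattern shown in the table above."""
    if cp < 0x80:                       # 0xxxxxxx
        return bytes([cp])
    if cp < 0x800:                      # 110xxxxx 10xxxxxx
        return bytes([0xC0 | (cp >> 6),
                      0x80 | (cp & 0x3F)])
    if cp < 0x10000:                    # 1110xxxx 10xxxxxx 10xxxxxx
        return bytes([0xE0 | (cp >> 12),
                      0x80 | ((cp >> 6) & 0x3F),
                      0x80 | (cp & 0x3F)])
    if cp <= 0x10FFFF:                  # 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
        return bytes([0xF0 | (cp >> 18),
                      0x80 | ((cp >> 12) & 0x3F),
                      0x80 | ((cp >> 6) & 0x3F),
                      0x80 | (cp & 0x3F)])
    raise ValueError("beyond U+10FFFF (the limit discussed below)")

assert encode_utf8(0x0041) == b"\x41"               # 'A', 1 byte
assert encode_utf8(0x00E9) == b"\xC3\xA9"           # 'é', 2 bytes
assert encode_utf8(0x20AC) == b"\xE2\x82\xAC"       # '€', 3 bytes
assert encode_utf8(0x10348) == b"\xF0\x90\x8D\x88"  # 4 bytes
```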
By continuing the pattern given above it is possible to deal with much larger numbers. The original specification allowed for sequences of up to six bytes covering numbers up to 31 bits (the original limit of the universal character set). However, in November 2003 UTF-8 was restricted by RFC 3629 to use only the area covered by the formal Unicode definition, U+0000 to U+10FFFF.
With these restrictions, bytes in a UTF-8 sequence have the following meanings (values described below as overlong, restricted, or invalid can never appear in a legal UTF-8 sequence):
|Binary||Hex||Decimal||Meaning|
|00000000-01111111||00-7F||0-127||US-ASCII (single byte)|
|10000000-10111111||80-BF||128-191||Second, third, or fourth byte of a multi-byte sequence|
|11000000-11000001||C0-C1||192-193||Overlong encoding: start of a 2-byte sequence, but code point <= 127|
|11000010-11011111||C2-DF||194-223||Start of 2-byte sequence|
|11100000-11101111||E0-EF||224-239||Start of 3-byte sequence|
|11110000-11110100||F0-F4||240-244||Start of 4-byte sequence|
|11110101-11110111||F5-F7||245-247||Restricted by RFC 3629: start of a 4-byte sequence for a code point above U+10FFFF|
|11111000-11111011||F8-FB||248-251||Restricted by RFC 3629: start of 5-byte sequence|
|11111100-11111101||FC-FD||252-253||Restricted by RFC 3629: start of 6-byte sequence|
|11111110-11111111||FE-FF||254-255||Invalid: not defined by original UTF-8 specification|
Unicode also disallows the 2048 code points U+D800..U+DFFF (the UTF-16 surrogate pairs) and also the 32 code points U+FDD0..U+FDEF (noncharacters) and all 34 code points of the form U+xxFFFE and U+xxFFFF (more noncharacters). See Table 3-7 in the Unicode 5.0 standard. UTF-8 reliably transforms these values, but they are not valid scalar values in Unicode, and thus the UTF-8 encodings of them may be considered invalid sequences.
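A byte-level validity check following the table and the restrictions above might look like the following sketch (an illustration, not a reference implementation):

```python
def is_valid_utf8(data: bytes) -> bool:
    """Structural validity check following the byte table above.
    A sketch only: it rejects overlong forms, surrogates and values
    above U+10FFFF, but does not reject the noncharacters mentioned
    in the note above, and does not report where an error occurs."""
    i, n = 0, len(data)
    while i < n:
        b = data[i]
        if b <= 0x7F:                       # US-ASCII, single byte
            i += 1
            continue
        if 0xC2 <= b <= 0xDF:
            length, lo = 2, 0x80
        elif 0xE0 <= b <= 0xEF:
            length, lo = 3, 0x800
        elif 0xF0 <= b <= 0xF4:
            length, lo = 4, 0x10000
        else:
            return False                    # C0, C1 and F5-FF never appear
        if i + length > n:
            return False                    # truncated sequence
        cp = b & (0x7F >> length)           # payload bits of the lead byte
        for trail in data[i + 1:i + length]:
            if not 0x80 <= trail <= 0xBF:   # continuation bytes are 10xxxxxx
                return False
            cp = (cp << 6) | (trail & 0x3F)
        if cp < lo or cp > 0x10FFFF:
            return False                    # overlong form, or above U+10FFFF
        if 0xD800 <= cp <= 0xDFFF:
            return False                    # UTF-16 surrogate range
        i += length
    return True

assert is_valid_utf8("καλημέρα".encode("utf-8"))
assert not is_valid_utf8(b"\xC0\x80")       # overlong encoding of U+0000
assert not is_valid_utf8(b"\xED\xA0\x80")   # encoded surrogate U+D800
```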
The official name is "UTF-8", and it is used in all documents relating to the encoding. There are many instances, particularly for documents transmitted across the Internet, where the character set of a document is declared by name near its start. In this case, the correct name to use is "UTF-8". In addition, all standards conforming to the Internet Assigned Numbers Authority (IANA) list, which include CSS, HTML, XML, and HTTP headers, may also use the name "utf-8", as the declaration is case insensitive. Despite this, alternative forms such as "utf8" or "UTF8" are sometimes seen; while these are incorrect and should be avoided, most agents such as browsers understand them.
 Rationale behind UTF-8's design
UTF-8 was designed to satisfy the following properties:
- ASCII characters are represented by themselves as single bytes that do not appear anywhere else
- This makes UTF-8 work with the large number of existing APIs that take byte strings but only treat a small number of ASCII codes specially. This removes the need to write a UTF-8 version of an API. More importantly, it removes the need to identify whether the text is ASCII or UTF-8, which makes it enormously easier to convert an existing system to UTF-8 than to any other Unicode encoding.
- No first byte can appear as a later byte in a character
- This makes UTF-8 "self-synchronizing". If one or more complete bytes are lost due to error or corruption, one can always locate the beginning of the next character and thus limit the damage.
- The first byte of a character determines the number of bytes
- This guarantees that no byte sequence of one character is contained within a longer byte sequence that is not that character. This ensures that byte-wise sub-string matching can be applied to search for words or phrases within a text; some older variable-length encodings (such as Shift-JIS) did not have this property and thus made string-matching algorithms rather complicated.
- The sequences 0xfe,0xff and 0xff,0xfe do not appear
- This means a UTF-8 stream never matches the UTF-16 byte-order mark and thus cannot be confused with it (this property may not have been intended but is quite useful in practice).
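The self-synchronizing property mentioned above can be illustrated with a short sketch: a decoder that lands on an arbitrary byte offset can back up to the nearest character boundary simply by skipping continuation bytes (the function name here is illustrative only):

```python
def character_start(buf: bytes, i: int) -> int:
    """From an arbitrary byte offset, back up over continuation bytes
    (10xxxxxx) until a byte that can start a character is reached."""
    while i > 0 and 0x80 <= buf[i] <= 0xBF:
        i -= 1
    return i

buf = "héllo".encode("utf-8")        # 68 C3 A9 6C 6C 6F
assert character_start(buf, 2) == 1  # byte 2 (0xA9) is a trail byte; 'é' starts at offset 1
```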
These properties add redundancy to UTF-8-encoded text. Redundancy makes it very unlikely that a random sequence of bytes will validate as UTF-8. The lack of such a practical validity test is what leads to errors like mojibake in Shift-JIS and ISO-8859-1, and is why UTF-16 requires a byte-order mark. The chance of a random sequence of bytes being valid UTF-8 and not pure ASCII is 3.9% for a 2-byte sequence, 0.41% for a 3-byte sequence and 0.026% for a 4-byte sequence. While natural languages encoded in traditional encodings are not random byte sequences, they are even less likely to pass a UTF-8 validity test and then be misinterpreted. For example, for ISO-8859-1 text to be mis-recognized as UTF-8, its only non-ASCII characters would have to occur in sequences starting with an accented letter or the multiplication symbol and ending with a symbol. (Pure ASCII text would pass a UTF-8 validity test, but it is valid UTF-8 by definition.)
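This redundancy is what makes the common "try UTF-8 first, then fall back" heuristic work in practice. A minimal sketch follows; the choice of ISO-8859-1 as the fallback is an assumption for illustration:

```python
def bytes_to_text(raw: bytes) -> str:
    """Try strict UTF-8 first; fall back to a legacy encoding only if
    the data does not validate (ISO-8859-1 here is an arbitrary choice)."""
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode("iso-8859-1")
```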
Redundancy also means that UTF-8 text does not use memory as efficiently as possible. However, data compression is not one of the aims of the Unicode encodings. A modern compression algorithm such as the one used by gzip will compress any encoding of Unicode to about the same size and (if the source text is more than a few hundred characters) to less than one byte per character. For short items of text where traditional algorithms do not perform well and size is important, the Standard Compression Scheme for Unicode could be considered instead. Size arguments (both for and against UTF-8) are listed in the advantages and disadvantages below.
 UTF-8 derivations
The following encodings differ slightly from the UTF-8 specification and are therefore incompatible with it.
 CESU-8
Many pieces of software added UTF-8 conversions for UCS-2 data and did not alter their UTF-8 conversion when UCS-2 was replaced with the surrogate-pair supporting UTF-16. The result is that each half of a UTF-16 surrogate pair is encoded as its own 3-byte UTF-8 encoding, resulting in 6 bytes rather than 4 for characters outside the Basic Multilingual Plane. Oracle databases use this, as well as Java and Tcl as described below, and probably a great deal of other Windows software where the programmers were unaware of the complexities of UTF-16. Although most usage is by accident, a supposed benefit is that this preserves UTF-16 binary sorting order when CESU-8 is binary sorted.
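The effect can be illustrated with a small sketch that converts text to UTF-16 code units and then encodes every code unit (including each half of a surrogate pair) as if it were a BMP character. This is illustrative only, not a reference CESU-8 implementation:

```python
def cesu8_encode(text: str) -> bytes:
    """Illustrative sketch: convert the text to UTF-16 code units, then
    encode each unit, including each surrogate half, as its own UTF-8-style
    sequence."""
    out = bytearray()
    for ch in text:
        cp = ord(ch)
        units = [cp]
        if cp > 0xFFFF:                        # split into a UTF-16 surrogate pair
            cp -= 0x10000
            units = [0xD800 | (cp >> 10), 0xDC00 | (cp & 0x3FF)]
        for u in units:
            if u < 0x80:
                out.append(u)                  # ASCII stays a single byte
            elif u < 0x800:
                out += bytes([0xC0 | u >> 6, 0x80 | u & 0x3F])
            else:                              # 3 bytes, even for surrogate halves
                out += bytes([0xE0 | u >> 12,
                              0x80 | (u >> 6) & 0x3F,
                              0x80 | u & 0x3F])
    return bytes(out)

# U+10400 becomes 6 bytes (ED A0 81 ED B0 80) instead of the 4-byte UTF-8 form F0 90 90 80.
assert cesu8_encode("\U00010400") == b"\xED\xA0\x81\xED\xB0\x80"
```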
 Modified UTF-8
In Modified UTF-8 the null character (U+0000) is encoded as 0xc0,0x80 rather than 0x00. (0xc0,0x80 is not valid UTF-8 because it is not the shortest possible representation.) This means that the encoding of an array of Unicode characters containing the null character will not have a null byte in it, and thus will not be truncated if processed in a language such as C using traditional ASCIIZ string functions.
All known Modified UTF-8 implementations also treat the surrogate pairs as in CESU-8.
In normal usage, the Java programming language supports standard UTF-8 when reading and writing strings through InputStreamReader and OutputStreamWriter. However, it uses Modified UTF-8 for object serialization, for the Java Native Interface, and for embedding constants in class files. Tcl also uses the same Modified UTF-8 as Java for the internal representation of Unicode data.
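A simplified sketch of the null-byte handling described above (it deliberately ignores the CESU-8 surrogate treatment that real Modified UTF-8 also applies):

```python
def modified_utf8_encode(text: str) -> bytes:
    """Encode U+0000 as the two-byte overlong form C0 80 so the output
    contains no NUL byte; other characters are encoded as standard UTF-8
    here for brevity (real Modified UTF-8 also encodes supplementary
    characters via surrogate pairs, as in CESU-8)."""
    out = bytearray()
    for ch in text:
        out += b"\xC0\x80" if ch == "\x00" else ch.encode("utf-8")
    return bytes(out)

assert b"\x00" not in modified_utf8_encode("a\x00b")  # safe for C-style string functions
```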
 Byte-order mark
Many Windows programs (including Windows Notepad) add the bytes 0xEF,0xBB,0xBF at the start of any document saved as UTF-8. This is the UTF-8 encoding of the Unicode byte-order mark. This causes interoperability problems with software that does not expect the BOM. In particular:
- It removes the desirable feature that UTF-8 is identical to ASCII for ASCII-only text. For instance a text editor that does not recognize UTF-8 will display "ï»¿" at the start of the document, even if the UTF-8 contains only ASCII and would otherwise display correctly.
- Programs that identify file types by special leading characters will fail to identify the UTF-8 files, even for file types that can otherwise contain UTF-8. A notable example is the Unix shebang syntax.
Some Windows software (including Notepad) will sometimes misidentify UTF-8 (and thus plain ASCII) documents as UTF-16LE if this BOM is missing, a bug commonly known as "Bush hid the facts" after a particular phrase that can trigger it.
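Software that wants to accept such files typically strips the three BOM bytes before further processing; a minimal sketch (the function name is illustrative):

```python
UTF8_BOM = b"\xEF\xBB\xBF"

def strip_utf8_bom(raw: bytes) -> bytes:
    """Drop a leading UTF-8-encoded byte-order mark, if one is present."""
    return raw[len(UTF8_BOM):] if raw.startswith(UTF8_BOM) else raw

assert strip_utf8_bom(b"\xEF\xBB\xBF#!/bin/sh\n") == b"#!/bin/sh\n"
```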
 Overlong forms, invalid input, and security considerations
The exact response required of a UTF-8 decoder on invalid input is not uniformly defined by the standards. In general, there are several ways a UTF-8 decoder might behave in the event of an invalid byte sequence:
- Not notice and decode as if the bytes were some similar bit of UTF-8.
- Replace the bytes with a replacement character (usually '?' or '�' (U+FFFD)).
- Ignore the bytes.
- Interpret the bytes according to another encoding (often ISO-8859-1 or CP1252).
- Act like the string ends at that point and report an error.
- Undo (or avoid) any result of the already-decoded part and report an error.
Decoders may also differ in what bytes are part of the error. The sequence 0xF0,0x20,0x20,0x20 might be considered a single 4-byte error, or a 1-byte error followed by 3 space characters.
It is possible for a decoder to behave in different ways for different types of invalid input.
RFC 3629 states that "Implementations of the decoding algorithm MUST protect against decoding invalid sequences." The Unicode Standard requires a Unicode-compliant decoder to "…treat any ill-formed code unit sequence as an error condition. This guarantees that it will neither interpret nor emit an ill-formed code unit sequence."
Overlong forms are one of the most troublesome types of UTF-8 data. The current RFC says they must not be decoded, but older specifications for UTF-8 only gave a warning, and many simpler decoders will happily decode them. Overlong forms have been used to bypass security validations in high profile products including Microsoft's IIS web server. Therefore, great care must be taken to avoid security issues if validation is performed before conversion from UTF-8, and it is generally much simpler to handle overlong forms before any input validation is done.
To maintain security in the case of invalid input, there are a few options. The first is to decode the UTF-8 before doing any input validation checks. The second is to use a decoder that, in the event of invalid input, either returns an error or returns text that the application knows to be harmless. A third possibility is never to decode the UTF-8 at all, and only look for the byte patterns you wish to match, but this requires that you know that no other part of your system will attempt a decoding, a catch-22 that makes this simple solution difficult to use in many systems.
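The first two options are easy to combine when the platform's decoder already offers a replacement-character mode; the following sketch relies on Python's built-in error handler purely as an illustration:

```python
def decode_lenient(raw: bytes) -> str:
    """Replace each invalid sequence with U+FFFD instead of rejecting
    the whole input (one of the strategies listed above)."""
    return raw.decode("utf-8", errors="replace")

# Python treats 0xF0 here as a one-byte error followed by three spaces;
# other decoders may treat all four bytes as a single error (see above).
print(decode_lenient(b"\xF0\x20\x20\x20"))  # '\ufffd   '
```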
 Advantages and disadvantages
- UTF-8 is a superset of ASCII. Since a plain ASCII string is also a valid UTF-8 string, no conversion needs to be done for existing ASCII text. Software designed for traditional code-page-specific character sets can generally be used with UTF-8 with few or no changes.
- Sorting of UTF-8 strings using standard byte-oriented sorting routines will produce the same results as sorting them based on Unicode code points (see the sketch after this list). (This has limited usefulness, though, since it is unlikely to represent the culturally acceptable sort order of any particular language or locale.) For the sorting to work correctly, the bytes must be treated as unsigned values.
- UTF-8 and UTF-16 are the standard encodings for XML documents. All other encodings must be specified explicitly either externally or through a text declaration. 
- Any byte oriented string search algorithm can be used with UTF-8 data (as long as one ensures that the inputs only consist of complete UTF-8 characters). Care must be taken with regular expressions and other constructs that count characters, however.
- UTF-8 strings can be fairly reliably recognized as such by a simple algorithm. That is, the probability that a string of characters in any other encoding appears as valid UTF-8 is low, diminishing with increasing string length. For instance, the octet values C0, C1, and F5 to FF never appear. For better reliability, regular expressions can be used to take into account illegal overlong and surrogate values (see the W3 FAQ: Multilingual Forms for a Perl regular expression to validate a UTF-8 string).
- A badly-written (and not compliant with current versions of the standard) UTF-8 parser could accept a number of different pseudo-UTF-8 representations and convert them to the same Unicode output. This provides a way for information to leak past validation routines designed to process data in its eight-bit representation.
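The code-point-order property mentioned above (byte-wise sorting matching code-point sorting) can be checked with a short sketch; the sample strings are chosen purely for illustration:

```python
# Byte-wise sorting of UTF-8 strings matches code-point order.
words = ["zebra", "Ångström", "éclair", "\U0001F34E"]      # last item is outside the BMP
by_bytes = sorted(words, key=lambda w: w.encode("utf-8"))  # compare unsigned byte strings
by_code_points = sorted(words)                             # Python compares strings by code point
assert by_bytes == by_code_points
```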
 Compared to single-byte encodings
- UTF-8 can encode any Unicode character, avoiding the need to figure out and set a "code page" or otherwise indicate what character set is in use, and allowing output in multiple languages at the same time.
- UTF-8 encoded text is larger than the appropriate single-byte encoding except for plain ASCII characters. In the case of languages which commonly used 8-bit character sets with non-Latin alphabets encoded in the upper half (such as most Cyrillic and Greek alphabet code pages), UTF-8 text will be almost double the size of the same text in a single-byte encoding.
- Encodings that use a single byte per character make string cutting easy even with simple-minded APIs.
 Compared to other multi-byte encodings
- UTF-8 can encode any Unicode character, avoiding the need to figure out and set a "code page" or otherwise indicate what character set is in use, and allowing output in multiple languages at the same time.
- Character boundaries are easily found from anywhere in an octet stream (scanning either forwards or backwards). This implies that if a stream of bytes is scanned starting in the middle of a multi-byte sequence, only the information represented by the partial sequence is lost and decoding can begin correctly on the next character. Similarly, if a number of bytes are corrupted or dropped, then correct decoding can resume on the next character boundary. Many multi-byte encodings are much harder to resynchronise.
- A byte sequence for one character never occurs as part of a longer sequence for another character as it did in older variable-length encodings like Shift-JIS (see the previous section on this). For instance, US-ASCII octet values do not appear otherwise in a UTF-8 encoded character stream. This provides compatibility with file systems or other software (e.g., the printf() function in C libraries) that parse based on US-ASCII values but are transparent to other values.
- The first byte of a multi-byte sequence is enough to determine the length of the multi-byte sequence. This makes it extremely simple to extract a sub-string from a given string without elaborate parsing. This was often not the case in many multi-byte encodings.
- Efficient to encode using simple bit operations. UTF-8 does not require slower mathematical operations such as multiplication or division (unlike the obsolete UTF-1 encoding).
- UTF-8 often takes more space than an encoding made for one or a few languages. Latin letters with diacritics and characters from other alphabetic scripts typically take one byte per character in the appropriate multi-byte encoding but take two in UTF-8. East Asian scripts generally have two bytes per character in their multi-byte encodings yet take three bytes per character in UTF-8.
 Compared to UTF-7
- UTF-8 uses significantly fewer bytes per character for all non-ASCII characters.
- UTF-8 encodes "+" as itself whereas UTF-7 encodes it as "+-".
- UTF-8 requires the transmission system to be eight-bit clean. In the case of e-mail this means it has to be further encoded using quoted printable or base64 in some cases. This extra stage of encoding carries a significant size penalty. However, this disadvantage is not so important an issue any more because most mail transfer agents in modern use are eight-bit clean and support the 8BITMIME SMTP extension as specified in RFC 1869.
 Compared to UTF-16
- Converting to UTF-16 while maintaining compatibility with existing programs (such as was done with Windows) requires every API and data structure that takes a string to be duplicated. Handling of invalid encodings in each API makes this much more difficult than it may first appear.
- Conversion of a string of random 16-bit values that is assumed to be UTF-16 to UTF-8 is lossless. But invalid UTF-8 cannot be converted losslessly to UTF-16. This makes UTF-8 a safe way to hold data that might be text; this is surprisingly important. For instance an API to control filesystems with both UTF-8 and UTF-16 filenames, but where either one may contain names with invalid encodings, can be written using UTF-8 but not UTF-16.
- In UTF-8, characters outside the basic multilingual plane are not a special case. UTF-16 is often mistaken to be constant-length, leading to code that works for most text but suddenly fails for non-BMP characters. Retrofitting code tends to be hard, so it's better to implement support for the entire range of Unicode from the start.
- Text consisting mostly of diacritic-free Latin letters is around half the size in UTF-8 that it would be in UTF-16. Text in any language using only code points below U+0800 (which includes all modern European languages) is smaller in UTF-8 than in UTF-16 because of the presence of spaces, newlines, numbers, and ASCII punctuation, all of which are encoded in one byte per character.
- Most communication and storage protocols were designed for a stream of bytes. A UTF-16 string must use a pair of bytes for each code unit, which introduces a couple of potential problems:
- The order of those two bytes becomes an issue. One can say that UTF-16 has two variants when used for text files. A variety of mechanisms can be used to deal with this issue (for example, the byte-order mark), but they still present an added complication for software and protocol design.
- If a byte is missing from a character in UTF-16, the whole rest of the string (or the entire text file) will be either invalid UTF-16 or meaningless text. In UTF-8, if part of a multi-byte character is removed, only that character is affected and not the rest of the text. In other words, UTF-8 was made to be self-synchronizing, whereas UTF-16 was not.
- Characters U+0800 and above in the BMP use three bytes in UTF-8, but only two in UTF-16. As a result, text in (for example) Chinese, Japanese or Hindi takes more space in UTF-8 if these characters outnumber the ASCII characters (ASCII includes spaces, numbers, newlines, some punctuation, and XML markup, so it is not unusual for ASCII characters to dominate anyway); the byte counts in the sketch after this list illustrate the difference. This overhead is easier to accept for Chinese, where each character is in effect a word, but it has drawn complaints in India, since the Arabic-based scripts of its western neighbours need only two bytes per character.
- Although both UTF-8 and UTF-16 suffer from the need to handle invalid sequences as described above under general disadvantages, a simplistic parser for UTF-16 is unlikely to convert invalid sequences to ASCII. Since the dangerous characters in most situations are generally ASCII, a simplistic UTF-16 parser is much less dangerous than a simplistic UTF-8 parser.
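The size claims above can be checked directly; the sample strings in the following sketch are chosen purely for illustration:

```python
# Byte counts for short samples in UTF-8 versus UTF-16 (illustrative strings).
samples = {
    "English":  "The quick brown fox",
    "Greek":    "γειά σου κόσμε",    # code points below U+0800: at most 2 bytes each in UTF-8
    "Japanese": "こんにちは、世界",    # BMP code points at or above U+0800: 3 bytes each in UTF-8
}
for name, text in samples.items():
    utf8, utf16 = text.encode("utf-8"), text.encode("utf-16-le")
    print(f"{name:9}  UTF-8: {len(utf8):2} bytes   UTF-16: {len(utf16):2} bytes")
```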
 See also
- Alt codes
- Byte-order mark
- Comparison of email clients#Features
- Comparison of Unicode encodings
- Character encodings in HTML
- ISO 8859
- iconv—a standardized API used to convert between different character encodings
- UTF-8 in URIs
- Unicode and e-mail
- Unicode and HTML
- Universal Character Set
- UTF-9 and UTF-18
- ^ "Moving to Unicode 5.1". Official Google Blog (May 5, 2008). Retrieved on 2008-05-08.
- ^ Alvestrand, H. (1998), "IETF Policy on Character Sets and Languages", RFC 2277, Internet Engineering Task Force
- ^ "Using International Characters in Internet Mail". Internet Mail Consortium (August 1, 1998). Retrieved on 2007-11-08.
- ^ Pike, Rob (2003-04-03). "UTF-8 history".
- ^ Internet Assigned Numbers Authority Character Sets
- ^ W3C: Setting the HTTP charset parameter notes that the IANA list is used for HTTP
- ^ There are 256*256-128*128 not-pure-ASCII 2-byte sequences, and of those, only 1920 encode valid UTF-8 characters (the range U+0080 to U+07FF), so the proportion of valid not-pure-ASCII 2-byte sequences is 3.9%. Similarly, there are 256*256*256-128*128*128 not-pure-ASCII 3-byte sequences, and 61406 valid 3-byte UTF-8 sequences (U+000800-U+00FFFF minus surrogate pairs and non-characters), so the proportion is 0.41%; finally, there are 256^4-128^4 non-ASCII 4-byte sequences, and 1048544 valid 4-byte UTF-8 sequences (U+010000-U+10FFFF minus non-characters), so the proportion is 0.026%. Note that this assumes that control characters pass as ASCII; without the control characters, the proportions drop somewhat.
- ^ Yergeau, F. (2003), "UTF-8, a transformation format of ISO 10646", RFC 3629, Internet Engineering Task Force
 External links
There are several current definitions of UTF-8 in various standards documents:
- RFC 3629 / STD 63 (2003), which establishes UTF-8 as a standard Internet protocol element
- The Unicode Standard, Version 5.0, §3.9 D92, §3.10 D95 (2007)
- The Unicode Standard, Version 4.0, §3.9–§3.10 (2003)
- ISO/IEC 10646:2003 Annex D (2003)
They supersede the definitions given in the following obsolete works:
- ISO/IEC 10646-1:1993 Amendment 2 / Annex R (1996)
- The Unicode Standard, Version 2.0, Appendix A (1996)
- RFC 2044 (1996)
- RFC 2279 (1998)
- The Unicode Standard, Version 3.0, §2.3 (2000) plus Corrigendum #1 : UTF-8 Shortest Form (2000)
- Unicode Standard Annex #27: Unicode 3.1 (2001)
They are all the same in their general mechanics, with the main differences being on issues such as allowed range of code point values and safe handling of invalid input.
- Original UTF-8 paper (or pdf) for Plan 9 from Bell Labs
- RFC 5198 defines UTF-8 NFC for Network Interchange
- UTF-8 test pages by Andreas Prilop and the World Wide Web Consortium
- Unix/Linux: UTF-8/Unicode FAQ, Linux Unicode HOWTO, UTF-8 and Gentoo
- The Unicode/UTF-8-character table displays UTF-8 in a variety of formats (with Unicode and HTML encoding information)
- Unicode and Multilingual Web Browsers from Alan Wood’s Unicode Resources describes support and additional configuration of Unicode/UTF-8 in modern browsers
- JSP Wiki Browser Compatibility page details specific problems with UTF-8 in older browsers
- Mathematical Symbols in Unicode
- Unicode.se shows how to set your homepages and databases to UTF-8
- Graphical View of UTF-8 in ICU's Converter Explorer