Logo of the Unicode Consortium
Alias(es): Universal Coded Character Set (UCS)
Standard: Unicode Standard
Preceded by: ISO/IEC 8859, various others

Unicode, formally the Unicode Standard, is an information technology standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. The standard, which is maintained by the Unicode Consortium, defines 144,697 characters[1][2] covering 159 modern and historic scripts, as well as symbols, emoji, and non-visual control and formatting codes.

The Unicode character repertoire is synchronized with ISO/IEC 10646, each being code-for-code identical with the other. The Unicode Standard, however, includes more than just the base code. Alongside the character encodings, the Consortium's official publication includes a wide variety of details about the scripts and how to display them: normalization rules, decomposition, collation, rendering, and bidirectional text display order for multilingual texts, and so on.[3] The Standard also includes reference data files and visual charts to help developers and designers correctly implement the repertoire.
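Normalization and decomposition, mentioned above, can be illustrated with a minimal sketch using Python's standard `unicodedata` module: the precomposed character é (U+00E9) and the two-code-point sequence e + combining acute accent render identically but compare unequal until normalized.

```python
import unicodedata

precomposed = "\u00e9"   # é as a single code point (U+00E9)
decomposed = "e\u0301"   # e followed by COMBINING ACUTE ACCENT (U+0301)

# The two strings display the same but are different code point sequences
print(precomposed == decomposed)          # False
print(len(precomposed), len(decomposed))  # 1 2

# NFC composes, NFD decomposes; after normalization they compare equal
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
print(unicodedata.normalize("NFD", precomposed) == decomposed)  # True
```

This is why Unicode-aware software compares normalized forms rather than raw code point sequences.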

Unicode's success at unifying character sets has led to its widespread and predominant use in the internationalization and localization of computer software. The standard has been implemented in many recent technologies, including modern operating systems, XML, Java (and other programming languages), and the .NET Framework.

Unicode can be implemented by different character encodings. The Unicode Standard defines the Unicode Transformation Formats UTF-8, UTF-16, and UTF-32, as well as several other encodings. The most commonly used encodings are UTF-8, UTF-16, and the obsolete UCS-2 (a precursor of UTF-16 without full support for Unicode); GB18030, while not an official Unicode standard, is standardized in China and implements Unicode fully.
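As a brief sketch of the same text under different encodings (using Python's built-in codecs), a single character can occupy a different number of bytes in each format, and GB18030 round-trips even supplementary-plane characters:

```python
# One code point, several Unicode encodings
s = "\u20ac"  # € (U+20AC EURO SIGN)

print(len(s.encode("utf-8")))     # 3 bytes
print(len(s.encode("utf-16-be"))) # 2 bytes
print(len(s.encode("utf-32-be"))) # 4 bytes

# GB18030 also covers all of Unicode: an emoji outside the BMP round-trips
emoji = "\U0001F600"  # 😀 (U+1F600)
print(emoji.encode("gb18030").decode("gb18030") == emoji)  # True
```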

UTF-8, the dominant encoding on the World Wide Web (used in over 95% of websites as of 2020, and up to 100% for some languages)[4] and on most Unix-like operating systems, uses one byte (8 bits) for the first 128 code points, and up to 4 bytes for other characters.[5] The first 128 Unicode code points represent the ASCII characters, which means that any ASCII text is also a UTF-8 text.
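The variable byte lengths and the ASCII compatibility can both be checked directly; a minimal sketch in Python:

```python
# UTF-8 is variable-length: 1 byte for ASCII, up to 4 bytes for other characters
print(len("A".encode("utf-8")))          # 1  (U+0041, ASCII range)
print(len("\u00e9".encode("utf-8")))     # 2  (é, U+00E9)
print(len("\u20ac".encode("utf-8")))     # 3  (€, U+20AC)
print(len("\U0001F600".encode("utf-8"))) # 4  (😀, U+1F600)

# ASCII text is byte-for-byte valid UTF-8
ascii_bytes = "Hello, world".encode("ascii")
print(ascii_bytes == "Hello, world".encode("utf-8"))  # True
```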

UCS-2 uses two bytes (16 bits) for each character but can only encode the first 65,536 code points, the so-called Basic Multilingual Plane (BMP). With 1,112,064 possible Unicode code points corresponding to characters (see below) on 17 planes, and with over 144,000 code points defined as of version 14.0, UCS-2 is able to represent less than half of all encoded Unicode characters. Therefore, UCS-2 is obsolete, though still used in software. UTF-16 extends UCS-2 by using the same 16-bit encoding as UCS-2 for the Basic Multilingual Plane, and a 4-byte encoding for the other planes. As long as it contains no code points in the range U+D800–U+DFFF (which UTF-16 reserves for the surrogate pairs used in its 4-byte encoding), a UCS-2 text is valid UTF-16 text.
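The relationship between the two formats can be sketched in Python: a BMP character gets the same 2-byte encoding UCS-2 would use, while a supplementary-plane character becomes a 4-byte surrogate pair whose two 16-bit code units fall in the reserved U+D800–U+DFFF range.

```python
import struct

# BMP character: 2 bytes in UTF-16, identical to its UCS-2 encoding
print(len("\u20ac".encode("utf-16-be")))      # 2  (€, U+20AC)

# Supplementary-plane character: a 4-byte surrogate pair in UTF-16
smiley = "\U0001F600"  # 😀 (U+1F600), outside the BMP
print(len(smiley.encode("utf-16-be")))        # 4

# Both 16-bit code units of the pair lie in the reserved surrogate range
hi, lo = struct.unpack(">HH", smiley.encode("utf-16-be"))
print(0xD800 <= hi <= 0xDBFF)  # True (high surrogate)
print(0xDC00 <= lo <= 0xDFFF)  # True (low surrogate)
```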

UTF-32 (also referred to as UCS-4) uses four bytes to encode any given code point, but not necessarily any given user-perceived character (loosely speaking, a grapheme), since a user-perceived character may be represented by a grapheme cluster (a sequence of multiple code points).[6] As in UCS-2, the number of bytes per code point is fixed, facilitating code point indexing; but unlike UCS-2, UTF-32 is able to encode all Unicode code points. However, because each code point uses four bytes, UTF-32 takes significantly more space than other encodings, and is not widely used. Although UTF-32 has a fixed size for each code point, it is variable-length with respect to user-perceived characters. Examples include the Devanagari kshi, which is encoded by four code points, and national flag emoji, which are composed of two code points.[7] All combining character sequences are graphemes, but there are other sequences of code points that are as well; for example, \r\n is one.[8][9][10][11]
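The distinction between code points and user-perceived characters can be sketched in Python, whose strings index by code point (so `len` counts code points, not graphemes); both examples named above are single user-perceived characters built from several code points:

```python
# A national flag emoji: two regional-indicator code points, one grapheme
flag = "\U0001F1F9\U0001F1FC"             # 🇹🇼
print(len(flag))                          # 2 code points
print(len(flag.encode("utf-32-be")))      # 8 bytes (4 per code point)

# Devanagari kshi: four code points, one user-perceived character
kshi = "\u0915\u094d\u0937\u093f"         # क्षि
print(len(kshi))                          # 4 code points
print(len(kshi.encode("utf-32-be")))      # 16 bytes
```

Note that `utf-32-be` is used to avoid the 4-byte byte-order mark that Python's plain `utf-32` codec prepends.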

  1. ^ "Unicode 14.0.0".
  2. ^ "Unicode Version 14.0 Character Counts".
  3. ^ "The Unicode Standard: A Technical Introduction". Retrieved 2010-03-16.
  4. ^ "Usage Survey of Character Encodings broken down by Ranking". w3techs.com. Retrieved 2020-06-09.
  5. ^ "Conformance" (PDF). The Unicode Standard. September 2021. Retrieved 2021-09-16.
  6. ^ "UAX #29: Unicode Text Segmentation §3 Grapheme Cluster Boundaries". unicode.org. 2020-02-19. Retrieved 2020-06-27.
  7. ^ "Unicode – a brief introduction (advanced) • JavaScript for impatient programmers". exploringjs.com. Retrieved 2020-06-14.
  8. ^ "Introduction to Unicode". mathias.gaunard.com. Retrieved 2020-06-14.
  9. ^ "Strings and Characters — The Swift Programming Language (Swift 5.2)". docs.swift.org. Retrieved 2020-06-14.
  10. ^ "Breaking Our Latin-1 Assumptions - In Pursuit of Laziness". manishearth.github.io. Retrieved 2020-06-14. Unicode didn't want to deal with adding new flags each time a new country or territory pops up. Nor did they want to get into the tricky business of determining what a country is, for example when dealing with disputed territories. [..] On some Chinese systems, for example, the flag for Taiwan (🇹🇼) may not render.
  11. ^ "Let's Stop Ascribing Meaning to Code Points - In Pursuit of Laziness". manishearth.github.io. Retrieved 2020-06-14. Folks start implying that code points mean something, and that O(1) indexing or slicing at code point boundaries is a useful operation.


