Computers work in numbers because, at the lowest level, they work with zeros and ones. They use two states (bits) because their electronics can hold either a high voltage (representing one) or a low voltage (representing zero). A series of these bits can represent a number in binary, but what about letters? That is what character sets are for: they translate characters to numbers.
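As a quick sketch in Python, here is how a series of bits maps to a number (the byte 01000001 is the decimal number 65, which, as we will see below, happens to be the code point for "A"):

```python
# Convert between a binary bit pattern and the decimal number it represents.
bits = "01000001"             # eight bits (one byte)
number = int(bits, 2)         # interpret the bit string as base 2
print(number)                 # 65

# And back again: format a number as an 8-bit binary string.
print(format(number, "08b"))  # 01000001
```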
To display an HTML page correctly, a web browser must know which character set (character encoding) to use. There is a very good explanation, along with a character reference, at w3schools.com. The Unicode Consortium develops the Unicode Standard, and its goal is to replace the existing character sets with its Unicode Transformation Formats (UTF). The website Joel On Software has a good article on Unicode and character sets. (By the way, Joel Spolsky is the CEO and co-founder of Stack Overflow and a co-founder of Trello.) We have a post here at this site, called Unicode and Character Sets, that brings out a few of the key points of that article.
Let’s go back in time a few decades. ASCII (American Standard Code for Information Interchange) was the first widely adopted character encoding standard (also called a character set). ASCII defined 128 characters: the digits 0-9, the English letters A-Z and a-z, some special characters like ! $ + – ( ) @ < >, and a number of control characters. In HTML you can display these characters with a numeric character reference: an ampersand and hash followed by the character’s number (for example, &#65; displays “A”). ASCII is essentially the character set defined in the ISO 646 standard; it is a 7-bit encoding, though each character is normally stored in one byte (8 bits).
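As a small sketch of how those numeric character references resolve, here is Python’s standard-library html module doing the translation:

```python
import html

# HTML numeric character references: &# followed by the character's number.
print(html.unescape("&#65;"))   # A   (65 is the ASCII/Unicode number for "A")
print(html.unescape("&#36;"))   # $   (36 is the dollar sign)
print(html.unescape("&#60;"))   # <   (60 is less-than; usually written &lt;)
```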
A character in UTF-8 can be from 1 to 4 bytes long. UTF-8 can represent any character in the Unicode standard. UTF-8 is backwards compatible with ASCII. UTF-8 is the preferred encoding for e-mail and web pages. The first 128 characters of Unicode (which correspond one-to-one with ASCII) are encoded using a single octet with the same binary value as ASCII, making valid ASCII text valid UTF-8-encoded Unicode as well.
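A minimal Python sketch illustrating both points, the 1-to-4-byte lengths and the ASCII compatibility (the hex() separator argument needs Python 3.8 or later):

```python
# UTF-8 uses 1 to 4 bytes per character, and ASCII text is unchanged.
for ch in ("A", "é", "€", "😀"):
    encoded = ch.encode("utf-8")
    print(ch, len(encoded), encoded.hex(" "))
# A  1  41            (the same single byte as ASCII)
# é  2  c3 a9
# €  3  e2 82 ac
# 😀 4  f0 9f 98 80
```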
There is a previous post at this website called HTML Character References.
Strings and Character Sets
At SQLServerCentral.com there is a good article on Strings as part of their Stairway to Data series. As they say: “Characters are represented internally in the computer as bits, and there are several different systems for doing this. Almost all schemes use fixed length bit strings and the three most common ones in the computer trade are EBCDIC, ASCII and Unicode.”
Extending ASCII causes problems (ambiguity, for one). Unicode was originally designed as a 16-bit system, though it has since grown well beyond that (code points now run up to U+10FFFF). The goal is for Unicode to provide a unique number (code point) for every character, no matter what the platform, no matter what the program, no matter what the language.
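A quick Python sketch of that uniqueness: ord() returns a character’s Unicode code point, and it is the same number on every platform, in every program, for every language:

```python
# Every character has exactly one Unicode code point, regardless of
# platform, program, or language.
for ch in ("A", "é", "Ж", "你"):
    print(ch, ord(ch), hex(ord(ch)))
# A   65     0x41
# é   233    0xe9
# Ж   1046   0x416
# 你  20320  0x4f60
```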
Written languages use three types of symbolic representation: alphabets, syllabaries, and ideograms.
An alphabet is a system of characters in which each symbol has a single sound associated with it; the most common alphabets in use today are Latin, Greek, Arabic, and Cyrillic. A syllabary is a system of characters in which each symbol has a single syllable associated with it; the Japanese kana are the most common syllabaries in use today (Korean Hangul is technically an alphabet, although its letters are written grouped into syllable blocks). An ideographic system uses characters in which each symbol represents a whole word or concept; Chinese is the only such system in widespread use today, although other East Asian languages, such as Japanese, borrow from the Chinese character set.
A “code point” is the position of a character within an encoding scheme. Thus, in all the ISO standard character sets, upper case “A” is encoded as 0x41 in hexadecimal (65 in decimal), so we say that A’s code point is 65. This is an abstraction of the symbol and has nothing to do with font, size, or other physical display attributes.
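As a minimal Python sketch of that claim, encoding “A” under several common ASCII-compatible character sets yields the same byte, 0x41, every time:

```python
# "A" has code point 65 (0x41) in every ASCII-compatible character set.
for codec in ("ascii", "latin-1", "utf-8"):
    print(codec, "A".encode(codec))  # b'A' (the single byte 0x41) each time
```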
Unicode is a character set; UTF-8 is an encoding. Unicode is a list of characters with unique decimal numbers (code points): A = 65, B = 66, C = 67, and so on. This list of decimal numbers represents the string “hello”: 104 101 108 108 111. Encoding is how these numbers are translated into binary numbers to be stored in a computer. UTF-8 encoding will store “hello” like this (binary): 01101000 01100101 01101100 01101100 01101111. In short: character sets translate characters to numbers, and encodings translate those numbers into binary.
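Here is a short Python sketch that reproduces those exact numbers:

```python
text = "hello"

# Character set: each character maps to a code point (a number).
print([ord(ch) for ch in text])           # [104, 101, 108, 108, 111]

# Encoding: UTF-8 turns those numbers into bytes for storage.
encoded = text.encode("utf-8")
print(" ".join(f"{b:08b}" for b in encoded))
# 01101000 01100101 01101100 01101100 01101111
```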
Here is an article called What is UTF-8 Encoding? A Guide for Non-Programmers.