What Is ASCII Text and How Is It Used?

You may have heard of ASCII in connection with computer text, though it’s a term that is gradually giving way to a more powerful successor. But what exactly is ASCII, and what does it do?


What Is ASCII?

ASCII was developed from telegraph code. Its first commercial use was as a seven-bit teleprinter code promoted by Bell data services.

Work on the standard began in May 1961 with the first meeting of the X3.2 subcommittee of the American Standards Association (ASA), now the American National Standards Institute (ANSI).

The first edition of the standard was published in 1963; it underwent a major revision in 1967 and was last updated in 1986. Compared to earlier telegraph codes, both the proposed Bell code and ASCII were ordered for more convenient sorting (i.e., alphabetization) of lists, and they added features for devices other than teleprinters.

The use of ASCII for network interchange was described in 1969, and that document was formally elevated to an Internet Standard in 2015.

ASCII, which is based on the English alphabet, encodes 128 specified characters as seven-bit integers.

The digits 0 to 9, lowercase letters a to z, uppercase letters A to Z, and punctuation symbols are among the 95 encoded characters that can be printed.

The original ASCII specification also included 33 non-printing control codes that originated with Teletype machines; most of these are now obsolete, but a few, such as the carriage return, line feed, and tab codes, are still widely used.

The lowercase letter i, for example, is encoded in ASCII as binary 1101001 = hexadecimal 69 (i is the ninth letter) = decimal 105.
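
You can check this mapping yourself; here is a minimal Python sketch (any language with character-to-integer conversion works much the same way):

    code_point = ord("i")             # character -> integer code
    print(code_point)                 # 105 (decimal)
    print(hex(code_point))            # 0x69 (hexadecimal)
    print(format(code_point, "07b"))  # 1101001 (seven-bit binary)
    print(chr(code_point))            # i (integer code -> character)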

What Does ASCII Stand For?

Let’s begin with the acronym itself, which is probably the easiest place to start:

American Standard Code for Information Interchange

Although this lengthy phrase does not paint a complete picture, some parts of it, particularly the first two words, provide immediate clues; its full significance will become clear shortly.

The phrase “Code for Information Interchange” implies that we’re discussing a data-transfer format. ASCII is concerned with textual data: the characters that make up words in a typically human-readable language.

When letters and other characters are stored as ones and zeroes in a file, ASCII solves the problem of how to assign values to them so that they can be translated back into letters when the file is read later.

Information can be reliably exchanged if different computer systems agree on the same code to use.
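
As a rough illustration of that agreement, the following minimal Python sketch turns a string into its ASCII byte values and back again; the "ascii" codec used here is just one convenient way to do the conversion:

    message = "Hello, ASCII!"

    # Each character becomes one small integer (stored here as one byte each).
    encoded = message.encode("ascii")
    print(list(encoded))   # [72, 101, 108, 108, 111, 44, 32, 65, 83, 67, 73, 73, 33]

    # Any system that agrees on the same code can turn the bytes back into text.
    print(encoded.decode("ascii"))   # Hello, ASCII!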

The History of ASCII

ASCII was developed in the United States in the 1960s and is sometimes referred to as US-ASCII. The standard underwent several revisions after its introduction, most notably in 1977, and was last updated in 1986.

Over time, extensions and variations have been added to ASCII, primarily to account for the fact that ASCII omits many characters that are required or used by languages other than US English.

ASCII does not include the UK currency symbol (“£”), for example. The pound sign is present in Latin-1, an 8-bit extension developed in the 1980s that also encodes several other currency symbols.
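
A small Python illustration of that limitation (one possible way to see it, not the only one): encoding the pound sign with the ASCII codec fails, while the Latin-1 codec maps it to a single byte.

    pound = "£"

    print(pound.encode("latin-1"))   # b'\xa3' - Latin-1 has a slot for it

    try:
        pound.encode("ascii")
    except UnicodeEncodeError as err:
        print("Not representable in ASCII:", err)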

ASCII was greatly expanded and superseded by Unicode, a far more comprehensive and ambitious standard that will be discussed further down. Unicode surpassed ASCII in popularity for online use in 2008.

You may like to read our guide to Fix Language Issues For Non-Unicode Programs in Windows 10.

What Characters Does ASCII Represent?

The letter “A” is as strange to a computer as the color purple or the feeling of jealousy. Computers work with ones and zeroes, and it’s up to humans to figure out how to represent numbers, words, images, and anything else with those ones and zeroes.

You can think of ASCII as a digital equivalent of Morse code, or at least a first attempt at one. While Morse code can only represent 36 different characters (26 letters and 10 digits), ASCII can represent up to 128 characters in just 7 bits of data.

ASCII is case-sensitive, which means it can represent all 52 letters of the English alphabet in upper and lower case. Those 52 letters, along with the same 10 digits, take up roughly half of the 128 available code points.

The remaining space is taken up by punctuation, mathematical and typographic symbols, and a collection of control characters: non-printable codes with functional meanings (for more information, see below).

Binary     Decimal   Character
010 0001   33        !
011 0000   48        0
011 1001   57        9
011 1011   59        ;
100 0001   65        A
100 0010   66        B
101 1010   90        Z
101 1011   91        [
110 0001   97        a
110 0010   98        b
111 1101   125       }

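Rows like the ones in the table above can be generated directly, since each character is simply its code point written in a different notation; a minimal Python sketch (the characters picked here are arbitrary):

    # Print binary, decimal, and character columns for a few ASCII code points.
    for ch in "!09;ABZ[ab}":
        code = ord(ch)
        print(format(code, "07b"), code, ch)
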
It’s worth noting that the values chosen have some useful characteristics, including:

  • Because they’re in order, letters of the same case can always be sorted numerically. A has a lower value than B, which has a lower value than Z, for example.
  • Different case letters are offset by exactly 32. Converting between lower and upper case only requires flipping a single bit in either direction, as the short sketch below shows.
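
Here is a minimal Python sketch of that bit trick; 0x20 is the bit with value 32, and the trick only applies to the letters A–Z and a–z:

    def flip_case(letter: str) -> str:
        """Toggle the case of an ASCII letter by flipping the bit worth 32."""
        return chr(ord(letter) ^ 0x20)

    print(flip_case("a"))      # A
    print(flip_case("Z"))      # z
    print(ord("A"), ord("a"))  # 65 97 -> offset of exactly 32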

Control Characters

Aside from letters, punctuation, and digits, ASCII can represent a variety of control characters, which are special code points that do not produce single-character output but instead provide alternative meanings to whatever is consuming the data.

The horizontal tab character, for example, is ASCII 000 1001. It represents the whitespace you get when you press the TAB key. Such characters are rarely seen directly, but their effects show up all the time. Here are a few more examples:

Binary     Decimal   Character
000 1001   9         Horizontal Tab
000 1010   10        Line Feed
001 0111   23        End of Transmission Block
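
In most programming languages these control characters appear as escape sequences; a short Python sketch (just an illustration) showing that the tab and line feed characters carry the values listed above:

    print(ord("\t"))   # 9  - horizontal tab
    print(ord("\n"))   # 10 - line feed

    # Their effect shows up in the output rather than as visible glyphs.
    print("name\tvalue\nTAB\t9")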

What About Other Characters?

Because it was simple and widely used, ASCII was a huge hit in the early days of computing. However, in a world with a more global perspective, one writing system will not suffice.

Modern communications must be possible in French, Japanese, and any other language in which text is stored. Although the Unicode character set can address a total of 1,112,064 characters, only about a tenth of them are currently defined.
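
That 1,112,064 figure follows from Unicode’s layout: 17 planes of 65,536 code points each, minus the 2,048 surrogate code points reserved for UTF-16. A one-line check in Python:

    print(17 * 65_536 - 2_048)   # 1112064 addressable Unicode scalar values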

That may seem like a lot, but the encoding is designed to accommodate not only tens of thousands of Chinese characters, but also thousands of emoji and even extinct writing systems such as Jurchen.

Unicode acknowledged ASCII’s dominance by making its first 128 code points identical to ASCII. This provides backward compatibility, allowing ASCII-encoded files to be used in situations where Unicode (in particular UTF-8) is expected.
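
That backward compatibility is easy to demonstrate: encoding pure ASCII text as UTF-8 produces exactly the same bytes as encoding it as ASCII. A minimal Python sketch:

    text = "plain ASCII text"

    assert text.encode("ascii") == text.encode("utf-8")
    print(text.encode("utf-8"))   # b'plain ASCII text' - byte for byte identical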

Bottom Line

The 26 letters of the English alphabet are represented in ASCII text, along with digits, punctuation, and a few other symbols. For the better part of a half-century, it served its purpose admirably.

Unicode, which supports a large number of languages and other symbols, including emoji, has now surpassed it. UTF-8 is the encoding that should be used to represent Unicode characters online for all practical purposes.

Well, that’s all we have for you about ASCII text and how it is used. We hope this guide helped you.

If you liked this, don’t forget to check out our explainer guides. Furthermore, if you have any questions or suggestions, please use the comments below to contact us.

