ASCII Full Form


ASCII Full Meaning

ASCII stands for the American Standard Code for Information Interchange. It is a standard that defines a set of characters for encoding text documents on computers. ASCII codes are used to represent text in computers and other communication devices. It was the most common character encoding on the World Wide Web (WWW) until December 2007, when it was surpassed by UTF-8, which includes ASCII as a subset.


ASCII was developed from telegraph code. Its first commercial use was as a seven-bit teleprinter code promoted by Bell data services. Work on the ASCII standard began in May 1961, with the first meeting of a committee of the American Standards Association (ASA), now known as the American National Standards Institute (ANSI). The first edition of the standard was published in 1963, underwent a major revision in 1967, and received its most recent update in 1986. Compared with earlier telegraph codes, both the proposed Bell code and ASCII were ordered for more convenient sorting of lists, and features were added for devices other than teleprinters.


Originally based on the English alphabet, ASCII encodes 128 specified characters into seven-bit integers, as shown by the ASCII chart. Ninety-five of these encoded characters are printable: the digits 0 to 9, lowercase letters a to z, uppercase letters A to Z, and punctuation symbols. In addition, the original ASCII specification included 33 non-printing control codes that originated with Teletype machines. Most of these codes are now obsolete, although a few are still commonly used, such as the carriage return, line feed, and tab codes.
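The split between the 95 printable characters and the 33 control codes can be illustrated with Python's built-in `chr()` and `ord()` functions; the `describe` helper below is just an illustrative name, not part of any standard library:

```python
# Characters 0-31 (and 127, DEL) are control codes; 32-126 are printable.

def describe(code: int) -> str:
    """Return a short description of one ASCII code point."""
    if code < 32 or code == 127:
        return f"{code:3d}: control code"
    return f"{code:3d}: printable {chr(code)!r}"

printable = [chr(c) for c in range(128) if 32 <= c <= 126]
print(len(printable))   # 95 printable characters
print(describe(9))      # tab: one of the control codes still in common use
print(describe(65))     # the letter 'A'
```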


ASCII Full Form in Computer

ASCII, abbreviated from American Standard Code for Information Interchange, is a character-encoding standard for electronic communication. Most modern character-encoding schemes are based on ASCII, although they support many additional characters. ASCII was developed under the auspices of a committee of the ASA (American Standards Association), called the X3 committee. The ASA later became the United States of America Standards Institute (USASI) and ultimately the American National Standards Institute (ANSI).


The first edition of the code, with its letters, digits, special characters, and control codes, was published as ASA X3.4-1963. It left 28 code positions without any assigned meaning, reserved for future standardization, along with one unassigned control code. There was debate at the time over whether there should be more control characters rather than a lowercase alphabet. The X3.2.4 task group voted to approve the change to ASCII at its May 1963 meeting. Locating the lowercase letters in sticks 6 and 7 caused the characters to differ from their uppercase counterparts by a single bit, which simplified case-insensitive character matching and the construction of keyboards and printers.
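That single-bit difference between cases is bit 5 (value 0x20 hex), and it can be demonstrated directly; `toggle_case` is a hypothetical helper name used only for this sketch:

```python
# Uppercase and lowercase ASCII letters differ only in bit 5 (value 0x20).
# Flipping that bit switches the case of a letter.

def toggle_case(ch: str) -> str:
    """Flip the case of a single ASCII letter via one bit."""
    return chr(ord(ch) ^ 0x20)

print(bin(ord('A')))     # 0b1000001
print(bin(ord('a')))     # 0b1100001  - differs from 'A' in one bit
print(toggle_case('A'))  # a
print(toggle_case('z'))  # Z
```

This is exactly why hardware of the era could match characters case-insensitively by simply ignoring one bit.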


What ASCII Stands for in Computers - Design Considerations

We have seen what ASCII is in computers; now let us look at its design considerations.

Bit Width: The X3.2 subcommittee designed ASCII based on earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols. This allows digital devices to communicate with one another and to store, process, and communicate character-oriented information such as written language. Before ASCII was developed, the encodings in use included 26 alphabetic characters, 10 numerical digits, and from 11 to 25 special graphic symbols. To include all of these, plus control characters compatible with the CCITT standard, more than 64 codes were required.


The committee considered an eight-bit code, since eight bits (octets) would allow two four-bit patterns to efficiently encode two digits with binary-coded decimal. However, it would require all data transmissions to send eight bits when seven could suffice. The committee voted to use a seven-bit code in order to minimize the costs associated with data transmission. Since perforated tape at the time could record eight bits in one position, this also allowed for a parity bit for error checking if desired. Eight-bit machines that did not use parity checking typically set the eighth bit to zero.
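A small sketch of how a 7-bit ASCII code might be packed into an octet with an even-parity bit, as a paper-tape or serial link of the era could have done (`with_even_parity` is an illustrative name, not a standard function):

```python
# Pack a 7-bit ASCII code plus an even-parity bit into one 8-bit byte.

def with_even_parity(code: int) -> int:
    """Place an even-parity bit in the eighth (most significant) position."""
    assert 0 <= code < 128, "not a 7-bit ASCII code"
    parity = bin(code).count("1") % 2  # 1 if the seven data bits have odd weight
    return (parity << 7) | code

# 'A' = 0b1000001 has two 1-bits (even), so the parity bit stays 0.
print(f"{with_even_parity(ord('A')):08b}")  # 01000001
# 'C' = 0b1000011 has three 1-bits (odd), so the parity bit is set.
print(f"{with_even_parity(ord('C')):08b}")  # 11000011
```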


Internal Organization: The code itself was patterned so that most control codes were placed together and all graphic codes were placed together, for easier identification. The first two so-called ASCII sticks (32 positions) were reserved for control characters. The "space" character had to come before the graphics to make sorting easier, and many of the special signs commonly used as separators were placed before the digits. The committee decided it was important to support uppercase 64-character alphabets, and chose to pattern ASCII so it could be reduced easily to a usable 64-character set of graphic codes. The lowercase letters were therefore not interleaved with the uppercase ones. To keep options available for the lowercase letters and other graphics, the special and numeric codes were arranged before the letters, and the letter A was placed at position 41 hex to match the draft of the corresponding British standard. The digits 0 to 9 are prefixed with 011, and the remaining four bits correspond to their respective values in binary, making conversion to and from binary-coded decimal straightforward.
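The digit layout means the low four bits of a digit's code are its numeric value, so masking with 0x0F converts a digit character to its number (`ascii_digit_value` is an illustrative helper name for this sketch):

```python
# Digits '0'-'9' occupy codes 30-39 hex: the high bits are 011 and the
# low four bits (the low nibble) are the digit's binary-coded value.

def ascii_digit_value(ch: str) -> int:
    """Convert an ASCII digit character to its integer value via a bit mask."""
    assert '0' <= ch <= '9', "not an ASCII digit"
    return ord(ch) & 0x0F

print(hex(ord('5')))           # 0x35: high bits 011, low nibble 0101
print(ascii_digit_value('5'))  # 5
print(ascii_digit_value('9'))  # 9
```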


Character Order: ASCII-code order is also called ASCIIbetical order. Collation of data is sometimes done in this order rather than in "standard" alphabetical order (collating sequence). The main deviations in ASCII order are:

  • All the uppercase alphabets come before the lowercase letters; for example, "Z" precedes "a".

  • Digits and many of the punctuation marks come before the letters.

  • An intermediate order converts uppercase letters to lowercase before comparing the ASCII values.
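These deviations are easy to observe with Python's `sorted()`, which compares strings by code point (ASCIIbetical for ASCII text) unless given a key function:

```python
# ASCIIbetical order vs. the intermediate case-insensitive order.
words = ["apple", "Zebra", "3rd", "Banana"]

# Plain sort compares code points: digits < uppercase < lowercase.
print(sorted(words))                 # ['3rd', 'Banana', 'Zebra', 'apple']

# Lowercasing before comparison gives the intermediate order.
print(sorted(words, key=str.lower))  # ['3rd', 'apple', 'Banana', 'Zebra']
```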

ASCII Code Variants and Derivations

7-bit Codes: From early in its development, ASCII was intended to be just one of several national variants of an international character code standard. Other international standards bodies have ratified character encodings, such as ISO 646, that are identical or nearly identical to ASCII, with extensions for characters outside the English alphabet and for symbols used outside the United States, such as the symbol for the United Kingdom's currency, the pound sterling (£). Almost every country needed an adapted version of ASCII, since ASCII suited the needs of only the US and a few other countries.


Many other countries developed variants of ASCII to include non-English letters, currency symbols, and so on. These variants shared most characters in common but assigned other locally useful characters to the several code points reserved for "national use". However, four years elapsed between the publication of ASCII-1963 and ISO's first acceptance of an international recommendation in 1967. This caused ASCII's choices for the national-use code points to act as de facto standards for the world, creating confusion and incompatibility once other countries did begin to make their own assignments to these code points.


8-bit Codes: Eventually, 8-, 16-, and 32-bit (and later 64-bit) computers began to replace 12-, 18-, and 36-bit computers as the norm. It became common to use an eight-bit byte to store each character in memory, providing an opportunity for extended, 8-bit relatives of ASCII. In most cases these developed as true extensions of ASCII, leaving the original character mapping intact but adding additional character definitions after the first 128 characters.
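Latin-1 (ISO 8859-1) is one such 8-bit extension, and Python's codec support shows how its first 128 code points remain plain ASCII while codes above 127 add new letters:

```python
# Latin-1 keeps the ASCII mapping intact and adds characters above 127.
text = "café"
data = text.encode("latin-1")

print(list(data))                 # [99, 97, 102, 233] - 'é' is 0xE9, above 127
print(data[:3].decode("ascii"))   # caf - the ASCII range decodes unchanged
```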


Such encodings include ISCII for India and VISCII for Vietnam. Although these encodings are sometimes referred to as ASCII, true ASCII is defined strictly only by the ANSI standard. Most early home computer systems developed their own eight-bit character sets containing line-drawing and game glyphs, and often filled in some or all of the control characters from 0 to 31 with more graphics.


Unicode: The ISO/IEC 10646 Universal Character Set (UCS) has a much wider array of characters, and its various encoding forms have begun to supplant ISO/IEC 8859 and ASCII rapidly in most environments. While ASCII is limited to 128 characters, Unicode and the UCS support more characters by separating the concepts of unique identification and encoding.


Unicode incorporated the ASCII character set as its first 128 symbols, so the seven-bit ASCII characters have the same numeric codes in both sets. This allows UTF-8 to be backwards compatible with seven-bit ASCII: a UTF-8 file containing only ASCII characters is identical to an ASCII file containing the same sequence of characters. Even more importantly, forward compatibility is ensured, since software that recognizes only seven-bit ASCII characters as special and does not alter bytes with the highest bit set will preserve UTF-8 data unchanged.
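Both compatibility properties can be checked in a few lines: a pure-ASCII string encodes to identical bytes under ASCII and UTF-8, while non-ASCII characters become multi-byte UTF-8 sequences whose bytes all have the high bit set:

```python
# Backward compatibility: ASCII text is byte-for-byte identical in UTF-8.
s = "Hello"
print(s.encode("ascii") == s.encode("utf-8"))  # True

# Forward compatibility: non-ASCII characters use only bytes >= 128,
# which ASCII-only software passes through untouched.
snowman = "\u2603"
encoded = snowman.encode("utf-8")
print(list(encoded))                    # [226, 152, 131]
print(all(b >= 128 for b in encoded))   # True
```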

FAQs on ASCII Full Form

1. What is the Full Form of ASCII in Computer Language?

Ans: The full form of ASCII in computing is American Standard Code for Information Interchange.

2. Expand ASCII. Mention its Importance.

Ans: The full form of ASCII is American Standard Code for Information Interchange. It defines a set of characters used for encoding text documents on computers.

The importance of ASCII code is:

  • It allows all computers to communicate with one another by sharing texts and documents.

  • Since each character uses a small bit pattern (seven bits, fitting in one byte), conversion takes less time.

  • Less memory is required for the storage of data.