The term “bit” is used in computer science, information technology, and communications engineering. Essentially, everything that involves computers and related fields revolves around the bit.

The bit is the unit of measure for information content.

What is 1 BIT?

1 bit is the information content contained in a choice between two equally probable options. Information content in general can be any real, non-negative value.
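To make the definition concrete, the information content (self-information) of an outcome with probability p is usually measured with a base-2 logarithm; for a choice between two equally probable options (p = 1/2) it comes out to exactly 1 bit:

$$ I(p) = \log_2 \frac{1}{p}, \qquad I\!\left(\tfrac{1}{2}\right) = \log_2 2 = 1\ \text{bit}, \qquad I\!\left(\tfrac{1}{4}\right) = \log_2 4 = 2\ \text{bits}. $$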

The bit can also be defined as a unit of measure for the amount of digitally represented (stored or transmitted) data. The amount of data is the maximum information content of data with a representation of the same size. That maximum is reached when all possible states are equally probable, and it is an integer multiple of 1 bit: it equals the number of binary elementary states used for the representation.
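A short sketch of why this maximum equals the number of binary elements: a representation built from n binary elements has 2^n possible states, and the average information content (Shannon entropy) over those states is largest for the uniform distribution,

$$ H = -\sum_{i=1}^{2^n} p_i \log_2 p_i \;\le\; \log_2 2^n = n\ \text{bits}, $$

with equality exactly when every state has probability 2^{-n}.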

The word origin of bit

The word “bit” contracts “binary digit” into one short word. It was coined by the mathematician John Tukey, in 1943 according to some sources, or more probably in 1946 according to others. The term first appeared in print in 1948, on page one of Claude Shannon’s famous work A Mathematical Theory of Communication. The use of binary digits as truth values goes back to George Boole.

A Brief History of the Bit

The encoding of data as discrete bits appears as early as Bacon’s cipher (1626). Encoding data as discrete bits on punched cards was invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed further by Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov, Charles Babbage, Herman Hollerith, and early computer manufacturers such as IBM.

Encoding of text into bits was used in early digital communication devices such as Morse code (1840) and teleprinters and stock tickers with receiver tape (1870).

In 1928 Ralph Hartley proposed the use of a logarithmic measure of information. In 1948 Claude Shannon used the word “bit” for the first time in his paper “A Mathematical Theory of Communication”. He attributed the abbreviation of “binary digit” to “bit” to a Bell Labs memo written by John Tukey on January 9, 1947.

Interestingly, Vannevar Bush had written in 1936 that the punched cards used in the mechanical machines of that day stored “bits of information.” Konrad Zuse built the first programmable computer, which used a binary representation of numbers.

Grouping bits

For convenience of handling and use, bits are grouped into physical and logical sets. The most common are:

  • Nibble – a group of 4 bits; a physical grouping of bits that is not itself addressable.
  • Byte – the smallest addressable group of bits. The number of bits per byte originally varied considerably, but was later almost completely standardized to 8.
  • Octet – a group of exactly 8 bits. Outside France, where “octet” is the usual term, the word byte is generally used instead, since the two are now understood to mean the same thing.
  • Word – a larger group of bits, usually 2 bytes, but not standardized (there are architectures with words of 4, 8 or more bytes). The word is the most common addressable memory unit for data and program code. Computer architectures differ in word length, which is why we speak of 16-bit, 32-bit or 64-bit architectures.
  • SI prefixes (k for kilo, M for mega, G for giga, etc.) were initially used to denote similar, but not identical, multiples: a kilobit was 2^10 = 1024 bits, a megabit was 1024 kilobits, and so on, because the decimal factor 1000 does not fit the binary number system of the computer and 1024 is the nearest power of two. The resulting confusion later led to the standardization of dedicated binary prefixes (kibi-, mebi-, etc.); the sketch after this list illustrates the difference.
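To make the groupings and prefix arithmetic above concrete, here is a minimal Python sketch (the function names split_nibbles and make_word16 are illustrative, not part of any standard): it splits a byte into its two nibbles, assembles two bytes into a 16-bit word, and shows the gap between the decimal kilo- prefix and the binary kibi- prefix.

```python
# Minimal illustration of bit groupings and prefix arithmetic.
# Function names are illustrative only, not from any standard library.

def split_nibbles(byte: int) -> tuple[int, int]:
    """Split an 8-bit value into its high and low 4-bit nibbles."""
    assert 0 <= byte <= 0xFF
    high = (byte >> 4) & 0xF   # upper 4 bits
    low = byte & 0xF           # lower 4 bits
    return high, low

def make_word16(high_byte: int, low_byte: int) -> int:
    """Combine two 8-bit bytes into one 16-bit word (high byte first)."""
    return ((high_byte & 0xFF) << 8) | (low_byte & 0xFF)

if __name__ == "__main__":
    print(split_nibbles(0xAB))           # (10, 11) -> the nibbles 0xA and 0xB
    print(hex(make_word16(0x12, 0x34)))  # 0x1234 -> one 2-byte (16-bit) word

    kilobit = 1000       # decimal SI prefix: 1 kbit = 10^3 bits
    kibibit = 2 ** 10    # binary IEC prefix: 1 Kibit = 1024 bits
    print(kibibit - kilobit)  # 24 bits -> the mismatch that caused the confusion
```

The mismatch grows with each prefix step (about 2.4 % at the kilo level, about 4.9 % at the mega level), which is why the separate binary prefixes were eventually standardized.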