# Taking a bite out of bits and bytes

January 22, 2013

A bit, short for binary digit, is the smallest unit of information storage in computers. A bit is represented as a 1 or a 0, with values described as true or false, sometimes expressed as on or off. Eight bits form a single byte of information, also known as an octet. Thus, the difference between a bit and a byte is size: the amount of information stored.

It takes eight bits (1 byte) to store a single character. The capital letter “A” is expressed digitally as 01000001, while a lowercase “a” is represented in binary as 01100001. Notice that the third bit differs between the two octets. Because each of the eight bits can independently be a 1 or a 0, a byte can take on 256 (2^8) unique combinations, enough to represent letters, numbers, special characters and symbols.
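As a quick sketch of the idea, standard ASCII assigns each character a number that fits in eight bits, and Python can show the bit pattern directly:

```python
# Sketch: how characters map to 8-bit binary patterns (standard ASCII).
for ch in "Aa":
    bits = format(ord(ch), "08b")  # zero-padded 8-bit binary string
    print(ch, bits)
# A 01000001
# a 01100001
```

Note that “A” is 65 and “a” is 97 in ASCII; the two codes differ by 32, which is exactly the third bit from the left.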

It can get confusing keeping units of storage straight, but if you have trouble remembering which is which, note that the smaller word, bit, is the smaller unit of storage. Once the difference between a bit and a byte is understood, it becomes easier to remember the difference between larger units such as the kilobit and kilobyte.

A kilobit is nominally 1,000 bits, though in the binary system used by common operating systems and storage types it is counted as 1,024 bits, since 1,024 is the nearest power of two (2^10). You can still think of kilo as referring to roughly 1,000 to more easily remember what a kilobit is. Likewise, a kilobyte is nominally 1,000 bytes, but in binary terms it is actually 1,024 bytes.

Here are the number of bytes in some commonly used terms:

| Unit     | Bytes             |
| -------- | ----------------- |
| Kilobyte | 1,024             |
| Megabyte | 1,048,576         |
| Gigabyte | 1,073,741,824     |
| Terabyte | 1,099,511,627,776 |
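Each unit in the table is 1,024 (2^10) times the previous one, so the whole list can be generated with a short loop:

```python
# Sketch: each binary unit is 1,024 (2**10) times the previous one.
units = ["kilobyte", "megabyte", "gigabyte", "terabyte"]
for power, name in enumerate(units, start=1):
    print(f"1 {name} = {1024 ** power:,} bytes")
# 1 kilobyte = 1,024 bytes
# 1 megabyte = 1,048,576 bytes
# 1 gigabyte = 1,073,741,824 bytes
# 1 terabyte = 1,099,511,627,776 bytes
```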

To put things in perspective: the King James Version of the Bible contains roughly 4 MB of information, so you could store approximately 500 copies of it on a single 2-gigabyte flash drive.
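The flash-drive figure is simple division, using the round decimal numbers from the text:

```python
# Sketch of the flash-drive arithmetic, using the article's round figures.
bible_mb = 4                             # ~4 MB per copy
drive_mb = 2 * 1024                      # a 2 GB drive holds 2,048 MB
copies = drive_mb // bible_mb
print(copies)
# 512
```

That gives 512 copies, which the article rounds to "approximately 500."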

The human brain can store approximately 4 terabytes of information. Makes you wonder where all that information goes, doesn’t it?