Indiana University
University Information Technology Services
  

ARCHIVED: What are binary, octal, and hexadecimal notation?

Binary notation

All data in modern computers is stored as a series of bits. A bit is a binary digit and can have one of two values, generally represented as the numbers 0 and 1. The most basic way to represent computer data, then, is as a string of 1s and 0s, one for each bit. What you end up with is a binary, or base-2, number; this is binary notation. For example, the number 42 is represented in binary as 101010.
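The conversion above can be sketched with a short Python function (a minimal illustration using repeated division by two; Python's built-in bin() does the same job):

```python
def to_binary(n):
    """Return the binary (base-2) string for a non-negative integer."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder by 2 is the next bit
        n //= 2                    # shift to the next power of two
    return "".join(reversed(digits))

print(to_binary(42))  # -> 101010
print(bin(42))        # -> 0b101010 (built-in, with Python's 0b prefix)
```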

Interpreting binary notation

In normal decimal (base-10) notation, each digit, moving from right to left, represents an increasing power of ten. Each succeeding digit's contribution is ten times greater than the previous digit's: increasing the first digit by one increases the number by one, increasing the second digit by one increases the number by ten, increasing the third digit by one increases the number by one hundred, and so on. The number 111 is one less than 112, ten less than 121, and one hundred less than 211.

The concept is the same with binary notation, except that each digit is a power of two greater than the preceding digit, rather than a power of ten. Instead of 1s, 10s, 100s, and 1000s digits, binary numbers have 1s, 2s, 4s, and 8s. Thus, the number two in binary would be represented as a 0 in the ones place and a 1 in the twos place, i.e., 10. Three would be 11, a 1 in the ones place and a 1 in the twos place. No numeral greater than 1 is ever used in binary notation.
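This positional interpretation can be written out directly: each binary digit contributes its value times the power of two for its position, counted from the right starting at zero. A hypothetical one-liner in Python:

```python
# Expand 101010 by place value: 1*32 + 0*16 + 1*8 + 0*4 + 1*2 + 0*1
bits = "101010"
total = sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))
print(total)  # -> 42
```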

Octal and hexadecimal notation

Because binary notation can be cumbersome, two more compact notations are often used: octal and hexadecimal. Octal notation represents data as base-8 numbers, with each digit representing three bits. Similarly, hexadecimal notation uses base-16 numbers, with each digit representing four bits. Octal numbers use only the digits 0-7, while hexadecimal numbers use all ten base-10 digits (0-9) plus the letters a-f (representing the numbers 10-15). The number 42 is written as 52 in octal and as 2a in hexadecimal.
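The relationship between the three notations can be checked in Python, whose built-ins oct() and hex() produce octal and hexadecimal strings (with Python's own 0o and 0x prefixes). The bit-grouping comments are an illustration, not part of the library output:

```python
n = 42
print(bin(n))  # -> 0b101010
print(oct(n))  # -> 0o52
print(hex(n))  # -> 0x2a

# Each octal digit covers three bits:  101 010 -> 5 2
# Each hex digit covers four bits:      10 1010 -> 2 a
```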

Knowing whether data is being represented in octal or hexadecimal can be difficult (especially when a hexadecimal number doesn't happen to use any of the digits a-f), so a common convention is to put "0x" in front of hexadecimal numbers. You might therefore see, for example, 0x2a, which is a less ambiguous way of writing the number 42 in hexadecimal. You can see an example of this usage in the Character set comparison chart.
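Python's int() understands this convention when parsing strings, which makes for a quick way to experiment with the 0x prefix (a small sketch; base 0 tells int() to infer the base from the prefix itself):

```python
print(int("2a", 16))    # -> 42 (explicit base, no prefix needed)
print(int("0x2a", 16))  # -> 42 (the 0x prefix is accepted with base 16)
print(int("0x2a", 0))   # -> 42 (base 0: infer the base from the prefix)
print(int("0o52", 0))   # -> 42 (octal works the same way)
```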

Note: The term "binary" when used in phrases such as "binary file" or "binary attachment" has a related but slightly different meaning than the one discussed here. For more information, see ARCHIVED: What is a binary file?

This is document agxz in domain all.
Last modified on January 07, 2013.
