Hexadecimal Explained — Why Programmers Use Hex and How to Read It
Hexadecimal (base-16) is used in programming for memory addresses, color codes, binary data, and debugging. Here's why hex exists, how to read it, and how to convert it.
Hexadecimal (base-16) appears everywhere in programming: #FF5733 in CSS, 0xDEADBEEF in memory dumps, 0x1F4A9 for emoji code points, 0xFF in network masks. It’s not arbitrary — hex is the most human-readable encoding of binary data.
Use the Number Base Converter to convert between hex, decimal, binary, and octal instantly.
Why hexadecimal exists
Computers work in binary. Humans struggle to read long binary strings. Hexadecimal is the compact translation layer between the two.
One hex digit represents exactly 4 binary bits. One byte = 8 bits = 2 hex digits. This is the key insight:
Binary: 1111 1111
Hex: F F
Decimal: 255
A 32-bit integer in binary is 32 characters. In hex, it’s 8 characters. In decimal, it’s 1–10 characters but doesn’t map cleanly to bytes.
DEADBEEF (hex) = 11011110101011011011111011101111 (binary) = 3,735,928,559 (decimal)
The decimal form gives no indication of the byte boundaries. The hex form makes them obvious: DE AD BE EF — four bytes.
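This mapping is easy to check in Python (a quick sketch using the article's example value):

```python
n = 0xDEADBEEF

# One hex digit per 4 bits: the 8-digit hex form and the 32-digit binary form line up.
print(f"{n:08X}")   # DEADBEEF
print(f"{n:032b}")  # 11011110101011011011111011101111
print(n)            # 3735928559

# Byte boundaries fall every 2 hex digits: DE AD BE EF.
print(n.to_bytes(4, "big").hex(" ").upper())
```

`bytes.hex()` accepts a separator argument since Python 3.8, which makes the byte grouping explicit.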
The hexadecimal digit set
Base-16 requires 16 symbols for its digits. After 0–9, the letters A–F represent values 10–15:
| Hex | Decimal | Binary |
|---|---|---|
| 0 | 0 | 0000 |
| 1 | 1 | 0001 |
| 2 | 2 | 0010 |
| 3 | 3 | 0011 |
| 4 | 4 | 0100 |
| 5 | 5 | 0101 |
| 6 | 6 | 0110 |
| 7 | 7 | 0111 |
| 8 | 8 | 1000 |
| 9 | 9 | 1001 |
| A | 10 | 1010 |
| B | 11 | 1011 |
| C | 12 | 1100 |
| D | 13 | 1101 |
| E | 14 | 1110 |
| F | 15 | 1111 |
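The whole table above can be regenerated in a couple of lines of Python, which is also a handy sanity check when working by hand:

```python
# Each hex digit maps to exactly 4 bits (one nibble).
for d in "0123456789ABCDEF":
    print(d, int(d, 16), format(int(d, 16), "04b"))
```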
Hex is case-insensitive: 0xFF, 0xff, and 0xFf all denote the same value (and the 0x prefix itself may be written 0X). Convention varies by context:
- CSS colors: lowercase (#ff5733)
- C/C++ literals: often lowercase (0xdeadbeef)
- Assembly and debugging output: often uppercase (0xDEADBEEF)
How to read hex values
2-digit hex (one byte)
Each hex digit maps to 4 bits. A 2-digit hex number represents one byte (0–255):
FF = 1111 1111 = 255
7F = 0111 1111 = 127
80 = 1000 0000 = 128
00 = 0000 0000 = 0
The boundary between 7F and 80 is significant: in signed 8-bit integers, 0x7F (127) is the maximum positive value and 0x80 (-128) is the most negative value.
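The signed/unsigned split at 0x80 can be demonstrated in Python by decoding the same byte both ways (a sketch using `int.from_bytes` with `signed=True` for the two's-complement reading):

```python
# Interpreting one byte as unsigned vs signed (two's complement).
for b in (0x7F, 0x80, 0xFF):
    unsigned = b
    signed = int.from_bytes(bytes([b]), "big", signed=True)
    print(f"0x{b:02X}: unsigned={unsigned}, signed={signed}")
# 0x7F: unsigned=127, signed=127
# 0x80: unsigned=128, signed=-128
# 0xFF: unsigned=255, signed=-1
```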
4-digit hex (two bytes)
FFFF = 65,535 (max unsigned 16-bit)
7FFF = 32,767 (max signed 16-bit)
0001 = 1
8-digit hex (four bytes / 32-bit)
FFFFFFFF = 4,294,967,295 (max unsigned 32-bit / max IPv4 address)
7FFFFFFF = 2,147,483,647 (max signed 32-bit)
00000000 = 0
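These 32-bit boundary values can be verified directly, since Python hex literals are exact integers:

```python
# 32-bit boundary values from the lists above.
assert 0xFFFF == 65_535              # max unsigned 16-bit
assert 0x7FFF == 32_767              # max signed 16-bit
assert 0xFFFFFFFF == 4_294_967_295   # max unsigned 32-bit
assert 0x7FFFFFFF == 2_147_483_647   # max signed 32-bit

# 0x80000000 read as a signed 32-bit value is the most negative number.
assert int.from_bytes(bytes.fromhex("80000000"), "big", signed=True) == -2_147_483_648
```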
Converting hex to decimal by hand
Position values in hex are powers of 16:
Hex: A B C
Power: 16² 16¹ 16⁰
Value: 256 16 1
For 0xABC:
A = 10
B = 11
C = 12
10 × 256 + 11 × 16 + 12 × 1
= 2560 + 176 + 12
= 2748
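The hand method above translates directly into a loop; `hex_to_decimal` is a hypothetical helper name for this sketch, and Python's built-in `int(s, 16)` does the same job:

```python
def hex_to_decimal(s: str) -> int:
    """Convert a hex string to decimal digit by digit, as in the worked example."""
    total = 0
    for ch in s:
        total = total * 16 + int(ch, 16)  # shift left one hex place, add the digit
    return total

print(hex_to_decimal("ABC"))  # 2748
print(int("ABC", 16))         # 2748 (built-in equivalent)
```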
Where hex appears in programming
Memory addresses
Debuggers and disassemblers display memory addresses in hex:
0x00401000 ; Code segment start (common in 32-bit Windows PE files)
0x7FFD0000 ; Typical user-space DLL load address
0xC0000005 ; Windows error code: Access Violation
The hex format makes it clear how much of the address space is involved and which high and low bytes are set.
Error codes and status codes
Windows error codes, both HRESULTs and kernel NTSTATUS values, are 32-bit hex numbers with structured fields:
0x80070002 = "The system cannot find the file specified"
0x80004005 = "Unspecified error"
0xC0000005 = "Access violation"
The leading hex digit encodes severity: 8 marks an HRESULT failure (severity bit 31 set), C marks an NTSTATUS error (severity bits 30 and 31 both set), and 0 marks success.
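Because the severity lives in the top bits, a couple of shifts extract it; `top_nibble` is a hypothetical helper name for this sketch:

```python
def top_nibble(code: int) -> int:
    """Return the leading hex digit of a 32-bit status code."""
    return (code >> 28) & 0xF

for code in (0x80070002, 0x80004005, 0xC0000005):
    failed = bool(code & 0x80000000)  # high bit set => failure
    print(f"0x{code:08X}: leading digit={top_nibble(code):X}, failure={failed}")
```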
Color codes in CSS
CSS hex colors are 3 bytes of RGB (explained in the color picker article):
#RRGGBB
#FF5733 = R:255, G:87, B:51 = red-orange
#3498DB = R:52, G:152, B:219 = medium blue
Optionally with alpha (#RRGGBBAA):
#FF573380 = same color at ~50% opacity (0x80 = 128/255 ≈ 50%)
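Splitting a hex color into channels is just slicing two digits per byte; `parse_hex_color` is a hypothetical helper name for this sketch:

```python
def parse_hex_color(s: str):
    """Parse #RRGGBB or #RRGGBBAA into (r, g, b, a) values 0-255."""
    s = s.lstrip("#")
    r, g, b = int(s[0:2], 16), int(s[2:4], 16), int(s[4:6], 16)
    a = int(s[6:8], 16) if len(s) == 8 else 255  # default: fully opaque
    return r, g, b, a

print(parse_hex_color("#FF5733"))    # (255, 87, 51, 255)
print(parse_hex_color("#FF573380"))  # (255, 87, 51, 128)
```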
Unicode code points
Unicode code points are expressed in hex:
U+0041 = 'A' (LATIN CAPITAL LETTER A)
U+00E9 = 'é' (LATIN SMALL LETTER E WITH ACUTE)
U+4E2D = '中' (CJK UNIFIED IDEOGRAPH)
U+1F525 = '🔥' (FIRE)
U+1F4A9 = '💩' (PILE OF POO)
The hex code point is what you need when:
- Inserting a character from its code point: \u{1F525} in JS, \U0001F525 in Python
- Looking up a character in Unicode charts
- Debugging encoding issues
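In Python, `ord()` and `chr()` move between characters and their hex code points, and the `\u`/`\U` escape forms embed them in string literals:

```python
# Character -> code point (as hex), and back.
print(hex(ord("A")))       # 0x41
print(f"{ord('é'):04X}")   # 00E9
print(chr(0x1F525))        # the fire emoji

# \uXXXX covers code points up to FFFF; \UXXXXXXXX covers the rest.
assert "\u0041" == "A"
assert "\U0001F525" == chr(0x1F525)
```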
Network addresses
MAC addresses are 6 bytes in hex: AA:BB:CC:DD:EE:FF
IPv6 addresses are 16 bytes (128 bits) in hex groups: 2001:0db8:85a3:0000:0000:8a2e:0370:7334
IPv4 in hex: 192.168.1.1 = 0xC0A80101
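The IPv4-to-hex mapping can be checked with the standard-library `ipaddress` module, and MAC-style formatting is a one-liner with `bytes.hex()`:

```python
import ipaddress

# IPv4 address as a 32-bit integer, formatted in hex.
addr = ipaddress.IPv4Address("192.168.1.1")
print(f"0x{int(addr):08X}")  # 0xC0A80101

# MAC-style formatting: six bytes joined with colons.
mac = bytes([0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF])
print(mac.hex(":").upper())  # AA:BB:CC:DD:EE:FF
```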
Hex in programming languages
Hex literals
// C / C++
int n = 0xFF; // 255
int addr = 0xDEADBEEF; // hex literal
unsigned char byte = 0x80; // 128
# Python
n = 0xFF # 255
n = 0xff # Same (case insensitive)
hex(255) # '0xff'
int('ff', 16) # 255
// JavaScript
const n = 0xFF; // 255
const n2 = 0x1F525; // Fire emoji code point
n.toString(16); // 'ff'
parseInt('ff', 16); // 255
// Go
n := 0xFF // 255
fmt.Sprintf("%x", n) // "ff"
fmt.Sprintf("%X", n) // "FF"
fmt.Sprintf("%#x", n) // "0xff"
Hex formatting
| Language | Hex output | Uppercase hex |
|---|---|---|
| Python | format(n, 'x') | format(n, 'X') |
| Python (padded, with prefix) | f'{n:#010x}' | f'{n:#010X}' |
| JavaScript | n.toString(16) | n.toString(16).toUpperCase() |
| C | printf("%x", n) | printf("%X", n) |
| Java | Integer.toHexString(n) | .toUpperCase() |
| Go | fmt.Sprintf("%x", n) | fmt.Sprintf("%X", n) |
| Rust | format!("{:x}", n) | format!("{:X}", n) |
Reading hex dumps
When debugging binary data, tools like xxd, hexdump, or od display raw bytes as hex:
$ echo -n "Hello" | xxd
00000000: 4865 6c6c 6f Hello
Left column: file offset (in hex). Middle: hex bytes (grouped in 2-byte pairs). Right: ASCII representation (dots for non-printable characters).
Reading a network packet or binary file:
00 00 00 0c ← 4-byte big-endian integer: 12
48 65 6c 6c ← ASCII: "Hell"
6f 20 57 6f ← ASCII: "o Wo"
72 6c 64 ← ASCII: "rld"
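An xxd-style dump like the ones above can be reproduced in a few lines of Python; `hexdump` is a hypothetical helper name for this minimal sketch (real tools add more options and tighter column alignment):

```python
def hexdump(data: bytes, width: int = 16) -> str:
    """Minimal xxd-style dump: hex offset, hex bytes in 2-byte groups, ASCII column."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = " ".join(chunk[i:i + 2].hex() for i in range(0, len(chunk), 2))
        # Printable ASCII as-is; everything else as a dot.
        text = "".join(c if 32 <= ord(c) < 127 else "." for c in chunk.decode("latin-1"))
        lines.append(f"{off:08x}: {hexpart:<39} {text}")
    return "\n".join(lines)

print(hexdump(b"Hello"))
```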
The Number Base Converter
The Number Base Converter converts between binary, decimal, hex, and octal. Enter any value in any base and see all four representations simultaneously. Useful for:
- Converting hex error codes to decimal for searching
- Checking color channel values (hex to decimal)
- Verifying bit patterns (hex to binary)
- Computing IP address ranges (decimal to hex)
Related tools
- Number Base Converter — convert between all number bases
- Color Picker — hex color codes to RGB/HSL
- Hash Generator — outputs hash values in hex
Related posts
- Binary Arithmetic — Addition, Subtraction, and Two's Complement — Learn how computers perform binary arithmetic: binary addition with carry, two's…
- Binary to Decimal — Convert Binary Numbers the Right Way — Binary to decimal conversion is foundational to understanding how computers stor…
- Binary to Text: How Binary Numbers Represent Characters — Binary to text conversion isn't magic — it's a lookup table. ASCII, Unicode, UTF…
- Decimal to Binary — How to Convert Numbers Between Bases — Decimal to binary, binary to decimal, hex to binary — number base conversion exp…
Related tool
Convert between binary, octal, decimal, hexadecimal, and text (UTF-8). Handles arbitrary lengths. Per-byte and per-character views.
Written by Mian Ali Khalid. Part of the Encoding & Crypto pillar.