What You'll Learn
  • Master Brian Kernighan's algorithm for counting set bits efficiently
  • Learn XOR tricks for finding unique elements and detecting patterns
  • Understand practical patterns like isolating rightmost bit and subset generation
  • Recognize when bit tricks provide real value vs. unnecessary complexity

Bit Tricks and Patterns

You've learned the basic operations. Now comes the fun part: tricks that make you look like a wizard.

These aren't just party tricks—they're patterns that show up in production code, technical interviews, and performance-critical systems. Some are genuinely useful. Some are more educational than practical. All of them teach you to think in bits.

Counting Set Bits (Population Count)

How many 1s are in the binary representation of a number? This comes up more often than you'd think: calculating Hamming distance, measuring how sparse a bitmask is, analyzing flag sets.

The naive approach checks every bit:

def count_bits_naive(n):
    # Assumes n >= 0: a negative Python int shifts toward -1 and never reaches 0
    count = 0
    while n:
        if n & 1:
            count += 1
        n >>= 1
    return count

This takes O(log n) time—you check every bit position.

But there's a better way, called Brian Kernighan's algorithm:

def count_bits(n):
    # Assumes n >= 0
    count = 0
    while n:
        n &= (n - 1)  # Clear the rightmost set bit
        count += 1
    return count

This takes O(k) time where k is the number of set bits. For sparse numbers (few 1s), this is dramatically faster.

Why does n & (n-1) clear the rightmost set bit? Let's trace through an example:

n = 12 (binary: 1100)
n - 1 = 11 (binary: 1011)

12: 1100
11: 1011
&:  1000  (cleared rightmost 1)

n = 8 (binary: 1000)
n - 1 = 7 (binary: 0111)

8:  1000
7:  0111
&:  0000  (cleared rightmost 1)

When you subtract 1, the rightmost 1 flips to 0 and every 0 to its right flips to 1; all higher bits are untouched. ANDing with the original number therefore clears exactly that rightmost 1.

Each iteration removes exactly one set bit. So for a number with 3 set bits, you only loop 3 times, not 32 times.

Real use case: Calculating the number of differences between two bitmasks (XOR them, then count bits).
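That use case can be sketched directly on top of the counting function above (hamming_distance is a name chosen here for illustration):

```python
def count_bits(n):
    # Brian Kernighan's algorithm: each iteration clears one set bit
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count

def hamming_distance(a, b):
    # Bits that differ between a and b are exactly the set bits of a ^ b
    return count_bits(a ^ b)

print(hamming_distance(0b1100, 0b1010))  # 0110 → differs in 2 positions
```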

Finding the Rightmost Set Bit

Sometimes you need to isolate just the rightmost 1 bit. The trick: n & -n.

rightmost = n & -n

Let's see why this works. In two's complement (how negative numbers are stored), -n is computed as ~n + 1. Using 5 bits for clarity:

n = 12            (binary: 01100)
~n = -13          (binary: 10011)
-n = ~n + 1 = -12 (binary: 10100)

12:  01100
-12: 10100
&:   00100  (isolated the rightmost 1)

Negating flips all bits and adds 1. That addition propagates through the zeros on the right, flipping them back, until it hits the rightmost 1, which becomes 0. All bits to the left stay flipped. ANDing the original and negated keeps only the rightmost 1.

Why you'd use this: Removing elements from a bitmask one at a time, or finding the position of the least significant difference.

Clearing the Rightmost Set Bit

We already saw this with n & (n-1), but it's worth emphasizing because it's so useful.

n &= (n - 1)  # Clear rightmost 1

Use case: Iterating through set bits. Each iteration, you clear one bit and process it.

def process_set_bits(n):
    while n:
        low_bit = n & -n                     # Value of the rightmost set bit (e.g. 4)
        position = low_bit.bit_length() - 1  # Its index, counting from 0
        print(f"Bit set at position {position}")
        n &= (n - 1)                         # Clear it and move on

Checking if a Number is a Power of 2

Powers of 2 have exactly one bit set: 1, 2, 4, 8, 16... are 0001, 0010, 0100, 1000, 10000 in binary.

def is_power_of_2(n):
    return n > 0 and (n & (n - 1)) == 0

Why does this work? If n is a power of 2, it has exactly one set bit. Subtracting 1 flips that bit and all bits to the right. ANDing them gives 0.

16:  10000
15:  01111
&:   00000  ✓

12:  01100
11:  01011
&:   01000  ✗ (non-zero)

The n > 0 check is important because (0 & -1) == 0, but 0 isn't a power of 2.

Extension: Checking if a number is a power of 4. It must be a power of 2, AND its single 1 bit must sit at an even position (counting from 0 on the right): 1, 4, 16, 64... have their bit at positions 0, 2, 4, 6..., so we mask with 0x55555555 (binary: ...01010101).

def is_power_of_4(n):
    return n > 0 and (n & (n - 1)) == 0 and (n & 0x55555555) != 0
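A quick sanity check of the mask logic (test values chosen here for illustration; the 0x55555555 mask covers 32 bit positions):

```python
def is_power_of_4(n):
    # Single set bit (power of 2) AND that bit at an even position (0, 2, 4, ...)
    return n > 0 and (n & (n - 1)) == 0 and (n & 0x55555555) != 0

for n, expected in [(1, True), (2, False), (4, True), (8, False),
                    (16, True), (64, True), (0, False)]:
    assert is_power_of_4(n) == expected
```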

XOR Tricks: Finding the Unique Element

Here's a classic problem: you have an array where every element appears twice except one. Find that unique element.

Naive solution: use a hash map (O(n) space). Clever solution: XOR everything together.

def find_unique(arr):
    result = 0
    for num in arr:
        result ^= num
    return result

Why does this work? Remember XOR properties:

  • a ^ a = 0 (anything XORed with itself is 0)
  • a ^ 0 = a (anything XORed with 0 is itself)
  • XOR is commutative and associative

So if you have [2, 3, 5, 2, 3]:

0 ^ 2 ^ 3 ^ 5 ^ 2 ^ 3
= 0 ^ (2 ^ 2) ^ (3 ^ 3) ^ 5
= 0 ^ 0 ^ 0 ^ 5
= 5

The duplicates cancel out, leaving only the unique element. This is O(1) space and O(n) time.

Variation: Two unique numbers. If two elements appear once and the rest appear twice, you can still solve it with XOR (though it's trickier—you XOR everything to get a ^ b, find a bit where they differ, then partition the array based on that bit).
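That partitioning idea can be sketched like this (find_two_unique is a hypothetical name; the text above only describes the approach):

```python
def find_two_unique(arr):
    # XOR everything: duplicates cancel, leaving a ^ b
    xor_ab = 0
    for x in arr:
        xor_ab ^= x
    # Isolate a bit where a and b differ (the rightmost set bit of a ^ b)
    diff = xor_ab & -xor_ab
    a = b = 0
    for x in arr:
        if x & diff:
            a ^= x  # Group with that bit set: duplicates cancel, one unique remains
        else:
            b ^= x  # Group without it: the other unique remains
    return a, b

print(sorted(find_two_unique([1, 2, 3, 2, 1, 5])))  # [3, 5]
```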

Swapping Without a Temporary Variable

This is more of a curiosity than practical code, but it demonstrates XOR's properties:

a = a ^ b
b = a ^ b  # b = (a ^ b) ^ b = a
a = a ^ b  # a = (a ^ b) ^ a = b

After these three lines, a and b are swapped.

Why? Let's trace through with a=5, b=7:

Initially: a=5 (0101), b=7 (0111)

a = a ^ b    → a = 0101 ^ 0111 = 0010 (a now holds 5^7)
b = a ^ b    → b = 0010 ^ 0111 = 0101 (b now holds 5)
a = a ^ b    → a = 0010 ^ 0101 = 0111 (a now holds 7)

Cool, right? But in practice, just use a temporary variable. It's clearer, and modern compilers optimize it anyway.

Checking if Two Numbers Have Opposite Signs

In two's complement, the sign bit is the most significant bit. If two numbers have opposite signs, their MSBs differ. XORing them will have the MSB set, making the result negative.

def opposite_signs(a, b):
    return (a ^ b) < 0

Using 4 bits for the traces:

a = 5    (0101)
b = -3   (1101 in two's complement)
a ^ b =  (1000) → negative

a = 5    (0101)
b = 3    (0011)
a ^ b =  (0110) → positive

Use case: Detecting sign changes in numerical algorithms without expensive comparisons.

Absolute Value Without Branching

Branch misprediction is expensive on modern CPUs. Sometimes branchless code is faster, even if it looks weirder.

def abs_branchless(n):
    mask = n >> 31  # If n is negative, mask = -1 (all 1s), else 0
    return (n + mask) ^ mask

For 32-bit signed integers, n >> 31 gives:

  • 0 if n is positive (MSB is 0)
  • -1 (all 1s) if n is negative (MSB is 1, arithmetic shift fills with 1s)

Python's >> is also an arithmetic shift, so the same idiom works for Python ints in the 32-bit range.

If n is positive:

mask = 0
(n + 0) ^ 0 = n

If n is negative (say, -5):

mask = -1 (all 1s)
(n + mask) ^ mask = (-5 - 1) ^ -1 = -6 ^ -1 = ~(-6) = 5

XORing with -1 flips every bit, so this computes ~(n - 1), which in two's complement equals -n. It's a branch-free negation.

Is this faster than abs(n) or n if n >= 0 else -n? Depends on the compiler and CPU. On modern systems, probably not. But it's a great example of how bit manipulation can replace control flow.
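A quick check of the trick against Python's built-in abs (valid here because Python's >> is an arithmetic shift and these values fit in 32 bits):

```python
def abs_branchless(n):
    mask = n >> 31          # 0 for non-negative n, -1 (all 1s) for negative n
    return (n + mask) ^ mask

for n in [-17, -5, -1, 0, 1, 5, 42]:
    assert abs_branchless(n) == abs(n)
```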

Toggling Case of Letters

ASCII uppercase and lowercase letters differ by exactly one bit (bit 5). 'A' is 0100 0001, 'a' is 0110 0001. XORing with space (0010 0000) toggles that bit.

def toggle_case(c):
    return chr(ord(c) ^ 32)  # 32 = 0b100000

toggle_case('A')  # 'a'
toggle_case('a')  # 'A'

This only works for letters, but it's a neat trick for case-insensitive comparisons or transformations.
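Applied to a whole string, with a guard so only ASCII letters are toggled (a sketch; toggle_case_str is a name chosen here):

```python
def toggle_case_str(s):
    # XOR with 32 flips bit 5, swapping ASCII upper/lower case;
    # leave everything else untouched since the trick only holds for A-Z/a-z
    return "".join(chr(ord(c) ^ 32) if c.isascii() and c.isalpha() else c
                   for c in s)

print(toggle_case_str("Hello, World!"))  # hELLO, wORLD!
```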

Gray Code

Gray code is a binary encoding where consecutive numbers differ in only one bit. This is useful in rotary encoders and error correction.

Converting binary to Gray code:

def binary_to_gray(n):
    return n ^ (n >> 1)

Binary: 3 (011)
Right shift: 1 (001)
XOR: 011 ^ 001 = 010 (Gray code for 3)

Gray code to binary is trickier:

def gray_to_binary(g):
    b = g
    g >>= 1
    while g:        # Keep folding in right-shifted copies until they vanish
        b ^= g
        g >>= 1
    return b

This repeatedly XORs the number with itself right-shifted, accumulating the binary value.
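Two properties worth verifying: consecutive Gray codes differ in exactly one bit, and the two conversions invert each other:

```python
def binary_to_gray(n):
    return n ^ (n >> 1)

def gray_to_binary(g):
    b = g
    g >>= 1
    while g:
        b ^= g
        g >>= 1
    return b

for i in range(64):
    # Round trip recovers the original number
    assert gray_to_binary(binary_to_gray(i)) == i
    # Adjacent Gray codes differ in exactly one bit
    assert bin(binary_to_gray(i) ^ binary_to_gray(i + 1)).count("1") == 1
```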

Generating All Subsets

If you have a set of n elements, there are 2^n subsets (including the empty set). You can represent each subset as a bitmask.

For {a, b, c}, there are 8 subsets:

  • 000 (binary 0): {}
  • 001 (binary 1): {c}
  • 010 (binary 2): {b}
  • 011 (binary 3): {b, c}
  • 100 (binary 4): {a}
  • 101 (binary 5): {a, c}
  • 110 (binary 6): {a, b}
  • 111 (binary 7): {a, b, c}

def generate_subsets(arr):
    n = len(arr)
    subsets = []
    
    for mask in range(1 << n):  # 2^n possibilities
        subset = []
        for i in range(n):
            if mask & (1 << i):  # Check if bit i is set
                subset.append(arr[i])
        subsets.append(subset)
    
    return subsets

This is incredibly useful for brute-force approaches to subset problems.
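As a concrete application, here's a brute-force subset-sum check built on the same mask enumeration (has_subset_with_sum is a name chosen here):

```python
def has_subset_with_sum(arr, target):
    n = len(arr)
    for mask in range(1 << n):  # Every bitmask encodes one candidate subset
        total = sum(arr[i] for i in range(n) if mask & (1 << i))
        if total == target:
            return True
    return False

print(has_subset_with_sum([3, 7, 1], 8))  # True: {7, 1}
print(has_subset_with_sum([3, 7, 1], 6))  # False
```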

Missing Number in a Range

If you have numbers from 1 to n with one missing, find it. You could sum them and subtract from the expected sum, but XOR is slicker:

def find_missing(arr, n):
    xor_all = 0
    xor_arr = 0
    
    for i in range(1, n + 1):
        xor_all ^= i
    
    for num in arr:
        xor_arr ^= num
    
    return xor_all ^ xor_arr

XOR of all expected numbers, XOR of all present numbers, XOR them together. The duplicates cancel, leaving only the missing one.
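The two loops can also be fused into a single pass, XOR-ing each index with each value (a sketch under the same 1-to-n assumption; find_missing_one_pass is a name chosen here):

```python
def find_missing_one_pass(arr, n):
    result = 0
    for i, num in enumerate(arr, start=1):  # i runs over 1..n-1
        result ^= i ^ num
    return result ^ n  # Fold in the one index the loop never reaches

print(find_missing_one_pass([1, 2, 4, 5], 5))  # 3
```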

When Bit Tricks Are Worth It

Use them when:

  • You're in a coding interview and recognize the pattern
  • You're optimizing inner loops in performance-critical code
  • You're working with bitmasks or flags
  • Memory is severely constrained

Don't use them when:

  • Readability matters more than micro-optimizations
  • Your compiler already optimizes the obvious code
  • The trick is obscure and needs heavy commenting

Modern compilers are smart. if (n % 2 == 0) probably compiles to the same code as if ((n & 1) == 0). But understanding the bit tricks helps you think differently about problems and recognize opportunities for clever solutions.

The Bigger Picture

Bit manipulation tricks are like learning to juggle while riding a unicycle. You probably won't need it for your day job, but it teaches balance and coordination that transfer to other skills.

The real value isn't memorizing these tricks—it's training your brain to see numbers as patterns of bits, to recognize when a problem has a bit-based solution, and to understand how computers actually work at the lowest level.

When you see a problem involving subsets, duplicates, or flags, your bit-manipulation instincts will kick in. That's when these tricks stop being tricks and start being tools.

Two places these skills pay off directly:

  • Interview preparation: bit tricks show up constantly in coding interviews
  • Systems programming: working with hardware registers and flag words

Remember: Bit manipulation is a tool, not always the best solution. Use when it makes code more efficient or elegant, but prioritize readability when performance isn't critical.

© 2026 forEach. All rights reserved.
