Number of 1 Bits

Easy

Topics: Bit Manipulation

Given a positive integer n, write a function that returns the number of set bits in its binary representation (also known as the Hamming weight).

Example 1:

Input: n = 11

Output: 3

Explanation:

The input binary string 1011 has a total of three set bits.

Example 2:

Input: n = 128

Output: 1

Explanation:

The input binary string 10000000 has a total of one set bit.

Example 3:

Input: n = 2147483645

Output: 30

Explanation:

The input binary string 1111111111111111111111111111101 has a total of thirty set bits.

Constraints:

  • 1 <= n <= 2^31 - 1
Follow up: If this function is called many times, how would you optimize it?

Solution


Clarifying Questions

When you get asked this question in a real interview, it will often be stated ambiguously (especially at FAANG). Make sure to ask questions like these:

  1. What is the range of possible values for the unsigned integer n?
  2. Should I expect n to ever be zero?
  3. Could you provide a few examples of input values and their corresponding expected outputs?
  4. Is there a specific format required for the return value (e.g., integer, string representation)?
  5. Are there any specific memory constraints I should be aware of, given the potential size of n?

Brute Force Solution

Approach

Imagine the number is a sequence of light switches, either on (1) or off (0). The brute force strategy is to simply check each switch individually to see if it's on.

Here's how the algorithm would work step-by-step:

  1. Look at the very first switch (the rightmost one).
  2. Is it on? If so, count it.
  3. Now, look at the next switch to the left.
  4. Again, is it on? If so, add it to your count.
  5. Keep doing this for every single switch, one at a time, moving from right to left.
  6. Once you've checked every switch, the total number you've counted is your answer.

Code Implementation

def number_of_one_bits_brute_force(number):
    number_of_ones = 0
    
    # Iterate through the bits of the number
    for i in range(32):

        # Check the least significant bit
        if (number & 1):
            # If the bit is 1, increment the counter
            number_of_ones += 1

        # Right shift the number to check the next bit
        # This simulates moving to the next switch
        number = number >> 1

    return number_of_ones
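
For instance, running it on Example 1's input confirms the expected result:

print(number_of_one_bits_brute_force(11))  # prints 3, since 11 is 1011 in binary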

Big O Analysis

Time Complexity

O(1). The approach iterates over each bit of the input number and counts the 1s. Since a 32-bit integer has a fixed number of bits, the loop runs a constant number of times (32) regardless of the input value, so the time complexity is O(1), constant time.

Space Complexity

O(1). The approach walks through the bits (the "light switches") one by one while keeping a running count. It creates no auxiliary data structures such as arrays, lists, or hash maps; the only extra memory is the counter variable, which is independent of the input value. Therefore, the auxiliary space complexity is O(1).

Optimal Solution

Approach

The most efficient way to count the number of 1s in a binary number takes advantage of a bitwise trick: n & (n - 1) clears the rightmost set bit of n. Repeatedly applying this operation and counting how many steps it takes for the number to reach zero gives exactly the number of 1s.

Here's how the algorithm would work step-by-step:

  1. Take the number and perform a bitwise AND operation with the number minus 1.
  2. This operation effectively flips the rightmost 1 to a 0.
  3. Increase the count of 1s by one.
  4. Repeat the bitwise AND operation and incrementing the counter until the number becomes zero.
  5. The final count represents the total number of 1s in the original number.
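
For example, with n = 11 (binary 1011), each application of the trick clears one set bit:

  1. 1011 & 1010 = 1010 (count = 1)
  2. 1010 & 1001 = 1000 (count = 2)
  3. 1000 & 0111 = 0000 (count = 3)

After three steps the number reaches zero, matching the three set bits in 11.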

Code Implementation

def count_number_of_set_bits(number):
    number_of_bits_set = 0

    # Iterate until the number becomes zero
    while number:
        # number & (number - 1) clears the rightmost set bit
        number &= (number - 1)

        number_of_bits_set += 1

    return number_of_bits_set

Big O Analysis

Time Complexity

O(log n). In each iteration, number &= (number - 1) clears the rightmost set bit, so the loop runs exactly once per set bit. A positive integer n has at most floor(log2 n) + 1 bits, and therefore at most O(log n) set bits. The running time is proportional to the number of set bits rather than to the magnitude of n, giving a worst-case time complexity of O(log n).

Space Complexity

O(1). The algorithm uses a single integer counter and operates directly on the input number without creating any data structures that scale with the input. The space required is constant and independent of the input value, so the auxiliary space complexity is O(1).
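
Follow-Up Optimization

The follow-up asks how to optimize when the function is called many times. One common approach (an assumption here; the problem doesn't prescribe a specific technique) is to precompute the number of set bits for every possible byte value once, then answer each call with four table lookups instead of a loop. A minimal sketch:

# Precomputed once: number of set bits for every 8-bit value (0-255)
BITS_IN_BYTE = [bin(value).count("1") for value in range(256)]

def count_set_bits_with_lookup_table(number):
    # Sum the set-bit counts of the four bytes in a 32-bit number
    return (BITS_IN_BYTE[number & 0xFF]
            + BITS_IN_BYTE[(number >> 8) & 0xFF]
            + BITS_IN_BYTE[(number >> 16) & 0xFF]
            + BITS_IN_BYTE[(number >> 24) & 0xFF])

The table costs a small, fixed amount of memory (256 entries) and makes each call a few constant-time lookups. In Python 3.10+, the built-in int.bit_count() method returns the same result directly.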

Edge Cases

  • Input is zero (0): The algorithm should correctly return 0, since the binary representation of 0 has no set bits.
  • Input is a small positive integer (e.g., 1, 2, 3): Verify the algorithm accurately counts set bits for fundamental cases like 1 (0001), 2 (0010), and 3 (0011).
  • Input is a large positive integer (close to the maximum unsigned integer value): Test with a number near the upper limit to check for integer overflow during the bitwise operations.
  • Input is a power of 2 (e.g., 2, 4, 8, 16): A power of 2 has only one set bit, so the algorithm must correctly return 1 in these scenarios.
  • Input with alternating 0s and 1s in its binary representation (e.g., 5, 10): These inputs require the algorithm to iterate through and count all set bits without skipping any.
  • Input that is the maximum unsigned integer value: This tests the loop's termination condition and ensures no errors occur at the boundary of the range.
  • Language-specific: potential integer overflow during calculations (if applicable): Ensure the chosen method doesn't exceed the maximum representable value, which would produce incorrect results.
  • Input with a high density of set bits: This tests performance and accuracy when most bits are set to one.
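
A few quick sanity checks covering several of these cases, using the count_number_of_set_bits function from the optimal solution above (a small verification sketch, not an exhaustive test suite):

# Powers of 2 have exactly one set bit
assert count_number_of_set_bits(1) == 1
assert count_number_of_set_bits(16) == 1

# Alternating bit patterns: 5 is 101, 10 is 1010
assert count_number_of_set_bits(5) == 2
assert count_number_of_set_bits(10) == 2

# Upper boundary of the constraint: 2**31 - 1 is thirty-one 1s
assert count_number_of_set_bits(2 ** 31 - 1) == 31

# High density of set bits (Example 3): 2147483645 has 30 set bits
assert count_number_of_set_bits(2147483645) == 30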