Why this is O(log n)

Each iteration halves n. Starting from n, the loop runs at most ⌈log₂(n)⌉ times before n drops to 1 (or fails the even check earlier).

Note. The body of the loop does constant work — a modulo, a comparison, a division. Multiply the iteration count by the body cost: at most log₂ n iterations × O(1) per iteration = O(log n).
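The puzzle's actual snippet isn't reproduced here, but a minimal sketch of a loop with this shape (the name `count_halvings` is hypothetical) might look like:

```python
def count_halvings(n: int) -> int:
    """Hypothetical sketch: halve n while it stays even, counting iterations."""
    steps = 0
    while n > 1 and n % 2 == 0:  # constant work: comparison + modulo
        n //= 2                  # constant work: division
        steps += 1
    return steps
```

For a power of two, the loop runs exactly log₂ n times; for any other n it stops sooner, at the first odd value.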

How to recognize logarithmic complexity in the wild

O(log n) is the complexity of repeated halving. Classic shape: a loop whose counter is divided (or multiplied) by a constant factor on every iteration, so the remaining work shrinks geometrically. Binary search and balanced-tree descent are the textbook examples.
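Binary search is the canonical instance of this shape — each probe discards half of the remaining range, so a sorted array of n items needs at most ⌈log₂(n)⌉ probes:

```python
def binary_search(items: list, target) -> int:
    """Return the index of target in sorted items, or -1 if absent.
    O(log n): each probe halves the remaining search range."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1
```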

The eight rungs of the ladder

Big-O classifies the asymptotic growth of a function, not the wall-clock time. The ladder Bugdle uses has eight rungs ordered from cheapest to most ruinous: O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ), O(n!). The puzzle's answer sits at rung 2 of 8. The four-guess budget plus higher/lower hints means the puzzle is solvable in at most ⌈log₂(8)⌉ = 3 perfectly-read guesses; the fourth attempt absorbs misreads of the snippet.
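The guess-budget arithmetic is itself a binary search over the ladder. A quick sketch of the calculation (the `RUNGS` list merely restates the eight rungs named above):

```python
import math

RUNGS = ["O(1)", "O(log n)", "O(n)", "O(n log n)",
         "O(n^2)", "O(n^3)", "O(2^n)", "O(n!)"]

# With higher/lower feedback, each guess halves the candidate rungs,
# so 8 rungs need at most ceil(log2(8)) = 3 perfectly-read guesses.
max_guesses = math.ceil(math.log2(len(RUNGS)))
```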

Common confusables

Nested loops aren't automatically quadratic — the inner loop has to range over n, not a constant. A loop that always runs ten iterations is still O(1) work per outer iteration. Similarly, a recursive function isn't automatically exponential just because it calls itself twice — if the recursion has overlapping subproblems and you memoise, you collapse back to polynomial. Always ask: what does n really mean here, and how many distinct subproblems are there?
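The memoisation point can be made concrete with Fibonacci, the standard example of a doubly-recursive function that is nonetheless polynomial once subproblems are cached:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Calls itself twice, yet runs in O(n): the cache ensures each of
    the n distinct subproblems is computed only once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the `@lru_cache` decorator the same code is O(2ⁿ); the distinct-subproblem count, not the number of call sites, is what sets the complexity.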

External reference: Big O notation — Wikipedia.