Why this is O(n log n)
Why O(n log k). Each heappush / heappop costs O(log k), because the heap never grows past k elements. Repeating that n times gives O(n log k); and since k is implicitly bounded by n, the worst case is O(n log n).
Common confusable. If k were a *constant*, this would be O(n log k) = O(n). The answer depends on whether k scales with n or is fixed — here, no bound on k is stated, so we use the general case.
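The bounded-heap pattern described above can be sketched concretely. This is a minimal illustration, assuming the snippet selects the k largest items; the helper name `top_k` and the sample input are illustrative, not from the puzzle:

```python
import heapq

def top_k(nums, k):
    """Keep a min-heap of at most k elements: each push/pop is O(log k)."""
    heap = []
    for x in nums:                      # n iterations
        heapq.heappush(heap, x)         # O(log k)
        if len(heap) > k:
            heapq.heappop(heap)         # O(log k): evict the smallest
    return sorted(heap, reverse=True)   # final sort of k items: O(k log k)

print(top_k([5, 1, 9, 3, 7, 8, 2], 3))  # → [9, 8, 7]
```

If k is a small constant (say, always 3), each heap operation is O(1) in practice and the loop is effectively linear; if k can grow with n, the general O(n log k) bound applies.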
How to recognize linearithmic complexity in the wild
O(n log n) is the sweet spot of comparison sorts. Merge sort, heapsort, and the average case of quicksort all land here. If you see partitioning, divide-and-conquer, or 'sort then process,' suspect linearithmic.
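A minimal sketch of the 'sort then process' shape (the function name and input are illustrative): one O(n log n) sort followed by an O(n) scan, so the sort dominates.

```python
def closest_pair_gap(nums):
    """Smallest difference between any two values: sort, then scan neighbours."""
    s = sorted(nums)                              # O(n log n) comparison sort
    return min(b - a for a, b in zip(s, s[1:]))   # O(n) adjacent-pair scan

print(closest_pair_gap([8, 1, 4, 12, 3]))  # → 1 (the gap between 3 and 4)
```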
The eight rungs of the ladder
Big-O classifies the asymptotic growth of a function, not the wall-clock time. The ladder Bugdle uses has eight rungs ordered from cheapest to most ruinous: O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ), O(n!). The puzzle's answer sits at rung 4 of 8. The four-guess budget plus higher/lower hints means the puzzle is always solvable: ⌈log₂(8)⌉ = 3 well-placed guesses narrow the eight rungs to at most one remaining candidate, and the fourth guess names it; most answers fall in fewer, leaving slack for misreads of the snippet.
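The guess arithmetic can be checked by simulation. Here `LADDER` mirrors the eight rungs and `guesses_needed` (a hypothetical helper, not part of the game) plays a standard binary search against the higher/lower feedback:

```python
# The eight rungs, cheapest to most ruinous.
LADDER = ["O(1)", "O(log n)", "O(n)", "O(n log n)",
          "O(n^2)", "O(n^3)", "O(2^n)", "O(n!)"]

def guesses_needed(answer_index):
    """Binary-search the ladder using higher/lower feedback and count
    how many guesses it takes to name the answer rung."""
    lo, hi, guesses = 0, len(LADDER) - 1, 0
    while True:
        mid = (lo + hi) // 2
        guesses += 1
        if mid == answer_index:
            return guesses
        if mid < answer_index:
            lo = mid + 1
        else:
            hi = mid - 1

worst = max(guesses_needed(i) for i in range(len(LADDER)))
print(worst)  # → 4: three guesses leave at most one candidate; the fourth names it
```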
Common confusables
Nested loops aren't automatically quadratic — the inner loop has to range over n, not a constant. A loop that always runs ten iterations is still O(1) work per outer iteration. Similarly, a recursive function isn't automatically exponential just because it calls itself twice — if the recursion has overlapping subproblems and you memoise, you collapse back to polynomial. Always ask: what does n really mean here, and how many distinct subproblems are there?
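The memoisation point can be made concrete with the classic two-call recursion. Naively it spawns an exponential call tree, but there are only n distinct subproblems, so caching (here via Python's `functools.lru_cache`) collapses it to linear work:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Two recursive calls per frame, but only n distinct subproblems:
    O(n) with memoisation, roughly O(2^n) without it."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(40))  # → 102334155, instantly; the uncached version makes ~10^8 calls
```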
External reference: Big O notation — Wikipedia.