Why this is O(n log n)
Merge sort makes two recursive calls on halves (log n levels of depth) and does an O(n) merge at every level. The recurrence T(n) = 2T(n/2) + O(n) solves to O(n log n) via the master theorem.
Note. arr.slice is O(k) where k is the slice length — but that cost folds into the O(n) work done at each level, not on top of it. The asymptotic bound is unchanged.
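As a minimal sketch of the algorithm the recurrence describes (the names mergeSort and merge are illustrative, not from the puzzle snippet):

```javascript
// Merge sort: two recursive calls on halves, plus an O(n) merge per level.
// The slice calls are O(k), but that folds into the O(n) work at each level.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;            // base case: O(1)
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));  // T(n/2)
  const right = mergeSort(arr.slice(mid));    // T(n/2)
  return merge(left, right);                  // O(n)
}

function merge(left, right) {
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i), right.slice(j));
}
```

Each of the log n levels touches every element once in merge, which is exactly where the n · log n product comes from.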
How to recognize linearithmic complexity in the wild
O(n log n) is the sweet spot of comparison sorts. Merge sort, heapsort, and the average case of quicksort all land here. If you see partitioning, divide-and-conquer, or a 'sort, then process' pattern, suspect linearithmic.
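A common shape of the 'sort, then process' pattern is duplicate detection: sort first, then a single linear scan. (hasDuplicate is a hypothetical example, not code from the puzzle.)

```javascript
// 'Sort, then process': the O(n log n) sort dominates the O(n) scan,
// so the whole function is O(n log n).
function hasDuplicate(arr) {
  const sorted = [...arr].sort((a, b) => a - b); // O(n log n)
  for (let i = 1; i < sorted.length; i++) {      // O(n) scan of neighbours
    if (sorted[i] === sorted[i - 1]) return true;
  }
  return false;
}
```

When you total a pipeline like this, the most expensive stage wins: O(n log n) + O(n) = O(n log n).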
The eight rungs of the ladder
Big-O classifies the asymptotic growth of a function, not the wall-clock time. The ladder Bugdle uses has eight rungs ordered from cheapest to most ruinous: O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ), O(n!). The puzzle's answer sits at rung 4 of 8. The four-guess budget plus higher/lower hints makes the puzzle winnable with perfect play: each wrong guess discards at least half of the remaining rungs (8 → 4 → 2 → 1), so ⌈log₂(8+1)⌉ = 4 guesses always suffice, and a guess informed by a correct reading of the snippet usually lands well before the budget runs out.
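The halving argument above can be simulated directly. This is a sketch of the guessing strategy, not Bugdle's actual implementation:

```javascript
// Each wrong guess at the middle rung, combined with a higher/lower hint,
// discards the guessed rung and one whole side of the ladder, so at most
// floor(remaining / 2) candidates survive: 8 -> 4 -> 2 -> 1.
function remainingAfter(candidates, wrongGuesses) {
  let remaining = candidates;
  for (let i = 0; i < wrongGuesses; i++) {
    remaining = Math.floor(remaining / 2); // worst case after one hint
  }
  return remaining;
}
```

After three well-placed wrong guesses only one rung can remain, which is why a fourth guess is always enough.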
Common confusables
Nested loops aren't automatically quadratic — the inner loop has to range over n, not a constant. A loop that always runs ten iterations is still O(1) work per outer iteration. Similarly, a recursive function isn't automatically exponential just because it calls itself twice — if the recursion has overlapping subproblems and you memoise, you collapse back to polynomial. Always ask: what does n really mean here, and how many distinct subproblems are there?
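Both confusables can be shown in a few lines. These examples (sumOfTens and fib) are illustrative names, assumed for this sketch:

```javascript
// Nested loops with a CONSTANT inner bound: O(n), not O(n^2).
// The inner loop always runs 10 times, which is O(1) per element.
function sumOfTens(arr) {
  let total = 0;
  for (const x of arr) {
    for (let k = 0; k < 10; k++) total += x;
  }
  return total;
}

// Two self-calls, but overlapping subproblems: memoisation means each
// of the n distinct subproblems is computed once, so this is O(n),
// not the O(2^n) of the naive doubly-recursive version.
function fib(n, memo = new Map()) {
  if (n <= 1) return n;
  if (memo.has(n)) return memo.get(n);
  const val = fib(n - 1, memo) + fib(n - 2, memo);
  memo.set(n, val);
  return val;
}
```

In both cases the deciding question is the one above: how many distinct subproblems (or iterations) actually scale with n?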
External reference: Big O notation — Wikipedia.