I think you've got it as simple as it can be.
The first job gives you a count of posts per user per hour (a minimal sketch follows the list):
- Input: record
- Intermediate: k=user+hour; v=1
- Output: k=user+hour; v=count
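Here's what job 1 could look like. The record layout (tab-separated user id and ISO timestamp) and the class names are my assumptions, not something given in the question; it's just the usual sum-the-ones pattern keyed on a user+hour composite:

```java
// Job 1 sketch (hypothetical record layout: "userId<TAB>ISO-timestamp<TAB>...").
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class PostsPerUserHour {

    public static class HourMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text outKey = new Text();

        @Override
        protected void map(LongWritable offset, Text record, Context ctx)
                throws IOException, InterruptedException {
            String[] fields = record.toString().split("\t");
            String user = fields[0];
            String hour = fields[1].substring(11, 13); // "2013-05-01T14:23:00" -> "14"
            outKey.set(user + "\t" + hour);            // composite key: user+hour
            ctx.write(outKey, ONE);                    // v=1
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable count = new IntWritable();

        @Override
        protected void reduce(Text userHour, Iterable<IntWritable> ones, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable one : ones) {
                sum += one.get();
            }
            count.set(sum);
            ctx.write(userHour, count);                // k=user+hour, v=count
        }
    }
}
```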
A second job discovers each user's most active hour. As @pangea notes, this involves a descending secondary sort. Normally each reduce call is passed the values for a single, unique key. A grouping comparator lets you combine the values of multiple keys into one reduce call: here it can instruct Hadoop to group all composite keys for a given user together, so that all of that user's hourly counts are passed into a single call to the reducer (sketched after the list).
- Input: k=user+hour; v=count
- Intermediate: k=user+count; v=hour+count
- Output: k=user; v=most-active-hour
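A rough sketch of job 2's secondary-sort plumbing, assuming job 1 wrote tab-separated "user, hour, count" lines; all class names here are illustrative. The partitioner and grouping comparator look only at the user part of the composite key, while the sort comparator orders counts descending, so the first value each reduce call sees belongs to the busiest hour:

```java
// Job 2 sketch: descending secondary sort on count, grouped by user.
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;

public class MostActiveHourPerUser {

    // Mapper: k=user+count (composite), v=hour+count
    public static class CompositeMapper
            extends Mapper<LongWritable, Text, Text, Text> {
        private final Text outKey = new Text();
        private final Text outVal = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] f = line.toString().split("\t");  // user, hour, count
            outKey.set(f[0] + "\t" + f[2]);            // user + count
            outVal.set(f[1] + "\t" + f[2]);            // hour + count
            ctx.write(outKey, outVal);
        }
    }

    // Partition on the user part only, so all of a user's keys hit one reducer.
    public static class UserPartitioner extends Partitioner<Text, Text> {
        @Override
        public int getPartition(Text key, Text value, int numPartitions) {
            String user = key.toString().split("\t")[0];
            return (user.hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }

    // Sort comparator: user ascending, then count descending.
    public static class DescendingCountComparator extends WritableComparator {
        protected DescendingCountComparator() { super(Text.class, true); }
        @Override
        public int compare(WritableComparable a, WritableComparable b) {
            String[] ka = a.toString().split("\t");
            String[] kb = b.toString().split("\t");
            int byUser = ka[0].compareTo(kb[0]);
            if (byUser != 0) return byUser;
            return Integer.compare(Integer.parseInt(kb[1]), Integer.parseInt(ka[1]));
        }
    }

    // Grouping comparator: compare the user part only, so one reduce() call
    // sees all of a user's (hour, count) values, highest count first.
    public static class UserGroupingComparator extends WritableComparator {
        protected UserGroupingComparator() { super(Text.class, true); }
        @Override
        public int compare(WritableComparable a, WritableComparable b) {
            String ua = a.toString().split("\t")[0];
            String ub = b.toString().split("\t")[0];
            return ua.compareTo(ub);
        }
    }

    // Reducer: thanks to the sort order, the first value holds the busiest hour.
    public static class TopHourReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text userAndCount, Iterable<Text> hourCounts, Context ctx)
                throws IOException, InterruptedException {
            String user = userAndCount.toString().split("\t")[0];
            Text first = hourCounts.iterator().next();           // hour+count with max count
            String mostActiveHour = first.toString().split("\t")[0];
            ctx.write(new Text(user), new Text(mostActiveHour)); // k=user, v=most-active-hour
        }
    }
}
```

The driver would register these with job.setPartitionerClass(), job.setSortComparatorClass() and job.setGroupingComparatorClass().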
A third job gives you a count of the number of users whose most active hour falls in a given hour (by hour, of course). As @pangea notes, this involves a secondary sort (a mapper sketch follows the list).
- Input: k=user; v=most-active-hour
- Intermediate: k=hour; v=1
- Output: k=hour; v=number-users-most-active-this-hour
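Job 3's mapper is then just a re-key of job 2's output (assuming tab-separated "user, most-active-hour" lines); the reducer can be the same sum-the-ones reducer as in job 1:

```java
// Job 3 mapper sketch: re-key job 2's output by hour and emit a 1 per user.
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class UsersPerHourMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text hour = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split("\t"); // user, most-active-hour
        hour.set(fields[1]);                            // k=hour
        ctx.write(hour, ONE);                           // v=1
    }
}
```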
You can force the use of a single reducer for job 3; that would let you keep state in the reducer instance and sort/report that data in the cleanup() method instead of adding a fourth job, but that's the kind of technique that doesn't scale. In this case it works because you have at most 24 values to sort (sketched below).
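If you do go the single-reducer route, that reducer might look something like this, with job.setNumReduceTasks(1) in the driver; the descending-by-count ordering in cleanup() is just one assumption about how you'd want the final report sorted:

```java
// Single-reducer shortcut: hold the per-hour totals in memory and emit them
// sorted in cleanup(). Workable only because there are at most 24 hours.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SortedHourReportReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    // hour -> number of users whose most active hour it is
    private final Map<String, Integer> totals = new TreeMap<>();

    @Override
    protected void reduce(Text hour, Iterable<IntWritable> ones, Context ctx)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable one : ones) {
            sum += one.get();
        }
        totals.put(hour.toString(), sum);  // keep state instead of writing now
    }

    @Override
    protected void cleanup(Context ctx) throws IOException, InterruptedException {
        // With a single reducer this sees all 24 hours; sort by user count
        // descending and emit the report, avoiding a fourth job.
        List<Map.Entry<String, Integer>> sorted = new ArrayList<>(totals.entrySet());
        sorted.sort(Map.Entry.<String, Integer>comparingByValue().reversed());
        for (Map.Entry<String, Integer> e : sorted) {
            ctx.write(new Text(e.getKey()), new IntWritable(e.getValue()));
        }
    }
}
```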