Of your two scenarios, the latter is a good candidate for being implemented with a bus. The rule is: the longer and more complex the processing, the higher the probability that it won't scale when handled synchronously. Sometimes it's not even a matter of the number of concurrent requests but of the amount of memory each request consumes.
Suppose your server has 8 GB of memory and you have 10 concurrent users, each taking 50 MB of RAM. That's 500 MB, and your server handles it easily. But scale to a few hundred users and the processing time no longer grows linearly: once the combined working set exceeds physical memory, requests start hitting swap (virtual memory), which is a lot slower than physical RAM.
And this is where the bus comes into play. A bus lets you throttle concurrent requests by queuing them. Your subscribers take requests and handle them one by one, and because the number of subscribers is fixed, you keep control over resource usage.
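A minimal in-process sketch of that idea (plain Python threads and a queue standing in for a real message bus; names are illustrative): requests are enqueued, and a fixed pool of subscriber threads drains the queue. No matter how many requests arrive, at most `NUM_SUBSCRIBERS` are ever in flight, so peak resource usage stays bounded.

```python
import queue
import threading

NUM_SUBSCRIBERS = 2           # fixed pool size = upper bound on concurrency
requests = queue.Queue()      # the "bus": holds requests until a subscriber is free
results = []
results_lock = threading.Lock()

def subscriber():
    while True:
        job = requests.get()
        if job is None:       # sentinel: shut this worker down
            requests.task_done()
            break
        # here would be the expensive, memory-hungry processing step
        with results_lock:
            results.append(f"processed {job}")
        requests.task_done()

workers = [threading.Thread(target=subscriber) for _ in range(NUM_SUBSCRIBERS)]
for w in workers:
    w.start()

# 100 "concurrent" requests arrive, but only 2 are processed at a time
for i in range(100):
    requests.put(i)

requests.join()               # wait until every request has been handled
for _ in workers:
    requests.put(None)        # one sentinel per worker to stop them
for w in workers:
    w.join()

print(len(results))           # → 100
```

A real bus (RabbitMQ, Azure Service Bus, etc.) gives you the same property across machines: the queue absorbs the burst, and the fixed subscriber count caps how much work runs concurrently.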
Sending emails — what else? Well, for example, we queue all requests that involve reporting / document generation. We observed that certain documents are generated in short, specific time spans (for example, accounting reports at the end of each month), and because a lot of data is processed, we used to suffer a complete paralysis of our servers.
With a queue, users just have to wait a little longer for their documents, but the responsiveness of the server farm stays under control.
Answering your second question: because of the asynchronous, detached nature of processes implemented with message buses, you usually make the UI actively ask whether the processing is done. It is not the server that pushes the processing status to the UI; rather, the UI asks and asks and asks until it eventually learns that the processing is complete. This scales well, whereas maintaining a two-way connection to push the notification back to the client can be expensive with a large number of users.