It ultimately depends on your environment, the size of the data, and the number of methods. But there are several reasons to go with the second option and only one to go with the first.
First option: One complex method
Reason to go with the first: The HTTP overhead of multiple requests.
Does the overhead exist? Of course, but is it really that high? HTTP is one of the lightest application-layer protocols; it is designed to have little overhead. Its simplicity and light headers are among the main reasons for its success.
Second option: Multiple autonomous methods
Now, there are several reasons to go with the second option. Even when the data is large, believe me, it is still the better option. Let's discuss some aspects:
- If the data-size is large
  - Breaking the data transfer into smaller pieces is better.
  - HTTP is a best-effort protocol and data failures are very common, especially on the internet - so common that they should be expected. The larger the data block, the greater the risk of having to re-request everything.
- Quantity of methods: Maintainability, Reuse, Componentization, Learnability, Layering...
  - You said it yourself: a generic solution is easier for other components to use. The simpler and more concise a method's responsibilities are, the easier it is to understand and reuse in other methods.
  - It is easier to maintain and to learn: the more independent the methods are, the less one has to know in order to change them (or fix a bug!).
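To make the "smaller pieces" point concrete, here is a minimal Python sketch of how a large payload can be split into byte ranges, so that a failed request costs only one chunk instead of the whole transfer. The function name and chunk size are illustrative, and this assumes the server honors HTTP Range requests (`Accept-Ranges: bytes`):

```python
def chunk_ranges(total_size: int, chunk_size: int) -> list[str]:
    """Build the HTTP Range header values needed to cover a payload
    of total_size bytes in pieces of at most chunk_size bytes.

    If one request fails, only that chunk is re-requested,
    not the entire payload.
    """
    ranges = []
    for start in range(0, total_size, chunk_size):
        # Range headers use inclusive byte offsets.
        end = min(start + chunk_size, total_size) - 1
        ranges.append(f"bytes={start}-{end}")
    return ranges

# A 1 MiB payload fetched in 256 KiB pieces takes four requests:
print(chunk_ranges(1_048_576, 262_144))
```

Each returned value would go into a `Range` request header; the server answers with `206 Partial Content` for each piece.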
Taking REST into consideration here is important, but the reasons to break the components into smaller pieces really come from understanding the HTTP protocol and good software engineering practice.
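The reuse argument above can be sketched in a few lines. This is a hypothetical example (the data store and method names are invented, not from your code): two single-purpose methods, each trivial to test and understand on its own, composed by a third caller instead of one do-everything method.

```python
# Hypothetical in-memory data, standing in for real backend calls.
USERS = {1: {"name": "Ada"}}
ORDERS = {1: [{"id": 101, "total": 25.0}]}

def get_user(user_id: int) -> dict:
    """Single responsibility: fetch one user."""
    return USERS[user_id]

def get_orders(user_id: int) -> list[dict]:
    """Single responsibility: fetch a user's orders."""
    return ORDERS.get(user_id, [])

def user_summary(user_id: int) -> dict:
    """Composed from the small methods above, rather than
    re-implementing their logic inside one monolithic method."""
    return {"user": get_user(user_id), "orders": get_orders(user_id)}

print(user_summary(1))
```

A bug in order fetching is now fixed in one place, and `get_user` stays reusable by any other component that never cares about orders.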