The main strategies for preventing this are:
- require registration, so you can limit requests per user
- CAPTCHAs at registration and for non-registered users
- rate limiting per IP (see the sketch after this list)
- require JavaScript - a scraper that has to execute JS needs a headless browser instead of plain HTTP requests, which raises the effort considerably
- blocking via robots.txt, plus bot detection (e.g. abnormal request rates, hidden link traps that only crawlers follow)
- data poisoning: add books and links that no human would want, which stall the download for bots that blindly collect everything
- mutation: change your HTML templates frequently, so scrapers that rely on fixed selectors fail to find the desired content
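Here is a minimal sketch of per-IP rate limiting, assuming a Flask app; the window length, request limit, and in-memory store are placeholder choices, not a recommendation:

```python
import time
from collections import defaultdict, deque

from flask import Flask, abort, request

app = Flask(__name__)

WINDOW_SECONDS = 60         # sliding window length (assumption)
MAX_REQUESTS = 60           # allowed requests per window (assumption)
_hits = defaultdict(deque)  # IP -> timestamps of recent requests

@app.before_request
def rate_limit():
    now = time.time()
    window = _hits[request.remote_addr]
    # drop timestamps that have fallen out of the window
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        abort(429)  # Too Many Requests
    window.append(now)

@app.route("/book/<book_id>")
def book(book_id):
    return f"contents of book {book_id}"
```

An in-process dictionary like this only works for a single server process; in practice you would back the counters with something shared (e.g. Redis) or enforce the limit at the reverse proxy instead.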
Note that you can use CAPTCHAs very flexibly.
For example, the first book per IP each day is served without a CAPTCHA, but accessing a second book requires solving one.
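A sketch of that "first book free, then CAPTCHA" policy, again assuming Flask; `verify_captcha()` is a hypothetical stand-in for whatever CAPTCHA provider you integrate:

```python
import datetime
from collections import defaultdict

from flask import Flask, abort, request

app = Flask(__name__)

_books_served = defaultdict(int)  # (IP, date) -> books downloaded today

def verify_captcha(token: str) -> bool:
    """Placeholder: call your CAPTCHA provider's verification API here."""
    return bool(token)  # assumption; the real check goes to the provider

@app.route("/book/<book_id>")
def book(book_id):
    key = (request.remote_addr, datetime.date.today())
    if _books_served[key] >= 1:
        # second and later books today: require a solved CAPTCHA
        if not verify_captcha(request.args.get("captcha_token", "")):
            abort(403, "Solve the CAPTCHA to download more books today.")
    _books_served[key] += 1
    return f"contents of book {book_id}"
```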