Define "necessary". It's not necessary to do this in .NET or Java either; it's convenient.
Rails apps often keep the web layer and related services entirely within a single app.
Testing is similar: there's no need to break up application functionality in order to test code separately. Rails offers unit tests, specs, etc., which can run at any level of the code, plus various mechanisms to mock out arbitrary portions of functionality.
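To illustrate the mocking point: you can stub a collaborator without splitting code into separate layers or libraries. A minimal sketch using `Minitest::Mock` from Ruby's bundled minitest gem (the `PriceChecker` class and its `price_for` gateway method are hypothetical, just for illustration):

```ruby
require "minitest/mock"

# A hypothetical class that depends on some gateway object --
# in a Rails app this might be a model, a service, or an API client.
class PriceChecker
  def initialize(gateway)
    @gateway = gateway
  end

  def discounted(sku)
    @gateway.price_for(sku) * 0.9
  end
end

# Mock out the gateway entirely -- no real persistence or HTTP layer needed.
gateway = Minitest::Mock.new
gateway.expect(:price_for, 100.0, ["sku-1"])

puts PriceChecker.new(gateway).discounted("sku-1") # => 90.0
gateway.verify # raises if price_for wasn't called as expected
```

RSpec's doubles and Rails' own stubbing helpers work the same way; the point is that the seam is created at test time, not by carving the app into libraries up front.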
Persistence layers are generally handled by the ActiveRecord layer, although in practice it's fairly unusual to actually switch persistence layers (I've done so exactly once in thirty years of development, but that's anecdotal, obviously). In addition, some such switches in Ruby are transparent at the code level because of duck typing (e.g., I switched one app from a local DB to a remote service almost transparently by moving from ActiveRecord to ActiveResource, though I didn't try to optimize anything afterwards, and the data model was pretty simple).
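The duck-typing part is worth making concrete: calling code only cares that the model responds to the same messages, not what class backs it. A toy sketch (the `LocalPost`/`RemotePost` classes are hypothetical stand-ins for an ActiveRecord model and an ActiveResource one):

```ruby
# Backed by a "local DB" (here just an in-memory hash).
class LocalPost
  RECORDS = { 1 => "Hello from the local DB" }.freeze

  def self.find(id)
    RECORDS.fetch(id)
  end
end

# Backed by a "remote service" (here just a stub).
class RemotePost
  def self.find(id)
    "Hello from the remote service (id=#{id})"
  end
end

# Caller code only cares that `model` responds to `find` --
# swapping the persistence layer doesn't change this method.
def render_post(model, id)
  model.find(id)
end

puts render_post(LocalPost, 1)
puts render_post(RemotePost, 1)
```

ActiveRecord and ActiveResource share enough of a finder interface that, for a simple data model, the swap really can look like changing which constant you pass around.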
All this said, IMO the Rails community has often overlooked practices from "enterprise" shops because Ruby-the-language makes it very easy to build functionality without over-architecting. The limitations of that approach only become clear once an app reaches a certain size, however.
Recent trends in Ruby and Rails development include things I've done in the enterprise for years (things that are much easier to implement in Ruby than, say, Java). Breaking out functionality into libraries for its own sake, though, isn't particularly useful. Identifying code that should be broken out is, but that happens in any environment, when it's necessary to do so.