Question

I have looked through most of the answers about using an in-memory database for unit tests, but I could not find one that was clear to me.

I am writing some unit tests to cover some of our codebase. The codebase is written in ASP.NET Core and EF Core. I am using xUnit and Moq.

I have read and understood the concept of unit test vs integration tests.

As I understand it, writing unit tests means testing code in isolation from other dependencies; to achieve this, we can mock those dependencies so that we test only the code itself.

However, I found that it is a bit more work setting up mocks of the dependencies I need, especially when those dependencies are repositories.

When I tried using an in-memory database, all I needed to do was set up the in-memory database, seed it, and create a test fixture, and it worked. In subsequent tests, all I have to do is create the dependencies and it works just fine.
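For context, the in-memory setup described above looks roughly like this sketch (the `AppDbContext` and `Product` names are placeholders for the real context and entities in the codebase, not part of the question):

```csharp
using System;
using Microsoft.EntityFrameworkCore;

// Hypothetical entity and context standing in for the real codebase.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Product> Products => Set<Product>();
}

// A fixture that creates and seeds an in-memory database once,
// which tests can then share or clone.
public class InMemoryDbFixture : IDisposable
{
    public AppDbContext Context { get; }

    public InMemoryDbFixture()
    {
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase(databaseName: Guid.NewGuid().ToString())
            .Options;
        Context = new AppDbContext(options);

        // Seed once per fixture.
        Context.Products.Add(new Product { Id = 1, Name = "Sample" });
        Context.SaveChanges();
    }

    public void Dispose() => Context.Dispose();
}
```

This requires the `Microsoft.EntityFrameworkCore.InMemory` package; the unique database name keeps each fixture's data separate.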

To mock the repository, I have to set up a mock and its return values. And then there is the complexity of mocking an async repository, as explained here: How to mock an async repository with Entity Framework Core. For each test, I have to mock whichever repository is needed, which means more work.
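By contrast, the per-test mocking overhead looks roughly like this sketch (the repository interface and entity are assumptions, not taken from the actual codebase):

```csharp
using System.Threading.Tasks;
using Moq;
using Xunit;

// Hypothetical entity and repository interface; the real names differ.
public record Product(int Id, string Name);

public interface IProductRepository
{
    Task<Product?> GetByIdAsync(int id);
}

public class ProductRepositoryMockTests
{
    [Fact]
    public async Task GetByIdAsync_ReturnsStubbedProduct()
    {
        // Every async repository method used by the test must be
        // stubbed explicitly -- this is the repeated setup work
        // the question describes.
        var repoMock = new Mock<IProductRepository>();
        repoMock
            .Setup(r => r.GetByIdAsync(1))
            .ReturnsAsync(new Product(1, "Sample"));

        var product = await repoMock.Object.GetByIdAsync(1);
        Assert.Equal("Sample", product?.Name);
    }
}
```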

With all this in mind, would it be better if I just ditched mocking and used an in-memory database? Also, would this not be seen as integration testing, even though it's an in-memory database?

I created two different test suites, one using the in-memory database and one using mocks. The in-memory database tests understandably took more time, but the difference was usually only about one second.


Solution

IMHO you are asking the wrong question. It does not matter whether what you create is called a unit test by some people or an integration test by others. What matters here is:

  • is this test useful for your case (will it help you avoid certain defects, and will it sufficiently narrow down the area in code where the root cause of a certain defect may be)?

  • is it fast enough, even when you are going to run several tests of this kind? (One second longer does not sound like much, but when the original test required 0.001 seconds and the new one requires 1.001 seconds, running 5000 tests of this kind makes a notable difference.)

  • is it maintainable, ideally more maintainable than alternative approaches? This depends heavily on the tooling and how well an in-memory DB is supported.

If you can answer all of these questions honestly with "yes", then go ahead.

Recommended reading: The Way of Testivus, which tells you, for example, to follow less dogma (a good recommendation not just for testing).

OTHER TIPS

An in-memory database can be useful for both unit tests and integration tests, but it depends on what precisely you are trying to do.

Unit tests check a single component. Ideally that unit is tested in isolation from other components, but that's not strictly necessary – using other already-tested components in a unit test is OK as a matter of convenience. Isolated tests tend to run faster and point closer to the cause of an error, but can be difficult to set up unless the software has been designed for testability, for example by minimizing dependencies between components and connecting components over small, easy-to-mock interfaces.

Integration tests check the interactions between components. However, components that are irrelevant in the context of that test can still be mocked out. But again, that's a tradeoff between speed and error localization versus convenience. Some tests are so difficult to set up that they are not worth setting up!

An in-memory database is useful for both unit tests and integration tests when you don't want to mock out a complete data access layer, or when an ORM requires a real database. Here, an in-memory database is easier to set up, is faster, and can easily provide isolation between tests by creating a new database for each test.
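One way to get that per-test isolation with EF Core's in-memory provider is to give each test its own uniquely named database. A minimal sketch (the `AppDbContext` and `Order` names are hypothetical):

```csharp
using System;
using Microsoft.EntityFrameworkCore;

// Hypothetical entity and context for illustration only.
public class Order
{
    public int Id { get; set; }
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Order> Orders => Set<Order>();
}

public static class TestDb
{
    // A fresh, uniquely named in-memory database per call means
    // no state leaks from one test into the next.
    public static AppDbContext CreateIsolated() =>
        new AppDbContext(new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase(Guid.NewGuid().ToString())
            .Options);
}
```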

However, this won't test the integration between your software and the actual database – the differences can be significant due to SQL dialects etc. Therefore, a test plan should also include tests against the real database software, e.g. in a staging environment.

In practice, keeping a clear distinction between unit tests and integration tests is not overly important. It is more important that you have a good automated test suite, and perhaps tooling and instructions for reproducible manual tests. Personally, I'm mostly writing integration tests because they make it easier to demonstrate the value of the software under test, and because they provide better confidence that the software works as a whole. I've found BDD-style integration tests to be particularly helpful, although they still assume the use of fine-grained TDD-style unit tests.

Licensed under: CC-BY-SA with attribution