Question

I am trying to test a component in a new online shopping system built using enterprise integration patterns (EIP). The component observes customers' shopping behaviour and makes suggestions along the way.

The EIP style makes such a component easy to write, but I am having extreme difficulty setting up test fixtures for it. In this system, much of the data is logically connected, so related pieces must be varied together. This translates into tests with hundreds of lines of setup, which is not maintainable.

I am now evaluating four solutions:

1. Create our own builders for all messages and response objects:

Purpose:

  • Insulates the tests from some changes.
  • Allows better fixture code reuse (defaults can be set in the builders).

Pros/cons:

Each test will still require a significant amount of setup code, but the approach is largely future-proof. Data consistency is entirely up to the programmer.
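To make the trade-off concrete, here is a minimal sketch of what such a test-data builder could look like. It assumes Java, and the type and field names (OrderMessage, the SKU defaults) are purely illustrative; the real message classes would take their place.

```java
import java.time.Instant;
import java.util.List;

// Illustrative message type; the system's real message classes would stand in here.
record OrderMessage(String customerId, List<String> lineItems, Instant placedAt) {}

// Test-only builder with safe defaults, so each test overrides only the fields it cares about.
final class OrderMessageBuilder {
    private String customerId = "customer-1";
    private List<String> lineItems = List.of("sku-1");
    private Instant placedAt = Instant.parse("2024-01-01T00:00:00Z");

    OrderMessageBuilder customerId(String id)     { this.customerId = id; return this; }
    OrderMessageBuilder lineItems(String... skus) { this.lineItems = List.of(skus); return this; }
    OrderMessageBuilder placedAt(Instant t)       { this.placedAt = t; return this; }

    OrderMessage build() {
        return new OrderMessage(customerId, lineItems, placedAt);
    }
}

// Usage in a test:
// OrderMessage order = new OrderMessageBuilder().customerId("customer-42").build();
```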

2. Connected builders:

Add methods to the builders from solution 1 so they can fill in each other's fields. For example, the order builder might have methods that generate matching search and page-view histories.

Purpose:

  • Further reduces the amount of fixture setup in each test.
  • Improves the consistency of fixtures.

Pros/cons:

The builders become highly complex and may turn into a maintenance burden of their own.
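For comparison, a connected builder might look like the sketch below. The names and formats (Scenario, ScenarioBuilder, the SKU and URL strings) are assumptions made up for this sketch; the point is that the related histories are derived from the order rather than written out by hand.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative shape of a complete fixture: an order plus the browsing history
// that plausibly led to it. All names and formats are made up for this sketch.
record Scenario(List<String> orderedSkus, List<String> searches, List<String> pageViews) {}

final class ScenarioBuilder {
    private final List<String> orderedSkus = new ArrayList<>();

    ScenarioBuilder orders(String... skus) {
        orderedSkus.addAll(List.of(skus));
        return this;
    }

    // The "connected" part: search and page-view histories are derived from the
    // order, so the three data sets stay logically consistent with no extra test code.
    Scenario build() {
        List<String> searches = new ArrayList<>();
        List<String> pageViews = new ArrayList<>();
        for (String sku : orderedSkus) {
            searches.add("search: " + sku);
            pageViews.add("/products/" + sku);
        }
        return new Scenario(List.copyOf(orderedSkus), searches, pageViews);
    }
}

// Usage in a test:
// Scenario s = new ScenarioBuilder().orders("sku-7", "sku-9").build();
```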

3. Emulate interactions with real services

Purpose: similar to solution 2

Pros:

  • The data will be more realistic and consistent.
  • Easier to set up elaborate test data (e.g. page-view history).

Cons:

  • Very complex and heavily dependent on external services.
  • The code used to describe and carry out the emulated actions may be just as long.
  • Slow to run.
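A rough sketch of what "emulating interactions" could look like follows. SearchClient, CatalogueClient and OrderClient are hypothetical client interfaces standing in for whatever the real services expose.

```java
// Hypothetical client interfaces standing in for the real services.
interface SearchClient    { void search(String customerId, String query); }
interface CatalogueClient { void viewProduct(String customerId, String sku); }
interface OrderClient     { void placeOrder(String customerId, String sku); }

// A scenario driver that produces test data by exercising the services the same
// way a real customer would, so the data is consistent by construction.
final class ShoppingSessionDriver {
    private final SearchClient search;
    private final CatalogueClient catalogue;
    private final OrderClient orders;

    ShoppingSessionDriver(SearchClient search, CatalogueClient catalogue, OrderClient orders) {
        this.search = search;
        this.catalogue = catalogue;
        this.orders = orders;
    }

    // One call replaces many lines of hand-written fixture setup, but each step is
    // a real service interaction, which is exactly what makes this approach slow.
    void browseAndBuy(String customerId, String query, String sku) {
        search.search(customerId, query);
        catalogue.viewProduct(customerId, sku);
        orders.placeOrder(customerId, sku);
    }
}
```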

4. Locally (re-)implemented services

Purpose:

  • Solves the external-dependency problem of solution 3 by re-implementing simple versions of the services locally
  • Reduces the amount of test code required (compared to solution 3) by extrapolating actions from the end results (see solution 2)

Pros/cons:

Much more reliable than solution 3, but requires a lot of implementation work.
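As an illustration of solution 4, a locally re-implemented service could be as simple as an in-memory fake behind the same interface as the production client. The interface and names below are assumptions made up for the sketch.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical interface shared by the production client and the local fake.
interface BrowsingHistoryService {
    void recordPageView(String customerId, String url);
    List<String> pageViewsFor(String customerId);
}

// In-memory re-implementation used only in tests; the component under test
// talks to the interface and cannot tell the difference.
final class InMemoryBrowsingHistoryService implements BrowsingHistoryService {
    private final Map<String, List<String>> viewsByCustomer = new HashMap<>();

    @Override
    public void recordPageView(String customerId, String url) {
        viewsByCustomer.computeIfAbsent(customerId, id -> new ArrayList<>()).add(url);
    }

    @Override
    public List<String> pageViewsFor(String customerId) {
        return viewsByCustomer.getOrDefault(customerId, List.of());
    }
}
```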

My question:

Are there any better solutions? If not, which one would you choose? Also, would you choose differently if the approach chosen here were to be applied to all other services in the future?


Solution

From my experience testing services that rely heavily on data, here are some suggestions:

  1. Absolutely use real data, not hand-written test data. As you've already noted, hand-mocking data can be very painful because it changes often and can be complex and large.

  2. Be able to re-generate the real data quickly. It's not too hard to write a tool that takes a one-time dump of data, but ideally, when the data models change, you can easily take another snapshot and use that instead.

  3. Avoid complex test code. You've covered this already: no one will maintain the quality of the tests if the complexity is too high.

The approach I've settled on for the time being, and that I like, is the following:

  1. Make sure all database calls/external dependencies sit behind interfaces. This allows you to mock data. Seems like you have this down.

  2. Create a special implementation of an interface that does two things: a) makes a live database call, b) saves the result of that call to a JSON file (see the sketch after this list).

  3. When you need to create a snapshot of real data, you swap out your production implementation of a given interface for the one from (2). Then you go through a few use cases and JSON files get written to disk.

  4. Your test start-up code simply needs to read the JSON from disk and hydrate the objects.
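Here is a minimal sketch of steps 2 and 4, assuming Java, Jackson for JSON serialisation, and a made-up CustomerRepository interface; your own interfaces and serialisation library would slot in the same way.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;

// Made-up repository interface; the production implementation hits the real database.
interface CustomerRepository {
    CustomerProfile findProfile(String customerId);
}

// Simple data holder so Jackson can (de)serialise it without extra configuration.
class CustomerProfile {
    public String customerId;
    public String segment;
}

// Step 2: a decorator that delegates to the live implementation and writes each
// result to a JSON snapshot on disk.
final class RecordingCustomerRepository implements CustomerRepository {
    private final CustomerRepository live;
    private final ObjectMapper mapper = new ObjectMapper();

    RecordingCustomerRepository(CustomerRepository live) { this.live = live; }

    @Override
    public CustomerProfile findProfile(String customerId) {
        CustomerProfile profile = live.findProfile(customerId);
        try {
            new File("snapshots").mkdirs();
            mapper.writeValue(new File("snapshots/profile-" + customerId + ".json"), profile);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return profile;
    }
}

// Step 4: the test fixture hydrates objects from the snapshot instead of the database.
final class SnapshotCustomerRepository implements CustomerRepository {
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public CustomerProfile findProfile(String customerId) {
        try {
            return mapper.readValue(
                    new File("snapshots/profile-" + customerId + ".json"),
                    CustomerProfile.class);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Because both classes implement the same interface, switching between live recording and snapshot replay is just a wiring change.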

This has worked really well for me as it allows for quick copies of real data and it is very easy to update when the data changes.

There's obviously some work involved, but the code required is not terribly complex.

Licensed under: CC-BY-SA with attribution