Question

I wrote a startup pipeline in which definitions are passed via environment variables and used, together with (unit-tested) methods, to create a suite of publications (for subscriptions) through some lightweight factories and generators (also unit tested).

The startup pipeline looks like this (a rough sketch follows the list):

  1. Load the publication definitions from environment variables and assert that the definitions comply with a given schema.
  2. Create a new publication for each publication definition using the given method.
  3. Rate-limit each publication with the values defined in the definitions.
  4. Assert that a publication exists for each publication in the definitions; otherwise throw.
  5. Assert that a rate limit with the values from the definitions has been registered for each of these publications; otherwise throw.
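To make the steps concrete, here is a minimal sketch of what I mean, assuming Node/TypeScript; all names (`PUBLICATION_DEFINITIONS`, the in-memory registries, `startup`) are placeholders for the actual unit-tested methods and factories, not the real implementation.

```typescript
// Rough sketch only: the registries and helpers stand in for the real
// (unit-tested) factories, generators and rate limiter.
import { strict as assert } from "assert";

interface PublicationDefinition {
  name: string;
  rateLimit: { count: number; intervalMs: number };
}

// Hypothetical in-memory registries standing in for the real infrastructure.
const publications = new Map<string, PublicationDefinition>();
const rateLimits = new Map<string, { count: number; intervalMs: number }>();

// 1. Load the definitions from an environment variable and check the schema.
function loadDefinitions(env: NodeJS.ProcessEnv): PublicationDefinition[] {
  const parsed = JSON.parse(env.PUBLICATION_DEFINITIONS ?? "[]");
  assert(Array.isArray(parsed), "PUBLICATION_DEFINITIONS must be a JSON array");
  for (const def of parsed) {
    assert(typeof def.name === "string", "each definition needs a name");
    assert(def.rateLimit?.count > 0 && def.rateLimit?.intervalMs > 0,
      "each definition needs positive rate limit values");
  }
  return parsed;
}

export function startup(env: NodeJS.ProcessEnv = process.env): void {
  const definitions = loadDefinitions(env);

  for (const def of definitions) {
    publications.set(def.name, def);         // 2. create the publication
    rateLimits.set(def.name, def.rateLimit); // 3. register its rate limit
  }

  // 4. + 5. Verify the results; any failed assert prevents the start.
  for (const def of definitions) {
    assert(publications.has(def.name), `publication ${def.name} was not created`);
    const limit = rateLimits.get(def.name);
    assert(limit !== undefined
      && limit.count === def.rateLimit.count
      && limit.intervalMs === def.rateLimit.intervalMs,
      `rate limit for ${def.name} does not match its definition`);
  }
}
```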

The application will not start until all steps have been completed without errors.

Subscriber permissions, as well as exceptions in the methods, factories, and generators, are covered by the unit tests.

From a DevOps point of view, any error would be detected immediately after tests and build, because the build is installed on a private staging environment and then attempts to start.

I currently see no benefit in writing the same checks again as a set of integration tests, so I would like to know: is this procedure sufficient to omit the integration tests?


Solution

There would indeed be no value in writing integration tests that duplicate the checks you already have as assertions in your code.

On the other hand, I do see value in writing test cases that verify that your assertions work. These test cases would try to start the application and verify that it fails to start when the preconditions the asserts check are not met, and that it starts successfully when a valid environment is presented. The value of these test cases is that you are alerted if an important assert is accidentally removed or disabled.
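As a sketch of what such a test could look like, assuming a Jest-style runner and a `startup(env)` entry point along the lines of the pipeline sketched in the question (the variable name and definition shape are assumptions):

```typescript
// Sketch: verify the startup assertions actually guard the start.
// Assumes a startup(env) entry point and the PUBLICATION_DEFINITIONS
// variable from the question's pipeline; adapt the names to the real code.
import { startup } from "./startup";

const validEnv = {
  PUBLICATION_DEFINITIONS: JSON.stringify([
    { name: "news", rateLimit: { count: 10, intervalMs: 1000 } },
  ]),
};

test("starts with a valid environment", () => {
  expect(() => startup(validEnv)).not.toThrow();
});

test("refuses to start when a definition violates the schema", () => {
  const brokenEnv = {
    // rateLimit deliberately missing
    PUBLICATION_DEFINITIONS: JSON.stringify([{ name: "news" }]),
  };
  expect(() => startup(brokenEnv)).toThrow();
});
```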

Other tips

This depends on what kind of application you're developing.

If you're developing an off-the-shelf application, or applications deployed in environments over which you have little control, you should treat misconfiguration issues as errors. Treat configuration as program input and validate it just like any other program input, with real validation code, and write tests for those validators rather than merely catching problems with assertions.
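For example, a validator along these lines (the names and fields are illustrative, not from the question) reports every problem instead of aborting at the first failed assert, and is itself trivially unit-testable:

```typescript
// Illustrative: configuration treated as input and validated like any other input.
interface PublicationDefinition {
  name: string;
  rateLimit: { count: number; intervalMs: number };
}

// Returns a list of human-readable problems; an empty list means "valid".
export function validateDefinition(candidate: unknown): string[] {
  const errors: string[] = [];
  const def = candidate as Partial<PublicationDefinition>;

  if (typeof def?.name !== "string" || def.name.length === 0) {
    errors.push("name must be a non-empty string");
  }
  const count = def?.rateLimit?.count;
  if (typeof count !== "number" || count <= 0) {
    errors.push("rateLimit.count must be a positive number");
  }
  const intervalMs = def?.rateLimit?.intervalMs;
  if (typeof intervalMs !== "number" || intervalMs <= 0) {
    errors.push("rateLimit.intervalMs must be a positive number");
  }
  return errors;
}
```

A misconfigured environment then produces error messages an operator can act on, and the validator's behavior can be pinned down with ordinary unit tests.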

Otherwise, if you're developing a bespoke application where you can count the independent production environments on one hand and they're all run by your own company, then you may be able to get by with treating that configuration as part of the application, in which case there's little value in actually writing validation logic.

Assertions should be thought of as executable comments, and the assertion expression is a test that ensures the comment doesn't go out of date without being noticed.
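A toy illustration of that idea (names made up):

```typescript
import { strict as assert } from "assert";

// The assert below is an "executable comment": it documents the invariant
// "rate limit values are always positive here" and fails loudly if that
// comment ever stops being true.
function registerRateLimit(name: string, count: number, intervalMs: number): void {
  assert(count > 0 && intervalMs > 0, `invalid rate limit for ${name}`);
  // ... actual registration would go here
}
```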
