Question

When we are developing software supported by continuous integration (CI), I imagine 3 roles working together:

  • Software developers, adding functionality to the system with merges to the repository.
  • DevOps, maintaining the CI pipeline that supports developers.
  • Testers, working on the "verification" part of CI.

My problem is that I have no experience as a tester, and I don't understand how it is possible to create automated tests that will withstand any change the developers merge in.

For example, in a strongly typed language such as Java, adding a new injectable dependency to a class must break many tests, because the constructor of that class changes.
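
A minimal sketch of what I mean (the class and the new dependency are hypothetical):

    import java.time.Clock;

    // Hypothetical service: the constructor originally took only the repository.
    class ReportService {
        private final ReportRepository repository;
        private final Clock clock; // the newly added injectable dependency

        ReportService(ReportRepository repository, Clock clock) {
            this.repository = repository;
            this.clock = clock;
        }
    }

    interface ReportRepository {}

    // Every test that did `new ReportService(repository)` now fails to compile
    // and has to be edited, even though the behaviour under test is unchanged.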

My point is: for most changes in code there must be changes in the tests, so what's the point of CI here? I get CI when we are just replacing the functionality behind an interface or an abstract class, but development is usually not just replacing functionality; it is constantly adding elements and changing the structure of the code.

Why CI, if changes in code will mostly end up as changes to the tests?


Solution

Continuous integration is a best practice in itself, whose main goal is to ensure that your code builds correctly and passes both unit and integration tests.

CI should happen continuously, regardless of the changes (not sporadically, only when changes happen). This is especially important for projects doing continuous delivery and deployment.

Tests can fail at any time during the CI process. Third-party API contracts could have changed, or the APIs might simply have stopped working. Or, even worse, someone may have changed the RDM. These things happen more often than we think. By continuously forcing our code to pass the tests, we catch unexpected issues before releases or deployments.
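
As an illustration, a contract-style check of that kind could look roughly like this (the endpoint and the fields are hypothetical); it would start failing in CI as soon as the third-party contract changes:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class ThirdPartyContractTest {

        // Hypothetical third-party endpoint; in a real pipeline it would come from configuration.
        private static final String PARTNER_USER_URL = "https://partner.example.com/api/users/42";

        @Test
        void partnerUserPayloadStillContainsTheFieldsWeDependOn() throws Exception {
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(PARTNER_USER_URL)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());

            assertEquals(200, response.statusCode());
            // Crude shape check; a real contract test would parse and validate the JSON.
            assertTrue(response.body().contains("\"id\""));
            assertTrue(response.body().contains("\"email\""));
        }
    }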

Regarding your role as a tester: QA developers focus on validating end-to-end functionality and the user experience rather than the code. They contribute to the SDLC with manual or automated end-to-end tests, so that when developers change, remove or add features, the project still fulfils its main purpose under the agreed premises. Acceptance testing.

QA devs are, in a way, a very demanding customer. The less familiar they are with the code, the better, because their tests will not be biased by implementation details.

The automated tests are integrated into the deployment pipeline, contributing to both CI and CD.

From the DevOps point of view, there is no role A or B; there is a team where everybody tests, develops, and looks after the quality and integrity of the project.

Acceptance testing

Regarding your comments about how to automate acceptance tests: in my recent experience there are 3 possible approaches.

1. Manual testing

Well, just do manual tests. We perform these tests in small projects; for us, small means 1-5 screens. Here, documentation is your friend: document the use case to execute and the expected result, perform the use case steps in order, and check the result.

2. Automated tests - machine-event oriented

In a few words: Selenium. The idea behind Selenium is to mimic the browser, no more, no less. Ten years ago that was relatively doable, because web browsers were not as sophisticated as they are today; neither were the web applications. Today's web applications are far more complex: they are much more event-based, more dynamic, and the rendering is no longer sequential. Selenium doesn't fit well in such conditions. Agile methodologies don't help here, because changes happen more often and some of them may force us to rewrite the whole automation.
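
A rough sketch of that style with the Selenium WebDriver Java API (the URL and element IDs are hypothetical):

    import java.time.Duration;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class LoginFlowIT {

        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                // Hypothetical application under test.
                driver.get("https://app.example.com/login");

                driver.findElement(By.id("username")).sendKeys("demo");
                driver.findElement(By.id("password")).sendKeys("secret");
                driver.findElement(By.id("login-button")).click();

                // Event-based, dynamically rendered pages force explicit waits like this,
                // and every UI change risks invalidating the selectors above.
                new WebDriverWait(driver, Duration.ofSeconds(10))
                        .until(ExpectedConditions.visibilityOfElementLocated(By.id("dashboard")));
            } finally {
                driver.quit();
            }
        }
    }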

3. Automated tests - human-action oriented

We recently started to play with computer vision and we got very good results. It's not free of shortcomings, and it's far more complex to make it work. For instance, we have to be pixel-perfect for every possible resolution, OS and device. We also have to run an X server to render the web browser on a remote machine when these tests are executed from Jenkins.

This article might interest you. We use Sikuli as the computer vision engine. Our tests are executed against deployed applications, which are monitored during the whole test phase by JaCoCo, so we can determine the coverage of each use case.
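
A minimal sketch of the idea with the SikuliX Java API (the reference image names are hypothetical; the engine matches them against whatever the screen actually renders):

    import org.sikuli.script.FindFailed;
    import org.sikuli.script.Screen;

    public class VisualLoginTest {

        public static void main(String[] args) throws FindFailed {
            Screen screen = new Screen();

            // The engine looks for these reference images on the rendered screen,
            // which is why every supported resolution/OS needs pixel-accurate captures.
            screen.wait("login-form.png", 10);
            screen.click("username-field.png");
            screen.type("demo");
            screen.click("login-button.png");
            screen.wait("dashboard-header.png", 10);
        }
    }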

OTHER TIPS

Not all tests should change in the way you're describing. For example, acceptance test cases shouldn't fail because a constructor changed; they should only change when requirements change. Testers should focus on writing automated test cases that verify requirements and tests that protect against regression. These automated test cases should be executed against a fully integrated system, configured as close to production as possible.
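
For instance, a requirement-level check along these lines (the URL and payload are hypothetical), executed against the deployed test environment, keeps passing no matter how the constructors behind it are refactored:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class LoginAcceptanceTest {

        // Hypothetical endpoint of the fully integrated test environment.
        private static final String LOGIN_URL = "https://test.example.com/api/login";

        @Test
        void registeredUserCanLogIn() throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(LOGIN_URL))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "{\"username\":\"demo\",\"password\":\"secret\"}"))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The requirement, not the implementation: a valid user gets a session token.
            assertEquals(200, response.statusCode());
            assertTrue(response.body().contains("token"));
        }
    }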

The developers should be in charge of writing unit tests for their code, and they are responsible for updating these tests as they make changes.

I think the question needs a little clarifying, but there is the kernel of a good question here. Taking your points in turn:

When we are developing software supported by continuous integration (CI), I imagine 3 roles working together...

I've never seen this particular mix before but YMMV...

My problem is that I have no experience as a tester, and I don't understand how it is possible to create automated tests that will withstand any change the developers merge in.

Not everything can be covered by automated tests, so don't let this bother you. There is also a difference between unit tests and integration tests.

For example, in a strongly typed language such as Java, adding a new injectable dependency to a class must break many tests, because the constructor of that class changes.

If a small change breaks many tests, that would tend to indicate various problems such as code brittleness, tight coupling etc. Speak to your developers and/or architect.
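
One common way to limit that damage (a sketch with hypothetical names) is to funnel object creation in tests through a single factory or builder, so that a new constructor parameter is absorbed in one place rather than in every test:

    import java.time.Clock;

    // Hypothetical production class that just gained a Clock dependency.
    class ReportService {
        ReportService(ReportRepository repository, Clock clock) { /* ... */ }
    }

    interface ReportRepository {}

    class InMemoryReportRepository implements ReportRepository {}

    // Test-side factory: when the constructor grows a parameter,
    // only this one method has to change, not every test that needs a service.
    final class TestReportServices {

        static ReportService defaultService() {
            return new ReportService(new InMemoryReportRepository(), Clock.systemUTC());
        }

        private TestReportServices() {}
    }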

My point is: for most changes in code there must be changes in the tests, so what's the point of CI here?

The point of CI is that code fails quickly and so problems are identified early. Compare this with the practices of yore where builds happened daily, weekly or even less frequently and it isn't hard to see how much of a boon this is for developers. It also largely does away with finger pointing since the developer can see they broke the build and attend to it more or less immediately without holding up their colleagues.

Why CI, if changes in code will mostly end up as changes to the tests?

Even if it is just the test code that was failing, you would still want to run the suite again since the criteria for a successful build will have changed.

The main aim of CI is to prevent integration problems. (Wikipedia)

E.g. Developer A makes a change to one part of the system, and a day later Developer B makes a change to another part. Developer B shouldn't unknowingly make changes that interfere with Developer A's changes.

That is one of the powers of the tests: Developer B can be sure that he doesn't accidentally break Developer A's feature.

Tests provide awareness and confidence that changes don't interfere with other features.

On the other hand, it is also helpful for a single developer maintaining an app over a long period of time. Unless you have a very good memory, you will never know that removing the http://example.com/api/v1/users endpoint would lock out the 24 users who still use our old mobile app, released 3 years ago.
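
A small regression test of that kind (a hypothetical sketch, using the endpoint from the example) would have flagged the removal in CI:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertNotEquals;

    class LegacyApiRegressionTest {

        @Test
        void v1UsersEndpointIsStillServed() throws Exception {
            // The 3-year-old mobile app still calls this endpoint; removing it locks those users out.
            HttpResponse<Void> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create("http://example.com/api/v1/users")).GET().build(),
                    HttpResponse.BodyHandlers.discarding());

            assertNotEquals(404, response.statusCode());
        }
    }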

But if the changes to the other feature are expected, then it is okay to change the test.

If you often need to change a test for every additional feature, it is probably a sign that you have written a poor test.

Licensed under: CC-BY-SA with attribution