Question

Assuming that the entire product team has agreed that implementing some automated end-to-end tests* is worthwhile in the first place... By what criteria should the workload of implementing automated end-to-end tests be distributed between developers and dedicated QA (automation) engineers (or some other role)?

*I'd prefer not to define end-to-end test too precisely; any very common definition is fair game. I tend to mean UI tests or public API tests, but feel free to vary from that definition (answers could include or reference a brief definition). If this proves problematic I may update this question to be more narrow.
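
For concreteness, here is a minimal sketch of the kind of public-API end-to-end test I have in mind, written in Python with the requests library (the endpoint, payload, and response fields are invented for illustration):

    import requests

    BASE_URL = "https://example.test/api"  # hypothetical deployed environment


    def test_create_and_fetch_order():
        # Exercise the deployed system from the outside: create an order
        # through the public API...
        created = requests.post(
            f"{BASE_URL}/orders",
            json={"sku": "ABC-123", "quantity": 2},
            timeout=10,
        )
        assert created.status_code == 201
        order_id = created.json()["id"]

        # ...then read it back through the same public surface, so the
        # whole deployed stack is exercised rather than isolated units.
        fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
        assert fetched.status_code == 200
        assert fetched.json()["quantity"] == 2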

I am specifically not asking about deciding what to test, which may have a separate answer. I am asking about implementing automated tests, the requirements (essentially test plans) for which may be decided by or may receive input from an entirely different set of people (dedicated QA, product management, the customer, some external agency...). Although I acknowledge that perhaps the question of how test plans are designed might influence the answer.

Allocation of implementation workload might be, for example, one of:

  • Dedicated QA (automation) engineers could implement all the automated end-to-end tests.
  • Developers could implement all the automated end-to-end tests. (Indeed the team may have no dedicated automation or QA engineers.)
  • The workload could be distributed between both groups (or some other role[s]).

What criteria should govern who implements what?

Perhaps some specific types of end-to-end tests are more suitable for one group or the other? For example:

  • release-blocking vs. non-release-blocking tests
  • pre-merge vs. post-merge tests
  • pre-production vs. production tests
  • functional vs. performance tests (indeed, performance tests are often designed and even implemented by yet another role, a dedicated performance engineer)
  • tests that require significant new tool development or acquisition vs. tests that do not
  • tests that run outside the organization like at a customer or partner site vs. tests that only run within the organization...

Perhaps particulars of the product team and its environment (management, market, on-premise vs. SaaS...) may impact the decision?

I'm especially interested in the SaaS context but on-premise is fair game.

Perhaps it is important to consider whether a developer would implement automated end-to-end tests for their own feature vs. another developer's feature. And even though I am interested in test implementation not test design, perhaps the test design question should nonetheless influence the answer.

Existing questions

The following excellent questions are related but a bit more focused on whether it's okay to have no tester role:

By contrast, I concede that some teams may have a dedicated tester role and some may not (both alternatives are fair game for this question). Also by contrast, this question focuses specifically on implementing automated end-to-end tests, not on the larger question of a QA or testing role, or even test planning per se. Those existing questions and answers are valuable, but they center on testing and test planning and are too broad to address implementing automated end-to-end tests. These tests sit right at a traditional boundary where developer and QA roles meet: automated unit tests are generally the realm of developers, while manual end-to-end tests are often the realm of QA, so with automated end-to-end tests the roles may blur and assignment is less obvious. Thus this topic seems ripe for careful analysis here.


Update

Previously I asked about "writing" automated end-to-end tests, but I am really interested specifically in implementing (programming) the automation, not designing a test plan. I have updated the question to reflect that, and to avoid a debate about whether developers should be testers.

All this raises the question of whether it is actually okay to have one person design a test plan and another person implement automated versions of those tests. I'd like to remain agnostic on that in my question, but answers are free to come down on either side if desired.


Solution

An automated test is, in essence, not that different from any other kind of program: there is a spec for the test (the test plan), and there are experts on the topic who know what the things in the spec mean and have expectations about how the program should behave. Then someone (maybe the same people, maybe someone else) has to build the thing in code. And someone (again, maybe the same people, maybe someone else) will then run and manage it.
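
To make that concrete, here is a small sketch (Python with Playwright's sync API; the URL, selectors, and credentials are invented) in which the comments stand in for test-plan steps that one person might write, and the code is the implementation that the same or a different person then builds:

    from playwright.sync_api import sync_playwright

    BASE_URL = "https://example.test"  # hypothetical system under test


    def test_login_shows_dashboard():
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()

            # Test-plan step 1: log in as a valid user.
            page.goto(f"{BASE_URL}/login")
            page.fill("#username", "alice")
            page.fill("#password", "correct-horse")
            page.click("text=Log in")

            # Test-plan step 2: the dashboard greets that user.
            page.wait_for_selector("#dashboard")
            assert "alice" in page.inner_text("#dashboard")

            browser.close()

Nothing in the code itself dictates which role writes it.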

Different people on your team may have the knowledge and training to code the tests. If your QA engineers have that knowledge, fine, let them do it. If only the developers of the system under test have it, then they need to be involved. If knowledge from both sides is needed, fine, put those people together and let them work it out. And if you have so many people on your team, all qualified for the job, that you genuinely don't know who should do it, then talk to them and make a management decision.

In the end, you need to come to a solution that works for your team and your system, in the context of the available resources, knowledge, and tools, and of the size and structure of the system. There is no "one size fits all" solution or standard for this.

Licensed under: CC-BY-SA with attribution