Question

We have a typical web application stack. There are 120 Selenium (WebDriver) tests that are executed against the application, which takes roughly one hour. We execute them as part of our build chain "compile > unit test > integration test > GUI tests". The GUI tests take up a lot of the time and we are wondering how to structure them better. Currently they are "happy case" and "unhappy case" tests. They are quite stable, i.e. they won't fail because of programmer errors.

We want to get the build times down, and the biggest part is the GUI tests. We want to do this based on "customer journeys", i.e. specify (together with the business people) some typical use cases and test them (happy path) instead of testing too much.

How do you structure your GUI tests? Here are some ideas that came to my mind:

  • only execute happy path tests
  • do a "customer journey test", i.e. combine several happy path tests into one ("clicking through the pages") - see the sketch after this list
  • only run the "top 10" tests specified by the business (mission critical)
  • run the top 10 on every build + "all the rest" once per night as a nightly build
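
To make the second idea concrete, here is a minimal sketch of what such a chained journey could look like with WebDriver in Java (the URL, element IDs, and page flow are invented for illustration):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.testng.Assert;
    import org.testng.annotations.Test;

    public class CustomerJourneyTest {

        // One journey chains several happy paths: search -> add to cart -> checkout.
        @Test
        public void typicalPurchaseJourney() {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("http://localhost:8080/shop");  // hypothetical URL

                // Step 1: search for a product (first happy path)
                driver.findElement(By.id("search")).sendKeys("notebook");
                driver.findElement(By.id("searchButton")).click();
                Assert.assertTrue(driver.getPageSource().contains("Results"));

                // Step 2: put the result into the cart (second happy path)
                driver.findElement(By.id("addToCart")).click();

                // Step 3: check out (third happy path)
                driver.findElement(By.id("checkout")).click();
                Assert.assertTrue(driver.getTitle().contains("Order confirmation"));
            } finally {
                driver.quit();
            }
        }
    }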

I would appreciate your ideas.

Thanks, Marcel


Solution

Nighttime is a perfect time for Selenium tests - you just have to remember to put a "Don't turn me off!" sticky note on your computer :).

Also, there is always Selenium Grid for when the night gets too short to run all the tests. With Grid, you can run your tests on several machines in parallel!
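
Switching a test from a local browser to the Grid is mostly a one-line change: instead of instantiating a local driver, you point a RemoteWebDriver at the hub. A minimal sketch (the hub URL below is an assumption):

    import java.net.URL;

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class GridDriverFactory {

        // The hub URL is hypothetical; point this at your own Grid hub.
        public static WebDriver createDriver() throws Exception {
            DesiredCapabilities capabilities = DesiredCapabilities.firefox();
            return new RemoteWebDriver(
                    new URL("http://grid-hub.example.com:4444/wd/hub"), capabilities);
        }
    }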

We have several test suites that are applicable to different situations. Before a major release (to test, to pre-live, to production), everything runs. Usually (on a daily, or even hourly, basis on rush days) only the "Quickened Normal Path of a User Through the Application" suite runs. And if somebody "fixes" a large bug, then the tests related to that part of the application are run.
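
One way to carve out such situation-specific suites, assuming TestNG on the Java side, is to tag tests with groups and select the group that fits the occasion (the group names and test methods below are made up):

    import org.testng.annotations.Test;

    public class CheckoutTests {

        // Part of the quick hourly suite and of the full pre-release suite.
        @Test(groups = { "quick-normal-path", "full" })
        public void userCanCompleteCheckout() { /* ... */ }

        // Only worth running in the full suite before a major release.
        @Test(groups = { "full" })
        public void checkoutRejectsExpiredCreditCard() { /* ... */ }
    }

The CI server then simply selects a different group per trigger: "quick-normal-path" hourly, "full" before a release.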

Other Tips

An hour seems absolutely fine to me.

One suggestion could be to decide which of the tests count as smoke tests and are required to run every night - that is, tests showing that the core functionality of your web application is still intact and working. Other, more detailed tests can then run at different times (once every few days?).
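
Assuming those smoke tests are marked with a TestNG group named "smoke", the nightly job could select just that group, e.g. through TestNG's programmatic API (the test classes listed are placeholders):

    import org.testng.TestNG;

    public class NightlySmokeRunner {

        public static void main(String[] args) {
            TestNG testng = new TestNG();
            // Placeholder classes; list your real GUI test classes here.
            testng.setTestClasses(new Class[] { CheckoutTests.class, SearchTests.class });
            // Runs only methods annotated with @Test(groups = { "smoke" }).
            testng.setGroups("smoke");
            testng.run();
        }
    }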

With that said, ours take around 2 hours - the only problem comes when one test has failed: you fix it, commit, but then have to wait a very long time to verify on the CI server that it is fixed.

TeamCity allows builds to run in parallel on the same machine, so the GUI tests should not sit in the build chain together with the unit and integration tests. The UI tests should have a separate database and a separate build so they will not waste the time of developers, manual testers, or any other stakeholders. TeamCity will gather all the statistics, send email on build failures, and so on.
The next step is parallelization. As Slanec said, you can use Grid (several machines are not required) with MbUnit (C#) or TestNG (Java). With the help of Grid you can decrease test execution time, e.g. by a factor of 10, so it would take only 6(!) minutes to run all your tests.
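A minimal sketch of the TestNG side of such a setup (method names as in recent TestNG 7.x; the thread count and test class are examples, and each thread must create its own WebDriver/Grid session rather than share one):

    import org.testng.TestNG;
    import org.testng.xml.XmlSuite;

    public class ParallelGuiTestRunner {

        public static void main(String[] args) {
            TestNG testng = new TestNG();
            testng.setTestClasses(new Class[] { CheckoutTests.class }); // placeholder
            // Run test methods in parallel; each thread drives its own browser,
            // which Selenium Grid then distributes over the available nodes.
            testng.setParallel(XmlSuite.ParallelMode.METHODS);
            testng.setThreadCount(10);
            testng.run();
        }
    }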
You can also combine some of your tests into bigger ones (but this will increase the time needed to find the reason for a failure and make the tests harder to maintain).
After these steps, the GUI tests can be executed after each source commit and give fast feedback on the state of the application.

Great question, great answers.

An extra consideration is that you could prioritize your 120 GUI tests: you can run them in an order such that the most important ones, or those most likely to fail, run first. This won't help to get the build times down, but it will help you get useful feedback from a build faster.

This prioritization (your top 10) need not be fixed, but can change per release/iteration/completed story/day, etc. For example, you may want to run the newest GUI tests first, or those that were changed most recently, or the ones covering most of the code that was most recently changed.
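
If you happen to use TestNG, a static version of this is already built in via the priority attribute of @Test, where lower values are scheduled earlier (the test names and values below are arbitrary):

    import org.testng.annotations.Test;

    public class PrioritizedGuiTests {

        // Mission-critical and historically flaky: schedule first.
        @Test(priority = 1)
        public void checkoutHappyPath() { /* ... */ }

        // Stable and rarely failing: schedule last.
        @Test(priority = 100)
        public void staticHelpPagesRender() { /* ... */ }
    }

A dynamic scheme (newest tests first, recently changed code first) would need some custom plumbing, e.g. a TestNG IMethodInterceptor that reorders the methods before the run.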

As far as I know, there is no tooling available that supports this out of the box, although there is quite some (academic) research going on in the area of test case prioritization.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow