Question

We have a set of Selenium UI tests for our application. We deploy the application to test machines in our QA environment, and we use TFS 2015 for continuous integration and deployment.

I build the Selenium test scripts with Maven and create the JAR file in the build definition. Should I copy this JAR to the QA environment and run the tests from there, or should I run the tests from the build server, pointing them at the QA application's URL?

Solution

Typically, a build machine is meant for building: compiling the code and running unit tests. There should be no external dependencies.

Usually, UI automation happens as a post-deployment step, after the package has been built and deployed to an environment. So the Selenium scripts should be executed once the code package has been deployed.

You could run them on your build server, but then your build server needs to look like your environment. Building and deploying on the same machine can sometimes cause problems, so best practice is to separate them.
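As a minimal sketch of that separation (assuming JUnit 5 and the Selenium Java bindings; the app.base.url property name is made up for illustration), the test JAR can read the target URL at runtime, so the same artifact runs as a post-deployment step against any environment:

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    class SmokeTest {
        // Hypothetical property name; pass it at runtime, e.g.
        //   mvn test -Dapp.base.url=http://qa-host:8080
        private final String baseUrl =
                System.getProperty("app.base.url", "http://localhost:8080");
        private WebDriver driver;

        @BeforeEach
        void setUp() {
            driver = new ChromeDriver();
        }

        @Test
        void homePageLoads() {
            // Point at whichever environment the package was just deployed to.
            driver.get(baseUrl);
            Assertions.assertFalse(driver.getTitle().isEmpty());
        }

        @AfterEach
        void tearDown() {
            driver.quit();
        }
    }

With the URL externalized like this, it no longer matters whether the tests run from the build server or a dedicated test machine, as long as that machine can reach the QA deployment.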

OTHER TIPS

I would deploy the application to a separate test environment that is production-"similar" (take a look at Docker or other similar technology). Run the GUI tests there, and after everything is fine, deploy to the QA server for real testers to take a look.

I had the same question a few weeks back for our own product.

Traditionally, build machines are very special beasts, due to the high number of tools installed. Adding more tools leads to conflicts and maintenance headaches, so the best practice was to have different machines for different tasks. But now, all of our build machines just run the TFS build agent and Docker. There are some differences in hardware, but that has no impact apart from different build times. Each environment (e.g. a Python build, an ASP.NET Core build, or a test run) has its own container, which is totally separate from all other environments. The "special beast" is contained in the containers.

So where I execute my tests is totally irrelevant to me, except that the test has to be able to connect to the targeted website. We no longer distinguish build machines at all. The sacred "BUILD_01_CAn_BUILD_OLD_CRAP" and "TEST_CAN_EXECUTE_SELENIUM_TESTS" machines are gone; they stopped being pets and are cattle, as they should be (https://devops.stackexchange.com/questions/653/what-is-the-definition-of-cattle-not-pets).

Our tests run in a Docker container, too (pytest-selenium in one container and selenium/standalone-chrome as another). I inject the target URL as an environment variable, and that's all. There is some configuration inside the pytest-selenium container, but not much.
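Translated to the asker's Java stack, the same pattern might look roughly like this. The TARGET_URL variable name and the selenium-chrome hostname are assumptions for illustration; selenium/standalone-chrome serves WebDriver on port 4444:

    import java.net.URL;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeOptions;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class ContainerizedSmokeTest {
        public static void main(String[] args) throws Exception {
            // TARGET_URL is injected into the test container; the name is illustrative.
            String targetUrl =
                    System.getenv().getOrDefault("TARGET_URL", "http://localhost:8080");
            // "selenium-chrome" is assumed to be the Docker network alias of the
            // selenium/standalone-chrome container, which listens on port 4444.
            WebDriver driver = new RemoteWebDriver(
                    new URL("http://selenium-chrome:4444/wd/hub"), new ChromeOptions());
            try {
                driver.get(targetUrl);
                System.out.println("Page title: " + driver.getTitle());
            } finally {
                driver.quit();
            }
        }
    }

Because the browser lives in its own container and the target URL comes from the environment, the test container can be started from any machine on the same Docker network.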

We had to solve the same problem, and we came to the following solution, which works pretty well.

All code going into master must go through a pull request. To complete a pull request, it must build successfully and be approved by a required reviewer. It is important that the system is set up to build a temporary merge commit from the branch and master (similar to the commit that would result if the PR were completed).

After the build completes, a release is created with the artifacts from the build. This release is deployed to the dev servers. Each deployment gets a port number where we can reach that specific version of the web app.

Let's say we have the following environments: DEV, RUNTESTS, UAT, PROD. After deploying to DEV, the RUNTESTS environment is run for that particular release. If everything is good, the required reviewer approves the PR and it can be completed.

This makes it very hard to get broken code into master. Another benefit is that we get a port number for a specific release that can be used for demo purposes (PO, stakeholders, analysts...).
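The answer doesn't say how those per-release ports are assigned; a purely illustrative scheme (basePort and range are made-up values) could be as simple as:

    public class ReleasePorts {
        // Illustrative only: derive a deterministic port from the release ID.
        static int portForRelease(int releaseId) {
            int basePort = 20000;   // start of the port range reserved for test deployments
            int range = 1000;       // number of concurrent deployments supported
            return basePort + (releaseId % range);
        }

        public static void main(String[] args) {
            System.out.println(portForRelease(42));  // e.g. release 42 -> port 20042
        }
    }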

Each morning a build from master is scheduled and deployed to DEV and RUNTESTS, and we can then choose to promote it to UAT and PROD if we like. Since we have the temporary release before the code hits master, we don't need to trigger a build on each commit.

Licensed under: CC-BY-SA with attribution