Question

I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it.

Current situation: developers write a web site, operations deploy it. Once deployed, a developer smoke-tests it to make sure the deployment went smoothly.

To me this feels wrong; it essentially means it takes two people to deploy an application. In our case those two people are on opposite sides of the planet, and time zones come into play, causing havoc. But the fact remains that developers know what the minimum set of tests is, and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow.

The manual solution is that we document the test cases and operations follow that document each time they deploy. That sounds painful, plus they may be deploying different versions to different environments (specifically UAT and Production) and may need a different set of instructions for each.

On top of this, one of our near-future plans is to have an automated daily deploy environment, so then we'll have to instruct a computer as to how to deploy a given version of our app. I would dearly like to add to that instructions for how to smoke test the app.

Now, developers are better at documenting instructions for computers than they are at documenting instructions for people, so the obvious solution seems to be to use a combination of NUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the WatiN or Selenium APIs to run through the obvious browser steps and call the web service, and then explain to the operations guys how to run those tests. I can do that; I have mostly done it already.
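To give an idea, the kind of fixture I have in mind looks roughly like this (Selenium WebDriver syntax; the element id, page title check and the SMOKE_BASE_URL environment variable are placeholders I've invented for the sketch, not part of our actual setup):

```csharp
using System;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class SmokeTests
{
    private IWebDriver _driver;
    private string _baseUrl;

    [SetUp]
    public void SetUp()
    {
        // Point the suite at whichever environment is being verified,
        // e.g. https://test.example.com for UAT.
        _baseUrl = Environment.GetEnvironmentVariable("SMOKE_BASE_URL")
                   ?? "https://www.example.com";
        _driver = new ChromeDriver();
    }

    [Test]
    public void HomePage_Loads_And_Shows_Login()
    {
        _driver.Navigate().GoToUrl(_baseUrl);
        // "login-link" is a placeholder id; use whatever element
        // indicates the deployment came up healthy.
        Assert.That(_driver.FindElement(By.Id("login-link")).Displayed);
    }

    [TearDown]
    public void TearDown() => _driver.Quit();
}
```

Operations would then only need to set the environment variable and launch the runner.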

But wouldn't it be nice if I could make that process simpler still?

At this point, the operations guys and the computer are going to have to know which set of tests relates to which version of the app and tell the NUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3).

Wouldn't it be nicer if the test runner itself had a way of being given a base URL and then, say, downloading a zip file, unpacking it and editing a configuration file automatically before running any test fixtures it found in there?
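Sketching what I mean, a small bootstrapper along these lines might suffice (the package URL layout, config file name, {BASE_URL} placeholder and runner name are all invented for illustration):

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.IO.Compression;
using System.Net;

class SmokeTestBootstrapper
{
    // Downloads a version-specific test package, points it at the target
    // environment, and hands it to the NUnit console runner.
    static void Main(string[] args)
    {
        string targetUrl = args[0];   // e.g. https://test.example.com
        string packageUrl = args[1];  // e.g. http://build/smoke-3.3.zip
        string workDir = Path.Combine(Path.GetTempPath(), "smoke");

        if (Directory.Exists(workDir)) Directory.Delete(workDir, true);
        Directory.CreateDirectory(workDir);

        string zipPath = Path.Combine(workDir, "tests.zip");
        using (var client = new WebClient())
            client.DownloadFile(packageUrl, zipPath);
        ZipFile.ExtractToDirectory(zipPath, workDir);

        // Rewrite the base-URL placeholder in the bundled config file.
        string configPath = Path.Combine(workDir, "SmokeTests.dll.config");
        File.WriteAllText(configPath,
            File.ReadAllText(configPath).Replace("{BASE_URL}", targetUrl));

        var run = Process.Start("nunit3-console.exe",
            Path.Combine(workDir, "SmokeTests.dll"));
        run.WaitForExit();
        Environment.Exit(run.ExitCode); // non-zero means the smoke test failed
    }
}
```

That would reduce the operations instructions to a single command with two arguments.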

Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than NUnit, maybe FitNesse?

For the record, I'm looking at .NET-based tools first because most of the developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.


Solution 7

After much time wasted trying to come up with an easier solution, we eventually taught the ops team how to use NUnit's GUI runner. This was easier than expected and is working fine.

OTHER TIPS

I worked as a smoke test writer for an ASP.NET application. We used QuickTest Pro, and the automation of test runs was done with Quality Center (formerly called Test Director). This involved writing hundreds of test scripts that automate a web browser interacting with the web application. These tests were used to validate a build before rolling it out on our production servers. Quality Center allows you to define a "pool" of test machines so that you can run a large list of test scripts in a multi-threaded manner.

A more simplistic smoke test would be to log all errors/exceptions that the application produces and run a spider against the system. This will not achieve very "deep" code coverage, but smoke tests aren't meant for deep code coverage. The error logging should be a part of the production application anyway, to deal with errors as they come up. Bugs will always slip through the cracks, and sadly enough your best testers will be your users.
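A bare-bones version of such a spider could look like the following; the page limit and the regex-based link extraction are deliberate simplifications for illustration, not a robust HTML parser:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

class Spider
{
    // Follows same-host links breadth-first from a root URL and flags
    // any page that returns an error status code.
    static async Task Main(string[] args)
    {
        var root = new Uri(args.Length > 0 ? args[0] : "https://www.example.com");
        var seen = new HashSet<string> { root.AbsoluteUri };
        var queue = new Queue<Uri>(new[] { root });
        using var http = new HttpClient();

        while (queue.Count > 0 && seen.Count < 200) // arbitrary crawl limit
        {
            var page = queue.Dequeue();
            var response = await http.GetAsync(page);
            if (!response.IsSuccessStatusCode)
            {
                Console.WriteLine($"BROKEN {(int)response.StatusCode} {page}");
                continue;
            }
            string html = await response.Content.ReadAsStringAsync();
            foreach (Match m in Regex.Matches(html, "href=\"([^\"]+)\""))
            {
                // Only follow links that stay on the site under test.
                if (Uri.TryCreate(page, m.Groups[1].Value, out var link)
                    && link.Host == root.Host && seen.Add(link.AbsoluteUri))
                    queue.Enqueue(link);
            }
        }
    }
}
```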

I've used Selenium in the past to do this sort of smoke test for web deployments. You can write a suite of test scripts and then run them against the same site in different environments.

I have also put some thought into this problem and have proposed taking a declarative approach to deployment and verification; see here for my thoughts:

http://jimblogdog.blogspot.co.uk/2010/10/introducingdeclarative-deployment.html

I have also created some plugins for my open source project Wolfpack to automate this entire process. Essentially you package your "deployment smoke tests" as a NuGet package and publish it to your private NuGet feed. Wolfpack will automatically detect the new version of the package and download it, along with the NUnit.Runner NuGet package, and unpack all the files. It will then silently run your tests using the NUnit console runner and parse the results into an alert that you can receive by email, Growl, HipChat, etc.

http://wolfpack.codeplex.com/

http://wolfpackcontrib.codeplex.com/wikipage?title=NUnitDeploymentPublisher

Telerik has some free and not-free UI testing tools that can be run in an automated way by anybody, which might help with this too.

I don't know which VCS you're using, but you could write a solution that pulls a version-specific configuration file from the VCS through an intermediary service.

You could write a PowerShell script or an application that would download the config file from a web service or web app, passing the test URL as a parameter. The service or app would be running on a machine with access to the VCS, so it could return the file contents. Once the file is retrieved, the script or app could then initiate the tests.
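As a rough sketch of the application variant (the service URL, query-string format and runner name are all assumptions, not a real endpoint):

```csharp
using System;
using System.Diagnostics;
using System.Net;

class ConfigFetcher
{
    // Asks an intermediary service (which has access to the VCS) for the
    // config file matching the environment under test, then runs the tests.
    static void Main(string[] args)
    {
        string testUrl = args[0]; // e.g. https://test.example.com
        string serviceUrl = "http://buildserver/configservice?target="
                            + Uri.EscapeDataString(testUrl);

        using (var client = new WebClient())
            client.DownloadFile(serviceUrl, "SmokeTests.dll.config");

        var run = Process.Start("nunit3-console.exe", "SmokeTests.dll");
        run.WaitForExit();
        Environment.Exit(run.ExitCode);
    }
}
```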

Typically, your NUnit tests are sufficient: if they all pass, the code base should be working fine. If you deploy the code with passing NUnit tests and then encounter a failure on the website, you need to add an additional NUnit test that fails for the same reason. Then, when you fix your code such that the new test passes, you know you have fixed the issue the deployed code had. For this reason, most automatic build systems can be configured to run all the NUnit tests first and then 'fail' the build if any of the tests fail.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow