Question

I'm trying to clean up our functional suite at work, and I was wondering if there is a way to have Cucumber repeat a scenario and see if it passes before moving on to the next scenario in the feature. PhantomJS is my headless WebKit browser and Poltergeist is my driver.

Basically, our build keeps failing because the box gets overwhelmed by all the tests, and during a scenario the page won't have enough time to render whatever it is we're trying to test. That produces a false positive. I know of no way to anticipate which test will hang up the build.

What would be nice is a hook (one idea) that runs after each scenario. If the scenario passes, great: print the results for that scenario and move on. However, if the scenario fails, run it again just to make sure it isn't the build getting dizzy. Then, and only then, print the results for that scenario and move on to the next test.

Does anyone have any idea on how to implement that?

I'm thinking something like

    After do |scenario|
      if scenario.failed?
        # run_again is made up; I know for a fact this doesn't actually exist
        # (see http://cukes.info/api/cucumber/ruby/yardoc/Cucumber/Ast/Scenario.html)
        result = scenario.run_again
        Cucumber.wants_to_quit = true unless result
      end
    end

The initial solution I saw for this was: "How to rerun the failed scenarios using Cucumber?"

This would be fine, but I would need to make sure that

    cucumber @rerun.txt

actually corrected the reports if the tests passed. For example:

    cucumber @rerun.txt --format junit --out foo.xml

Here foo.xml is the JUnit report that initially said features 1, 2, and 5 were passing while 3 and 4 were failing, but that should now say 1, 2, 3, 4, and 5 are passing, even though rerun.txt only said to rerun 3 and 4.
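As far as I know, though, the rerun would simply overwrite foo.xml with results for only the rerun scenarios rather than merging them, so the correction would have to be done by hand. Here is a minimal sketch of such a merge, assuming both files are standard JUnit XML and that testcases can be matched by their name attribute (merge_junit.rb and rerun.xml are made-up names):

    # merge_junit.rb: replace testcases that failed in the full run with
    # their passing rerun results. Note that the failure counts in the
    # <testsuite> attributes are not adjusted here.
    require 'rexml/document'

    full  = REXML::Document.new(File.read('foo.xml'))
    rerun = REXML::Document.new(File.read('rerun.xml'))

    # Index the rerun testcases that passed (no <failure>/<error> children).
    passed = {}
    rerun.elements.each('//testcase') do |tc|
      passed[tc.attributes['name']] = tc if tc.elements['failure'].nil? && tc.elements['error'].nil?
    end

    # Swap each still-failing testcase for its passing rerun, if any.
    full.elements.to_a('//testcase').each do |tc|
      replacement = passed[tc.attributes['name']]
      next unless replacement && tc.elements['failure']
      tc.parent.replace_child(tc, replacement.deep_clone)
    end

    File.write('foo.xml', full.to_s)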


Solution

I use rerun extensively, and yes, it does output the correct features into the rerun.txt file. I have a cucumber.yml file that defines a bunch of "profiles". Note the rerun profile:

    <%
      rerun = File.file?('rerun.txt') ? IO.read('rerun.txt') : ""
      rerun_opts = rerun.to_s.strip.empty? ? "--format #{ENV['CUCUMBER_FORMAT'] || 'progress'} features" : "--format #{ENV['CUCUMBER_FORMAT'] || 'pretty'} #{rerun}"
      standard_opts = "--format html --out report.html --format rerun --out rerun.txt --no-source --format pretty --require features --tags ~@wip"
    %>

    default: <%= standard_opts %>

    rerun: <%= rerun_opts %> --format junit --out junit_format_rerun --format html --out rerun.html --format rerun --out rerun.txt --no-source --require features

    core: <%= standard_opts %> --tags @core
    jenkins: <%= standard_opts %> --tags @jenkins
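For context on the ERB at the top: the rerun formatter records each failing scenario as a file:line reference, so after a run with two failures rerun.txt might look something like this (hypothetical paths; depending on your Cucumber version the entries are separated by spaces or newlines):

    features/checkout.feature:12 features/login.feature:34

When rerun.txt is non-empty, rerun_opts interpolates those references straight onto the command line, so the rerun profile executes exactly those scenarios; when the file is empty or missing, it falls back to running everything under features.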

So what happens here is that I run cucumber. During the initial run, it'll throw all the failed scenarios into the rerun.txt file. Afterwards, I rerun only the failed tests with the following command:

    cucumber -p rerun

The only downside to this is that it requires an additional command (which you can automate, of course) and that it clutters up test metrics if you have them in place.
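If you want to fold the two commands into one step, a Rake task along these lines could do it (a sketch only; it assumes the cucumber.yml above, and the task name is made up):

    # Rakefile: run the suite once, retry whatever the rerun formatter
    # recorded, and only fail the build if the retry also fails.
    desc 'Run cucumber, retrying recorded failures once'
    task :cucumber_with_retry do
      # First pass; the rerun formatter in the default profile fills rerun.txt.
      next if system('cucumber')

      rerun = File.file?('rerun.txt') ? File.read('rerun.txt').strip : ''
      # A failure with an empty rerun.txt means something other than a
      # scenario broke (a parse error, say), so don't mask it.
      abort 'cucumber failed outside of any scenario' if rerun.empty?

      abort 'scenarios failed twice' unless system('cucumber -p rerun')
    end

Running rake cucumber_with_retry then replaces the two separate commands, and a second failure is what finally breaks the build.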

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow