Question

We recently started using Behave for BDD of a new Python web service.

Question

Is there any way we can get detailed info about the failure cause when a test fails? The tests throw AssertionError, but they never show what exactly went wrong, for example the expected value and the actual value that went into the assert.

We have been trying to find an existing feature like this, but I guess it does not exist. Naturally, a good answer to this question would include hints and tips on how to achieve this behavior by modifying the source code, and whether this feature exists in other, similar BDD frameworks like jBehave, NBehave or Cucumber.

Example

Today, when a test fails, the output says:

  Scenario: Logout when not logged in                  # features\logout.feature:6
    Given I am not logged in                               # features\steps\logout.py:5
    When I log out                                     # features\steps\logout.py:12
    Then the response status should be 401             # features\steps\login.py:18
      Traceback (most recent call last):
        File "C:\pro\venv\lib\site-packages\behave\model.py", line 1037, in run
          match.run(runner.context)
        File "C:\pro\venv\lib\site-packages\behave\model.py", line 1430, in run
          self.func(context, *args, **kwargs)
        File "features\steps\login.py", line 20, in step_impl
          assert context.response.status == int(status)
      AssertionError

      Captured stdout:
      api.new_session
      api.delete_session

      Captured logging:
      INFO:urllib3.connectionpool:Starting new HTTP connection (1): localhost
      ...

I would like something more like:

  Scenario: Logout when not logged in                  # features\logout.feature:6
    Given I am not logged in                               # features\steps\logout.py:5
    When I log out                                     # features\steps\logout.py:12
    Then the response status should be 401             # features\steps\login.py:18

ASSERTION ERROR
Expected:   401
But got:    200

As you can see, the traceback shows the assertion from our generic step,

`assert context.response.status == int(status)`

but it shows neither the expected nor the actual value. I would rather have a function like

`assert(behave.equals, context.response.status, int(status))`

or anything else that makes it possible to generate dynamic messages from the failed assertion.
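
To make this concrete, something along these lines would already do what I want. The helper name assert_equals and its message format are purely illustrative, not an existing behave API:

# -- illustrative only: a tiny assertion helper with a dynamic failure message
def assert_equals(actual, expected):
    assert actual == expected, f"Expected: {expected!r} but got: {actual!r}"

# -- used inside a step implementation:
#      assert_equals(context.response.status, int(status))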

Solution

Instead of using "raw assert" statements like in your example above, you can use another assertion provider, such as PyHamcrest, which will provide you with the desired details. It will show you what went wrong, for example:

# -- file:features/steps/my_steps.py
from hamcrest import assert_that, equal_to
...
    assert_that(context.response.status, equal_to(int(status)))
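
When the assertion fails, PyHamcrest raises an AssertionError whose message reports both values, roughly along these lines (exact formatting may vary between versions):

  Expected: <401>
       but: was <200>

which is essentially the expected/actual report asked for in the question.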

OTHER TIPS

According to https://pythonhosted.org/behave/tutorial.html?highlight=debug, this implementation is working for me.

A “debug on error/failure” functionality can easily be provided by using the after_step() hook. The debugger is started when a step fails.

It is in general a good idea to enable this functionality only when needed (in interactive mode). This is accomplished in this example by using an environment variable.

# -- FILE: features/environment.py
# USE: BEHAVE_DEBUG_ON_ERROR=yes     (to enable debug-on-error)
from distutils.util import strtobool as _bool
import os

BEHAVE_DEBUG_ON_ERROR = _bool(os.environ.get("BEHAVE_DEBUG_ON_ERROR", "no"))

def after_step(context, step):
    if BEHAVE_DEBUG_ON_ERROR and step.status == "failed":
        # -- ENTER DEBUGGER: Zoom in on failure location.
        # NOTE: Use IPython debugger, same for pdb (basic python debugger).
        import ipdb
        ipdb.post_mortem(step.exc_traceback)
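
To use it, install ipdb (pip install ipdb) and set the environment variable when running behave, for example (the feature path is just an example):

  BEHAVE_DEBUG_ON_ERROR=yes behave features/

behave will then drop into the debugger at the failing step instead of merely printing the traceback.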

Don't forget you can always add an info message to an assert statement. For example:

assert output == expected, f'{output} is not {expected}'
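
Applied to the step from the question, that could look something like this (file name, step text and context attributes follow the example above; the message format is just an illustration):

# -- FILE: features/steps/login.py  (sketch, not the asker's actual code)
from behave import then

@then('the response status should be {status}')
def step_impl(context, status):
    expected = int(status)
    actual = context.response.status
    # The dynamic message shows both the expected and the actual value on failure.
    assert actual == expected, f'Expected status {expected} but got {actual}'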

I find that using PyHamcrest assertions yields much better error reporting than standard Python assertions.

Licensed under: CC-BY-SA with attribution