We just came upon the whitepaper "Observation Driven Testing: Yes, the code is doing what you want. By the way, what else is it doing?", and were intrigued.

However, Google doesn't seem to reveal much about how it works in practice (1, 2). Everything out there seems to be from a vendor, Agitar.

Has anyone implemented ODT as a complementary process to TDD and CI?

If so, please share some of the benefits and pitfalls you've encountered with it... we'd love to benefit from your wisdom.


Solution

Observation-Driven Testing is more or less Agitar's term for using their product. I evaluated Agitar's product several years ago. It doesn't replace normal unit testing but works alongside it. Although there is a process to using it, as there is to everything, I wouldn't consider it a process on the level of TDD or CI or CD -- it's as much like using a static analysis tool as it is like testing. Nonetheless it is very powerful and interesting.

Its core functionality is invariant detection, which Agitar calls agitation. It finds all of the methods in your code and executes each with an automatically determined set of parameter values. It is sophisticated in how it chooses parameter values: it does relatively obvious things like using MIN_VALUE, -1, 0, 1 and MAX_VALUE for ints, but it is also able to watch existing test code run and harvest interesting values to use when agitating. It has explicit support for objects with external dependencies like JDBC Connections.
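A toy sketch of that boundary-value style of input generation (this is not Agitar's actual algorithm, and `clampToByte` is a hypothetical method under test): run the method with each candidate value and record the outcomes as raw material for candidate invariants.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of "agitation"-style input generation: exercise a
// method with boundary int values and record each input/result pair.
public class AgitationSketch {

    // Example method under test (hypothetical).
    static int clampToByte(int x) {
        return Math.max(0, Math.min(255, x));
    }

    // Boundary candidates like those described above.
    static int[] candidates() {
        return new int[] { Integer.MIN_VALUE, -1, 0, 1, Integer.MAX_VALUE };
    }

    public static void main(String[] args) {
        List<String> observations = new ArrayList<>();
        for (int x : candidates()) {
            int result = clampToByte(x);
            // Each pair is raw material for a candidate invariant,
            // e.g. "the result is always in [0, 255]".
            observations.add(x + " -> " + result);
        }
        observations.forEach(System.out::println);
    }
}
```

A real tool would also harvest values from watching existing tests run, as described above; this sketch only covers the obvious boundary cases.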

After running all of a class's methods with the chosen parameter values, Agitar makes observations (candidate invariants) about parameter values, method return values and values of fields. It presents them to the user, who can either promote them to invariants, which will be tested for in future agitation runs, or change the code so the observation becomes impossible. For example, Agitar might observe that calling a method with one parameter being null causes the method to throw NullPointerException. Changing the code to handle the null will eliminate that observation in future runs.
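The observation-to-code-change loop might look like this (the method and names are hypothetical): the first version lets a null parameter escape as a NullPointerException, which agitation would surface as an observation; the second version handles the null, so that observation would disappear from future runs.

```java
// Hypothetical illustration of the observation -> code-change loop.
public class ObservationDemo {

    // Before: agitating with name == null would record the observation
    // "throws NullPointerException".
    static String greetUnsafe(String name) {
        return "Hello, " + name.trim();
    }

    // After: the null is handled explicitly, eliminating that observation.
    static String greetSafe(String name) {
        if (name == null) {
            return "Hello, stranger";
        }
        return "Hello, " + name.trim();
    }

    public static void main(String[] args) {
        System.out.println(greetSafe(null));
        System.out.println(greetSafe(" Ada "));
    }
}
```

The alternative path described above, promoting the observation to an invariant instead, amounts to declaring "this method throws NullPointerException when name is null" as intended behavior and having future agitation runs verify it.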

Agitar can generate JUnit tests from its observations. I don't know of any advantage that offers over simply maintaining the invariants in Agitar's tool.

I didn't buy Agitar's product, but I still have great respect for what I saw and would consider it again in the right environment (Java, strong business need for bulletproof code). It found bugs in code that I had developed with TDD and which had nearly 100% line and branch coverage. Even better, watching it work improved how I think about unit tests!

Regarding pitfalls: it is specific to Java, proprietary and expensive. Also, because it is such a thorough tool and its observations are coupled to implementation details, it would take a lot of effort to maintain Agitar's view of a program that was being actively developed (more than maintaining the acceptance/integration/unit test suite).

This is the best quick introduction to agitation that I found: http://www.agitar.com/downloads/demos/agi_demo/agiDemo.html. There is interesting reading on a predecessor and related systems here: http://plse.cs.washington.edu/daikon/pubs/. Also, ScalaCheck appears to implement a similar although much simpler process.
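The ScalaCheck-style process mentioned above can be sketched in plain Java (ScalaCheck itself is a Scala library; this only shows the shape of the idea): instead of hand-picking a few example inputs, assert an invariant over many generated ones. The method and invariant here are hypothetical.

```java
import java.util.Random;

// Minimal property-based check in plain Java, sketching the idea behind
// ScalaCheck: generate many inputs and assert an invariant over all of them.
public class PropertySketch {

    // Hypothetical method under test.
    static int clampToByte(int x) {
        return Math.max(0, Math.min(255, x));
    }

    // The invariant: the result always lands in [0, 255].
    static boolean invariantHolds(int x) {
        int r = clampToByte(x);
        return r >= 0 && r <= 255;
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed for reproducibility
        for (int i = 0; i < 1000; i++) {
            int x = rng.nextInt();
            if (!invariantHolds(x)) {
                throw new AssertionError("Invariant failed for input " + x);
            }
        }
        System.out.println("Invariant held for 1000 random inputs");
    }
}
```

The difference from agitation is direction: here the programmer states the invariant up front, whereas Agitar proposes candidate invariants from observed behavior and asks the programmer to confirm or refute them.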
