Sunday, August 03, 2008

BDD for Acceptance Testing (xBehave/Dan North)

As I said in my previous post on BDD, I use the Dan North (xBehave) style solely for acceptance testing, as I think it suits testing at that level.

The first problem with discussing this style is that the message isn't really all that clear. If you want evidence of this, do a Google search for BDD, spend an evening reading as much as you can, then see if you feel you understand what it's all about. Given the fact that Dan North apparently first started working on BDD in late 2003, I'm surprised at the lack of good content. Having said all that, I have few real answers in regards to BDD myself, so I'm going to try to explain some of the real issues I've met.

Many of these issues are not specific to BDD, as they relate as much to the correct application of user stories and acceptance testing in general. However, I thought it was worth discussing them all together, because you will meet these issues if you try to follow Dan's style, and some of them may surprise you even if you have read about BDD, especially since most BDD examples boil down to dull AccountTransferService type cases.

It is also worth remembering, though, that these are just my current opinions, and I have already changed my mind a few times on some of this stuff :)

What Are We Talking About Here?

My acceptance tests are the highest level tests that I write as part of my normal development process. They aren't necessarily end-to-end, but they will test to the boundary of one particular system. Let me give you two examples of what I mean:

  1. Integration Project - The acceptance tests sent a message to the SUT and verified the XML document that came out the other end. I didn't go as far as testing how that XML document was processed by subsequent systems; writing those totally end-to-end tests might have had value, but they would have been complex/slow/tricky and in my view would not have adequately driven development.
  2. Web Project - For the last month I've been working on a new website, and in this case we're using Watin/NUnit and the BDD specifications are working from the top down. They only interact with the application through the browser, including when verifying.

Just to be clear, we are only talking here about the top level tests, so in the case of the Web project any controller/mapper/service/entity tests are covered separately using the xSpec style (Scott Bellware is the authority on this in the .NET world).

Framework

As far as I'm aware the only serious .NET BDD framework is NBehave, and whilst my (limited) work with it left me feeling that it's a good approach, I wasn't happy with the resulting specifications. Looking back now I think I made a mistake in giving up on it too quickly, and I do intend to try it again once v0.4 comes out.

Another BDD framework that you might have heard of is Stormwind. It looks very professional, but it is very GUI focused and doesn't really seem to have that much to do with BDD. As soon as you start talking about the nitty-gritty details of the user interacting with the controls on a page, I think you've lost the power BDD gives you to describe what your user wants to be able to do and why.

So in actual fact I'm not currently using any special framework for my acceptance tests other than (for the Web) Watin and NUnit/MSTest.

Style

Before starting out with BDD you need to know what you're trying to get out of it. Personally I wanted to get:

  1. Good acceptance tests.
  2. Improved communication within the team.
  3. Further cementing of the user story based requirements process.

You will notice reporting is not mentioned here, because the ultimate goal is to create working software that meets users' needs, so personally I'm not overly interested in the reporting angle. With this in mind, and given that we have adapted our requirements process to ensure we produce stories in a suitable given/when/then format, how do we feed them into our development? After a bit of experimentation my current approach is just to do it blindly by encoding them directly into methods:

Given_that_I_am_not_logged_in();
And_I_am_on_the_search_page();
When_I_login_successfully();
Then...
And_I_will_be_on_the_search_page();
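To make that more concrete, here's a rough sketch of how one of these specifications might look with Watin/NUnit. To be clear, this isn't code from the project: the URLs, control names and the Then step are all invented for illustration, but it shows how each step hides the browser-driving details behind an intention-revealing method.

using NUnit.Framework;
using WatiN.Core;

[TestFixture]
public class When_logging_in_from_the_search_page
{
    // Hypothetical URL, a stand-in for whatever the real site uses.
    private const string SearchPageUrl = "http://localhost/search";

    private IE browser;

    [SetUp]
    public void SetUp()
    {
        browser = new IE();
    }

    [TearDown]
    public void TearDown()
    {
        browser.Dispose();
    }

    [Test]
    public void Should_return_me_to_the_search_page()
    {
        Given_that_I_am_not_logged_in();
        And_I_am_on_the_search_page();
        When_I_login_successfully();
        Then_I_will_be_logged_in();
        And_I_will_be_on_the_search_page();
    }

    // Each step hides the WatiN details of driving the browser.
    private void Given_that_I_am_not_logged_in()
    {
        browser.GoTo(SearchPageUrl);
        if (browser.Link(Find.ByText("Logout")).Exists)
            browser.Link(Find.ByText("Logout")).Click();
    }

    private void And_I_am_on_the_search_page()
    {
        browser.GoTo(SearchPageUrl);
    }

    private void When_I_login_successfully()
    {
        browser.Link(Find.ByText("Login")).Click();
        browser.TextField(Find.ByName("username")).TypeText("testuser");
        browser.TextField(Find.ByName("password")).TypeText("password");
        browser.Button(Find.ByValue("Login")).Click();
    }

    private void Then_I_will_be_logged_in()
    {
        Assert.IsTrue(browser.ContainsText("Logged in as testuser"));
    }

    private void And_I_will_be_on_the_search_page()
    {
        Assert.IsTrue(browser.Url.StartsWith(SearchPageUrl));
    }
}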

A few things to note about this style:

  1. Managing Complexity - In this post about RBehave Dan North indicates that each step's implementation should be simple. I'm not so sure this is always true; for example, when I was writing tests for the integration project there was a lot to be done to set up the test and to verify the results, and with this style I hide all those details within the methods.
  2. Lacks Reporting - With this approach you don't get any reporting, because we're just calling methods rather than passing useful metadata to a framework like NBehave, which means we can't print out a report to show our users. This doesn't bother me, because the "user stories" are driving the creation of these specifications, so I don't particularly see the value of printing out the results of an execution to say "look, we're doing what we said". If we want to prove we've done the right thing we can look at the way the software behaves.
  3. Lacks Flexibility - I've met quite a few situations where I want to parameterise my given/when/then. For example, even for this simple login acceptance test I have variations depending on where you are on the site when you log in, and one case where you go directly to the login page from outside the site. I agree that DRY does not necessarily apply as much to tests as to normal code (which I will discuss in a separate post), but if I blindly copy the code in each case I end up with a real muddle.

What I've found is that with acceptance tests there is a lot of complexity and plenty of scope for reuse, and you can of course get that reuse in a number of ways, including inheritance or composition:

  1. Inheritance - You have a context base class and inherit from it into multiple subclasses. Most of the code probably goes into the base class, and the subclasses just override a few values/methods to customize the behaviour (e.g. overriding StartPage and EndPage or equivalent for the login case). It works, but it does mean that the individual scenario classes tell you very little, and to understand what's going on you need to go into the base class. Although the base class itself might be short and written in a very clean manner, the result is far from perfect. Note that even if you do use inheritance you will probably also use some helper/builder classes, but that is beside the point.
  2. Composition - Do what xBehave does and move the context out to a separate class which you configure before running each scenario, so your given/when/then is moved to the composed class and you tell it what values to use in a particular execution (see the sketch after this list).
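To illustrate the composition option, here's a minimal sketch. The LoginContext class and everything in it is hypothetical; the point is just the shape, with the given/when/then living in one reusable class and each scenario supplying its own values:

using NUnit.Framework;
using WatiN.Core;

// Hypothetical reusable context: the given/when/then lives here once
// and each scenario configures it before running.
public class LoginContext
{
    private readonly IE browser;

    // Parameterised values that vary between scenarios.
    public string StartPage { get; set; }
    public string ExpectedEndPage { get; set; }

    public LoginContext(IE browser) { this.browser = browser; }

    public void Given_that_I_am_not_logged_in() { /* shared logout logic */ }

    public void And_I_am_on_the_start_page() { browser.GoTo(StartPage); }

    public void When_I_login_successfully() { /* shared login-form logic */ }

    public void Then_I_will_be_on_the_expected_page()
    {
        Assert.IsTrue(browser.Url.StartsWith(ExpectedEndPage));
    }
}

[TestFixture]
public class LoginScenarios
{
    [Test]
    public void Logging_in_from_the_search_page_returns_me_there()
    {
        using (var browser = new IE())
        {
            var context = new LoginContext(browser)
            {
                StartPage = "http://localhost/search",
                ExpectedEndPage = "http://localhost/search"
            };

            context.Given_that_I_am_not_logged_in();
            context.And_I_am_on_the_start_page();
            context.When_I_login_successfully();
            context.Then_I_will_be_on_the_expected_page();
        }
    }
}

One nice side effect of this shape is that the parameterisation problem I mentioned above largely goes away: "login from the search page" versus "login from outside the site" become different configurations of the same context rather than copied code.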

Since reuse is a big issue I might well blog about it separately, but I do think the amount of repetitive code involved in acceptance testing pushes you towards a composition based approach, and I do intend to give NBehave another shot sometime soon.

Outside In

As I discussed above, I'm using this approach from the top down, but my usage has varied massively:

  1. Integration Project - I tried an outside-in approach, focusing on mocking first and then replacing the mocks with real implementations. This approach is emphasized heavily in a lot of BDD articles, and I found it relatively useful in helping me use mocking to influence my design.
  2. Web Project - In this case we're using Watin; no mocking is performed at any stage in these tests, but we will stub out some services (e.g. Web services). My workflow is to start with a single Watin test, and then once it's failing I'll switch directly to testing the controller (using the xSpec style, as in the sketch after this list), then services/repositories and so on, right the way down until I've written enough code to get the original test passing.
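To show what I mean by dropping down to the controller, here's a hedged sketch of an xSpec style specification. The SearchController, ISearchService and the hand-rolled stub are all invented for illustration; the real thing would obviously depend on your architecture:

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical controller and service, purely to show the xSpec shape.
public interface ISearchService
{
    IList<string> Find(string term);
}

public class SearchController
{
    private readonly ISearchService service;

    public SearchController(ISearchService service) { this.service = service; }

    public IList<string> Search(string term)
    {
        return service.Find(term);
    }
}

// xSpec style: the fixture name is the context, each test is one observation.
[TestFixture]
public class When_searching_for_a_term
{
    private SearchController controller;
    private StubSearchService service;
    private IList<string> results;

    [SetUp]
    public void Context()
    {
        // Hand-rolled stub; a mocking framework would do the same job.
        service = new StubSearchService();
        controller = new SearchController(service);
        results = controller.Search("bdd");
    }

    [Test]
    public void Should_ask_the_search_service_for_matches()
    {
        Assert.AreEqual("bdd", service.LastTerm);
    }

    [Test]
    public void Should_return_the_matches_it_was_given()
    {
        Assert.AreEqual(1, results.Count);
    }

    private class StubSearchService : ISearchService
    {
        public string LastTerm;

        public IList<string> Find(string term)
        {
            LastTerm = term;
            return new List<string> { "a match" };
        }
    }
}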

The difference between these two workflows is small, but I certainly think it's worth pointing out that BDD does not force you to work in one particular way.

Thoroughness

How many of these specifications do I write? Well, for me it depends, and since I've now tried to use BDD on two projects I thought I'd describe how many acceptance tests I've written on each of them:

  1. Integration Project - In this case the SUT receives messages describing changes to an AGGREGATE and then generates XML messages which are sent to BizTalk. My approach to acceptance testing was to modify the AGGREGATE, get the resulting message, feed it through the SUT and then verify that the resulting XML matched my expectations (see the sketch after this list). In this case we had (boring) complexity in the mapping (if Foo has the value X then do Y), and whilst I could have tried to write acceptance tests for each of these cases, I was already covering that behaviour in my lower level tests and it was far more convenient to test/specify at that level. With this in mind I only wrote two acceptance tests: one with a relatively unpopulated aggregate and one that was fully populated with interesting values. All the other details were covered by xSpec style unit/integration tests.
  2. GUI Project - Our user proxy writes user stories and scenarios that go into quite a lot of detail, and these were then turned pretty much directly into acceptance tests. The result is lots of acceptance tests, and they do indeed (so far) drive the development.
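To give a feel for the integration project tests, here's a heavily simplified sketch. The real aggregate, mapping and verification were far more involved; MessageProcessor here is just a trivial stand-in for the SUT so that the example is self-contained:

using System.Xml.Linq;
using NUnit.Framework;

// Trivial stand-in for the real system under test: maps an inbound
// value onto an outbound XML document (the real mapping was far bigger).
public class MessageProcessor
{
    public string Process(string fooValue)
    {
        var mapped = fooValue == "X" ? "Y" : fooValue;
        return new XElement("Message", new XElement("Foo", mapped)).ToString();
    }
}

[TestFixture]
public class When_processing_a_populated_aggregate
{
    [Test]
    public void Should_map_foo_onto_the_expected_outbound_value()
    {
        string outboundXml = new MessageProcessor().Process("X");

        // Verify the interesting parts of the document rather than
        // string-comparing the whole thing, which is brittle.
        var doc = XDocument.Parse(outboundXml);
        Assert.AreEqual("Y", doc.Root.Element("Foo").Value);
    }
}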

The integration project was troublesome: applying user stories and BDD involved a bit more creativity than in the case of the GUI work, and the specifications didn't cleanly tie back to the requirements. Having said that, the user stories themselves didn't go into massive amounts of detail, so I thought I'd found a reasonable balance, and the acceptance tests did have a lot of value and saved me time overall because I had fewer issues discovered when we did end-to-end testing.

Outstanding Questions

Acceptance testing is hard, and from what I've seen there just isn't much guidance out there; I'm quickly running into exactly the issues that make test automation (which used to be my job) very difficult. The main questions I still have are:

  1. Verifications/Setup - Assuming there is a GUI, do I ever skip it when doing my setup/verification?
  2. Size - Do I go for one big happy path test, or more detailed and targeted tests (which in my view are more useful)? What about when, in order to verify something, I need to go to another part of the system?
  3. Maintainability - These tests use so much of the system that we have to make them maintainable. This isn't just about not encoding too many GUI details into the specifications, but also about allowing reuse and writing clean code. What's the best approach?
  4. Story Driven - In order to create user stories that feed into BDD nicely you need to write them in a certain way; is this something the users should have to think about?

Note that these questions relate to the process a developer goes through in practicing BDD and the artifacts we produce; I have plenty of other questions about the overall process of BDD and how far it really changes software development.

Starting with BDD

If you are new to BDD I'd recommend you read Dan North's writing on BDD and give it a shot. Once you understand the ideas, maybe look at the BDD group and all the great content out there on the Web. However, when reading that content I'd make sure to continually ask whether the author is talking about the Dan North (xBehave) style or the Dave Astels (xSpec) style. If it's the former then it is most likely that the author is thinking of acceptance testing, and the advice is probably worth considering; if it's the latter then it's quite possible they are thinking of unit testing, which (to me) is a whole different ball game (in practical terms).

Everything I've said above is questionable and subject to change, and I'm not strictly following what Dan North thinks either. For example, here's a quote from Dan North:

I use the scenarios to identify the “outermost” domain objects and services – the ones that interact with the real world. Then I use rspec and mocha to implement those, which drives out dependent objects (secondary domain objects, repositories, services, etc).

This makes sense, but I've found that the xSpec style works better for domain objects/services, so I actually use xSpec for everything below the GUI. That might change again next month, or if I work on a project with different characteristics.


1 comment:

  1. Anonymous, 11:38 am

    Thanks for this.

    Acceptance testing is something our current process lacks. Our work is heavily influenced by the UI. We create clickable wireframes to get fast feedback from clients and proceed to go from wireframe to HTML. More feedback and then code iterations.

    We do use TDD, but acceptance testing is something I want to bring on board. We have a small team and recently we brought 2 contractors on board. If I had integration tests as a starting point then that would have been much better.
