Wednesday, August 13, 2008

Test Data Builder and Object Mother

I've been meaning to write about this topic for a while because I think correct use of these two patterns can make a big difference to tests for any moderately complex domain model.

I'm not going to discuss the details of the two patterns themselves because there's already a lot of good content out there about them (see the links at the end); instead I'll discuss my impressions of them.

Why Are They Needed?

For any moderately complex domain model you're going to have quite a few ENTITIES and VALUE OBJECTS, and although you'd like to avoid a web of associations you will have associations between them.

Thus when testing a Customer (ENTITY) it's quite possible that you'll want to associate an Account (ENTITY) or Name (VALUE OBJECT) with it. However, you don't necessarily want the creation/configuration of the Account/Name inside the test fixture:

  1. You probably want to reuse the creation code in other tests.
  2. You don't want the creation code adding complexity to the tests, complexity that the reader doesn't care about.

There are other reasons they can be useful, most of which relate to any type of Test Helper.

Application

So far my approach has been to use TEST DATA BUILDER for value objects and then OBJECT MOTHER for Entities.

VALUE OBJECTS validate in their constructor, so if you try to use an OBJECT MOTHER you get a lot of methods and overloads. For example you'd have a method to create an Address with a specific Postcode, another to create it with a specific Postcode and Town... it gets old very fast, and that's why I think an EXPRESSION BUILDER based approach is preferable.
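
To make that concrete, here is a minimal sketch of the kind of builder I mean (C#; the Address type and every name in it are illustrative rather than taken from a real codebase):

// Minimal Address VALUE OBJECT stub so the sketch hangs together; the
// real class would validate in its constructor (see the validation post below).
public class Address
{
    public Address(string line1, string town, string postcode)
    {
        Line1 = line1;
        Town = town;
        Postcode = postcode;
    }

    public string Line1 { get; private set; }
    public string Town { get; private set; }
    public string Postcode { get; private set; }
}

// TEST DATA BUILDER: every field has a sensible default, so a test
// only states the values it actually cares about.
public class AddressBuilder
{
    private string _line1 = "1 Test Street";
    private string _town = "Anytown";
    private string _postcode = "AB1 2CD";

    public AddressBuilder WithPostcode(string postcode) { _postcode = postcode; return this; }
    public AddressBuilder WithTown(string town) { _town = town; return this; }

    public Address Build()
    {
        return new Address(_line1, _town, _postcode);
    }
}

// Usage in a test that only cares about the postcode:
// Address address = new AddressBuilder().WithPostcode("SW1A 1AA").Build();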

ENTITIES do not necessarily force you to provide all the data to them in the constructor, so an OBJECT MOTHER is a good approach. I use a combination of the patterns described in the Creation Method article on the xUnit Patterns site (also see the Test Helper page on the same site). This works nicely because you can use a simple method on the OBJECT MOTHER to create an ENTITY (give me an active customer) and can then customize the returned Customer in the test method (for example by giving them a rejected order).
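
As a rough illustration of the combination (CustomerObjectMother, Customer, Name and the order builder are all invented names, not from any particular codebase):

// OBJECT MOTHER with a Creation Method: the method name states the one
// thing a test may rely on (the Customer is Active); everything else is
// a default that tests must not depend on.
public static class CustomerObjectMother
{
    public static Customer CreateActive()
    {
        var customer = new Customer(new Name("Test", "Customer"));
        customer.Activate();
        return customer;
    }
}

// In the test method, take the ENTITY the mother gives you and then
// customize it with whatever the scenario needs:
//
// Customer customer = CustomerObjectMother.CreateActive();
// customer.AddOrder(new OrderBuilder().Rejected().Build());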

I have used TEST DATA BUILDER for ENTITIES too, in addition to OBJECT MOTHERs; however, I've only done this a few times and those builders are quite specific. In particular they are very high level builders, so they handle cases like "Give me a customer with a relationship to an account manager who works for the company Spondooliks". This case involves at least three AGGREGATES and the associations between those aggregates, and we want to make that setup really readable, which either means putting it in a method in the test class or using a TEST DATA BUILDER (or both).
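
A sketch of that kind of high-level builder, reusing the illustrative names from the sketches above:

// High-level TEST DATA BUILDER that wires up several AGGREGATES and the
// associations between them, so the test setup reads as one sentence.
public class CustomerWithAccountManagerBuilder
{
    private string _companyName = "Spondooliks";

    public CustomerWithAccountManagerBuilder WhoWorksFor(string companyName)
    {
        _companyName = companyName;
        return this;
    }

    public Customer Build()
    {
        var company = new Company(_companyName);
        var accountManager = new AccountManager(new Name("Test", "Manager"), company);
        var customer = CustomerObjectMother.CreateActive();
        customer.AssignAccountManager(accountManager);
        return customer;
    }
}

// Usage:
// Customer customer = new CustomerWithAccountManagerBuilder().WhoWorksFor("Spondooliks").Build();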

One thing to be careful of is relying on values of objects returned by OBJECT MOTHERs or TEST DATA BUILDERs. If you don't pass in a value and it isn't implied by the name of the members that you used, then do not rely on that value in your tests, because if you do they become overly fragile and complex. So if you call CreateActive on a CustomerObjectMother and don't pass in any data then it's safe to assume that the returned Customer is Active, but you cannot assume that the Customer has an Age of 28 (see Creation Methods).

Are They Evil?

Some argue that both patterns are evil because by creating real ENTITIES/VALUE OBJECTS you are going from writing unit tests to writing small integration tests. I'm more in agreement with Ian Cooper on this point and think it's usually fine to use real domain objects in tests (within reason). However, if you disagree then you can go ahead and mock out your ENTITIES, but you'd still want to use TEST DATA BUILDER for your VALUE OBJECTS (see this test smell from the mockobjects guys).

We want to avoid using OBJECT MOTHER to hide design smells. For example if our ENTITIES and AGGREGATES are too large, too complex, overly coupled, or have too many states, then we could use an OBJECT MOTHER to hide the fact that these problems make the objects difficult to create. Eric Evans discussed this topic here, and I handle it by trying to ensure that the OBJECT MOTHERs themselves are kept clean and simple; if they get complex I consider that a good indication that there is something wrong with the design.

Another argument which I've heard is that OBJECT MOTHERs and BUILDERs add a lot more code to step through and make debugging tests more difficult. As it happens I rarely debug tests but if I did I either wouldn't step into the OBJECT MOTHER or BUILDER or I'd add the necessary attributes (as Greg Young does).

Object Mother Links

  1. Pattern - You can also get this PDF here.
  2. Martin Fowler
  3. Ward's Wiki
  4. Creation Methods

Test Data Builder Links

  1. Nat Pryce - Great series of articles on the pattern.
  2. Expression Builder

Several other people use slightly different approaches, for example this one is quite interesting and is a variation of the approach I've settled on.


Wednesday, August 06, 2008

Domain Model Validation

This post has been sitting as a draft for a while and I finally thought I should post it in response to the recent thread in the ALT.NET forum, Validate Value Objects.

So here goes: I'm going to explain the approach to validation that I prefer, or at least the approach we used on my last DDD project.

Value Objects

A value object will validate the parameters in the constructor and should then be immutable.
If you pass invalid values to the constructor you get an exception, but if you want to know the reasons that certain values are not appropriate you call a method of the form BrokenRulesPreventingConstruction(arg1, arg2, ...). You may not need this if you are prepared to repeat the validation in another form (such as in the GUI).
Although you might use a BUILDER to create a VALUE OBJECT I prefer to keep all the validation in the constructor of the VALUE OBJECT.
It is also worth noting that value objects simplify validation: for example, if a Person must have a Name then all we need to do is check that the Person has a Name; we don't need to validate the name itself because all Name objects are valid (whole object).
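
A minimal sketch of what that might look like for a Name VALUE OBJECT (the rule wording is invented):

using System;
using System.Collections.Generic;

// A VALUE OBJECT that validates in its constructor and is then immutable.
public class Name
{
    public string First { get; private set; }
    public string Last { get; private set; }

    public Name(string first, string last)
    {
        var brokenRules = new List<string>(BrokenRulesPreventingConstruction(first, last));
        if (brokenRules.Count > 0)
            throw new ArgumentException(string.Join(", ", brokenRules.ToArray()));

        First = first;
        Last = last;
    }

    // Lets callers ask why particular values are unacceptable without
    // having to catch an exception.
    public static IList<string> BrokenRulesPreventingConstruction(string first, string last)
    {
        var brokenRules = new List<string>();
        if (string.IsNullOrEmpty(first)) brokenRules.Add("A first name is required");
        if (string.IsNullOrEmpty(last)) brokenRules.Add("A last name is required");
        return brokenRules;
    }
}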

Entities

Choices

If you want to do validation within your domain you have a couple of choices:

  1. Entity or Service Based - Some people move their business logic, including validation, into the services and leave the entities as DTOs, which results in an anaemic domain model. An alternative is to make each entity responsible for some of its own validation.
  2. Attributes or Rule Classes - If you want to use attributes then you'll likely be looking at something like EViL or the Validation Application Block (or an example project like Xeva). Attributes work for simple rules, but don't handle complex or state/process-based validation well. For more complex scenarios you'll likely turn to little rule classes (which I choose to see as variations on the SPECIFICATION pattern). Some people combine attributes and custom classes, but personally I prefer to use rule classes for all cases. It does mean you can't generate the GUI (client-side) validation, but I've yet to find an approach to automatically generating client-side validation that I liked, so in my view if you want that validation in the GUI then you should consider writing it separately.
  3. Inject or Direct Dependency - Some people who use rule classes inject the rules into the ENTITIES. Injecting the rules adds flexibility, but just having the domain decide which rules to use is going to be enough in a lot of cases. You sometimes need to inject though, and Udi Dahan's post on Generic Validation has some good points on where the direct approach does fall down, but I still think ENTITIES can handle a lot of their own validation.
  4. Immediate or Delayed - It's natural to assume that the best way to validate is to throw exceptions from the property setters, disallowing invalid states. This doesn't work for cross-field or cross-object validation, or where the object will be temporarily invalid. If you want a good discussion of this see this book.
  5. Notification or Event - Udi Dahan describes an event based notification pattern.
  6. Constructor - The more you validate in the constructor the better (Constructor Initialization), but it does complicate the use of the domain objects and it is only ever a partial answer. For example if an entity moves through multiple states, or is used in multiple processes, then whether it is valid is contextual. I tend to use constructor arguments for immutables, for example a business key.

So what do we use?

  1. Entity Based - Services are certainly involved in validation, but an ENTITY is responsible for its own validation and an AGGREGATE root is responsible for validating the entire AGGREGATE.
  2. Rule Classes - We evaluated attributes, but they only handle very simple validation (not null, max length) well and we didn't want to have two types of validation at play, so we just use rule objects.
  3. Direct Dependency - For the sorts of systems I am developing, injecting rules is not necessary; I have no need to decouple an entity from its own validation rules or to vary the rules at run-time.
  4. Delayed - You can ask an ENTITY whether it is valid at any time and you can temporarily leave it invalid; this lets the higher layers manipulate the objects as they see fit while letting us ensure that once an application "transaction" is complete the object is back to being valid.
  5. Notification - We use a Notification style approach, one where you can ask an object whether it is valid/ready to do something (and get back the notification) or you can just try to do it (which may then cause you to get an exception containing all the reasons that the operation is not possible).

Implementation

The implementation is simple but varies from case to case.

For simple cases an entity will have a GetBrokenRules method. This method creates a collection of IDomainRule objects and asks each rule to evaluate the object in question. When a rule fails, a description of the failure is put into a separate collection of BrokenDomainRule (NOTIFICATION) objects that the GetBrokenRules method returns.
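
Here is a stripped-down sketch of that shape; the rule interface, the example rule and the Customer shown here are illustrative rather than the real implementation:

using System.Collections.Generic;

// A small rule abstraction - essentially a variation on SPECIFICATION.
public interface IDomainRule<T>
{
    bool IsSatisfiedBy(T candidate);
    string Description { get; }
}

// NOTIFICATION object describing a single failure.
public class BrokenDomainRule
{
    public BrokenDomainRule(string description) { Description = description; }
    public string Description { get; private set; }
}

// An example rule; real rules each live in their own small class.
public class CustomerMustHaveNameRule : IDomainRule<Customer>
{
    public bool IsSatisfiedBy(Customer candidate) { return candidate.Name != null; }
    public string Description { get { return "A customer must have a name"; } }
}

public class Customer
{
    public Name Name { get; set; }   // the VALUE OBJECT sketched earlier

    public IList<BrokenDomainRule> GetBrokenRules()
    {
        var rules = new List<IDomainRule<Customer>> { new CustomerMustHaveNameRule() };

        var brokenRules = new List<BrokenDomainRule>();
        foreach (var rule in rules)
        {
            if (!rule.IsSatisfiedBy(this))
                brokenRules.Add(new BrokenDomainRule(rule.Description));
        }
        return brokenRules;
    }
}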

For more complex cases we have to vary the validation based on the state (as in the state pattern) of the object. This doesn't really change things too much: instead of calling GetBrokenRules you call GetBrokenRulesDisallowingTransitionTo(...) and pass in a representation of the new state (maybe an enum value).
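
Building on the sketch above, the state-based variant just selects a different set of rules for the transition being attempted (the CustomerState enum is invented here):

// Inside the Customer ENTITY from the previous sketch:
public IList<BrokenDomainRule> GetBrokenRulesDisallowingTransitionTo(CustomerState newState)
{
    var rules = new List<IDomainRule<Customer>>();
    if (newState == CustomerState.Active)
    {
        rules.Add(new CustomerMustHaveNameRule());
        // ...plus whatever other rules guard the Active state
    }

    var brokenRules = new List<BrokenDomainRule>();
    foreach (var rule in rules)
    {
        if (!rule.IsSatisfiedBy(this))
            brokenRules.Add(new BrokenDomainRule(rule.Description));
    }
    return brokenRules;
}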

Aggregates

An AGGREGATE root is responsible for coordinating the validation for the entire AGGREGATE, so if you ask a Customer to validate itself then it will validate the entire aggregate and return any failings.

Cross Aggregate

If a rule involves more than one AGGREGATE then it should be performed in a SERVICE. For example if we had this requirement:

Before moving a Customer to the Active state you want to ensure that the Customer itself is valid and also that it has at least one Account.

You could make the Customer responsible for this sort of validation but a better option is to move this validation into a separate SERVICE. The same goes for validation that involves multiple instances of the same class (e.g. only one Customer with a specific e-mail address).

In addition, if validation requires you to call an external resource, even a REPOSITORY, then I would move it into a SERVICE and let the SERVICE coordinate the validation.
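
Roughly, I mean something like this (a sketch building on the earlier illustrative types; IAccountRepository and its GetAccountsFor method are invented):

using System.Collections.Generic;

// SERVICE handling a rule that spans AGGREGATES: the Customer's own
// validation plus the "must have at least one Account" requirement.
public class CustomerActivationService
{
    private readonly IAccountRepository _accountRepository;

    public CustomerActivationService(IAccountRepository accountRepository)
    {
        _accountRepository = accountRepository;
    }

    public IList<BrokenDomainRule> GetBrokenRulesPreventingActivation(Customer customer)
    {
        // Start with the aggregate's own validation for the transition...
        var brokenRules = new List<BrokenDomainRule>(
            customer.GetBrokenRulesDisallowingTransitionTo(CustomerState.Active));

        // ...then add the cross-aggregate rule, which needs a REPOSITORY.
        if (_accountRepository.GetAccountsFor(customer).Count == 0)
            brokenRules.Add(new BrokenDomainRule("An active customer must have at least one account"));

        return brokenRules;
    }
}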

Services

As well as cross-aggregate validation, SERVICES should handle any process-specific validation.

Associations

Collections can manage some validation to do with associations. For example if we had this requirement:

A Customer can only have one earnings Portfolio

We've found the best way to handle this is to make the appropriate collection responsible for the validation, so when you call customer.Accounts.Add(account) you get an exception if the addition is not possible for any reason (the exception tells you all the reasons it was impossible). You also have a CanAdd method so you can evaluate the rules without raising an exception (this wouldn't work if we had to worry about the effects of threading on this code).
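
A sketch of such an add-guarding collection, written around the Portfolio requirement above (the Portfolio type, its IsEarnings flag and the DomainRuleException are invented; the same pattern would sit behind customer.Accounts.Add):

using System.Collections;
using System.Collections.Generic;

// A collection that owns its own association rules: Add throws with all
// the reasons, CanAdd lets you check without an exception being raised.
public class PortfolioCollection : IEnumerable<Portfolio>
{
    private readonly List<Portfolio> _portfolios = new List<Portfolio>();

    public IList<BrokenDomainRule> CanAdd(Portfolio portfolio)
    {
        var brokenRules = new List<BrokenDomainRule>();
        if (portfolio.IsEarnings && _portfolios.Exists(p => p.IsEarnings))
            brokenRules.Add(new BrokenDomainRule("A customer can only have one earnings portfolio"));
        return brokenRules;   // an empty list means the add is allowed
    }

    public void Add(Portfolio portfolio)
    {
        var brokenRules = CanAdd(portfolio);
        if (brokenRules.Count > 0)
            throw new DomainRuleException(brokenRules);   // carries all the reasons
        _portfolios.Add(portfolio);
    }

    public IEnumerator<Portfolio> GetEnumerator() { return _portfolios.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}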

This validation is not performed when you validate the AGGREGATE that owns the collection (Customer) because the methods that control adding/removing from the collection can ensure the collection is always valid.

Factories

Some validation only needs to be performed on creation. If this validation is complex enough then it can be worth moving it into a FACTORY, in particular if you want to perform the validation before creating the associated ENTITY.
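
For example, a FACTORY along those lines might look something like this (again a sketch, reusing the invented DomainRuleException and notification types from above, and assuming a Customer constructor that takes the name and address):

using System.Collections.Generic;

// FACTORY that runs creation-only validation before the ENTITY exists.
public static class CustomerFactory
{
    public static Customer Create(Name name, Address address)
    {
        var brokenRules = GetBrokenRulesPreventingCreation(name, address);
        if (brokenRules.Count > 0)
            throw new DomainRuleException(brokenRules);

        return new Customer(name, address);
    }

    public static IList<BrokenDomainRule> GetBrokenRulesPreventingCreation(Name name, Address address)
    {
        var brokenRules = new List<BrokenDomainRule>();
        if (name == null) brokenRules.Add(new BrokenDomainRule("A customer must have a name"));
        if (address == null) brokenRules.Add(new BrokenDomainRule("A customer must have an address"));
        return brokenRules;
    }
}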


Acceptance Testing Book on CodePlex

Geoff Stockham was good enough to send me a link to a CTP of a new acceptance testing guide written by a group of authors that includes Gerard Meszaros.

I'm really looking forward to reading the final version as I am hoping that it will answer some of my questions about how to tackle acceptance testing, especially within the context of BDD.

My initial scan left me thinking that it's quite high level; for example, the "Hand Scripted Test Automation" section doesn't really tell you much about how to write good acceptance tests in the context of Acceptance Test-Driven Development. However, it is early days and it looks like a lot more is to come, so I'm sure it will be useful.


Sunday, August 03, 2008

BDD for Acceptance Testing (xBehave/Dan North)

As I said in my previous post on BDD I use the Dan North (xBehave) style solely for acceptance testing as I think it suits testing at that level.

The first problem with discussing this style is that the message isn't really all that clear. If you want evidence of this, do a Google search for BDD, spend an evening reading as much as you can, then see if you feel you understand what it's all about. Given that Dan North apparently first started working on BDD in late 2003, I'm surprised at the lack of good content. Having said all that, I have few real answers in regards to BDD, so I'm going to try to explain some of the real issues I've met.

Many of these issues are not specific to BDD, as they relate as much to the correct application of user stories and acceptance testing. However, I thought it was worth discussing them all together because you will meet these issues if you do try to follow Dan's style, and some of them may surprise you even if you have read about BDD, especially since most BDD examples boil down to dull AccountTransferService type cases.

It is also worth remembering though that these are just my current opinions, and I have already changed my mind a few times on some of this stuff :)

What Are We Talking About Here

My acceptance tests are the highest level tests that I'm going to write as part of my normal development process. They aren't necessarily end-to-end but they will test to the boundary of one particular system. Let me give you two examples of what I mean:

  1. Integration Project - The acceptance tests sent a message to the SUT and verified the XML document that came out the other end. I didn't go as far as testing how that XML document was processed by subsequent systems; writing those totally end-to-end tests might have had value, but they would have been complex/slow/tricky and in my view would not have adequately driven development.
  2. Web Project - For the last month I've been working on a new Website and in this case we're using Watin/NUnit and the BDD specifications are working from the top down. They only interact with the application through the browser, including when verifying.

Just to be clear, we are only talking here about the top-level tests, so in the case of the Web any controller/mapper/service/entity tests are covered separately using the xSpec style (Scott Bellware is the authority on this in the .NET world).

Framework

As far as I'm aware the only serious .NET BDD framework is NBehave, and whilst my (limited) work with it left me feeling that it's a good approach, I wasn't happy with the resulting specifications. Looking back now I think I made a mistake giving up on it too quickly and I do intend to try it again once v0.4 comes out.

Another BDD framework that you might have heard of is Stormwind. It looks very professional, but it is very GUI focused and doesn't really seem to have that much to do with BDD. As soon as you start talking about the nitty-gritty details of the user interacting with the controls on a page, I think you've lost the power that BDD gives you in describing what your user wants to be able to do and why.

So in actual fact I'm not currently using any special framework for my acceptance tests other than (for the Web) Watin and NUnit/MSTest.

Style

Before starting out with BDD you need to know what you're trying to get out of it. Personally I wanted to get:

  1. Good acceptance tests
  2. Improved communication within the team.
  3. Further cementing of the user story based requirements process.

You will notice reporting is not mentioned here because the ultimate goal is to create working software that meets users' needs, so personally I'm not overly interested in the reporting angle. With this in mind, and given that we have adapted our requirements process to ensure we produce stories in a suitable given/when/then format, how do we feed them into our development? After a bit of experimentation my current approach is just to do it blindly by encoding them directly into methods:

Given_that_I_am_not_logged_in();
And_I_am_on_the_search_page();
When_I_login_successfully();
Then...
And_I_will_be_on_the_search_page();
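
Fleshed out inside an ordinary NUnit fixture it ends up looking something like this (a sketch: the URLs, control names and the logged-in check are invented, and in practice I push more of the WatiN detail down into helper classes):

using NUnit.Framework;
using WatiN.Core;

[TestFixture]
public class When_logging_in_from_the_search_page
{
    private IE _browser;

    [Test]
    public void Should_return_me_to_the_search_page_once_logged_in()
    {
        Given_that_I_am_not_logged_in();
        And_I_am_on_the_search_page();
        When_I_login_successfully();
        Then_I_will_be_logged_in();
        And_I_will_be_on_the_search_page();
    }

    private void Given_that_I_am_not_logged_in()
    {
        // any logout/cookie clearing lives in here, hidden from the reader
        _browser = new IE("http://localhost/mysite/");
    }

    private void And_I_am_on_the_search_page()
    {
        _browser.GoTo("http://localhost/mysite/search");
    }

    private void When_I_login_successfully()
    {
        _browser.Link(Find.ByText("Log in")).Click();
        _browser.TextField(Find.ByName("username")).TypeText("testuser");
        _browser.TextField(Find.ByName("password")).TypeText("password");
        _browser.Button(Find.ByValue("Log in")).Click();
    }

    private void Then_I_will_be_logged_in()
    {
        Assert.IsTrue(_browser.Link(Find.ByText("Log out")).Exists);
    }

    private void And_I_will_be_on_the_search_page()
    {
        StringAssert.Contains("/search", _browser.Url);
    }

    [TearDown]
    public void TearDown()
    {
        if (_browser != null) _browser.Dispose();
    }
}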

A few things to note about this style:

  1. Managing Complexity - In this post about RBehave, Dan North indicates that each step's implementation should be simple. I'm not so sure this is always true; for example when I was writing tests for the integration project there was a lot to be done to set up the test and to verify the results, and with this style I hide all those details within the methods.
  2. Lacks Reporting - With this approach you don't get any reporting because we're just calling methods rather than passing useful metadata to a framework like NBehave, which means we can't print out a report to show our users. This doesn't bother me because the "user stories" are driving the creation of these specifications, so I don't particularly see the value of printing out the results of an execution to say "look, we're doing what we said". If we want to prove we've done the right thing we can look at the way the software behaves.
  3. Lacks Flexibility - I've met quite a few situations where I want to parameterise my given/when/then. For example, even for this simple acceptance test I have variations depending on where you are on the site when you log in, and one case where you go directly to the login page from outside the site. I agree that DRY does not necessarily apply as much to tests as to normal code (which I will discuss in a separate post), but if I blindly copy the code in each case I end up with a real muddle.

What I've found is that with acceptance tests there is a lot of complexity and scope for reuse, and you can of course get that reuse in a number of ways, including inheritance or composition:

  1. Inheritance - You have a context base class and inherit into multiple subclasses. Most of the code is probably going into the base classes and the subclasses just override a few values/methods to customize the behaviour (e.g. overriding StartPage and EndPage or equivalent for the login case). It works but it does mean that the individual scenario classes tell you very little and to understand what's going on you need to go into the base class. Although the base class itself might be short and written in a very clean manner the result is far from perfect. Note even if you do use inheritance you will probably use some helper/builder classes, but this is beside the point.
  2. Composition - Do what xBehave does and move the context out to a separate class which you configure before running every scenario, so your given/when/then is moved to the composed class and you tell it what values to use in a particular execution.

Since reuse is a big issue I might well blog about it separately, but I do think the amount of repetitive code involved in acceptance testing pushes you towards a composition-based approach, and I do intend to give NBehave another shot sometime soon.
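
For the composition route the idea is roughly this (a sketch with invented names; NBehave's actual API is different):

using NUnit.Framework;
using WatiN.Core;

// Reusable scenario "context": the given/when/then steps live here once
// and each scenario just configures the values it cares about.
public class LoginScenarioContext
{
    private IE _browser;

    public string StartPage { get; set; }
    public string ExpectedEndPage { get; set; }

    public void GivenIAmNotLoggedIn()
    {
        _browser = new IE();   // plus any logout/cookie clearing
    }

    public void AndIAmOnTheStartPage()
    {
        _browser.GoTo(StartPage);
    }

    public void WhenILoginSuccessfully()
    {
        _browser.Link(Find.ByText("Log in")).Click();
        _browser.TextField(Find.ByName("username")).TypeText("testuser");
        _browser.TextField(Find.ByName("password")).TypeText("password");
        _browser.Button(Find.ByValue("Log in")).Click();
    }

    public void ThenIWillBeOnTheExpectedEndPage()
    {
        StringAssert.Contains(ExpectedEndPage, _browser.Url);
    }
}

// A scenario just configures the context and runs the steps:
//
// var context = new LoginScenarioContext
// {
//     StartPage = "http://localhost/mysite/search",
//     ExpectedEndPage = "/search"
// };
// context.GivenIAmNotLoggedIn();
// context.AndIAmOnTheStartPage();
// context.WhenILoginSuccessfully();
// context.ThenIWillBeOnTheExpectedEndPage();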

Outside In

As I discussed above, I'm using this approach from the top down, but my usage has varied massively:

  1. Integration Project - I tried an outside in approach focusing on mocking first then replacing with real implementations. This approach was emphasized heavily in a lot of BDD articles and I found it relatively useful in helping me use mocking to influence my design.
  2. Web Project - In this case we're using Watin; no mocking is performed at any stage in these tests but we will stub out some services (e.g. Web services). My workflow is to start with a single Watin test, and then once it's failing I'll switch directly to testing the controller (using the xSpec style), then services/repositories and so on, right the way down until I've written enough code to get the original test passing.

The difference is small, but I certainly think it's worth pointing out that BDD does not force you to work in one particular way.

Thoroughness

How many of these specifications do I write? Well, for me it depends, and since I've now tried to use BDD on two projects I thought I'd describe how many acceptance tests I wrote on each of them:

  1. Integration Project - In this case the SUT receives messages describing changes to an AGGREGATE and it then generates XML messages which are sent to BizTalk. My approach to acceptance testing was to modify the AGGREGATE, get the resulting message, feed it through the SUT and then verify that the resulting XML matched my expectations. In this case we had (boring) complexity in the mapping (if Foo has the value X then do Y), and whilst I could have tried to write acceptance tests for each of these cases, I was already covering that behaviour in my lower level tests and it was far more convenient to test/specify at that level. With this in mind I only wrote two acceptance tests: one with a relatively unpopulated aggregate and one that was fully populated with interesting values. All the other details were covered by xSpec style unit/integration tests.
  2. GUI Project - Our user proxy writes user stories and scenarios that go into quite a lot of detail; these were then turned pretty much directly into acceptance tests. The result is lots of acceptance tests and they do indeed (so far) drive the development.

The integration project was troublesome: applying user stories and BDD involved a bit more creativity than in the case of the GUI work, and the specifications didn't cleanly tie back to the requirements. Having said that, the user stories themselves didn't go into massive amounts of detail, so I thought I'd found a reasonable balance, and the acceptance tests did have a lot of value and saved me time overall because I had fewer issues being discovered when we did end-to-end testing.

Outstanding Questions

Acceptance testing is hard, and from what I've seen there just isn't much guidance out there; I'm quickly running into exactly the issues that make test automation (which used to be my job) very difficult. The main questions I still have are:

  1. Verifications/Setup - Assuming there is a GUI do I ever skip it when doing my setup/verify?
  2. Size - Do I go for one big happy-path test or for more detailed and targeted tests (which in my view are more useful)? What about when, in order to verify something, I need to go to another part of the system?
  3. Maintainability - These tests use so much of the system that we have to make them maintainable; this isn't just about not encoding too many GUI details into the specifications but is also about allowing reuse and writing clean code. What's the best approach?
  4. Story Driven - In order to create user stories that feed into BDD nicely you need to write them a certain way; is this something the users should have to think about?

Note that these questions relate to the process that a developer goes through in practicing BDD and the artifacts we produce; I have plenty of other questions about the overall process of BDD and how far it really changes software development.

Starting with BDD

If you are new to BDD I'd recommend you read Dan North discussing BDD and give it a shot. Once you understand the ideas, maybe look at the BDD group and all the great content out there on the Web. However, when reading content I'd make sure to continually ask whether the author is talking about the Dan North (xBehave) style or the Dave Astels (xSpec) style. If it's the former then it is most likely that the author will be thinking of acceptance testing and the advice is probably worth considering; if it's the latter then it's quite possible they are thinking of unit testing, which (to me) is a whole different ball game (in practical terms).

Also, everything I've said above is questionable and subject to change, and I'm not strictly following what Dan North thinks. For example, here's a quote from Dan North:

I use the scenarios to identify the “outermost” domain objects and services – the ones that interact with the real world. Then I use rspec and mocha to implement those, which drives out dependent objects (secondary domain objects, repositories, services, etc).

This makes sense, but I've found that the xSpec style works better for domain objects/services, so I actually use xSpec for everything below the GUI. That might change again next month or if I work on a project with different characteristics.


Book Review - Better, Faster, Lighter Java

Ian Cooper mentioned this book on Twitter and since it was available for 33p on Amazon I couldn't resist buying it.

The book describes the lightweight processes and development practices that started in the Java space and have fed through into ALT.NET. The book sets out its vision by breaking things down into five key principles:

  1. Keep It Simple
  2. Do One Thing, and Do It Well
  3. Strive For Transparency
  4. Allow for Extension
  5. You are What You Eat

The first four principles each get their own chapter, and in the following chapters concrete examples, including Hibernate and Spring, are used to show how these principles improved the designs. These chapters worked really well, and I also enjoyed the entirely relevant discussions on topics such as golden hammers, rewrite-or-replace, and vendor sales processes. The authors are also pleasingly open-minded: absolutely no dogma here, just pragmatic advice on how to create successful software.

So even though it is a Java book I think it's worth a read, especially if you can get it for 33p :)
