Thursday, September 04, 2008

NHibernate - Mapping Custom Collections

Came upon this article on mapping custom collections that uses extension methods. To be honest I'm not a massive fan of extension methods; I prefer to use custom collection classes, so I thought I'd show how I've seen them mapped in the past.

When I say that I like custom collections I mean classes like this:

public class FooCollection : ReadOnlyCollection<Foo>

I prefer a collection like this over a bog standard IList<Foo> because there is usually a lot of behaviour related to the collection; for example we may have custom rules relating to addition/removal, and we'll probably want to ensure the collection's contents are valid at all times. One solution would be the Encapsulate Collection refactoring, but I've found that it just results in my aggregate roots and key entities getting bogged down in lots of boring collection work, so I prefer to farm this work out to custom collections. Plus if I go for the custom collection approach I can ensure they implement interfaces such as this:

public interface IReadOnlyCollection<TItem> : IReadOnlyCollection, ...
{
    bool Contains(TItem toEvaluate);

    IReadOnlyCollection<TItem> FindAll(Specification<TItem> specification);
    ...
}

This sort of interface is very useful because IEnumerable<T> misses some important members and IList<T> is too permissive of change, plus we want to be able to add all sorts of useful helper methods like the FindAll shown.
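To show what a Specification<TItem> and that FindAll might look like in use, here's a minimal sketch; the Account rule and its DaysOverdue property are invented for illustration:

public abstract class Specification<TItem>
{
    public abstract bool IsSatisfiedBy(TItem candidate);
}

// A hypothetical rule to pass to FindAll.
public class OverdueAccountSpecification : Specification<Account>
{
    public override bool IsSatisfiedBy(Account candidate)
    {
        return candidate.DaysOverdue > 0;
    }
}

A test or service can then write customer.Accounts.FindAll(new OverdueAccountSpecification()) and get back another read-only collection.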

However I can't directly map my FooCollection; NHibernate is only going to be able to map something basic like an IList. The solution is to map the collection like this:

<component name="FooCollection" access="nosetter.camelcase-underscore">
  <bag name="_innerList" cascade="all-delete-orphan" access="field" lazy="true">
    <key column="ContainingClassesID" />
    <one-to-many class="..." />
  </bag>
</component>

This requires a little explaining. We're mapping our custom collection as a component, which seems odd until you look at the way we then map the _innerList within our custom collection. Essentially this _innerList is an IList<Foo> hidden within FooCollection (or more correctly one of its base classes). Since we're mapping an interface NHibernate is happy, and the users of our custom collection don't know that we've had to put in this silly _innerList; plus maintaining/using this little internal collection is taken care of by a base class so it's all painless. One slight smell is that we're mapping _innerList as a field, which is not perfect but it's not going to keep me up at nights.
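To make the shape of this concrete, here's a rough sketch of how the base class might hold the _innerList; this is a guess at the structure, and this ReadOnlyCollection<T> is our own base class, not the one in System.Collections.ObjectModel:

public abstract class ReadOnlyCollection<TItem>
{
    // NHibernate writes straight to this field because of access="field".
    private IList<TItem> _innerList = new List<TItem>();

    // Derived collections work against the list through this property.
    protected IList<TItem> InnerList
    {
        get { return _innerList; }
    }

    public bool Contains(TItem toEvaluate)
    {
        return _innerList.Contains(toEvaluate);
    }
}

public class FooCollection : ReadOnlyCollection<Foo>
{
    // Collection-specific rules and helper methods go here.
}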

Anyway it's not a perfect solution but it lets you map custom collections without sacrificing too much.


Wednesday, August 13, 2008

Test Data Builder and Object Mother

I've been meaning to write about this topic for a while because I think correct use of these two patterns can make a big difference to tests for any moderately complex domain model.

I'm not going to discuss the details of the two patterns themselves because there's already a lot of good content out there about them (see links at end), instead I'll discuss my impressions of them.

Why Are They Needed?

For any moderately complex domain model you're going to have quite a few ENTITIES and VALUE OBJECTS, and although you'd like to avoid a web of associations you will have associations between them.

Thus when testing a Customer (ENTITY) it's quite possible that you'll want to associate an Account (ENTITY) or Name (VALUE OBJECT) with it. However you don't necessarily want the creation/configuration of the Account/Name inside the test fixture:

  1. You probably want to reuse the creation code in other tests.
  2. You don't want the creation code adding complexity to the tests, complexity that the reader doesn't care about.

There are other reasons they can be useful, most of which relate to any type of Test Helper.

Application

So far my approach has been to use TEST DATA BUILDER for VALUE OBJECTS and OBJECT MOTHER for ENTITIES.

VALUE OBJECTS validate in their constructor, so if you try to use an OBJECT MOTHER you get a lot of methods and overloads. For example you'd have a method to create an Address with a specific Postcode, another to create it with a specific Postcode and town... it gets old very fast, and that's why I think an EXPRESSION BUILDER based approach is preferable.
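To illustrate, here's a minimal builder for an Address; the constructor signature and default values are invented for the example:

public class AddressBuilder
{
    // Sensible defaults mean tests only specify the values they care about.
    private string _town = "London";
    private string _postcode = "N1 1AA";

    public AddressBuilder WithTown(string town)
    {
        _town = town;
        return this;
    }

    public AddressBuilder WithPostcode(string postcode)
    {
        _postcode = postcode;
        return this;
    }

    public Address Build()
    {
        return new Address(_town, _postcode);
    }
}

A test then reads new AddressBuilder().WithPostcode("EH1 1AA").Build(), and adding a new field to Address means touching the builder rather than every test.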

ENTITIES do not necessarily force you to provide all the data to them in the constructor so an OBJECT MOTHER is a good approach. I use a combination of the patterns described in the Creation Method article in the XUnit Patterns page (also see Test Helper page on same site). This works nicely because you can use a simple method on the OBJECT MOTHER to create an ENTITY (give me an active customer) and can then customize the returned Customer in the test method (for example by giving them a rejected order).

I have used TEST DATA BUILDER for ENTITIES too, in addition to OBJECT MOTHERs; however I've only done this a few times and the builders were quite specific. In particular these are very high level builders, handling cases like "Give me a customer with a relationship to an account manager who works for the company Spondooliks". This case involves at least three AGGREGATES and the associations between them, and we want to make that setup really readable, which either means putting it in a method in the test class or using a TEST DATA BUILDER (or both).

One thing to be careful of is relying on values of objects returned by OBJECT MOTHERs or TEST DATA BUILDERs. If you don't pass in a value, and it isn't implied by the name of the members that you used, then do not rely on the value in your tests; if you do they become overly fragile and complex. So if you call CreateActive on a CustomerObjectMother and don't pass in any data then it's safe to assume that the returned Customer is Active, but you cannot assume that the Customer has an Age of 28 (see Creation Methods).
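As a sketch of what I mean, here's a minimal OBJECT MOTHER; the Customer members are invented for the example:

public static class CustomerObjectMother
{
    // The name promises an active Customer and nothing more, so tests must
    // not rely on any other values the returned object happens to have.
    public static Customer CreateActive()
    {
        Customer customer = new Customer(new Name("Test", "Customer"));
        customer.Activate();
        return customer;
    }
}

The test method can then customize the result, for example by adding a rejected order to the returned Customer.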

Are They Evil?

Some argue that both patterns are evil because by creating real ENTITIES/VALUE OBJECTS you are going from writing unit tests to writing small integration tests. I'm more in agreement with Ian Cooper on this point and think it's usually fine to use real domain objects in tests (within reason). However if you disagree then you can go ahead and mock out your ENTITIES, but you'd still want to use TEST DATA BUILDER for your VALUE OBJECTS (see this test smell from the mockobjects guys).

We want to avoid using OBJECT MOTHER to hide design smells. For example if our ENTITIES and AGGREGATES are too large, too complex, overly coupled, or have too many states, then we could use an OBJECT MOTHER to hide the fact that these problems make the objects difficult to create. Eric Evans discussed this topic here; I handle it by trying to ensure that the OBJECT MOTHERs themselves are kept clean and simple, and if they get complex I consider that a good indication that there is something wrong with the design.

Another argument I've heard is that OBJECT MOTHERs and BUILDERs add a lot more code to step through and so make debugging tests more difficult. As it happens I rarely debug tests, but if I did I'd either not step into the OBJECT MOTHER or BUILDER, or I'd add the necessary attributes (as Greg Young does).

Object Mother Links

  1. Pattern - You can also get this PDF here.
  2. Martin Fowler
  3. Ward's Wiki
  4. Creation Methods

Test Data Builder Links

  1. Nat Pryce - Great series of articles on the pattern.
  2. Expression Builder

Several other people use slightly different approaches, for example this one is quite interesting and is a variation of the approach I've settled on.


Wednesday, August 06, 2008

Domain Model Validation

This post has been sitting as a draft for a while and I finally thought I should post it in response to the recent thread in the ALT.NET forum, Validate Value Objects.

So here goes: I'm going to explain the approach to validation that I prefer, or at least the approach we used on my last DDD project.

Value Objects

A VALUE OBJECT will validate the parameters in its constructor and should then be immutable. If you pass invalid values to the constructor you get an exception, but if you want to know the reasons that certain values are not appropriate you call a method of the form BrokenRulesPreventingConstruction(arg1, arg2, ...). You may not need this if you are prepared to repeat the validation in another form (such as in the GUI).

Although you might use a BUILDER to create a VALUE OBJECT, I prefer to keep all the validation in the constructor of the VALUE OBJECT itself. It is also worth noting that value objects simplify validation: if a Person must have a Name then all we need to do is check that the Person has a Name, because all Name objects are valid (whole object).
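Here's a bare-bones sketch of what that looks like for a Name; the rules and members are invented for illustration:

public class Name
{
    private readonly string _first;
    private readonly string _last;

    public Name(string first, string last)
    {
        // Fail on construction so that every Name in existence is valid.
        if (BrokenRulesPreventingConstruction(first, last).Count > 0)
            throw new ArgumentException("The values provided would create an invalid Name.");

        _first = first;
        _last = last;
    }

    // Lets callers ask why construction would fail without catching exceptions.
    public static IList<string> BrokenRulesPreventingConstruction(string first, string last)
    {
        List<string> brokenRules = new List<string>();
        if (string.IsNullOrEmpty(first))
            brokenRules.Add("A first name is required.");
        if (string.IsNullOrEmpty(last))
            brokenRules.Add("A last name is required.");
        return brokenRules;
    }

    public string First { get { return _first; } }
    public string Last { get { return _last; } }
}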

Entities

Choices

If you want to do validation within your domain you have a number of choices:

  1. Entity or Service Based - Some people move their business logic, including validation, into the services and leave the entities as DTOs. This brings an anaemic domain model. An alternative is to make each entity responsible for some of its own validation.
  2. Attributes or Rule Classes - If you want to use attributes then you'll likely be looking at something like EViL or the Validation Application Block (or an example project like Xeva). Attributes work for simple rules, but don't handle complex or state/process-based validation well. For more complex scenarios you'll likely turn to little rule classes (which I choose to see as variations on the SPECIFICATION pattern). Some people combine attributes and custom classes, but personally I prefer to use rule classes for all cases. It does mean you can't generate the GUI (client side) validation, but I've yet to find an approach to automatically generating client side validation that I liked, so in my view if you want that validation in the GUI then you should consider writing it separately.
  3. Inject or Direct Dependency - Some people who use rule classes inject the rules into the ENTITIES. Injecting the rules adds flexibility, but just having the domain decide which rules to use is going to be enough in a lot of cases. You do sometimes need to inject, though, and Udi Dahan's post on Generic Validation has some good points on where the direct approach falls down, but I still think ENTITIES can handle a lot of their own validation.
  4. Immediate or Delayed - It's natural to assume that the best way to validate is to throw exceptions from property setters, disallowing invalid states. This doesn't work for cross-field or cross-object validation, or where the object will be temporarily invalid. If you want a good discussion of this see this book.
  5. Notification or Event - Udi Dahan describes an event based notification pattern.
  6. Constructor - The more you validate in the constructor the better (Constructor Initialization), but it does complicate the use of the domain objects and it is only ever a partial answer. For example if an entity moves through multiple states, or is used in multiple processes, then whether it is valid is contextual. I tend to use constructor arguments for immutables, for example a business key.

So what do we use?

  1. Entity Based - Services are certainly involved in validation, but an ENTITY is responsible for its own validation and an AGGREGATE root is responsible for validating the entire AGGREGATE.
  2. Rule Classes - We evaluated attributes but they only handle very simple validation (not null, max length) well and we didn't want to have two types of validation at play so we just use rule objects.
  3. Direct Dependency - For the sorts of systems I am developing injecting rules is not necessary, I have no need to decouple an entity from its own validation rules or to vary the rules at run-time.
  4. Delayed - You can ask an ENTITY whether it is valid at any time and you can temporarily leave it invalid; this lets the higher layers manipulate the objects as they see fit whilst ensuring that once an application "transaction" is complete the object is back to being valid.
  5. Notification - We use a Notification style approach: you can ask an object whether it is valid/ready to do something (and get back the notification), or you can just try to do it (which may then cause you to get an exception containing all the reasons the operation is not possible).

Implementation

The implementation is simple but varies from case to case.

For simple cases an entity will have a GetBrokenRules method. This method creates a collection of IDomainRule objects and forces each rule to evaluate the object in question. When a rule fails, a description of the failure is put into a separate collection of BrokenDomainRule (NOTIFICATION) objects that GetBrokenRules returns.

For more complex cases we have to vary the validation based on the state (as in the State pattern) of the object. This doesn't really change things much: instead of calling GetBrokenRules you call GetBrokenRulesDisallowingTransitionTo(...) and pass in a representation of the new state (maybe an enum value).
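As a sketch of the simple case (I've made the rule interface generic here, and the CreateRules helper is hypothetical; the real shape will vary):

public interface IDomainRule<TEntity>
{
    bool IsSatisfiedBy(TEntity toEvaluate);
    string FailureDescription { get; }
}

public class BrokenDomainRule
{
    private readonly string _description;

    public BrokenDomainRule(string description)
    {
        _description = description;
    }

    public string Description
    {
        get { return _description; }
    }
}

// Inside Customer:
public IList<BrokenDomainRule> GetBrokenRules()
{
    // CreateRules() would return e.g. a CustomerMustHaveNameRule and friends.
    List<BrokenDomainRule> brokenRules = new List<BrokenDomainRule>();
    foreach (IDomainRule<Customer> rule in CreateRules())
    {
        if (!rule.IsSatisfiedBy(this))
            brokenRules.Add(new BrokenDomainRule(rule.FailureDescription));
    }
    return brokenRules;
}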

Aggregates

An AGGREGATE root is responsible for coordinating the validation for the entire AGGREGATE, so if you ask a Customer to validate itself then it will validate the entire aggregate and return any failings.

Cross Aggregate

If a rule involves more than one AGGREGATE then it should be performed in a SERVICE. For example if we had this requirement:

Before moving a Customer to the Active state you want to ensure that the Customer itself is valid and also that it has at least one Account.

You could make the Customer responsible for this sort of validation but a better option is to move this validation into a separate SERVICE. The same goes for validation that involves multiple instances of the same class (e.g. only one Customer with a specific e-mail address).

In addition if validation requires you to call to an external resource, even a REPOSITORY, then I would move it into a SERVICE and the SERVICE coordinates the validation.
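For the Customer/Account example above, such a SERVICE might look like this sketch; the repository interface and member names are illustrative:

public class CustomerActivationService
{
    private readonly IAccountRepository _accountRepository;

    public CustomerActivationService(IAccountRepository accountRepository)
    {
        _accountRepository = accountRepository;
    }

    public IList<BrokenDomainRule> GetBrokenRulesPreventingActivation(Customer customer)
    {
        // Start with the AGGREGATE's own validation...
        List<BrokenDomainRule> brokenRules = new List<BrokenDomainRule>(customer.GetBrokenRules());

        // ...then add the cross-aggregate rule, which needs the REPOSITORY.
        if (_accountRepository.FindAccountsFor(customer).Count == 0)
            brokenRules.Add(new BrokenDomainRule("An active Customer must have at least one Account."));

        return brokenRules;
    }
}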

Services

As well as cross-aggregate validation SERVICES should handle any process specific validation.

Associations

Collections can manage some validation to do with associations. For example if we had this requirement:

A Customer can only have one earnings Portfolio

We've found the best way to handle this is to make the appropriate collection responsible for the validation, so when you call customer.Accounts.Add(account) you get an exception if for any reason the addition is not possible (the exception tells you all the reasons it was impossible). You also have a CanAdd method so you can evaluate without raising an exception (this wouldn't work if we had to worry about the effects of threading on this code).

This validation is not performed when you validate the AGGREGATE that owns the collection (Customer) because the methods that control adding/removing from the collection can ensure the collection is always valid.
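A sketch of the Add/CanAdd pairing; the single rule shown is just an example, and the real code would throw a dedicated exception carrying the broken rules:

public class AccountCollection
{
    private readonly IList<Account> _innerList = new List<Account>();

    public bool CanAdd(Account account)
    {
        return GetBrokenRulesPreventingAddition(account).Count == 0;
    }

    public void Add(Account account)
    {
        IList<BrokenDomainRule> brokenRules = GetBrokenRulesPreventingAddition(account);
        if (brokenRules.Count > 0)
            throw new InvalidOperationException("Cannot add this Account; " + brokenRules.Count + " rule(s) are broken.");
        _innerList.Add(account);
    }

    private IList<BrokenDomainRule> GetBrokenRulesPreventingAddition(Account account)
    {
        List<BrokenDomainRule> brokenRules = new List<BrokenDomainRule>();
        if (_innerList.Contains(account))
            brokenRules.Add(new BrokenDomainRule("The Account is already in this collection."));
        return brokenRules;
    }
}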

Factories

Some validation only needs to be performed on creation. If this validation is complex enough then it can be worth moving it into a FACTORY, in particular if you want to perform the validation before creating the associated ENTITY.


Acceptance Testing Book on CodePlex

Geoff Stockham was good enough to send me a link to a CTP of a new acceptance testing guide written by a group of authors that include Gerard Meszaros.

Really looking forward to reading the final version as I am hoping that it will answer some of my questions about how to tackle acceptance testing, especially within the context of BDD.

My initial scan left me thinking that it's quite high level; for example the "Hand Scripted Test Automation" section doesn't really tell you much about how to write good acceptance tests in the context of Acceptance Test-Driven Development. However it is early days and it looks like a lot more is to come, so I'm sure it will be useful.


Sunday, August 03, 2008

BDD for Acceptance Testing (xBehave/Dan North)

As I said in my previous post on BDD I use the Dan North (xBehave) style solely for acceptance testing as I think it suits testing at that level.

The first problem with discussing this style is that the message isn't really all that clear; if you want evidence of this, do a Google search for BDD, spend an evening reading as much as you can, then see if you feel you understand what it's all about. Given that Dan North apparently first started working on BDD in late 2003 I'm surprised at the lack of good content. Having said all that, I have few real answers regarding BDD myself, so I'm going to try to explain some of the real issues I've met.

Many of these issues are not specific to BDD; they relate as much to the correct application of user stories and acceptance testing. However I thought it was worth discussing it all together, because you will meet issues if you try to follow Dan's style, and some of them may surprise you even if you have read about BDD, especially since most BDD examples boil down to dull AccountTransferService type cases.

It is also worth remembering though that these are just my current opinions, and I have already changed my mind a few times on some of this stuff :)

What Are We Talking About Here

My acceptance tests are the highest level tests that I'm going to write as part of my normal development process. They aren't necessarily end to end, but they will test to the boundary of one particular system. Let me give you two examples of what I mean:

  1. Integration Project - The acceptance tests sent a message to the SUT and verified the XML document that came out the other end. I didn't go as far as testing how that XML document was processed by subsequent systems, writing these totally end-to-end tests might have had value but they would have been complex/slow/tricky and in my view would not have adequately driven development.
  2. Web Project - For the last month I've been working on a new Website and in this case we're using Watin/NUnit and the BDD specifications are working from the top down. They only interact with the application through the browser, including when verifying.

Just to be clear we are just talking here about the top level tests, so in the case of the Web any controller/mapper/service/entity tests are covered separately using the xSpec style (Scott Bellware is the authority on this in the .NET world).

Framework

As far as I'm aware the only serious .NET BDD framework is NBehave, and whilst my (limited) work with it left me feeling that it takes a good approach, I wasn't happy with the resulting specifications. Looking back now I think I made a mistake giving up on it too quickly, and I do intend to try it again once v0.4 comes out.

Another BDD framework that you might have heard of is Stormwind. It looks very professional, but it is very GUI focused and doesn't really seem to have that much to do with BDD. As soon as you start talking about the nitty gritty details of the user interacting with the controls on a page, I think you've lost the power BDD gives you to describe what your user wants to be able to do and why.

So in actual fact I'm not currently using any special framework for my acceptance tests other than (for the Web) Watin and NUnit/MSTest.

Style

Before starting out with BDD you need to know what you're trying to get out of it. Personally I wanted:

  1. Good acceptance tests.
  2. Improved communication within the team.
  3. Further cementing of the user story based requirements process.

You will notice reporting is not mentioned here; the ultimate goal is to create working software that meets users' needs, so personally I'm not overly interested in the reporting angle. With this in mind, and given that we have adapted our requirements process to ensure we produce stories in a suitable given/when/then format, how do we feed them into our development? After a bit of experimentation my current approach is simply to encode them directly into methods:

Given_that_I_am_not_logged_in();
And_I_am_on_the_search_page();
When_I_login_successfully();
Then...
And_I_will_be_on_the_search_page();

A few things to note about this style:

  1. Managing Complexity - In this post about RBehave, Dan North indicates that each step's implementation should be simple. I'm not so sure this is always true; for example when I was writing tests for the integration project there was a lot to be done to set up the test and to verify the results, and with this style I hide all those details within the methods.
  2. Lacks Reporting - With this approach you don't get any reporting, because we're just calling methods rather than passing useful metadata to a framework like NBehave; this means we can't print out a report to show our users. That doesn't bother me because the user stories are driving the creation of these specifications, so I don't particularly see the value of printing out the results of an execution to say "look, we're doing what we said". If we want to prove we've done the right thing we can look at the way the software behaves.
  3. Lacks Flexibility - I've met quite a few situations where I want to parameterise my given/when/then. For example, even for this simple acceptance test I have variations depending on where you are on the site when you login, and one case where you go directly to the login page from outside the site. I agree that DRY does not necessarily apply as much to tests as to normal code (which I will discuss in a separate post), but if I blindly copy the code in each case I end up with a real muddle.

What I've found is with acceptance tests there is a lot of complexity and scope for reuse and you can of course get the reuse in a number of ways including using inheritance or composition:

  1. Inheritance - You have a context base class and inherit into multiple subclasses. Most of the code is probably going into the base classes and the subclasses just override a few values/methods to customize the behaviour (e.g. overriding StartPage and EndPage or equivalent for the login case). It works but it does mean that the individual scenario classes tell you very little and to understand what's going on you need to go into the base class. Although the base class itself might be short and written in a very clean manner the result is far from perfect. Note even if you do use inheritance you will probably use some helper/builder classes, but this is beside the point.
  2. Composition - Do what xBehave does and move the context out to a separate class which you configure before running every scenario, so your given/when/then is moved to the composed class and you tell it what values to use in a particular execution.

Since reuse is a big issue I might well blog about it separately, but I do think the amount of repetitive code involved in acceptance testing pushes you towards a composition based approach, and I do intend to give NBehave another shot sometime soon.
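To show what I mean by composition, here's a rough sketch of moving the given/when/then into a configurable context class; the WatiN/NUnit details are purely illustrative:

public class LoginScenario
{
    private readonly IE _browser = new IE();

    // Parameterised context: each test tells the scenario where to start and finish.
    public string StartPage { get; set; }
    public string ExpectedEndPage { get; set; }

    public void Given_that_I_am_not_logged_in()
    {
        // Clear cookies/session state here.
    }

    public void And_I_am_on_the_start_page()
    {
        _browser.GoTo(StartPage);
    }

    public void When_I_login_successfully()
    {
        // Drive the login form through the browser here.
    }

    public void Then_I_will_be_on_the_expected_page()
    {
        Assert.IsTrue(_browser.Url.EndsWith(ExpectedEndPage));
    }
}

Each test then just sets StartPage/ExpectedEndPage and calls the steps, rather than repeating them in a subclass.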

Outside In

As I discuss above I'm using this approach from the top down, but my usage has varied massively:

  1. Integration Project - I tried an outside in approach focusing on mocking first then replacing with real implementations. This approach was emphasized heavily in a lot of BDD articles and I found it relatively useful in helping me use mocking to influence my design.
  2. Web Project - In this case we're using Watin; no mocking is performed at any stage in these tests, but we will stub out some services (e.g. Web services). My workflow is to start with a single Watin test and then once it's failing I'll switch directly to testing the controller (using the xSpec style), then services/repositories and so on right the way down until I've written enough code to get the original test passing.

The difference is small, but I certainly think it's worth pointing out that BDD does not force you to work in one particular way.

Thoroughness

How many of these specifications do I write? Well, it depends, and since I've now tried to use BDD on two projects I thought I'd describe how many acceptance tests I wrote on each of them:

  1. Integration Project - In this case the SUT receives messages describing changes to an AGGREGATE and it then generates XML messages which are sent to BizTalk. My approach to acceptance testing was to modify the AGGREGATE, get the resulting message, feed it through the SUT and then verify the resulting XML matched my expectations. In this case we had (boring) complexity in the mapping (if Foo has the value X then do Y), and whilst I could have tried to write acceptance tests for each of these cases, I was already covering that behaviour in my lower level tests and it was far more convenient to test/specify at that level. With this in mind I only wrote two acceptance tests, one with a relatively unpopulated aggregate and one that was fully populated with interesting values. All the other details were covered by xSpec style unit/integration tests.
  2. GUI Project - Our user proxy writes user stories and scenarios that go into quite a lot of detail; these were then turned pretty much directly into acceptance tests. The result is lots of acceptance tests, and they do indeed (so far) drive the development.

The integration project was troublesome; applying user stories and BDD involved a bit more creativity than in the case of the GUI work, and the specifications didn't cleanly tie back to the requirements. Having said that, the user stories themselves didn't go into massive amounts of detail, so I thought I'd found a reasonable balance; the acceptance tests did have a lot of value and saved me time overall because I had fewer issues discovered when we did end-to-end testing.

Outstanding Questions

Acceptance testing is hard, and from what I've seen there just isn't much guidance out there; I'm quickly running into exactly the issues that make test automation (which used to be my job) very difficult. The main questions I still have are:

  1. Verifications/Setup - Assuming there is a GUI do I ever skip it when doing my setup/verify?
  2. Size - Do I go for one big happy path test or more detailed and targeted tests (which in my view are more useful)?  What about when in order to verify something I need to go to another part of the system?
  3. Maintainability - These tests use so much of the system that we have to make them maintainable. This isn't just about not encoding too many GUI details into the specifications but also about allowing reuse and writing clean code. What's the best approach?
  4. Story Driven - In order to create user stories that feed into BDD nicely you need to write them a certain way; is this something the users should have to think about?

Note that these questions relate to the process a developer goes through in practicing BDD and the artifacts we produce; I have plenty of other questions about the overall process of BDD and how far it really changes software development.

Starting with BDD

If you are new to BDD I'd recommend you read Dan North discussing BDD and give it a shot. Once you understand the ideas, maybe look at the BDD group and all the great content out there on the Web. However when reading I'd make sure to continually ask whether the author is talking about the Dan North (xBehave) style or the Dave Astels (xSpec) style. If it's the former then the author is most likely thinking of acceptance testing and the advice is probably worth considering; if it's the latter then it's quite possible they are thinking of unit testing, which (to me) is a whole different ball game (in practical terms).

Also, everything I've said above is questionable and subject to change; I'm also not strictly following what Dan North thinks. For example, here's a quote from him:

I use the scenarios to identify the “outermost” domain objects and services – the ones that interact with the real world. Then I use rspec and mocha to implement those, which drives out dependent objects (secondary domain objects, repositories, services, etc).

This makes sense but I've found using xSpec style works better for domain objects/services, so I actually use xSpec for everything below the GUI, but that might change again next month or if I work on a project with different characteristics.


Book Review - Better, Faster, Lighter Java

Ian Cooper mentioned this book on Twitter and since it was available for 33p on Amazon I couldn't resist buying it.

The book describes the lightweight processes and development practices that started in the Java space and have fed through into ALT.NET. The way the book sets out the vision is by breaking things down to five key principles:

  1. Keep It Simple
  2. Do One Thing, and Do It Well
  3. Strive For Transparency
  4. Allow for Extension
  5. You are What You Eat

The first four principles each get their own chapter and in following chapters concrete examples, including Hibernate and Spring, are used to show how these principles improved the designs. These chapters worked really well and I also enjoyed the entirely relevant discussions on topics such as golden hammers, rewrite or replace and vendor sales processes. The authors are also pleasingly open-minded, absolutely no dogma here just pragmatic advice on how to create successful software.

So my opinion is that even though it is a Java book it's worth a read, especially if you can get it for 33p :)


Thursday, July 24, 2008

BDD - The Two Approaches

I've been using BDD for a little while now and think I've gotten to grips with most of the ideas; however, the more I learn the more questions I have. Rather than wait for answers I thought I'd blog about what I'm trying now, so when reading this keep in mind that these are just my current personal opinions and should be taken with a pinch of salt.

My intention was to write one big blog entry but it immediately got too long and unfocussed so I decided to break things out a bit and I thought I'd start by describing how I view BDD.

To me there are two styles of BDD:

  1. North BDD (xBehave) - The style Dan North and colleagues created and evolved and which is often seen as the given/when/then style. To me this style will be useful in bringing everyone into the requirements process and getting them talking the same language whilst providing a process to ensure that the requirements feed directly into development. In my view best suited to higher level testing and in particular to acceptance testing.
  2. Astels BDD (xSpec) - The style Dave Astels popularised and which, to me, is a new form of TDD which has some serious benefits. You can use this style for any testing, from acceptance testing downwards.

Some have argued that the given/when/then style is not needed, and that the xSpec style can be used for all levels of testing. I've tried this but didn't entirely like the results, so I currently use both approaches, because I think using a different testing style for acceptance testing than for all other developer testing has benefits, particularly because the two types of test/specification are quite different:

  1. Level - Although many of the BDD examples are written against the domain/service layer, I think that in many systems you'll actually be using this style for your acceptance testing, probably from the top down.
  2. Language - Dan North makes a big deal of the fact that BDD can draw in the UL. I'm not so sure this is true, because user stories aren't necessarily in the UL. However, even if it is true it only applies to some of the specifications we're writing, namely those describing the user experience (acceptance level) and the domain model.
  3. Reuse - When you're writing acceptance tests you've got a lot of potential for reuse so, in my view, composition wins out over inheritance. I'll describe this more in a separate post; to me it's the biggest technical reason to look at xBehave (I just found out this is also Dan's view, see comments on this post).
  4. Tools - I'll cover this later but although I think you can do both styles of BDD without any tools I do think that Dan North's style is the one that will benefit more from additional tools/frameworks (such as NBehave) particularly if you are interested in people outside the development team writing the inputs and seeing the outputs (notably reports).
  5. Patterns - In my limited experience patterns/approaches that you come up with for developer testing don't necessarily scale up to acceptance testing where there is more reuse/complexity. I thus think that when viewing much of what is written about BDD you need to be clear on what type of testing the person is describing.
  6. Intent - Users/domain experts are only going to be interested in the behaviour of some parts of the system, notably the GUI and the domain, so using the xBehave style right the way down is an unnecessary burden.

It's unfortunate that Astels and North both called their approach BDD, but that's the way of it. We do need to get our terminology straight though, and that starts with understanding the differences between the two styles of BDD.


Sunday, July 20, 2008

BDD Talk by Scott Bellware

Greg Young has posted a link to a talk given about BDD by Scott Bellware which I highly recommend. I like it most because Scott clearly explains the differences between the two styles of BDD, one influenced by Dan North (*Behave) and one by Dave Astels (*Spec). I also like the fact that slowly but surely we're starting to use different terminology for the two types because mixing them up is (I believe) the cause of a lot of the misunderstandings regarding what BDD is all about.

My feeling, as I've posted before, is that both approaches have a lot of value, but unfortunately the given/when/then style seems to me to work best with some tool support (especially so you can parameterise the given/when/then itself), so it is harder to get into. I'm going to post more about my views on this later though.


Monday, July 14, 2008

NServiceBus - Getting Started Fast

NOTE: This post is probably only of interest to people planning to try out NServiceBus for the first time as the post is all about how to get it up and running quickly.

I've been trying to get up and running with NServiceBus because it's relatively mature, has a good community, and is based on good practices. However in my view it is quite difficult to get into, and there is a good chance you'll waste a lot of time trying to get up and running, so I thought I'd share what little I've learned so far.

I should also say that difficulty getting projects set up is fine; it's part of using open source projects. In fact although I love Castle and NHibernate both have a high barrier to entry, but in both cases there is lots of documentation detailing how to get started with them, so you never feel far away from an answer. That isn't the case with NServiceBus, so I think that when starting out you need to make life easy for yourself. Anyway, so far there have been two issues:

  1. Configuration - Out of the "box" NServiceBus comes with Spring based configuration, and someone has produced a Castle MicroKernel implementation. You also seem to be able to mix the configuration, doing some in code and some in the XML files.
  2. Referencing Projects - NServiceBus has a lot of projects. I'm not against that approach, but since initially you have no idea what each project contains, you start off referencing just NServiceBus and two hours later you've ended up bringing in 10 projects. Just bring them all in to begin with and ignore the smell; you can sort it out later if you need to.

Anyway I go into a little more detail on these issues next, but you will probably also want to read this guide.

Referencing Projects

The DLLs I found I needed to reference:

  1. NServiceBus
  2. NServiceBus.Serialization
  3. NServiceBus.Serializers.Configure
  4. NServiceBus.Unicast
  5. NServiceBus.Unicast.Config
  6. NServiceBus.Unicast.Subscriptions
  7. NServiceBus.Unicast.Subscriptions.DB
  8. NServiceBus.Unicast.Transport
  9. NServiceBus.Unicast.Msmq
  10. ObjectBuilder
  11. ObjectBuilder.CastleFramework or ObjectBuilder.SpringFramework

Configuration

The "test\full" folder inside the solution has great examples of how to configure a client/server (see ClientServer and ServerRunner).

You still need to include the configuration file with the suitable elements in it, but examples of this are also included.


Monday, July 07, 2008

EF Team Blog - Data First

The EF team have a post on their blog about encoding logic into eSQL so it can be called in reports.

Not sure it's a good approach but at least you can give your opinions.

So far though I've completely failed because my comments on this and another post have not yet made it to the blog (they could still be waiting to be reviewed, but it has been over a day...).

Frustrating to say the least; it makes me wonder whether I should just give up and stop looking at EF content altogether.


Sunday, July 06, 2008

WCF/Silverlight/IIS7 Configuration Links

I've just wasted 4 more hours getting my simple Silverlight/WCF play area set up.

I had lots of problems and although the answers were out there on the Web they were difficult to find, especially as SL/WCF seemed to be returning bog standard exceptions every time something failed. In particular I got sick of seeing "The remote server returned an unexpected response: (404) Not Found."

So here's a list of links that you might find useful if you are planning to play with SL.

Silverlight Links for Resolving Errors

Although Google search was the main way I got answers, I did find these sites useful:

  1. Forum - Plenty of gotchas and bugs in there to catch you out, solutions to my issues were available on here though.
  2. Irritated Vowel - Had some answers to issues I had, including cross-domain issues when calling WCF services.
  3. Tim Heuer - Looks like he's got answers to lots of issues you'll have.

Silverlight Config

If you are making cross domain requests ensure you have "clientaccesspolicy.xml" and/or "crossdomain.xml" set up, and be aware that the required formats of these files are different in beta 1 and beta 2.

You may also get a ProtocolException (404 not found) when you try to call a WCF service. Cross-domain calls are one reason for this, but another is that your WCF service is not set up for basicHttpBinding. You can read more on the forum or in Building WCF Services For Consumption From Silverlight. This page made me realize I needed to update my service reference after changing the binding. Lastly, this forum entry pointed out that for it to work you need to specify that you want a Website project created to host the SL app.
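For reference, setting a service up for basicHttpBinding boils down to config along these lines; the service and contract names here are placeholders:

<system.serviceModel>
  <services>
    <service name="MyApp.Web.SearchService">
      <!-- Silverlight 2 can only consume basicHttpBinding endpoints -->
      <endpoint address="" binding="basicHttpBinding" contract="MyApp.Web.ISearchService" />
    </service>
  </services>
</system.serviceModel>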

Oh, and when you add a "Service Reference" to the WCF service you'll find that the "ServiceReferences.ClientConfig" is incorrect.

IIS7

I switched to using IIS 7 rather than the development server because I was having so many issues, but to get my app working within IIS7 I had to follow some of the steps here (the important bit for me was setting it up for ASP.NET).

I then got an error containing the message "could not download the silverlight application". Turns out it was because the MIME type was not registered and the solution is to combine what is in this article with what is in this one (second one is more relevant).


Wednesday, July 02, 2008

RESTful Web Services

I've been learning about REST for the last month or so and I quickly learned there was an awful lot of rubbish written out there, both supporting REST and bashing it.

Many of the REST guides that have been written just aren't that convincing. Luckily some of the more recent posts, including REST Anti-Patterns by Stefan Tilkov, are much better written and really look to be focusing on the details.

Anyway along with reading a lot of content on the Web I've been reading RESTful Web Services and I thought I'd write a quick review.

Essentially the book is well worth a read; it covers a lot of what you'll want to know about REST, including the fundamentals and some early best practices.

Some aspects of it did bother me a little. The first was that the code examples were mainly in Ruby; having equivalent examples in Java or C# would have been good. Secondly the examples focused on Web applications, for example a social bookmarking app written in Ruby on Rails, and I did feel an example of a more standard enterprise application might have helped.

Anyway I intend to give parts of the book a second read and I'd definitely recommend it to anyone wanting to know more about REST.


Monday, June 30, 2008

Entity Framework Wiki

The EF team have created a Wiki where you can contribute your views on domain model focused development. Definitely a good sign as they are obviously very interested in finding out how we work.


Sunday, June 15, 2008

Interesting TDD Posts

Three interesting posts have appeared. They started with Michael Feathers blogging about The Flawed Theory Behind Unit Testing, which is definitely worth a read.

Steve Freeman replied with Test-Driven Development. A Cognitive Justification? which itself had some very interesting points.

Another reply is TDD, Mocks and Design, which is also very interesting and focuses on the reasons for advocating the removal of getters/setters and its effects on design/testing.


Saturday, June 14, 2008

Soya Project

I watched Jim Webber's Guerilla SOA presentation at InfoQ this morning and he mentions the Soya project so I gave it a quick look.

It builds on top of WCF to allow you to substitute SSDL contracts for your normal WSDL ones, allowing a message based approach.

Luckily the documentation at the site is pretty good and the binaries come with a couple of useful examples including the one described in the getting started section so I definitely felt it was worth a few hours.


Saturday, June 07, 2008

Spec# - First Impressions

I finally got around to looking properly at Spec#; previously all I'd been going on was blogs and podcasts, and I'm very impressed.

Although there are other good resources out there I decided to write about the features I enjoyed using, and obviously if you want to try Spec# yourself you can get it at the Spec# Site.

Non-Nullable Reference Types

If you look at many of my protected or public methods you'll find code that looks out for nulls. It's not particularly interesting code, so writing and testing it is bad enough, but for it to be visible to the users of your code you really also need to document it (unless the tests are enough).

Anyway there is little doubt that using ! to indicate that null is not allowed is a far more attractive option:

public void Transfer(Account! source, Account! destination, double amountToTransfer, IAuthorizationService! authorizationService)

If I then try to pass in null then I do get a warning telling me that "Null cannot be used where a non-null value is expected.", very nice.

Preconditions

I might want to specify some preconditions explicitly to help the caller know what is expected of them:

public TransferDescription! Transfer(Account! source, Account! destination, double amountToTransfer, IAuthorizationService! authorizationService)
requires source != destination;
requires amountToTransfer > 0;

If I now try and call my service passing in the same Account for source and destination then I get a Microsoft.Contracts.RequiresException which is useful.

This feature alone would be great for someone like me, not least as it makes it easy for callers to find out about these preconditions:

[Screenshot: the preconditions being shown to the caller in the IDE]

Postconditions

Postconditions help me describe the promises my member makes to callers. For example here is how I've specified that my method returns a non-null object that contains the amount of the transfer:

public TransferDescription! Transfer(Account! source, Account! destination, double amountToTransfer, IAuthorizationService! authorizationService)
ensures result.Amount == amountToTransfer;

If the returned object does not have an Amount equal to amountToTransfer then I get a Microsoft.Contracts.EnsuresException.

Invariants

Specifying that some set of conditions "always" holds true can be useful; for example, let's say I want my accounts to stay in credit:

public class Account
{
    private double _balance;
    invariant _balance >= 0;
    ...
}

The result of breaking one of these invariants at run-time is a Microsoft.Contracts.ObjectInvariantException. There is also a way of temporarily breaking invariants within the type; see the expose keyword.

Finding Out More

Google doesn't turn up much until you realize that instead of Spec# you need to search for something like "specsharp". Even then there isn't that much out there, and in general the documentation regarding Spec# is pretty patchy, but there are some decent resources:

  1. Making Spec# a Priority and .NET 3.5, Design by contract and Spec#
  2. Expert to Expert: Contract Oriented Programming and Spec# - Excellent screen cast showing what Spec# is all about. Hate the talk about "bangs" though :)
  3. Spec# Site
  4. Declare Your Love

I certainly thought Spec# was worth a few hours of play; I only scratched the surface, but I definitely hope these features are brought into C# sooner rather than later.


Pex - "Fix It" or "Allow It"

When Pex generates failing tests the choice is between "Fix It" and "Allow It", and they do very different things, so I thought it was worth mentioning what little I've found out.

I'll start out with this code in a Pex test:

[PexMethod]
[PexUseType(typeof(AuthorizationService))]
public void overall_behavior_correct(IAuthorizationService authorizationService,
    Account source, Account destination, double amountToTransfer)
{
    new AccountTransferService().Transfer(source, destination, amountToTransfer, authorizationService);
}

When I run Pex with this test it generates one passing test and some failing tests:

[Screenshot: the generated tests before fixing, one passing and several failing]

The failing tests are showing me useful things, for example the SUT does indeed raise an exception if the source and destination Accounts are the same. If I right click on either of these issues I get two options:

Fix It

If I select "Fix It" on each of the automatically generated tests then Pex ends up updating the original parameterized unit test (PUT) to look like this:

[PexMethod]
[PexUseType(typeof(AuthorizationService))]
public void overall_behavior_correct(IAuthorizationService authorizationService,
    Account source, Account destination, double amountToTransfer)
{
    // <pex>
    PexAssume.IsNotNull((object)source, "source");
    PexAssume.IsTrue(source != destination, "source == destination");
    PexAssume.IsTrue(source.Balance >= amountToTransfer, "source.Balance < amountToTransfer");
    PexAssume.IsNotNull((object)destination, "destination");
    PexAssume.IsNotNull((object)authorizationService, "authorizationService");
    PexAssume.IsTrue(amountToTransfer >= 1.5, "amountToTransfer < 1.5");
    PexAssume.IsTrue(((AuthorizationService)authorizationService).AllowTransfer(source, destination, amountToTransfer) != false, "complex reason");
    // </pex>

    new AccountTransferService().Transfer(source, destination, amountToTransfer, authorizationService);
}

If I now run Pex again it will generate 0 tests from this code. I guess this makes sense, the PexAssumes are presumably telling Pex not to pass in certain values (such as null for source). However in the process I've made my PUT pretty useless and it does make me question the usefulness of the "Fix It" option in these sorts of situations, so "Allow It" must be the more sensible option in this case...

Allow It

If I select "Allow It" for each of the failing generated tests then my Pex test stays as it was originally but the following attributes are put into my assembly:

[assembly: PexAllowedExceptionFromAssembly(typeof(ArgumentException), "PexPlay")]
[assembly: PexAllowedExceptionFromAssembly(typeof(ArgumentNullException), "PexPlay")]
[assembly: PexAllowedExceptionFromAssembly(typeof(ArgumentOutOfRangeException), "PexPlay")]
[assembly: PexAllowedExceptionFromAssembly(typeof(InvalidOperationException), "PexPlay")]

PexPlay is the assembly I'm working in (the assembly that contains the SUT) and this seems to be indicating that if I get any of the specified types of exceptions anywhere in PexPlay then the tests should still pass. This is confirmed if I re-run Pex as it will generate the same set of tests but they now pass:

[Screenshot: the same generated tests now passing]

The problem is that these attributes are at too high a level for me to be happy; I'm not necessarily always happy to see those exceptions, so instead of using PexAllowedExceptionFromAssembly I tried using attributes at the Pex test level, which seems to work fine:

[PexAllowedException(typeof(ArgumentException)), PexAllowedException(typeof(ArgumentNullException)),
 PexAllowedException(typeof(ArgumentOutOfRangeException)), PexAllowedException(typeof(InvalidOperationException))]
[PexMethod]
[PexUseType(typeof(AuthorizationService))]
public void overall_behavior_correct(IAuthorizationService authorizationService,
    Account source, Account destination, double amountToTransfer)
{
    new AccountTransferService().Transfer(source, destination, amountToTransfer, authorizationService);
}

It's probably worth noting how the PexAllowedException attributes have affected the generated tests; here's one of them (MSTest):

[TestMethod]
[ExpectedException(typeof(ArgumentNullException))]
[PexGeneratedBy(typeof(when_account_transfer_occurs))]
public void overall_behavior_correctIAuthorizationServiceAccountAccountDouble_20080607_120018_002()
{
    Account a0;
    a0 = AccountFactory.Create(0);
    AuthorizationService as0 = new AuthorizationService();
    this.overall_behavior_correct((IAuthorizationService)as0, a0, (Account)null, 1);
}

As you can see the test is marked with the ExpectedException attribute, which correctly specifies the behaviour I expect when I pass in a null Account.


Pex - First Impressions

Apparently the issues I was having with Pex (ClrMonitorFail errors) were caused by its interaction with TypeMock. This doesn't shock me as having used TypeMock for a while I've learned that any time anything odd starts happening disabling TypeMock is a good idea.

So I disabled TypeMock and started playing with Pex. My first impression is that it looks great, but my second was that the IDE integration seemed a little flaky; my IDE actually crashed many times whilst using it, though after a while I learned what to click and not click :)

Anyway I thought I'd start writing down the little that I've found out about Pex in case it is in any way useful to anyone.

First Feelings

I did notice a few interesting things when working with Pex; the first is that it can lull you into a false sense of security.

Seeing a whole load of auto-generated tests passing is great, but I quickly began to notice that I could modify the code under test in inappropriate ways without my tests failing. This wasn't Pex's fault though; I just hadn't been thorough enough in telling it what to expect, and once I applied more PexAssume values and a few more assertions I definitely felt safer.

Fix It Or Allow It

I found that I was getting some useful tests generated even if I was quite vague:

[PexMethod]
[PexUseType(typeof(AuthorizationService))]
public void overall_behavior_correct(IAuthorizationService authorizationService,
    Account source, Account destination, double amountToTransfer)
{
    new AccountTransferService().Transfer(source, destination, amountToTransfer, authorizationService);
}

The tests Pex was generating were to do with null reference exceptions and invariants that the SUT was enforcing. Obviously initially the tests were failing so I had to tell Pex what I expected the SUT to do in each situation. I could do this using either the "Allow It" or "Fix It" options from the "Pex Results" panel.

If I chose "Fix It" then I would tend to get the following:

[Screenshot]

My IDE would then close down, gah. However if I persevered it would update the test, for example:

[PexMethod]
[PexUseType(typeof(AuthorizationService))]
public void overall_behavior_correct(IAuthorizationService authorizationService,
    Account source, Account destination, double amountToTransfer)
{
    PexAssume.IsNotNull((object)source, "source");
    PexAssume.IsTrue(source != destination, "source == destination");

    new AccountTransferService().Transfer(source, destination, amountToTransfer, authorizationService);
}

When I then ran Pex for this method again it wouldn't generate a test that passed in null for source or that passed in source and destination as the same objects.

The alternative to "Fix It" is to select "Allow It"; when I did that Pex would put this attribute in PexAssemblyInfo.cs:

[assembly: PexAllowedExceptionFromAssembly(typeof(ArgumentException), "PexPlay")]

This seems a bit brute force though; I don't want to allow the exception across the entire assembly, so I probably need to do a bit more research.

Decimals

One interesting thing I noticed is that when I set up my methods to take in decimals I got no tests generated, but if I changed the inputs to doubles I did. Not sure what that's all about; again I need to do some more research.

Mocking

I do like the easy way I'm able to specify that I want to use a mock of the authorization service, using the PexUseType attribute:

[PexMethod]
[PexUseType(typeof(MockAuthorizationService))]
public void transfers_correctly_between_accounts(IAuthorizationService authorizationService,
    double initialInSource, double initialInDestination, double amountToTransfer)
{
    PexAssume.IsNotNull(authorizationService);
    PexAssume.IsFalse(amountToTransfer < 0);
    PexAssume.IsTrue(initialInSource > amountToTransfer);

    Account source = new Account(initialInSource);
    Account destination = new Account(initialInDestination);

    double fromSourceBefore = source.Balance;
    double fromDestinationBefore = destination.Balance;

    new AccountTransferService().Transfer(source, destination, amountToTransfer, authorizationService);

    Assert.AreEqual(fromSourceBefore - amountToTransfer, source.Balance);
    Assert.AreEqual(fromDestinationBefore + amountToTransfer, destination.Balance);
}

The mock service is in this form (see the PDF for more on this but I haven't truly had time to grok it yet):

[PexMock]
public class MockAuthorizationService : IAuthorizationService
{
    public bool AllowTransfer(Account source, Account destination, double amountToTransfer)
    {
        var call = PexOracle.Call(this);
        return call.ChooseResult<bool>();
    }
}

When I run Pex over transfers_correctly_between_accounts it will actually correctly use an instance of MockAuthorizationService and will run a test where that service returns false, causing the transfer to fail as expected. Nice.


Thursday, May 29, 2008

Pex

Looks like Pex is out and ready for us to play with. It seems like interesting stuff, and as well as Peli posting up links to docs you can also view Ben Hall's first impressions.

In addition it looks like MbUnit can already be used with it.


Thursday, May 15, 2008

Reusing Tests at Different Granularities

Pat Maddox has a superb post called Refactoring with Shared Example Groups which describes one strategy that Ruby programmers can consider when deciding how they want to change their tests when they extract a class/method in the SUT.

I'd guess you could use a similar solution in C#, especially if you hooked into a test framework like Gallio; mind you, it might be equally sensible to look at IronRuby for testing (longer term).
