Sunday, December 28, 2008

SOA Principles of Service Design

I've been learning a lot more about SOA and REST recently, and so have been doing a lot of reading, including SOA Principles of Service Design. I thought I should write a short review.

Short version: this is one of the worst technical books I've read, and I wouldn't recommend it to anyone. My main gripes are:

  • Style - Although there are lots of colour diagrams, they don't tell you much and don't take away from the fact that the text is dry, repetitive and at times very boring. Since I've read one of Erl's previous books (well, about half of it, because I just couldn't face the rest) I should have known what to expect, but it was still a big struggle to finish.
  • Ideas - Reusable entity services (Customer/Order) shared throughout the enterprise might sound sensible but I'm not at all convinced by that approach and the focus on it has always blunted my interest in SOA. Unfortunately that style of layered SOA is key to this book so I found a lot of it pretty un-inspiring.
  • Single Viewpoint - There is more to SOA than RPC services, but this book doesn't really talk about anything else. REST gets no coverage (as an alternative to aspects of SOA), EDM/messaging doesn't really come into it, and business-focused SOA (analysis/design with business experts) is hardly mentioned. It all feels too narrowly focused on the one approach the author uses, so there is little discussion of alternatives and tradeoffs.
  • Book Assembly Line Approach - Having one book on SOA principles and another on the patterns, with references from the principles book to the patterns one, seems overkill. I know SOA is a big topic but do we really need an eight book SOA series? In my view you could easily have cut 30% of the content in this book (including most of the diagrams) and merged it with the patterns book.

Some might take this as me saying I don't rate SOA, but that's not true at all; I just don't rate this book, and the ideas behind it aren't what I'm interested in. However, I still think SOA has a lot of value, and in fact the next two books I plan to read/finish are both about SOA:

  • SOA Patterns - Not the SOA patterns book by Erl (which I'll admit, from its site, looks interesting); this one is by Arnon, and from the Manning EAP version I've read it's going to be superb.
  • Enterprise SOA Adoption Strategies - I learned about this book because I enjoyed reading Steve Jones' posts on the SOA forum, and it seems to focus on the business-oriented side of SOA, something that doesn't get discussed nearly enough.

Since I'm far more interested in business-focused services, each wrapping one or more domain models and exposing multiple end-points, rather than an entity-services-first approach, I'm looking forward to reading these two; from the looks of it they'll give me lots of new ideas.


Wednesday, November 19, 2008

BDD - Available Frameworks

I've been using the Astels style of BDD for a while now, but so far I've just done it using MSTest/NUnit and a few custom base classes. I think that's a good way to start out; as with so many good things in life, it doesn't require a whizzy new tool/framework.

However I've just joined a new project and we've been looking at different frameworks that are available for unit testing and for BDD so I thought I'd post about what I've seen so far. Any opinions would be gladly received.

First up was MSpec. This project is very cool: not only does it allow you to create superb specifications, but you also get some nice reporting. Here's a simple example of a spec:

public class When_adding_a_contact_to_a_user_with_no_existing_contacts
{
    private static User _user;
    private static Contact _contact;

    Establish context_once = () =>
    {
        _user = new TestUserBuilder().Build();
        _contact = new ContactBuilder().Build();
    };

    Because a_contact_is_added = () =>
        _user.AddContact(_contact); // body lost from the original post; assumed behaviour

    private It should_associate_the_contact_with_the_user = () =>
        _user.Contacts.ShouldContain(_contact); // assertion lost from the original post; assumed
}

One thing to note: if you're planning to look at MSpec then you'll probably want to download the Machine codebase, since there aren't many examples of using MSpec on the Web and the examples that come with Machine are a good starting point.

So far there's no R# integration, but that doesn't worry me at all; if it's needed it will come, and this is still a very early version.

The reporting seems to work well, but we primarily use this style for unit/integration tests and so we are unlikely to present the reports outside the development team. Having said that Aaron pointed out that they can be useful within the development team, which makes a lot of sense.

Overall my main worry is that the syntax could be a bit much for some people, particularly if you go for the compact style.

I think there's a good argument for just using a base class, especially when you are getting going with the approach:

public class When_adding_a_contact_to_a_user_with_no_existing_contacts : SpecificationBaseNUnit
{
    private User _user;
    private Contact _contact;

    protected override void EstablishContext()
    {
        _user = new TestUserBuilder().Build();
        _contact = new ContactBuilder().Build();
    }

    protected override void Act()
    {
        _user.AddContact(_contact); // body lost from the original post; assumed behaviour
    }

    [Test]
    public void should_associate_the_contact_with_the_user()
    {
        Assert.That(_user.Contacts, Has.Member(_contact)); // assertion assumed
    }
}

This is a hopelessly naive example but you get the idea. You lose some of the syntax niceness; suddenly the specs themselves take up multiple lines because of all the curlies. You've also lost the reporting, unless you put in some work yourself. However it is a little easier to understand, and when introducing TDD/BDD that could be important.
XUnit

I'm no expert, but Ben Hall convinced us to give XUnit a shot by recommending it, and it is very nice. You can read about an approach that works here. If you use the specification base class described in that post you might end up with this:

// Using a base class influenced by that post
public class When_adding_a_contact_to_a_user_with_no_existing_contacts : SpecificationBase
{
    private User _user;
    private Contact _contact;

    protected override void EstablishContext()
    {
        _user = new TestUserBuilder().Build();
        _contact = new ContactBuilder().Build();
    }

    protected override void Because()
    {
        _user.AddContact(_contact); // body lost from the original post; assumed behaviour
    }

    [Fact]
    public void should_associate_the_contact_with_the_user()
    {
        Assert.Contains(_contact, _user.Contacts); // assertion assumed
    }
}
One aspect of XUnit that might throw you is how opinionated it is, which could be an advantage or a disadvantage. For example, it aims for each test to run in isolation, so the fixture class is re-created for each test; if you really want to reuse a fixture you implement IUseFixture. I guess this is a very safe approach, because it means tests/specs are extremely unlikely to affect each other, but it actually seems overkill if you're using a style where the specification methods only assert (no side-effects).
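For reference, here's a rough sketch of how I understand the IUseFixture mechanism; the DatabaseFixture class and its contents are invented purely for illustration:

```csharp
using System;
using Xunit;

// Hypothetical expensive fixture; xUnit creates it once for the whole
// test class rather than once per test.
public class DatabaseFixture : IDisposable
{
    public DatabaseFixture()
    {
        // expensive setup, e.g. open a connection / build a schema
    }

    public void Dispose()
    {
        // tear down
    }
}

public class When_querying_users : IUseFixture<DatabaseFixture>
{
    private DatabaseFixture _fixture;

    // Called by the runner on each fresh test class instance,
    // handing over the single shared fixture.
    public void SetFixture(DatabaseFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public void can_use_the_shared_fixture()
    {
        Assert.NotNull(_fixture);
    }
}
```

So the test class itself is still rebuilt per test, but the shared state lives in the fixture, which keeps the isolation guarantee while avoiding repeated expensive setup.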

The lack of messages on assertions seems sensible, and it works for small focused BDD specifications, but if you use it for integration testing you'd want the option of adding a message.

On the syntax front we could always go for more flexibility:
public class When_a_user_has_no_contacts : FlexibleGrammarSpecificationBase
{
    private User _user;
    private Contact _contact;

    protected override void EstablishContext()
    {
        _user = new TestUserBuilder().Build();
        _contact = new ContactBuilder().Build();
    }

    protected void and_we_give_them_a_new_contact()
    {
        _user.AddContact(_contact); // body lost from the original post; assumed behaviour
    }

    public void the_contact_should_be_associated_with_the_user()
    {
        Assert.That(_user.Contacts, Has.Member(_contact)); // assertion assumed
    }
}
However this seems a little pointless to me so I've dumped the idea.


We've made our choice for now, but we also plan to look at Ruby-based solutions including RSpec and Cucumber. Cucumber seems exciting as it lets you write table-based specifications, theoretically giving us the advantages of a FIT-style approach without having to use FIT (or SLIM) itself.

Ultimately there is a lot going on in the BDD space in Ruby-land (and a book on the way) and the language does suit it quite well so we intend to do some playing.


BDD and Parameterized testing

Although I really like Astels-style BDD (to me a constrained/enhanced style of TDD), I still use a lot of parameterized testing, and I thought I should give you an example of why.

Let's say we're testing simple SPECIFICATION-style rules; we might write:

public class When_using_rule_on_a_null_string : SpecificationBase
{
    protected TestEntity _testEntity;
    private bool _isSatisfied;

    protected override void EstablishContext()
    {
        _testEntity = ContextSetup.CreateTestEntityWithValue(null);
    }

    protected override void Act()
    {
        _isSatisfied = new ValidEmailRule<TestEntity>(_testEntity, x => x.Value).IsSatisfied();
    }

    [Fact]
    public void is_satisfied()
    {
        Assert.True(_isSatisfied); // assertion assumed: a null value passes this rule
    }
}

This just tests how the rule handles a null value, but we'd then want to test with all sorts of other values (valid and invalid). For comparison, let's look at how easy it is to test a variety of invalid e-mail addresses using one of the parameterized testing approaches available (see Ben Hall for more options):

public class When_evaluating_invalid_email_addresses
{
    [Theory]
    [InlineData("not-an-email")]        // example values; the originals were lost
    [InlineData("missing-at-sign.com")]
    public void is_not_satisfied(string invalidEmailAddress)
    {
        var testEntity = ContextSetup.CreateTestEntityWithValue(invalidEmailAddress);

        var isSatisfied = new ValidEmailRule<TestEntity>(testEntity, x => x.Value).IsSatisfied();

        Assert.False(isSatisfied);
    }
}
Now you may disagree with my approach here, and this isn't as readable as it could be, but I think you can see why you'd use it if you have a lot of values to validate.


Tuesday, October 28, 2008

DDD - Making meaningful relationships

A recent discussion on the DDD forum made me want to post about what I consider to be an under-appreciated aspect of domain modelling, namely relationships. In the thread Randy Stafford said the following:

Note that RoleRegistration is an example of a Relationship Object – arguably a fourth archetype of domain object alongside Entity, Value Object, and Aggregate.

I couldn't agree more with this; in a domain model I worked on recently we had a stack of very useful and very meaningful relationship objects that served a range of purposes.

For example, if a Client has Accounts then you might get away with that being a direct relationship, but it's equally possible that the relationship itself is meaningful and carries its own data/behaviour. In this case you might have a type associated with the relationship that explains whether the Client owns the Account, whether they just manage it, or whether they are actually just one of several owners.
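To make that concrete, here's a rough sketch of what such a relationship object might look like; all class and member names are my own invention, not from any real model:

```csharp
public class Client { }
public class Account { }

public enum AccountRelationshipType
{
    SoleOwner,
    JointOwner,
    Manager
}

// The relationship is a first-class domain object with its own data/behaviour,
// rather than a bare reference from Client to Account.
public class ClientAccountRelationship
{
    public ClientAccountRelationship(Client client, Account account,
                                     AccountRelationshipType type)
    {
        Client = client;
        Account = account;
        Type = type;
    }

    public Client Client { get; private set; }
    public Account Account { get; private set; }
    public AccountRelationshipType Type { get; private set; }

    // Behaviour that belongs to the relationship, not to Client or Account.
    public bool RepresentsOwnership()
    {
        return Type == AccountRelationshipType.SoleOwner
            || Type == AccountRelationshipType.JointOwner;
    }
}
```

The point is simply that once the relationship is an object, questions like "does this client own this account?" have an obvious home.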


Aggregate Boundaries

You need to consider aggregate boundaries especially carefully when using Relationship Objects.

In the case of an association between a Client and an Account the relationship probably belongs to the Client aggregate.

Then again, if you choose to model the association between a Client and a SalesAdvisor using a full-blown party/role/relationship approach then things become a bit more complex. Are all parties, roles and relationships different aggregates, or does a relationship own the two roles it composes?

If it's the latter then you may be breaking an aggregate design rule, because the party now refers to a piece of the relationship aggregate other than the root.

Temporal Associations

Another common case is that the relationship is temporal, which brings with it a lot of complexity and should only be done with extreme care. If you're sure you need temporal associations then you will find Martin Fowler's patterns invaluable.

Encapsulating Relationships

Most of the relationships have real meaning in their own right, but sometimes they are just an artifact of the design; in those cases you can demote the associations to being just an encapsulated detail.

Take the association between Client and Account: maybe when you ask for the Accounts for a Client you want to get Account objects rather than ClientAccountRelationship objects.

If this were the case you could keep the ClientAccountRelationship class, which has its own data/behaviour and makes mapping easier, but hide it entirely from users of the domain. One way to do this is to create a custom collection called ClientAccounts and have it wrap an IList<ClientAccountRelationship> whilst acting like a simple IList<Account>; it can also provide helpful methods like FindPrimaryOwner.
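Here's a rough sketch of that kind of wrapping collection. The IsPrimaryOwnership flag and the semantics of FindPrimaryOwner are assumptions for illustration:

```csharp
using System.Collections;
using System.Collections.Generic;

public class Account { }

public class ClientAccountRelationship
{
    public Account Account { get; set; }
    public bool IsPrimaryOwnership { get; set; } // assumed flag, for illustration
}

// Looks like a simple collection of Accounts to users of the domain,
// but internally keeps the relationship objects as an encapsulated detail.
public class ClientAccounts : IEnumerable<Account>
{
    private readonly IList<ClientAccountRelationship> _relationships;

    public ClientAccounts(IList<ClientAccountRelationship> relationships)
    {
        _relationships = relationships;
    }

    public IEnumerator<Account> GetEnumerator()
    {
        foreach (var relationship in _relationships)
            yield return relationship.Account;
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    // A helper only this collection can answer, because only it
    // still sees the relationship objects.
    public Account FindPrimaryOwner()
    {
        foreach (var relationship in _relationships)
            if (relationship.IsPrimaryOwnership)
                return relationship.Account;
        return null;
    }
}
```

Callers just iterate Accounts; the relationship plumbing stays invisible.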


I mention all of this because when I got started with DDD relationship objects bothered me, especially as we were working with a legacy database and I saw the relationship objects as a problem caused by the number of join tables. At the time my plan was to get rid of a lot of them by taking advantage of NHibernate magic.

However, as I got a bit more experience I realized that they were key; although we encapsulated some (when they were just an artifact of the design) we made others totally central parts of the design. In both cases the relationships themselves were very useful.


Domain Driven Design - Knowledge Level

A recent thread on the ALT.NET forum got me thinking about KNOWLEDGE LEVEL, a seldom-discussed but useful pattern from domain driven design. As with many patterns near the end of the book, it isn't going to suit every situation, but it is a pattern that I think has a lot of value. In summary, we aim to explicitly split our model into:

  1. Operations Level - The place that we do our day-to-day business. For example Shipment, Account.
  2. Knowledge Level - Objects that describe/constrain the objects in the basic (operations) level. For example EmployeeType, RebatePaymentMethod, ShipmentMethod.

I'm not a big fan of just repeating what's already in books and to my mind if you want to understand DDD then I'd suggest you read the DDD bible. I'm thus not going to go over the pattern, but I did think it was worth a post about what I've found when coming to the detailed design and implementation of classes in a knowledge layer. I thought I'd start by discussing how you can persist/load/model objects in the KNOWLEDGE LEVEL, moving on to other topics in later posts. Two caveats:

  1. Simple Example - Rather than using a real example I've chosen to use the shipping method example from the thread, I've never worked in that domain so this may be a bad idea but it felt right...
  2. Might seem obvious - If you are an object/behaviour first type of developer then you'll be looking at using OO techniques and so some of this will just be waffle. However I've seen situations where people used data-driven techniques when approaching their KNOWLEDGE LEVEL and so I wanted to explain why I think it's a bad idea.

Option 1 - Data First Approach

Since we store our operational objects in the database, why not do the same for our KNOWLEDGE LEVEL? If we go down this path then we'd have a ShippingMethod table with one row for each shipping method we support. Of course each shipping method has different characteristics; perhaps certain shipping methods are only available to long-standing customers or customers in certain countries. To support this we'll end up with extra columns in the ShippingMethod table, and maybe even associations to other tables. To get the ShippingMethods from the database we'd probably use a REPOSITORY, making sure that it is read-only (not always; see the pattern for information on modifying the KNOWLEDGE LEVEL).


This approach obviously has disadvantages:

  1. Loss of clarity - Our ubiquitous language might refer to a "local carrier shipping method" but it won't be visible in the domain model; in fact, to see details about this shipping method you need to look at the appropriate row in the ShippingMethod table. Even if you don't value the ubiquitous language this is a problem, because as a developer you'll end up switching between the code and the database to understand even simple aspects of the behaviour.
  2. Goodbye OO, hello procedural - Since there is no LocalCarrierShippingMethod class I have nowhere to put its behaviour; instead I have to fall back on a procedural style of programming, where we load the data for the ShippingMethod with an ID of 3 (which happens to be the local carrier shipping method) and then look at its SupportsInternationalDelivery boolean flag before deciding how to proceed (hopelessly naive example alert). We can encapsulate this logic in a service, perhaps a ShippingMethodSelectionService, but it's still more than a little smelly.
  3. No explicit associations - Imagine we know that the local carrier shipping method is only available for certain types of Orders, and that the decision also takes into account the Customer placing the Order. If we're using the database-driven approach then managing this becomes very difficult; we can't just look at a LocalCarrierShippingMethod class or at some code that sets up the associations. Instead we fall back to running a database query involving all sorts of joins.


Those are, as I see it, the primary disadvantages of hiding this important information in the database. Of course it's not all bad:

  1. Consistency - Our KNOWLEDGE LEVEL is handled in the same way as the rest of the domain model, loaded from the database as required. I'm not sure it's a massive advantage though; since we've explicitly chosen to break out the KNOWLEDGE LEVEL, it's fair to say that the consistency has limited value.
  2. Flexibility - If we want to add a new kind of ShippingMethod we just add a row to the ShippingMethod table, no need to redeploy. That's the idea anyway, but it's not always going to work, especially when the related procedural code has to change.

It's also fair to say that the KNOWLEDGE LEVEL changes at a different pace from the rest of the model; we probably don't add ShippingMethods too often. However, we might want to change the characteristics of existing ShipmentMethods, such as their price (remember this probably only affects future Shipments) or the Countries they can be used in. So on the flexibility angle you've actually got a couple of types of change:

  1. Adding/removing concepts - Perhaps adding a donkey-based shipment method. To me it's safe to require a redeploy for this sort of change, and in any case, without some serious thought, we're not necessarily going to be able to do it just by modifying a table anyway.
  2. Changing configuration/associations - It's maybe fair enough to expect to be able to ban donkey-based shipping for all future orders in Spain without redeploying the code. As I'll show below this can work even if you go for an object-oriented approach, and I also think this is a case where a DSL would really add value (though I haven't tried that).

Those are the main advantages I've heard about, and as I say I'm not sold on them. That brings me on to how I think you should go about it...


Option 2 - Object Oriented Approach

Take that "local carrier shipping method" concept and turn it into a LocalCarrierShippingMethod class, probably inheriting from a ShippingMethod base class. There's only one instance of this class, and since it's read-only (at least as far as normal usage goes) it's totally safe to share. The ShippingMethod has any data it needs to support the behaviour it contains and the interface it exposes; for example you might have an IsApplicableForSending(Shipment) method with each subclass providing its own implementation.


How does this look in practice? Well, one approach I've used is a variant of the typesafe enum pattern. The following is just pseudo code to show the idea:

public abstract class ShipmentMethod
{
    private static Dictionary<ShipmentMethodKind, ShipmentMethod> _shippingMethods =
        new Dictionary<ShipmentMethodKind, ShipmentMethod>();

    static ShipmentMethod()
    {
        // NOTE: In practice you wouldn't do it like this but it is just an example....
        _shippingMethods.Add(ShipmentMethodKind.LocalCarrier, new LocalCarrierShippingMethod());
        _shippingMethods.Add(ShipmentMethodKind.DonkeyBased, new DonkeyBasedShippingMethod());
    }

    public static IEnumerable<ShipmentMethod> GetAll()
    {
        return _shippingMethods.Values;
    }

    public static ShipmentMethod GetByKey(ShipmentMethodKind key)
    {
        return _shippingMethods[key];
    }

    public abstract bool IsApplicableFor(Shipment toEvaluate);
}

public enum ShipmentMethodKind
{
    LocalCarrier = 0,
    DonkeyBased = 1
}

public class LocalCarrierShippingMethod : ShipmentMethod
{
    public override bool IsApplicableFor(Shipment toEvaluate)
    {
        // evaluation logic elided; each subclass applies its own rules
        return true;
    }
}
Don't get too hung up on the implementation; some of it is optional (e.g. ShipmentMethodKind) and it could be refactored quite a bit. I'm really just trying to show the idea, not how to actually implement a solution.

You can see that in this case ShipmentMethod is really just a SPECIFICATION, but in real situations a ShipmentMethod might well have more behaviour and data. The base class gives us an easy way to access all the ShipmentMethods that the system handles, which can be useful because it could easily provide methods like GetAllShipmentMethodsThatCanHandle(Shipment).

In reality there are multiple ways of implementing this; in particular you might want to load the configuration for each ShipmentMethod from a database or other data source. This is more flexible than including it in the code, but slightly more complicated to implement.
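As a sketch of that more flexible variant, behaviour stays in the class while the tweakable data comes from outside. The configuration source interface and all the names here are assumptions, not from the pattern itself:

```csharp
using System.Collections.Generic;

public class Shipment
{
    public string DestinationCountry { get; set; }
}

public class ShipmentMethodConfiguration
{
    public decimal Price { get; set; }
    public List<string> SupportedCountries { get; set; }
}

// Could be backed by a database table, a config file, or anything else.
public interface IShipmentMethodConfigurationSource
{
    ShipmentMethodConfiguration GetConfigurationFor(string shipmentMethodName);
}

public class LocalCarrierShippingMethod
{
    private readonly ShipmentMethodConfiguration _configuration;

    public LocalCarrierShippingMethod(IShipmentMethodConfigurationSource source)
    {
        // The rule itself is code; only the data it uses is external.
        _configuration = source.GetConfigurationFor("LocalCarrier");
    }

    public bool IsApplicableFor(Shipment toEvaluate)
    {
        return _configuration.SupportedCountries.Contains(toEvaluate.DestinationCountry);
    }
}
```

This lets you change the supported countries (the "ban donkeys in Spain" sort of change) without a redeploy, while keeping the behaviour object-oriented.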

Referential Integrity

If we choose not to load the data from the database then we might seem to have a problem: how do we associate a ShipmentMethod with other objects in the KNOWLEDGE LEVEL and the operational level?

It would seem that we've lost referential integrity but we have choices:

  1. Generate tables - The code in the static constructor in ShipmentMethod can be used to generate a table in the database.
  2. Use enum value as key - We can instead treat the value of the ShipmentMethodKind as a "key" in the database, automated tests will make spotting any issues really quick.
  3. Have separate config table - As I said earlier we may want to load the configuration information for each ShipmentMethod from the database, if so we have a ShipmentMethod table which resolves the problem.

In any case I haven't found this to be a major issue; by and large I don't care that my Shipment table has a ShipmentMethodId column that doesn't lead me anywhere, because if I want to understand what's going on I go back to the domain model.

In Closing

I'm not sure this topic needed as much detail as I've put in here; really it's the KNOWLEDGE LEVEL pattern that's key, but I hoped that by enumerating some of the implementation choices I've seen I'd help you make an informed decision.


Biztalk and WCF - When two big beasts collide

I spent the entirety of last week trying to create a ridiculously simple Biztalk orchestration and get it talking to a simple WCF service, and I thought I should describe what I "learned".


If you follow me on Twitter you'll know how unbelievably annoyed the results made me and although I didn't learn much from the experience I thought I should put down some tips:

  1. If Biztalk gives you an error DO NOT read it; the message itself is bound to be utter gibberish, and the correct response is to put it straight into Google.
  2. If Biztalk behaves like a problem is with step N, don't assume that step N-1 passed, especially if step N-1 is a transformation. You can test the transformation in isolation within the IDE using a sample document, so do it.
  3. If you are having real problems working out why Biztalk and WCF aren't playing ball then it might well be XML namespaces that are the issue.
  4. If you're thinking of renaming the orchestration or anything in it be careful and take a backup first.


Whilst Biztalk left me cold, the WCF side of it was a joy, mainly because Johnny Hall pointed me at the Castle WCF Facility and his own usages of it. With the WCF Facility, configuring your services is an utter joy, especially compared to the XML-based approach you get with bog-standard WCF. The documentation isn't great, but the tests that come with the Castle source code are the real way to see how to use its fluent interface.

Johnny also suggested we use a console application to host the service when testing and a Windows Service when deploying for real. The console application makes testing locally a lot easier: just CTRL+F5 and your host is loaded and ready for you to fire requests at it.
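The plain-WCF version of that console host looks something like this. This is stock System.ServiceModel hosting rather than the Castle WCF Facility's fluent registration, and the service contract is invented for illustration:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name)
    {
        return "Hello " + name;
    }
}

public class Program
{
    public static void Main()
    {
        // Self-hosting in a console app: one ServiceHost, one endpoint.
        using (var host = new ServiceHost(typeof(GreetingService),
            new Uri("http://localhost:8080/greeting")))
        {
            host.AddServiceEndpoint(typeof(IGreetingService),
                new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Host running - press ENTER to exit");
            Console.ReadLine();
        }
    }
}
```

A Windows Service version would wrap the same ServiceHost in OnStart/OnStop instead of Main, which is why this split works so nicely for test versus deploy.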

If only Biztalk was as enjoyable to use...


BDD - Files/Folders/Namespaces

One thing that can be troublesome when moving from TDD to BDD is how to organize your files and folders, so far I've tried two approaches:

  1. One class in each file - So if you have When_associating_an_order_with_a_customer and When_associating_an_order_with_a_preferred_customer then they'd be in separate files, even though they are very closely related. If they share a base class, or a class they both compose, then that would (presumably) be in yet another file.
  2. Multiple classes per file - As an example you might group the Order addition contexts into a file called OrderPlacementSpecifications, the file could also contain the shared base class (if you went down that road).
To me the second approach has a couple of advantages:
  1. Gives the reader extra information - By grouping the two order placement classes we tell the reader that they are quite closely related.
  2. Simplifies folder structure - If we go for the other approach, one class in each file, then we're probably going to have to have more folders. The addition of the extra files and folders definitely makes the solution file harder to structure.
To give you an idea, here's a screenshot of part of the folder structure for a sample app we're doing:

In addition to files/folders I've tried a few approaches to structuring namespaces but the approach I'm trying now groups related artifacts. For example:
  1. Specifications.Users.Domain
  2. Specifications.Users.Domain.Contacts
  3. Specifications.Users.Services
  4. Specifications.Users.Domain.Repositories
  5. Specifications.Users.UI.Controllers
The "Specifications" bit adds very little, but I think grouping all specifications related to users is useful, not least because R# makes it easy to run all the specifications in a namespace. This can be useful if you have a big solution and only want to run the specifications for the area you're working on. It's also worth saying that it's "Users" to avoid clashing with the "User" class.

Folder-wise, however, we're using a standard approach where your Repositories are in a completely separate folder from your Controllers, even though they might both relate to a particular entity. To me the lack of relationship between our folders and namespaces isn't a problem; with R# it's easy to find a file/type, and in addition the folder and namespace tell you two different things about your codebase (one by "layer", one by "feature").

So, I'm interested in people's views. I'm guessing you'll all dislike it, though, because from what I've seen no matter what you do people will be unhappy with your file/folder/namespace scheme. Plus we'll probably turn against this approach next week...


Thursday, October 23, 2008

What I want from an ORM

Thought I'd blog about some of the things I'd like to see in an ORM in the future, particularly to support DDD cleanly:

  1. No enforced associations - I never want to create an association in the model just to support persistence, regardless of where keys are stored. So if I want to use uni-directional associations then I should be able to do that without having to go for workarounds.
  2. Aggregate locking - Currently, with NHibernate at least, it's difficult to lock an entire aggregate. For example, NHibernate's optimistic concurrency approach involves applying a version to rows; however, aggregates can span tables, so we really want to be able to give each aggregate a single shared version (see the coarse-grained lock pattern).
  3. Validating before saving - I'd like hooks to automatically and cleanly validate an entire aggregate before persistence.
  4. Disabling unit of work - I'd like to be able to disable my unit of work, in many cases when working with DDD the UOW becomes more of a hindrance than anything else. I really want to be 100% sure that the only way to save a Customer is through a CustomerRepository.
  5. Revalidate value objects when reloading - Value objects only validate their data in their constructors; if your ORM does not ensure that a constructor performing the validation is also used when reloading the object, then it's possible to end up with an invalid value object. This is something you definitely want to avoid. Greg Young has raised this issue a few times, including in the NHibernate forum, and made some very good points.
  6. Value objects - Choosing to view something as a value object is a design decision you make irrespective of the underlying DB design, so whilst the NHibernate component mapping is useful, it should be as powerful as the mappings used elsewhere. Unfortunately, NHibernate components don't support inheritance nicely, and if your value object is stored in a separate table things get confusing.
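To illustrate the coarse-grained locking idea in point 2, here's a sketch of an aggregate root that owns a single version covering everything inside its boundary; the Customer/Address names are invented for illustration:

```csharp
using System.Collections.Generic;

public class Address { }

// Customer is the aggregate root; Addresses may live in another table,
// but they belong to the same aggregate, so changing one bumps the
// root's version.
public class Customer
{
    private readonly List<Address> _addresses = new List<Address>();

    public int Version { get; private set; }

    public IEnumerable<Address> Addresses
    {
        get { return _addresses; }
    }

    public void AddAddress(Address address)
    {
        _addresses.Add(address);
        MarkModified();
    }

    private void MarkModified()
    {
        // The ORM would map this one version column and use it for
        // optimistic concurrency checks covering the whole aggregate.
        Version++;
    }
}
```

The wish in the list is for the ORM to do this bookkeeping for you, rather than every mutating method having to remember to call MarkModified.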

There may be other things I'd want but those are the ones that come to mind.


New Home

Just to let you know that I'll now also be posting over at lostechies. Obviously I'll continue to post everything here too, so you only need to do anything if you're already subscribed to the lostechies feed and getting my blog entries twice is annoying you.

Share This - Digg It Save to Stumble It! Kick It DZone

Wednesday, October 22, 2008

DDD and Complexity

There have been some interesting and entirely valid discussions on Twitter about the validity of DDD and about deciding when it's useful and when it's too complex (including good comments from Casey Charlton and Jimmy Bogard). I fully agree with Casey and Jimmy, and their comments, combined with an interesting discussion on the topic at the latest ALT.NET UK event, left me wanting to blog about how I think DDD can be used in different types of projects.

First off I must admit that even for a simple business system I'd be thinking about using some of the patterns from DDD, in particular I think these usually have value:

  1. Aggregate
  2. Repository
  3. Service

For example, I find that thinking about aggregates makes me think much more carefully about how to map those classes to the database (for example, how far to cascade), which in my view is a good idea, as not thinking about boundaries can lead to some fairly annoying debugging sessions. You can take these three patterns and, with a good ORM, create and map a simple domain model to a greenfield database very quickly.


Here are some of the tradeoffs we make when using DDD on a system it doesn't necessarily suit:

  1. Analysis/Design - We may completely skip the analysis/design, and in particular the discussions with the domain expert, instead starting from our requirements and letting TDD and our own design skills guide us to the correct design.
  2. Encapsulation - We might expose everything using getters and setters and the domain may be anaemic, for example validation may be in attributes and/or use a custom framework.
  3. Design Clarity - If our primary focus is on getting going fast then we're going to have to cut some corners. We'd bind the domain to the GUI, design it to be as easy as possible to map to the database, make it easy to create domain objects (default constructors) and generally trade away some quality in the domain model to make our own lives easier.
  4. Flexibility - A model that is quick to create/map/bind is not likely to be flexible; this may or may not be a problem.
  5. Patterns - We're ignoring half the patterns in DDD, patterns that have a lot of value in a complex domain model but may not be justified when the domain is simpler/more closely bounded.

We make these tradeoffs to make our lives easier and in particular I wanted to cover two of the tradeoffs you may choose to make.

Design Clarity

If we want to be able to bind our GUI to the domain and map it to the database quickly then we can begin to fray our domain model:

  1. Value objects make binding and displaying validation errors trickier.
  2. If our user interface uses wizards then the GUI will want to create domain objects early on in the wizard, possibly without giving us any meaningful data. We thus end up with default constructors or constructors with very few arguments.
  3. If we use an ORM with a unit of work it will probably fight against our need to validate aggregates before saving them.
  4. If we use an ORM we'll find it hard to version the entire aggregate.

With discipline you can make these tradeoffs whilst still maintaining a comprehensible model but there is no doubt that we are making tradeoffs.


We're also sacrificing flexibility. For example, a lot of the talk about repositories right now focuses on generic repositories: we'd have a Repository<T> class where T is an aggregate root, with query methods on the repository that take Linq specifications. That's going to be fine in a lot of cases, but in more complex models (or where we don't fully control our environment) a repository can encapsulate complex logic; for example we should be able to ensure the following:

  1. Instances of aggregate X are never deleted, only archived.
  2. Instances of aggregate Y are built up from multiple data sources.
  3. Instead of mapping aggregate Z to a (legacy) DB we map some simple DTOs and then in the repository convert them to our nicely designed aggregate.
  4. Query Z is very expensive and needs to be done using SQL.

Those are all things I've had to deal with when working with a moderately complex domain model, and repositories helped encapsulate those details. So whilst I think generic repositories might have their place in some systems, I think you have to be aware of the choices you're making.
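As a sketch of the contrast (all the names here are hypothetical): the generic shape gives you one query style for everything, whereas a hand-written repository is free to hide archiving, multiple data sources or raw SQL behind intention-revealing methods:

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// The generic shape being discussed: one Repository<T> per aggregate root,
// queried via Linq specifications.
public interface IRepository<T>
{
    IEnumerable<T> FindAll(Expression<Func<T, bool>> specification);
}

public class Customer { /* aggregate root */ }

// A hand-written repository can encapsulate the messier cases listed above.
public interface ICustomerRepository
{
    // "Deletion" actually archives; callers never need to know.
    void Archive(Customer customer);

    // Free to be built from multiple data sources, or to drop down to
    // hand-tuned SQL when the query is too expensive for the ORM.
    IEnumerable<Customer> FindDormantCustomers(DateTime inactiveSince);
}
```

The second interface tells the caller what the domain can do; how it is done (archival table, legacy DB, raw SQL) stays an implementation detail.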

What was the point of all this?

My point with all this is that you can get a lot of value from DDD without following it too closely, but you need to be aware of what tradeoffs you're making and why. Choosing to go for a simple domain model when you have a complex problem to solve is a really bad choice, and going from active record (true active record) to a real domain model is not going to be a smooth transition.

However I don't think the choice is easy, for example there's been a lot of discussion recently on whether DDD and complex models are suitable for CRUD related problems. In general I can see why using DDD on an average CRUD system is a mistake but sometimes it is worth using. For example we used DDD on a CRM system that among other things handled parties and their associations in a temporal manner. This was a complex modelling problem solved with a complex pattern but primarily we were handling CRUD (including constraints/validation) and setting the stage for other systems to use the information. Trying to do this using active record would, in my view, have been a big mistake.

So as Casey pointed out DDD is expensive, and whilst it can pay off you need to go in with eyes open, fully aware of the tradeoffs involved.


Monday, October 20, 2008

ALT.NET Drinks in Edinburgh

We're "organizing" the first (and who knows, perhaps the last) ALT.NET drinks in Edinburgh on the 3rd of November (starting around 7:30). It's at a pub called the Rutland and if you're travelling over on the train it's a convenient spot (5-10 mins walk from Haymarket).

You can read more, though not a lot more, at the UK .NET events site and in the Glasgow ALT.NET thread.


Sunday, October 05, 2008

Book Review - Enterprise SOA

I just got through with Enterprise SOA and have to say I think it is well worth a read. If you want a good review look here, but here's a summary of some of my favourite aspects of the book:

  1. Covered a range of technologies, emphasizing that SOA is not just about throwing out a range of Web services.
  2. Covered valid design/architecture choices, for example discussing asynchronous and synchronous approaches and where they might each be useful.
  3. Discussed how services can be a valuable tool in bringing IT/business closer together.
  4. The case studies were very interesting, I actually would have liked it if the book had more detail but what it did include was useful.
  5. The coverage of governance had me nodding my head; even if you don't choose to apply SOA you'll need to know how to ensure that people don't fall back into tactical decision making.
  6. Covered how SOA can change your approach to project management.
  7. We need someone other than Erl talking about how to make SOA work in practice.

So I think the book is superb and well worth a read.

Having said that, one issue I had with the book was the emphasis on having basic services as the cornerstone of your SOA (basic services being essentially entity services). I'm hoping that in a future edition the authors will explore how SOA can be used in a manner that works hand in hand with DDD, and I think this would mean re-evaluating whether reusable entity services are really a good idea.


Thursday, September 04, 2008

NHibernate - Mapping Custom Collections

I came upon this article on mapping custom collections that uses extension methods. To be honest I'm not a massive fan of extension methods and I prefer to use custom collection classes, so I thought I'd show how I've seen them mapped in the past.

When I say that I like custom collections I mean classes like this:

public class FooCollection : ReadOnlyCollection<Foo>

I prefer a collection like this over a bog standard IList<Foo> because there is usually a lot of behaviour related to the collection; for example we may have custom rules relating to addition/removal and we'll probably want to ensure the collection's contents are valid at all times. One solution would be to use the Encapsulate Collection refactoring but I've found that it just results in my aggregate roots and key entities getting bogged down in lots of boring collection work, so I prefer to farm out this work to custom collections. Plus if I go for the custom collection approach I can ensure they implement interfaces such as these:

    public interface IReadOnlyCollection<TItem> : IReadOnlyCollection, ...
    {
        bool Contains(TItem toEvaluate);

        IReadOnlyCollection<TItem> FindAll(Specification<TItem> specification);
    }

This sort of interface is very useful because IEnumerable misses some important members and IList is too permissive of change, plus we want to be able to add all sorts of useful helper methods like the FindAll shown.

However I can't directly map my FooCollection, NHibernate is only going to be able to map something basic like an IList. The solution is to map the collection like this:

<component name="FooCollection" access="nosetter.camelcase-underscore">
  <bag name="_innerList" cascade="all-delete-orphan" access="field" lazy="true">
    <key column="ContainingClassesID" />
    <one-to-many class="..." />
  </bag>
</component>

This requires a little explaining. We're mapping our custom collection as a component, which seems odd until you look at the way we then map the _innerList within our custom collection. Essentially this _innerList is an IList<Foo> hidden within FooCollection (or more correctly one of its base classes). Since we're mapping an interface NHibernate is happy, and the users of our custom collection don't know that we've had to put in this silly _innerList; plus maintaining/using this little internal collection is taken care of by a base class so it's all painless. One slight smell is that we're mapping _innerList as a field; not perfect, but it's not going to keep me up at nights.

Anyway it's not a perfect solution but it lets you map custom collections without sacrificing too much.
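To show roughly how the pieces fit, here's a sketch of the classes involved. The CustomCollection base class is my own invention, standing in for ReadOnlyCollection<Foo> or whatever base class actually provides _innerList:

```csharp
using System.Collections;
using System.Collections.Generic;

// Base class owns the _innerList field that the NHibernate <bag> maps
// via access="field"; subclasses and callers never see it directly.
public abstract class CustomCollection<T> : IEnumerable<T>
{
    private IList<T> _innerList = new List<T>();

    protected IList<T> InnerList { get { return _innerList; } }

    public int Count { get { return _innerList.Count; } }

    public IEnumerator<T> GetEnumerator() { return _innerList.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

public class Foo { }

// The collection callers actually use, mapped as a <component>.
public class FooCollection : CustomCollection<Foo>
{
    public void Add(Foo foo)
    {
        // Any custom addition rules/validation would run here first.
        InnerList.Add(foo);
    }
}
```

NHibernate hydrates the private _innerList field directly; everything the rest of the codebase sees is the behaviour-rich FooCollection API.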


Wednesday, August 13, 2008

Test Data Builder and Object Mother

I've been meaning to write about this topic for a while because I think correct use of these two patterns can make a big difference to tests for any moderately complex domain model.

I'm not going to discuss the details of the two patterns themselves because there's already a lot of good content out there about them (see links at end), instead I'll discuss my impressions of them.

Why Are They Needed?

For any moderately complex domain model you're going to have quite a few ENTITIES and VALUE OBJECTS, and although you'd like to avoid a web of associations you will have associations between them.

Thus when testing a Customer (ENTITY) it's quite possible that you'll want to associate an Account (ENTITY) or Name (VALUE OBJECT) with it. However you don't necessarily want the creation/configuration of the Account/Name inside the test fixture:

  1. You probably want to reuse the creation code in other tests.
  2. You don't want the creation code adding complexity to the tests, complexity that the reader doesn't care about.

There are other reasons they can be useful, most of which relate to any type of Test Helper.


So far my approach has been to use TEST DATA BUILDER for value objects and then OBJECT MOTHER for Entities.

VALUE OBJECTS validate in their constructor, so if you try to use an OBJECT MOTHER you get a lot of methods and overloads. For example you'd have a method to create an Address with a specific Postcode, another to create it with some other combination of values, and so on; that gets old very fast and that's why I think an EXPRESSION BUILDER based approach is preferable.

ENTITIES do not necessarily force you to provide all the data to them in the constructor so an OBJECT MOTHER is a good approach. I use a combination of the patterns described in the Creation Method article in the XUnit Patterns page (also see Test Helper page on same site). This works nicely because you can use a simple method on the OBJECT MOTHER to create an ENTITY (give me an active customer) and can then customize the returned Customer in the test method (for example by giving them a rejected order).

I have used TEST DATA BUILDER for ENTITIES too, in addition to OBJECT MOTHERs, however I've only done this a few times and they are quite specific. In particular these are very high level builders so they handle cases like "Give me a customer with a relationship to an account manager who works for the company Spondooliks". This case involves at least three AGGREGATES and the associations between those aggregates and we want to make that setup really readable, which either means putting it in a method in the test class or using a TEST DATA BUILDER (or both).

One thing to be careful of is relying on values of objects returned by OBJECT MOTHERs or TEST DATA BUILDERs. If you don't pass in a value and it isn't implied by the name of the members that you used, then do not rely on the value in your tests; if you do, they become overly fragile and complex. So if you call CreateActive on a CustomerObjectMother and don't pass in any data then it's safe to assume that the returned Customer is Active, but you cannot assume that the Customer has an Age of 28 (see Creation Methods).
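A sketch of how those Creation Methods can look (the Customer class and all member names here are illustrative only):

```csharp
// Illustrative ENTITY with a creation-relevant state.
public class Customer
{
    public bool IsActive { get; private set; }
    public int Age { get; set; }

    public void Activate() { IsActive = true; }
}

// OBJECT MOTHER: intention-revealing creation methods for tests.
public static class CustomerObjectMother
{
    public static Customer CreateActive()
    {
        var customer = new Customer();
        customer.Activate();
        // Everything not implied by the method name is an arbitrary
        // default - tests must not assert against it.
        customer.Age = 28;
        return customer;
    }
}
```

A test would then customise the result, for example taking the active Customer and giving them a rejected order; asserting that the returned Customer has an Age of 28 would be exactly the fragility warned about above.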

Are They Evil?

Some argue that both patterns are evil because by creating real ENTITIES/VALUE OBJECTS you are going from writing unit tests to writing small integration tests. I'm more in agreement with Ian Cooper on this point and think it's usually fine to use real domain objects in tests (within reason). However if you disagree then you can go ahead and mock out your ENTITIES, but you'd still want to use TEST DATA BUILDER for your VALUE OBJECTS (see this test smell from the mockobjects guys).

We want to avoid using OBJECT MOTHER to hide design smells. For example if our ENTITIES and AGGREGATEs are too large, too complex, overly coupled, or have too many states then we could use an OBJECT MOTHER to hide the fact that these problems make the objects difficult to create. Eric Evans discussed this topic here; I handle it by trying to ensure that the OBJECT MOTHERs themselves are kept clean and simple, and if they get complex I definitely consider that a good indication that there is something wrong with the design.

Another argument which I've heard is that OBJECT MOTHERs and BUILDERs add a lot more code to step through and make debugging tests more difficult. As it happens I rarely debug tests but if I did I either wouldn't step into the OBJECT MOTHER or BUILDER or I'd add the necessary attributes (as Greg Young does).

Object Mother Links

  1. Pattern - You can also get this PDF here.
  2. Martin Fowler
  3. Ward's Wiki
  4. Creation Methods

Test Data Builder Links

  1. Nat Pryce - Great series of articles on the pattern.
  2. Expression Builder

Several other people use slightly different approaches, for example this one is quite interesting and is a variation of the approach I've settled on.


Wednesday, August 06, 2008

Domain Model Validation

This post has been sitting as a draft for a while and I finally thought I should post it in response to the recent thread in the ALT.NET forum, Validate Value Objects.

So here goes: I'm going to explain the approach to validation that I prefer, or at least the approach we used on my last DDD project.

Value Objects

A value object will validate the parameters in the constructor and should then be immutable.
If you pass invalid values to the constructor you get an exception, but if you want to know the reasons that certain values are not appropriate you call a method of the form BrokenRulesPreventingConstruction(arg1, arg2, ...). You may not need this if you are prepared to repeat the validation in another form (such as in the GUI).
Although you might use a BUILDER to create a VALUE OBJECT I prefer to keep all the validation in the constructor of the VALUE OBJECT.
It is also worth noting that value objects simplify validation, for example if a Person must have a Name then all we need to do is check that the Person has a Name as we don't need to validate the name because all Name objects are valid (whole object).
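As a minimal sketch of this style (Name is a hypothetical VALUE OBJECT; BrokenRulesPreventingConstruction follows the shape described above rather than any particular framework):

```csharp
using System;
using System.Collections.Generic;

// VALUE OBJECT: validates in the constructor and is immutable afterwards,
// so every Name that exists is known to be valid (whole object).
public class Name
{
    public Name(string first, string last)
    {
        var broken = BrokenRulesPreventingConstruction(first, last);
        if (broken.Count > 0)
            throw new ArgumentException(broken[0]);

        First = first;
        Last = last;
    }

    public string First { get; private set; }
    public string Last { get; private set; }

    // Lets callers discover the reasons construction would fail,
    // without having to catch an exception.
    public static IList<string> BrokenRulesPreventingConstruction(
        string first, string last)
    {
        var broken = new List<string>();
        if (string.IsNullOrEmpty(first)) broken.Add("First name is required");
        if (string.IsNullOrEmpty(last)) broken.Add("Last name is required");
        return broken;
    }
}
```

Anything holding a Name (a Person, say) only has to check the Name is present; it never has to re-validate its contents.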



If you want to do validation within your domain you have several choices to make:

  1. Entity or Service Based - Some people move their business logic, including validation, into the services and leave the entities as DTOs. This results in an anaemic domain model. An alternative is to make each entity responsible for some of its own validation.
  2. Attributes or Rule Classes - If you want to use attributes then you'll likely be looking at something like EViL or the Validation Application Block (or an example project like Xeva). Attributes work for simple rules, but don't handle complex or state/process based validation well. For more complex scenarios you'll likely turn to little rule classes (which I choose to see as variations on the SPECIFICATION pattern). Some people combine attributes and custom classes but personally I prefer just to use rule classes for all cases; it does mean you can't generate the GUI validation (client side validation), but I've yet to find an approach to automatically generating client side validation that I liked, so in my view if you want that validation in the GUI then you should consider writing it separately.
  3. Inject or Direct Dependency - Some people who use rule classes inject the rules into the ENTITIES. Injecting the rules adds flexibility, but just having the domain decide which rules to use is going to be enough in a lot of cases. You sometimes need to inject though and Udi Dahan's post on Generic Validation has some good points on where it does fall down, but I still think ENTITIES can handle a lot of their own validation.
  4. Immediate or Delayed - It's natural to assume that the best way to validate is to throw exceptions from the property setters, disallowing invalid states. This doesn't work for cross field validation, cross object validation, or where the object will be temporarily invalid. If you want a good discussion of this see this book.
  5. Notification or Event - Udi Dahan describes an event based notification pattern.
  6. Constructor - The more you validate in the constructor the better (Constructor Initialization) but it does complicate the use of the domain objects and it is only ever a partial answer. For example if an entity moves through multiple states, or is used in multiple processes, then whether it is valid is contextual. I tend to use constructor arguments for immutables, for example a business key.

So what do we use?

  1. Entity Based - Services are certainly involved in validation but an ENTITY is responsible for its own validation and an AGGREGATE root is responsible for validating the entire AGGREGATE.
  2. Rule Classes - We evaluated attributes but they only handle very simple validation (not null, max length) well and we didn't want to have two types of validation at play so we just use rule objects.
  3. Direct Dependency - For the sorts of systems I am developing injecting rules is not necessary, I have no need to decouple an entity from its own validation rules or to vary the rules at run-time.
  4. Delayed - You can ask an ENTITY whether it is valid at any time and you can temporarily leave it invalid, this lets the higher layers manipulate the objects as they see fit but lets us ensure that once an application "transaction" is complete the object is back to being valid.
  5. Notification - We use a Notification style approach, where you can ask an object whether it is valid/ready to do something (and get back the notification) or you can just try to do it (which may then cause you to get an exception containing all the reasons that the operation is not possible).


The implementation is simple but varies from case to case.

For simple cases an entity will have a GetBrokenRules method. This method will create a collection of IDomainRule objects, it then forces each rule to evaluate the object in question. When a rule fails a description of the failure is put into a separate collection of BrokenDomainRule (NOTIFICATION) objects that the GetBrokenRules method returns.

For more complex cases we have to vary the validation based on the state (as in the state pattern) of the object. This doesn't really change things too much: instead of calling GetBrokenRules you call GetBrokenRulesDisallowingTransitionTo(...) and pass in the representation of the new state (maybe an enum value).
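For the simple case, the shape might be something like this. IDomainRule and BrokenDomainRule here are my guesses at the design described; a real system would build the rule list per entity rather than hard-coding it:

```csharp
using System.Collections.Generic;

// NOTIFICATION object describing a single failure.
public class BrokenDomainRule
{
    public BrokenDomainRule(string description) { Description = description; }
    public string Description { get; private set; }
}

// A little rule class (a variation on SPECIFICATION).
public interface IDomainRule<T>
{
    BrokenDomainRule Evaluate(T candidate);  // null when satisfied
}

public class CustomerMustHaveNameRule : IDomainRule<Customer>
{
    public BrokenDomainRule Evaluate(Customer customer)
    {
        return string.IsNullOrEmpty(customer.Name)
            ? new BrokenDomainRule("Customer must have a name")
            : null;
    }
}

public class Customer
{
    public string Name { get; set; }

    // Collects every failure rather than throwing on the first one.
    public IList<BrokenDomainRule> GetBrokenRules()
    {
        var rules = new List<IDomainRule<Customer>>
        {
            new CustomerMustHaveNameRule()
        };

        var broken = new List<BrokenDomainRule>();
        foreach (var rule in rules)
        {
            var failure = rule.Evaluate(this);
            if (failure != null) broken.Add(failure);
        }
        return broken;
    }
}
```

The state-based variant would simply choose a different rule list depending on the transition being attempted.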


An AGGREGATE root is responsible for coordinating the validation for the entire AGGREGATE, so if you ask a Customer to validate itself then it will validate the entire aggregate and return any failings.

Cross Aggregate

If a rule involves more than one AGGREGATE then it should be performed in a SERVICE. For example if we had this requirement:

Before moving a Customer to the Active state you want to ensure that the Customer itself is valid and also that it has at least one Account.

You could make the Customer responsible for this sort of validation but a better option is to move this validation into a separate SERVICE. The same goes for validation that involves multiple instances of the same class (e.g. only one Customer with a specific e-mail address).

In addition if validation requires you to call to an external resource, even a REPOSITORY, then I would move it into a SERVICE and the SERVICE coordinates the validation.


As well as cross-aggregate validation SERVICES should handle any process specific validation.


Collections can manage some validation to do with associations. For example if we had this requirement:

A Customer can only have one earnings Portfolio

We've found the best way to handle this is to make the appropriate collection responsible for the validation so when you call customer.Accounts.Add(account) you get an exception if for any reason the addition is not possible (the exception tells you all the reasons it was impossible). You also have a CanAdd method so you can evaluate without raising an exception (wouldn't work if we had to worry about the effects of threads on this code).

This validation is not performed when you validate the AGGREGATE that owns the collection (Customer) because the methods that control adding/removing from the collection can ensure the collection is always valid.
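Sketching that collection-owned validation (the Portfolio classes and the single-rule check are illustrative, and this version ignores the threading caveat above):

```csharp
using System;
using System.Collections.Generic;

public class Portfolio
{
    public Portfolio(bool isEarnings) { IsEarnings = isEarnings; }
    public bool IsEarnings { get; private set; }
}

// Custom collection that owns its own association rules.
public class PortfolioCollection
{
    private readonly List<Portfolio> _inner = new List<Portfolio>();

    public int Count { get { return _inner.Count; } }

    // Ask whether the addition is possible without risking an exception.
    public bool CanAdd(Portfolio portfolio, out string reason)
    {
        if (portfolio.IsEarnings && _inner.Exists(p => p.IsEarnings))
        {
            reason = "A Customer can only have one earnings Portfolio";
            return false;
        }
        reason = null;
        return true;
    }

    // Or just try it; the exception carries the reason it failed.
    public void Add(Portfolio portfolio)
    {
        string reason;
        if (!CanAdd(portfolio, out reason))
            throw new InvalidOperationException(reason);
        _inner.Add(portfolio);
    }
}
```

Because Add and CanAdd are the only mutation paths, the aggregate-level validation never needs to re-check this rule.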


Some validation only needs to be performed on creation. If this validation is complex enough then it can be worth moving it into a FACTORY, in particular if you want to perform the validation before creating the associated ENTITY.


Acceptance Testing Book on CodePlex

Geoff Stockham was good enough to send me a link to a CTP of a new acceptance testing guide written by a group of authors that include Gerard Meszaros.

Really looking forward to reading the final version, as I am hoping that it will answer some of my questions about how to tackle acceptance testing, especially within the context of BDD.

My initial scan left me thinking that it's quite high level, for example the "Hand Scripted Test Automation" section doesn't really tell you much about how to write good acceptance tests in the context of Acceptance Test-Driven Development. However it is early days and it looks like a lot more is to come so I'm sure it will be useful.


Sunday, August 03, 2008

BDD for Acceptance Testing (xBehave/Dan North)

As I said in my previous post on BDD I use the Dan North (xBehave) style solely for acceptance testing as I think it suits testing at that level.

The first problem with discussing this style is that the message isn't really all that clear. If you want evidence of this do a Google search for BDD and spend an evening reading as much as you can, then see if you feel you understand what it's all about. Given the fact that Dan North apparently first started working on BDD in late 2003 I'm surprised at the lack of good content. Having said all that, I have few real answers in regards to BDD so I'm going to try to explain some of the real issues I've met.

Many of these issues are not specific to BDD, as they relate as much to the correct application of user stories and acceptance testing. However I thought it was worth discussing it all together because you will meet issues if you do try to follow Dan's style, and some of them may surprise you even if you have read about BDD, especially since most BDD examples boil down to dull AccountTransferService type cases.

It is also worth remembering though that these are just my current opinions, and I have already changed my mind a few times on some of this stuff :)

What Are We Talking About Here

My acceptance tests are the highest level tests that I'm going to write as part of my normal development process. They aren't necessarily end to end but they will test to the boundary of one particular system; let me give you two examples of what I mean:

  1. Integration Project - The acceptance tests sent a message to the SUT and verified the XML document that came out the other end. I didn't go as far as testing how that XML document was processed by subsequent systems, writing these totally end-to-end tests might have had value but they would have been complex/slow/tricky and in my view would not have adequately driven development.
  2. Web Project - For the last month I've been working on a new Website and in this case we're using Watin/NUnit and the BDD specifications are working from the top down. They only interact with the application through the browser, including when verifying.

Just to be clear we are just talking here about the top level tests, so in the case of the Web any controller/mapper/service/entity tests are covered separately using the xSpec style (Scott Bellware is the authority on this in the .NET world).


As far as I'm aware the only serious .NET BDD framework is NBehave, and whilst my (limited) work with it left me feeling that it's a good approach I wasn't happy with the resulting specifications. Looking back now I think I made a mistake giving up on it too quickly and I do intend to try it again once v0.4 comes out.

Another BDD framework that you might have heard of is Stormwind. It looks very professional but it is very GUI focused and doesn't really seem to have that much to do with BDD. As soon as you start talking about the nitty gritty details of the user interacting with the controls on a page I think you've lost the power that BDD gives you about describing what your user wants to be able to do and why.

So in actual fact I'm not currently using any special framework for my acceptance tests other than (for the Web) Watin and NUnit/MSTest.


Before starting out with BDD you need to know what you're trying to get out of it. Personally I wanted to get:

  1. Good acceptance tests
  2. Improved communication within the team.
  3. Further cementing of the user story based requirements process.

You will notice reporting is not mentioned here because the ultimate goal is to create working software that meets users' needs, so personally I'm not overly interested in the reporting angle. With this in mind, and given that we have adapted our requirements process to ensure we produce stories in a suitable given/when/then format, how do we feed them into our development? After a bit of experimentation my current approach is just to do it blindly, encoding them directly into methods:
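The original post showed a code screenshot at this point; here's a rough reconstruction of the style, with invented method names and the Watin details elided:

```csharp
using NUnit.Framework;

[TestFixture]
public class UserLogsInFromTheHomePage
{
    [Test]
    public void Scenario()
    {
        // The story's given/when/then, encoded directly as method calls.
        GivenARegisteredUserViewingTheHomePage();
        WhenTheyLogInWithValidCredentials();
        ThenTheyAreShownAsLoggedIn();
    }

    // Each step hides the messy browser automation and verification.
    private void GivenARegisteredUserViewingTheHomePage() { /* Watin setup */ }
    private void WhenTheyLogInWithValidCredentials() { /* Watin actions */ }
    private void ThenTheyAreShownAsLoggedIn() { /* Watin assertions */ }
}
```

The scenario method reads like the story; everything else is plumbing.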






Few things to note about this style:

  1. Managing Complexity - In this post about RBehave Dan North indicates that each step's implementation should be simple. I'm not so sure this is always true; for example when I was writing tests for the integration project there was a lot to be done to set up the test and to verify the results, and with this style I hide all those details within the methods.
  2. Lacks Reporting - With this approach you don't get any reporting because we're just calling methods rather than passing useful metadata to a framework like NBehave, this means we can't print out a report to show our users. This doesn't bother me because the "user stories" are driving the creation of these specifications so I don't particularly see the value of printing out the results of an execution to say "look we're doing what we said". If we want to prove we've done the right thing we can look at the way the software behaves.
  3. Lacks Flexibility - I've met quite a few situations where I want to parameterise my given/when/then; for example even for this simple acceptance test I have variations depending on where you are on the site when you login, and one case where you go directly to the login page from outside of the site. I agree that DRY does not necessarily apply as much to tests as to normal code (which I will discuss in a separate post) but if I blindly copy the code in each case I end up with a real muddle.

What I've found is with acceptance tests there is a lot of complexity and scope for reuse and you can of course get the reuse in a number of ways including using inheritance or composition:

  1. Inheritance - You have a context base class and inherit into multiple subclasses. Most of the code is probably going into the base classes and the subclasses just override a few values/methods to customize the behaviour (e.g. overriding StartPage and EndPage or equivalent for the login case). It works but it does mean that the individual scenario classes tell you very little and to understand what's going on you need to go into the base class. Although the base class itself might be short and written in a very clean manner the result is far from perfect. Note even if you do use inheritance you will probably use some helper/builder classes, but this is beside the point.
  2. Composition - Do what xBehave does and move the context out to a separate class which you configure before running every scenario, so your given/when/then is moved to the composed class and you tell it what values to use in a particular execution.

Since reuse is a big issue I might well blog about it separately, but I do think the amount of repetitive code involved in acceptance testing pushes you towards a composition based approach, and I do intend to give NBehave another shot sometime soon.

Outside In

As I discuss above I'm using this approach from the top down, but my usage has varied massively:

  1. Integration Project - I tried an outside in approach focusing on mocking first then replacing with real implementations. This approach was emphasized heavily in a lot of BDD articles and I found it relatively useful in helping me use mocking to influence my design.
  2. Web Project - In this case we're using Watin; no mocking is performed at any stage in these tests but we will stub out some services (e.g. Web services). My workflow is to start with a single Watin test and then once it's failing I'll switch directly to testing the controller (using the xSpec style), then services/repositories and so on right the way down until I've written enough code to get the original test passing.

The difference is small, but I certainly think it's worth pointing out that BDD does not force you to work in one particular way.


How many of these specifications do I write? Well, for me it depends, and since I've now tried to use BDD on two projects I thought I'd describe how many acceptance tests I've written on each of them:

  1. Integration Project - In this case the SUT receives messages describing changes to an AGGREGATE and it then generates XML messages which are sent to BizTalk. My approach to acceptance testing was to modify the AGGREGATE, get the resulting message, feed it through the SUT and then verify the resulting XML matched my expectations. In this case we had (boring) complexity in the mapping (if Foo has the value X then do Y), and whilst I could have tried to write acceptance tests for each of these cases, I was already covering that behaviour in my lower level tests and it was far more convenient to test/specify at that level. With this in mind I only wrote two acceptance tests, one with a relatively unpopulated aggregate and one that was fully populated with interesting values. All the other details were covered by xSpec style unit/integration tests.
  2. GUI Project - Our user proxy writes user stories and scenarios that go into quite a lot of detail, these were then turned pretty much directly into acceptance tests. The result is lots of acceptance tests and they do indeed (so far) drive the development.

The integration project was troublesome; applying user stories and BDD involved a bit more creativity than in the case of the GUI work and the specifications didn't cleanly tie back to the requirements. Having said that, the user stories themselves didn't go into massive amounts of detail so I thought I'd found a reasonable balance, and the acceptance tests did have a lot of value and saved me time overall because I had fewer issues being discovered when we did end-to-end testing.

Outstanding Questions

Acceptance testing is hard and from what I've seen there just isn't the guidance out there and I'm quickly running into exactly the issues that make test automation (which used to be my job) very difficult. The main questions I still have are:

  1. Verifications/Setup - Assuming there is a GUI do I ever skip it when doing my setup/verify?
  2. Size - Do I go for one big happy path test or more detailed and targeted tests (which in my view are more useful)?  What about when in order to verify something I need to go to another part of the system?
  3. Maintainability - These tests use so much of the system that we have to make them maintainable, this isn't just about not encoding too many GUI details into the specifications but is also about allowing reuse and writing clean code. What's the best approach?
  4. Story Driven - In order to create user stories that feed into BDD nicely you need to write them a certain way; is this something the users should have to think about?

Note that these questions relate to the process a developer goes through in practicing BDD and the artifacts we produce; I have plenty of other questions about the overall process of BDD and how far it really changes software development.

Starting with BDD

If you are new to BDD I'd recommend you read Dan North's introduction to BDD and give it a shot. Once you understand the ideas, look at the BDD group and all the great content out there on the Web. However, when reading that content I'd make sure to continually ask whether the author is talking about the Dan North (xBehave) style or the Dave Astels (xSpec) style. If it's the former then the author is most likely thinking of acceptance testing and the advice is probably worth considering; if it's the latter then it's quite possible they are thinking of unit testing, which (to me) is a whole different ball game (in practical terms).

Also, everything I've said above is questionable and subject to change, and I'm not strictly following what Dan North thinks. For example, here's a quote from Dan North:

I use the scenarios to identify the “outermost” domain objects and services – the ones that interact with the real world. Then I use rspec and mocha to implement those, which drives out dependent objects (secondary domain objects, repositories, services, etc).

This makes sense, but I've found the xSpec style works better for domain objects/services, so I actually use xSpec for everything below the GUI. That might change again next month, or if I work on a project with different characteristics.


Book Review - Better, Faster, Lighter Java

Ian Cooper mentioned this book on Twitter and since it was available for 33p on Amazon I couldn't resist buying it.

The book describes the lightweight processes and development practices that started in the Java space and have fed through into ALT.NET. It sets out its vision by breaking things down into five key principles:

  1. Keep It Simple
  2. Do One Thing, and Do It Well
  3. Strive For Transparency
  4. Allow for Extension
  5. You are What You Eat

The first four principles each get their own chapter, and in the following chapters concrete examples, including Hibernate and Spring, are used to show how these principles improved the designs. These chapters worked really well, and I also enjoyed the entirely relevant discussions on topics such as golden hammers, rewrite-or-replace and vendor sales processes. The authors are also pleasingly open-minded: absolutely no dogma here, just pragmatic advice on how to create successful software.

So my opinion is that even though it is a Java book it's worth a read, especially if you can get it for 33p :)


Thursday, July 24, 2008

BDD - The Two Approaches

I've been using BDD for a little while now and think I've gotten to grips with most of the ideas; however, the more I learn the more questions I have. Rather than wait for answers I thought I'd blog about what I'm trying now, so when reading this keep in mind that these are just my current personal opinions and should be taken with a pinch of salt.

My intention was to write one big blog entry but it immediately got too long and unfocussed so I decided to break things out a bit and I thought I'd start by describing how I view BDD.

To me there are two styles of BDD:

  1. North BDD (xBehave) - The style Dan North and colleagues created and evolved, often seen as the given/when/then style. To me this style is useful in bringing everyone into the requirements process and getting them talking the same language, whilst providing a process that ensures the requirements feed directly into development. In my view it is best suited to higher level testing, and in particular to acceptance testing.
  2. Astels BDD (xSpec) - The style Dave Astels popularised and which, to me, is a new form of TDD which has some serious benefits. You can use this style for any testing, from acceptance testing downwards.
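To make the contrast concrete, here's a rough framework-free sketch (the `Account` domain object and all of these names are invented purely for illustration):

```python
# xBehave (North) style: the scenario reads as given/when/then steps,
# staying close to the language of the user story.
class WithdrawCashScenario:
    def given_an_account_with_balance(self, balance):
        self.account = Account(balance)

    def when_the_user_withdraws(self, amount):
        self.account.withdraw(amount)

    def then_the_balance_should_be(self, expected):
        assert self.account.balance == expected

# xSpec (Astels) style: one small "should" per behaviour of one unit.
class AccountSpec:
    def should_reduce_balance_on_withdrawal(self):
        account = Account(100)
        account.withdraw(30)
        assert account.balance == 70

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Driving the xBehave scenario end to end:
s = WithdrawCashScenario()
s.given_an_account_with_balance(100)
s.when_the_user_withdraws(30)
s.then_the_balance_should_be(70)

AccountSpec().should_reduce_balance_on_withdrawal()
```

In practice the xBehave steps would sit behind a framework such as NBehave, and the xSpec examples behind your unit testing framework of choice, but the difference in granularity and audience is the point.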

Some have argued that the given/when/then style is not needed and that the xSpec style can be used for all levels of testing. I've tried this but didn't entirely like the results, so I currently use both approaches: I think that using different testing styles for acceptance testing and all other developer testing has benefits, particularly because the two types of test/specification are quite different:

  1. Level - Although many of the BDD examples run against the domain/service layer, I think that in many systems you'll be using this style for your acceptance testing, probably from the top down.
  2. Language - Dan North makes a big deal of the fact that BDD can draw in the UL (ubiquitous language). I'm not so sure this is true, because user stories aren't necessarily in the UL. However, even if it is true it only applies to some of the specifications we're writing, namely those that describe the user experience (acceptance level) and the domain model.
  3. Reuse - When you're writing acceptance tests you've got a lot of potential for reuse, so, in my view, composition wins out over inheritance. I'll describe this more in a separate post; to me it's the biggest technical reason to look at xBehave (I just found out this is also Dan's view, see the comments on this post).
  4. Tools - I'll cover this later, but although I think you can do both styles of BDD without any tools, I do think Dan North's style is the one that benefits more from additional tools/frameworks (such as NBehave), particularly if you are interested in people outside the development team writing the inputs and seeing the outputs (notably reports).
  5. Patterns - In my limited experience patterns/approaches that you come up with for developer testing don't necessarily scale up to acceptance testing where there is more reuse/complexity. I thus think that when viewing much of what is written about BDD you need to be clear on what type of testing the person is describing.
  6. Intent - Users/domain experts are only going to be interested in the behaviour of some parts of the system, notably the GUI and the domain, so using the xBehave style right the way down is an unnecessary burden.
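On the reuse point, a rough illustration of what composition buys you: small reusable "driver" objects handed to each scenario, rather than a deep inheritance tree of test base classes. Everything here (`OrderDriver`, `FakeApp` and so on) is invented for the sketch:

```python
# Reusable drivers, each wrapping one slice of the system under test.
class OrderDriver:
    def __init__(self, app):
        self.app = app

    def place_order(self, sku, qty):
        self.app.orders.append({"sku": sku, "qty": qty})

class StockDriver:
    def __init__(self, app):
        self.app = app

    def stock_level(self, sku):
        # Stock on hand minus everything already ordered.
        ordered = sum(o["qty"] for o in self.app.orders if o["sku"] == sku)
        return self.app.stock.get(sku, 0) - ordered

class FakeApp:
    # Stand-in for the real system; scenarios compose whichever
    # drivers they need around it instead of inheriting a fixture.
    def __init__(self):
        self.orders = []
        self.stock = {"WIDGET": 10}

app = FakeApp()
orders, stock = OrderDriver(app), StockDriver(app)
orders.place_order("WIDGET", 4)
assert stock.stock_level("WIDGET") == 6
```

A new acceptance test that touches ordering and, say, invoicing just takes the two drivers it needs; nothing forces every scenario through one ever-growing base class.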

It's unfortunate that Astels and North both called their approach BDD, but that's the way of it. However, we do need to get our terminology straight, and that starts with understanding the differences between the two styles of BDD.


Sunday, July 20, 2008

BDD Talk by Scott Bellware

Greg Young has posted a link to a talk about BDD by Scott Bellware which I highly recommend. I like it most because Scott clearly explains the differences between the two styles of BDD, one influenced by Dan North (*Behave) and one by Dave Astels (*Spec). I also like the fact that slowly but surely we're starting to use different terminology for the two types, because mixing them up is (I believe) the cause of a lot of the misunderstandings regarding what BDD is all about.

My feeling, as I've posted before, is that both approaches have a lot of value, but unfortunately the given/when/then style seems to me to work best with some tool support (especially so you can parameterise the given/when/then itself), so it is harder to get into. I'm going to post more about my views on this later.


Monday, July 14, 2008

NServiceBus - Getting Started Fast

NOTE: This post is probably only of interest to people planning to try out NServiceBus for the first time as the post is all about how to get it up and running quickly.

I've been trying to get up and running with NServiceBus because it's relatively mature, has a good community and is based on good practices. However, in my view it is quite difficult to get into and there is a good chance you'll waste a lot of time trying to get up and running, so I thought I'd share what little I've learned so far.

I should also say that difficulty getting projects set up is fine; it's part of using open source projects. In fact, although I love Castle and NHibernate, both have a high barrier to entry; however, in both cases there is lots of documentation detailing how to get started with them, so you never feel far away from an answer. That isn't the case with NServiceBus, so I think that when starting out you need to make life easy for yourself. Anyway, so far there have been two issues:

  1. Configuration - Out of the box NServiceBus comes with Spring based configuration, and someone has produced a Castle MicroKernel implementation. You also seem to be able to mix the configuration, doing some in code and some in the XML files.
  2. Referencing Projects - NServiceBus has a lot of projects. I'm not against that approach, but since initially you have no idea what each project contains, you start off referencing just NServiceBus and two hours later you've ended up bringing in 10 projects. Just bring them all in to begin with and ignore the smell; you can sort it out later if you need to.

Anyway, I go into a little more detail on these issues next, but you will probably also want to read this guide.

Referencing Projects

The DLLs I found I needed to reference:

  1. NServiceBus
  2. NServiceBus.Serialization
  3. NServiceBus.Serializers.Configure
  4. NServiceBus.Unicast
  5. NServiceBus.Unicast.Config
  6. NServiceBus.Unicast.Subscriptions
  7. NServiceBus.Unicast.Subscriptions.DB
  8. NServiceBus.Unicast.Transport
  9. NServiceBus.Unicast.Msmq
  10. ObjectBuilder
  11. ObjectBuilder.CastleFramework or ObjectBuilder.SpringFramework


The "test\full" folder inside the solution has great examples of how to configure a client/server (see ClientServer and ServerRunner).

You still need to include the configuration file with the suitable elements in it, but examples of this are also included.
