Sunday, December 30, 2007

BDD/TDD - What Drives The Domain Design

In view of the discussion of BDD[1][2] I've started to look at the way we specify/test our domain model. A great promise of BDD is that your tests will drive the design of your domain model and that the tests themselves will help explain the design of the domain.

My efforts to get to the essence of BDD have left me very confused, so in an effort to better understand it I went to the Test Driven Development Yahoo group. I've been subscribed to it for a while but I'd never realized how good it is. Well informed discussion, with heavyweights like Ron Jeffries and Kent Beck wading in. Great stuff, and it seems to me to have a similar feel to the DDD group.

Anyway, I came across one particularly useful thread. It doesn't really cover BDD directly but it does cover state/interaction testing and how they influence the way you design. In particular it contains a couple of great posts, including this one, which tries to get to the real essence of interaction and state testing and the way they affect your design.

I was already well aware of the difference between state and interaction testing but this single post summed things up very nicely and reminded me of a few things.

Personally I refactor, including to patterns, a lot. As I do, my code changes massively, which affects the way I test. Let me give you a representative example...

Example - State Testing/Refactoring Driving The Design
We need to allow people to debit Accounts in our system, a test to kick off this work might be

Account sourceAccount = ...;
Account targetAccount = ...;
Money amountToDebit = ...;

Money originalSourceBalance = sourceAccount.Balance;
Money originalTargetBalance = targetAccount.Balance;

FundsTransferService.Transfer(sourceAccount, targetAccount, amountToDebit); // code under test

Assert.AreEqual(originalSourceBalance - amountToDebit, sourceAccount.Balance);
Assert.AreEqual(originalTargetBalance + amountToDebit, targetAccount.Balance);

I'm not saying this is the exact test I'd start with, but it's representative of the sorts of tests I'd be writing. This is a pure state test and, in Ron Jeffries' terminology, is testing functionality not sequence.

As I go I'd be writing more and more tests and more and more code, refactoring as I went, and after a while the method may be delegating to Specifications, Rules, Method Objects, Strategies, Entities, Value Objects, Factories and other classes to do its work.

Some of the new classes will merely exist to ensure the code reads well (pure fabrications in Larman terminology).

As I extracted these pure fabrications I probably wouldn't change the tests, leaving them at the level they started at (FundsTransferService). I could instead change the tests to be specific to each new rule: extract the code, ensure the tests passed, refactor the tests, ensure they passed... But pure fabrications are very much open to redesign at any time, so I'm not sure it makes much sense to write tests for them specifically (questionable?).

Anyway, what this means is that I'm doing top down development (starting at the public interface to the domain) but I'm not using stubbing or interaction testing; instead I use state testing and evolve the code, refactoring a lot until I'm pretty happy.

Alternative - Interaction Tests Driving The Design

So the question is: what is driving me to change the design? If I was doing proper interaction testing then I would be defining the interactions and then writing the code to fulfill them.

I could probably do this for the high level interactions, for example between entities. I'm not sure I want to include that information in every test, but I could use it as a design technique and encode it in a few tests. These are quite high level interactions, so I avoid the risk of overspecified software.

Ultimately I would still need some state tests (see this thread for a good discussion of the reason you still need state tests) but I'd be using interaction testing to drive my design.
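To make the contrast concrete, here is a hand-rolled sketch of an interaction test (in Java rather than the C# used above, and with invented names like AccountLedger and FundsTransferService): rather than checking balances, the test records the calls the service makes on a collaborator and asserts on that sequence.

```java
import java.util.ArrayList;
import java.util.List;

// Interaction-style test sketch: instead of checking balances (state),
// we record the calls the service makes and assert on the sequence.
public class InteractionTestSketch {

    // The collaborator whose interactions we want to specify.
    public interface AccountLedger {
        void debit(String accountId, int amount);
        void credit(String accountId, int amount);
    }

    // Hand-rolled mock: records every call as a string.
    public static class RecordingLedger implements AccountLedger {
        public final List<String> calls = new ArrayList<>();
        public void debit(String accountId, int amount)  { calls.add("debit:" + accountId + ":" + amount); }
        public void credit(String accountId, int amount) { calls.add("credit:" + accountId + ":" + amount); }
    }

    // Code under test: specified purely by the calls it makes, not by state.
    public static class FundsTransferService {
        private final AccountLedger ledger;
        public FundsTransferService(AccountLedger ledger) { this.ledger = ledger; }
        public void transfer(String from, String to, int amount) {
            ledger.debit(from, amount);
            ledger.credit(to, amount);
        }
    }

    public static List<String> runScenario() {
        RecordingLedger ledger = new RecordingLedger();
        new FundsTransferService(ledger).transfer("source", "target", 100);
        return ledger.calls;
    }

    public static void main(String[] args) {
        System.out.println(runScenario());
    }
}
```

Note how the test now pins down sequence, not functionality; that is exactly what makes this style both a design tool and a fragility risk.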

However, as I say, I find a domain model is filled with all sorts of little domain classes that only exist because we chose to refactor the code to make it read better. These classes are not necessarily part of the ubiquitous language and are purely an implementation detail (they are pure fabrications).

Making interaction testing work for these pure fabrications seems to me to be a bit of a bad idea; these classes exist because of refactoring and I cannot plan for the interactions with them upfront.

Maybe if I stick to only using interaction testing for the high level interactions, rather than for interactions with pure fabrications, I'll get the benefits of interaction testing without the costs. I think that's what I'll try next.


Wednesday, December 05, 2007

Identity In Forums

I'm not overly bothered about the identity issue that was discussed on the ALT.NET forum recently, but it did make me think of this DDD thread that I came across recently. Kinda amusing seeing people seriously respond to someone with a nick like that.


Thursday, November 29, 2007

Crap4J - Code Complexity and Coverage Analysis

Crap4J was mentioned in the ALT.NET forum; it seems like another interesting code analysis idea, this time combining code complexity with code coverage.

There is also an entertaining video that covers the thinking behind it, and it sounds like Crap4N is on the way too...


Wednesday, November 28, 2007

AOP - Alternatives To PostSharp

Whilst looking at PostSharp I found quite a few useful links and also managed to look at a few alternatives.

Aspect# seemed interesting but it only does virtual members (or interfaces, I presume) and it's certainly not transparent to the user of the classes, as you have to go through an AspectEngine. Anyway, no matter how useful it could have been, it looks like Aspect# is a dead end and on the way out.

Other than Aspect# there are a few other choices: Spring.NET has an AOP solution, and Eric Bodden has a list of .NET AOP solutions. I'll definitely take a look at some of them; AspectDNG in particular sounds good.

I'll be interested to see if I can find any solution that's as simple as PostSharp Laos or that has such a nice way of supporting compile-time weaving.

Links - PostSharp

  1. Using AOP for validation
  2. PostSharp AOP reference
  3. AOP with PostSharp Part A
  4. AOP with PostSharp Part B
  5. Bitter Coder
  6. DotNetKicks - Not much yet...

Links - AOP

  1. Ayende - 7 Approaches To AOP In .NET
  2. Characterization Of Current Approaches
  3. Spring.NET
  4. Eric Boddens List Of Current Approaches
  5. Wikipedia


Tuesday, November 27, 2007

AOP and the Domain - Dirty Tracking

A new requirement has come into our system: essentially we need to be able to tell external systems when (some) changes happen to our domain objects. How we notify those systems is one issue, but how we actually track domain changes is another.

Some of the changes are quite high level, such as a Customer state change, and some are just little changes to the data of an object, such as a change to a Customer's first name.

We could handle the high level state changes by calling through Service classes, for example CustomerStateChangeService.Activate(customer). However we also need to think about how to track all the data changes. We've considered a few ways that we could do this; three of the most obvious are:

  1. Services - Instead of directly setting a Customer's first name, call through a Service which would take care of ensuring that we give other systems a chance to track the change. We could look at something like the anticorruption layer example.
  2. Domain Events - The domain classes could raise events. To some extent this is nicely decoupled, but it bothers us that our domain objects would be raising events simply because an external system needs to know about the change. It also means that pretty much every property setter in the domain could potentially be raising an event once it has set the value; not very elegant.
  3. NHibernate - Maybe include an interceptor that can help us work out what has changed when NHibernate comes to persist. This sounds easy but in practice would be a mess and certainly isn't very intuitive.
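For illustration, option 2 might look something like this sketch (Java, with invented names; the real domain is C#). Every setter ends up raising an event once it has set the value, which is exactly the inelegance described above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the Domain Events option: the domain object raises an event
// from its setter so external systems can observe the change.
public class DomainEventSketch {

    public static class PropertyChanged {
        public final String property;
        public final Object newValue;
        public PropertyChanged(String property, Object newValue) { this.property = property; this.newValue = newValue; }
    }

    public static class Customer {
        private String firstName;
        private final List<Consumer<PropertyChanged>> listeners = new ArrayList<>();

        public void onChange(Consumer<PropertyChanged> listener) { listeners.add(listener); }

        public void setFirstName(String value) {
            this.firstName = value;
            raise(new PropertyChanged("FirstName", value)); // raised after the value is set
        }

        public String getFirstName() { return firstName; }

        private void raise(PropertyChanged e) { listeners.forEach(l -> l.accept(e)); }
    }

    public static List<String> runScenario() {
        List<String> seen = new ArrayList<>();
        Customer customer = new Customer();
        customer.onChange(e -> seen.add(e.property + "=" + e.newValue));
        customer.setFirstName("Alice");
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(runScenario());
    }
}
```

The boilerplate in setFirstName would be repeated on every setter, which is the main argument against this option.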

Anyway, these and other solutions didn't appeal, so I started looking at PostSharp. Now I'm no fan of overly technical solutions to domain problems, but I'd heard good things about AOP and thought it might help us solve the problem, especially since logging is so often given as an example of where AOP can help. I was thus hoping it could give us a good, simple solution.

However, when I came to use PostSharp I couldn't believe how good it was. In particular it allows you to use attributes to specify code that you want PostSharp to inject post-build...


To show how simple it is I've tried to come up with what I consider to be a useful example, which you can get from Google Code. In order to run it you first need to install PostSharp; once you've done this you can open and run the project.

First off, note that SimpleDomainClass is marked with AllPropertySetsNotificationAttribute. If you open AllPropertySetsNotificationAttribute you can see that it is using reflection to attach an extra attribute, PropertySetNotificationAttribute, to each property setter.

This may seem silly until you run the application and see that the code in PropertySetNotificationAttribute.OnSuccess is being automagically run.

This is happening because the post-build PostSharp process is ensuring that where it sees a method boundary aspect, such as PropertySetNotificationAttribute, it inserts the necessary IL to ensure the code in the attribute is run.

You can see this if you open Reflector and view the disassembly output for SimpleDomainClass, look at the property setters and note the extra code including the call to OnSuccess in the attribute.

In case you were wondering this is a simplified version of the binding sample that actually comes with PostSharp.


In our case you could easily imagine PropertySetNotificationAttribute calling out to an IDomainChangeNotificationService, the actual implementation being retrieved from a Service Locator or injected in using IoC.

Project References

You will note that our domain assembly references the assembly that I've put the attributes into and also PostSharp.Laos and PostSharp.Public. This bothers me slightly but from what I can see they are quite lightweight assemblies so it doesn't seem to be a killer.


I'm still at the earlier stages of understanding PostSharp but I'm very impressed.

Like TypeMock and NHibernate it is very powerful and also seems to come with very good documentation.

I'm also stunned by how easy it was to do what I wanted in this case, but then I guess I shouldn't be, as the docs indicate that Laos has been designed to be simple rather than fully featured.


Tuesday, November 20, 2007

Interfaces and Abstract Classes In The Domain

I was just looking at a DDD example that had an interface on nearly every class in the system, including the domain classes. One reason for this is mocking (which I don't favor for the domain anyway) but the other seemed to be to improve the design. This second reason made me think I should discuss why I don't think the "everything needs an interface" approach is a good one for the domain.

First off I should say that the topic has already been discussed:

  1. DDD Thread
  2. Pro Interface View
  3. Pro Abstract Class View

I disagree slightly with the two viewpoints; to me you only need abstract classes or interfaces in a minority of cases. In the domain I believe you should only add an abstract class or an interface where you believe the addition is improving the design, most commonly when it is a useful abstraction.

I do agree with the second view in that even when you do have a useful abstraction an interface isn't always enough; you sometimes want to enforce the constraints (bake them in), and an abstract class is better for this.

Having said that, I wouldn't take my Customer class and just create an ICustomer interface (or an abstract base class) for it. Instead I'd look at the usage of, and coupling to, the class, which might lead me to extract meaningful interfaces. In fact in many cases I'll just start out by referring to the concrete Customer domain class, only introducing abstractions where I know they are needed.

In summary, only put interfaces or abstract classes into the domain where they are helping; don't go for what Fowler refers to as the "Interface Implementation Pair" approach.

Actually, this post sums up my view nicely.


Domain Presentation View

Was just reading a blog entry from Ray Houston on his view classes. The scary thing is we have exactly the same approach, right down to the naming of the classes!

In our case, if we had a Customer aggregate and were displaying lists of Customers in a grid, then loading each instance of the aggregate would be deeply inefficient. We'd be loading lots of data/rules that we didn't need, and we'd also be missing information needed in the grid because that information would come from other aggregates. We might thus end up navigating associations from one aggregate to another just to get simple primitive data types to display, which is wildly inefficient.

What we really need in these situations are fast loading read-only classes that carry information from one or more aggregates but only load the information/behavior needed for display in the grid. Enter our Info objects.

We have Info objects (ClientInfo) and InfoLoaders (ClientInfoLoader). The loaders only have retrieval methods and are not repositories. The Info classes are then mapped to database views.
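As a rough sketch of the shape (Java, invented names, with an in-memory list standing in for the database view the Info class would be mapped to):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the Info object / InfoLoader pairing. ClientInfo is a flat,
// read-only projection (as if mapped to a database view); the loader only
// has retrieval methods -- it is deliberately not a repository.
public class InfoObjectSketch {

    public static final class ClientInfo {
        public final int id;
        public final String name;
        public final int openOrders; // flattened from another aggregate's data
        public ClientInfo(int id, String name, int openOrders) { this.id = id; this.name = name; this.openOrders = openOrders; }
    }

    public static class ClientInfoLoader {
        // Stand-in for the database view the Info class is mapped to.
        private final List<ClientInfo> view = List.of(
            new ClientInfo(1, "Acme", 3),
            new ClientInfo(2, "Globex", 0));

        public List<ClientInfo> getAll() { return view; }

        public List<ClientInfo> getWithOpenOrders() {
            return view.stream().filter(c -> c.openOrders > 0).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) {
        System.out.println(new ClientInfoLoader().getWithOpenOrders().size());
    }
}
```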

We actually have the Info objects in another assembly called Presentation, but we do have a reference from Presentation to the domain assembly so that the Presentation classes can use things like Services/Rules/Specifications/Enums and so on. We could remove this dependency if we needed to, but it would take a little effort and so far I don't see it as a major issue.

So far we only have a few Info objects, certainly not one per aggregate. We also don't create one Info object for each view (as in an MVP view), so it's possible that when you load an Info object you get data you don't need. If this became an issue we could deal with it.

Anyway, I shouldn't have bothered explaining it as Ray has already done a good job. I also believe there is a discussion of this sort of thing on the DDD newsgroup (I don't have the link), but it is heartening to know we are not the only people using this approach.


Monday, November 19, 2007

Domain Model Project Structure

Myself and two colleagues had a very interesting discussion about the structure of our domain projects. Originally we had this sort of folder structure within our domain projects:

This wasn't awful but it also wasn't perfect, after a bit of work we decided to restructure like this:

It's difficult not to prefer this folder structure, as it keeps related things together and emphasizes key abstractions/classes (such as Customer here).

Unfortunately we've had to create "Component Parts" folders that, we plan, will contain any classes that are not key aggregate roots. The top level "Component Parts" folder has very reusable classes, but the one within "Customer" is for classes that relate just to the "Customer" aggregate(s).

This idea of a "component part" may seem odd, but it does help show that whilst CustomerIdentifier is important it is not in any way as key as Customer.

The discussion also emphasized that it is impossible to get a project structure that we are completely happy with; trying to impose a single hierarchy on a domain model is just so difficult to do cleanly.

It's also worth noting that the small size of this example means it is not a great illustration, but we have already started giving this folder structure a go and so will hopefully find out just how well it works.


Sunday, November 18, 2007

Repository Implementation (DDD/NHibernate)

Christian Bauer (Hibernate bod) has written a post about the repository pattern. I think the post is a little misinformed and concentrates too much on one possible way to implement the pattern. However, it does show that people are still having trouble working out how to implement repositories, so I thought I'd explain how we do it.

I actually think the pattern is relatively simple, especially when used with NHibernate, so here goes...

1. Accessing From The Domain
I don't access the repositories from the domain; with NHibernate I rarely see the need. I don't actually want to couple them, because:

  1. Complexity - Having the domain classes call the repositories makes it harder to understand them.
  2. Testing - Most DDD practitioners seem to focus on state testing and that's certainly my preference, but if you're calling repositories from the domain those tests become (as I see it) layer crossing tests and testing becomes more difficult.

I don't want to have to mock out repositories when testing the domain; for me that's reason enough to avoid the coupling. There isn't too much about this on the Web other than one forum post and bits and pieces on people's blogs, but making your domain testable in isolation is (in my view) well worth the effort.

2. Associations
I tend to focus on modeling the most important associations in the domain.

Within an aggregate this is simple, you can always navigate from the root to the parts.

Where I want to associate one aggregate with another I'll put the most important association in the domain model and (optionally) handle the inverse using a repository e.g.:

IList orders = customer.Orders;
Customer customer = customerRepository.GetForOrder(order);

Of course sometimes you might want a bidirectional association here, if it wasn't costing you too much complexity/coupling wise.

Note that in many cases people focus on modeling the association from the one to the many, e.g. Order to Customer. This sometimes works and sometimes it doesn't; do whatever makes sense. I try to avoid simply putting in the associations that make persistence simplest.

3. Implementation With NHibernate
NHibernate makes cascading and lazy loading simple. In the mapping files I don't lazy load within an aggregate though I do lazy load between aggregates, cascading only goes as far as aggregate boundaries.

The implementation becomes ridiculously simple; for our key repository it is basically this:

public class CustomerRepository : Repository<Customer, int>
{
    // ..any custom queries
}

The base class is doing all the heavy lifting; for simple cases all I need to say is that the key to the Customer table is an int (via the second generic parameter).

The base class is also very simple: it has methods for SaveOrUpdate/GetById/GetAll, and we have an extra IDeletionRepository that we can add on which just has a Delete method. We also have a RetrievalRepository base class for completely read-only cases.
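A minimal sketch of that base class shape (in Java, with an in-memory map standing in for the NHibernate ISession the real base class delegates to; all names are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the repository base class shape. The real base class would
// delegate to an NHibernate ISession; here an in-memory map stands in so
// the structure is runnable.
public class RepositorySketch {

    public static class Repository<T, TId> {
        protected final Map<TId, T> session = new HashMap<>(); // stand-in for ISession

        public void saveOrUpdate(TId id, T entity) { session.put(id, entity); }
        public T getById(TId id) { return session.get(id); }
        public List<T> getAll() { return new ArrayList<>(session.values()); }
    }

    public static class Customer {
        public final String name;
        public Customer(String name) { this.name = name; }
    }

    // The concrete repository only states the entity and key types,
    // plus any custom queries.
    public static class CustomerRepository extends Repository<Customer, Integer> {
        // ..any custom queries
    }

    public static void main(String[] args) {
        CustomerRepository repo = new CustomerRepository();
        repo.saveOrUpdate(1, new Customer("Acme"));
        System.out.println(repo.getById(1).name);
    }
}
```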

I am coupling the implementation of my repositories to NHibernate, but that has so far not proved to be an issue. What we do strive to avoid is putting anything NHibernate specific on the interface of the repository, not just to follow the pattern but because it's sensible, as we may not always use NHibernate for all of the queries. So I avoid making the repository a leaky abstraction by, for example, passing in some ICriteria to one of the queries.

3.1 Testing
Testing the basic Save/Update/Concurrency also becomes mickey mouse, as we have a base class called AggregateRootPersistenceTestBase that does the heavy lifting.

As an example this base class has a TestSaving method that delegates to a SaveTestHelper, this class does the following:

1) Create Repository - Create an instance of the repository under test.
2) Create Aggregate - Create an instance of the aggregate we are testing the persistence of.
3) Save - Save the aggregate to the database (save then flush).
4) Reload - Evict the saved aggregate from the session (or clear the session) and reload it from the database. We could instead use a separate ISession; either way we are ensuring that the reloaded object is fresh from the DB (not from the first level cache).
5) Compare - Compare the two objects. This works because I've written an ObjectHierarchyComparer that can be given two objects and will use reflection over their properties to ensure they match. It navigates right down the hierarchy until it gets to built-in primitives to compare, so it can handle very complex object structures.

This is very simple, all you need to do to use SaveTestHelper is pass in two delegates, one that creates the repository and one that creates the aggregate. You can optionally pass in a string[] of property names that the ObjectHierarchyComparer should ignore (such as properties that get default values from the DB, because the reloaded object will have different values for them).
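The comparison step is the only non-obvious part, so here is a minimal sketch of the reflection walk at the heart of such a comparer (Java, invented names; the real ObjectHierarchyComparer works over C# properties and is richer than this):

```java
import java.lang.reflect.Field;
import java.util.List;

// Sketch of a hierarchy comparer: compare two objects field by field,
// recursing until built-in leaf types are reached, optionally ignoring
// named fields (e.g. columns the DB gives default values to).
public class HierarchyComparerSketch {

    public static boolean matches(Object a, Object b, List<String> ignored) {
        if (a == null || b == null) return a == b;
        if (a.getClass() != b.getClass()) return false;
        // Built-in leaf types are compared directly.
        if (a instanceof Number || a instanceof String || a instanceof Boolean) return a.equals(b);
        for (Field field : a.getClass().getDeclaredFields()) {
            if (ignored.contains(field.getName())) continue;
            field.setAccessible(true);
            try {
                if (!matches(field.get(a), field.get(b), ignored)) return false;
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
        return true;
    }

    // Tiny aggregate to walk.
    public static class Address {
        String city;
        public Address(String city) { this.city = city; }
    }

    public static class Customer {
        String name;
        Address address;
        String rowVersion; // DB-defaulted column: the reloaded object differs here
        public Customer(String name, Address address, String rowVersion) {
            this.name = name; this.address = address; this.rowVersion = rowVersion;
        }
    }
}
```

In the save test, "saved" and "reloaded" would be the same aggregate before and after the round trip, with DB-defaulted fields passed in the ignore list.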

This is all very reusable. We then write tests for the aggregate roots in a few different scenarios:

  1. Unpopulated - Create an instance and save it.
  2. Populated - Move it into a non-default state (if it has lifecycle), populate the entire aggregate.
  3. FullyPopulated - Same but with associations to other aggregates populated.
We also test other parts of the aggregate; I can provide more details if it's useful...

3.2 Custom Queries
Any custom queries are written in the repositories, preferably in HQL/ICriteria so we can refactor the DB or code (both made harder if you encode SQL). You could actually put the HQL into a named query in the mapping file if you wanted to.

3.3 Eager Fetching
We haven't really dealt with this issue fully yet but Udi Dahan has posted about it. I don't think I'd use his implementation but I do like the idea of fetching strategies and I'd probably choose the appropriate one in a coordination/application layer.

4. Specifications
We probably underuse specifications; they can be useful if you're getting lots of custom queries on the repositories that are all just for specific cases:

public IList GetByFullName(...);
public IList GetByFirstAndSurname(...);
public IList GetBySurname(...);

You could encapsulate these name queries in one or more specifications and pass them in:

public IList GetByName(CustomerNameSpecification specification);

The problem is converting your (domain) CustomerNameSpecification into a query. I don't want the class itself to be talking in terms of SQL/HQL/ICriteria, so the choices that we've thought of are:

  1. Conditional - A single CustomerNameSpecification that the CustomerRepository picks apart; for example the CustomerNameSpecification would have a FirstName property and, if it's not null, the repository adds an appropriate ICriteria to the query.
  2. Switch - Subclasses of CustomerNameSpecification (such as CustomerFirstNameSpecification) and a switch in the repository that calls an appropriate method to create the query for each type of specification.
  3. Visitor - One of the GoF patterns. Subclasses of CustomerNameSpecification (such as CustomerFirstNameSpecification) each "accept" a QueryVisitor; the power of double dispatch is then used to ensure the appropriate method is called for each class.

None of these options are great, though the third is definitely the least awful.
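A sketch of the visitor option (Java, with invented names and illustrative HQL strings): the domain specification never talks in terms of HQL itself; the visitor on the persistence side builds the query, and double dispatch picks the right overload.

```java
// Sketch of the Visitor option: specification subclasses accept a query
// visitor, and double dispatch picks the right query-building method.
public class SpecificationVisitorSketch {

    public interface QueryVisitor {
        String visit(FirstNameSpecification spec);
        String visit(SurnameSpecification spec);
    }

    public static abstract class CustomerNameSpecification {
        public abstract String accept(QueryVisitor visitor);
    }

    public static class FirstNameSpecification extends CustomerNameSpecification {
        public final String firstName;
        public FirstNameSpecification(String firstName) { this.firstName = firstName; }
        public String accept(QueryVisitor visitor) { return visitor.visit(this); }
    }

    public static class SurnameSpecification extends CustomerNameSpecification {
        public final String surname;
        public SurnameSpecification(String surname) { this.surname = surname; }
        public String accept(QueryVisitor visitor) { return visitor.visit(this); }
    }

    // Lives on the repository/persistence side: the HQL knowledge stays
    // out of the domain specification classes.
    public static class HqlQueryVisitor implements QueryVisitor {
        public String visit(FirstNameSpecification spec) { return "from Customer c where c.FirstName = '" + spec.firstName + "'"; }
        public String visit(SurnameSpecification spec)   { return "from Customer c where c.Surname = '" + spec.surname + "'"; }
    }

    public static String toQuery(CustomerNameSpecification spec) {
        return spec.accept(new HqlQueryVisitor());
    }

    public static void main(String[] args) {
        System.out.println(toQuery(new FirstNameSpecification("Jo")));
    }
}
```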

My hope is that in the future extension methods and LINQ will make this easier.

5. Performance
I've discussed eager fetching, but sometimes domain classes just aren't appropriate. For example we have cases where we display lists of objects in the GUI; we might have a grid displaying Orders that also displays the Customer name.

Loading each Order domain object and the associated Customer is going to be deeply inefficient, so for those cases we map separate presentation (or, as we call them, Info) objects.

We'd thus map an OrderInfo class to a database view that brings in the information needed from whatever tables are involved. These classes are loaded using Loaders (not repositories) to emphasize that they are not domain classes. We also only create these classes if we are sure they are needed, so you only find them where we have proved that using the domain classes was going to cause performance issues.

It is worth emphasizing that this is not a presentation model; these are read-only classes that are not in any way associated with the aggregates in the domain.


Tuesday, November 13, 2007

New Features In NHibernate 2.0

Ayende has just blogged about the new features in NHibernate 2.0, the one that we are most interested in is the ability to map multiple tables to one class. This is very useful and we've been using it for a while.


Thursday, November 08, 2007


I listen to a few podcasts but my new favorites are:

  1. Software Engineering Radio - The "enterprise" episodes are great, but I'm yet to be sold on the ones on other topics (such as real time systems).
  2. Ask Udi - Good content but the topics are so interesting that the podcasts seem too short, I'm hoping for an extended version in the future.

I have to say both are very useful.


Sunday, November 04, 2007

Evolving Coordination Layer

I've posted before about the coordination layer that sits above the domain on my project, but I thought I'd add another post about it.

Since a coordination layer is quite closely linked to a domain assembly we have more than one of them, so if we had two domain models our projects would be:

  1. Company.Customers.Domain
  2. Company.Customers.Domain.Coordination
  3. Company.Customers.Persistence
  4. Company.Finance.Domain
  5. Company.Finance.Domain.Coordination
  6. Company.Finance.Persistence
This gives me a lot of flexibility. For example, our Finance domain has a reference to our Customers domain but there is no reference going the other way (at all). This is the way we want it, but we have a process whereby to create a Customer you must already have a saved CustomerCreditCheckRequest object from the Finance assembly. We put that logic into a CustomerCreationService in the Coordination layer. This service would be very short and its form would be:
  1. Create the Customer by calling into a CustomerFactory within the Customer domain assembly.
  2. Create a CustomerCreditCheckRequest and associate it with the new Customer before saving it. This logic obviously calls off to a repository and to the Finance domain and so cannot go into Company.Customers.Domain.
  3. Return the Customer.
The class names are all made up but you see my point; note that all the interesting domain logic is kept within CustomerFactory. It does mean the coordination layer is quite highly coupled (in this case to Finance), but if that becomes a problem we can deal with it (interfaces and injection), and for now it doesn't seem to me to be at all important.
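The flow above can be sketched like this (Java, with in-memory stand-ins; as noted, all the names are made up). The point is that the coordination service is the only place that touches both domains.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the CustomerCreationService shape: the coordination layer
// couples to both domains, which neither domain assembly could do itself.
public class CoordinationSketch {

    // -- Company.Customers.Domain --
    public static class Customer {
        public final String name;
        Customer(String name) { this.name = name; }
    }
    public static class CustomerFactory {
        public Customer create(String name) { return new Customer(name); } // interesting domain logic lives here
    }

    // -- Company.Finance.Domain --
    public static class CustomerCreditCheckRequest {
        public final Customer customer;
        CustomerCreditCheckRequest(Customer customer) { this.customer = customer; }
    }
    public static class CreditCheckRequestRepository {
        public final List<CustomerCreditCheckRequest> saved = new ArrayList<>();
        public void save(CustomerCreditCheckRequest request) { saved.add(request); }
    }

    // -- Company.Customers.Domain.Coordination --
    public static class CustomerCreationService {
        private final CustomerFactory factory = new CustomerFactory();
        private final CreditCheckRequestRepository repository;
        public CustomerCreationService(CreditCheckRequestRepository repository) { this.repository = repository; }

        public Customer create(String name) {
            Customer customer = factory.create(name);                  // 1. create via the factory
            repository.save(new CustomerCreditCheckRequest(customer)); // 2. create and save the credit check request
            return customer;                                           // 3. return the Customer
        }
    }
}
```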

When Is A Domain Service Enough?

There is another "issue": what to do if we already have a coordination layer service but no domain service and want to decide where to put some code. For example, let's say the Order/Customer aggregates are in a single domain assembly and a rule is that for an Order to become active we must ensure the Customer has a valid Address. This is a cross aggregate rule, but we could enforce it in a domain service; there is no specific need for the coordination layer to be involved. So we could have an OrderActivationService in the domain, but since we already have an OrderService in the coordination layer should we put the code in there instead?

Although I originally thought of just putting the cross aggregate logic in the domain, I've been swayed by a colleague: of course we could put a call to the OrderActivationService (domain) inside a suitably named method on the OrderService, keeping the logic in the domain but giving a simple single class interface to the GUI.

Compared To Service Layer

Evan Hoff did point out that this layer could be mistaken for a Service Layer; the link is here. As we discussed, they are quite different, as the coordination layer is not responsible for things like email/transactions/session management and is very closely linked to the domain itself. That's not to say we might not also need a service layer at some stage, or that the coordination layer might not become one in time...


Sunday, October 21, 2007

xUnit Test Patterns

The first thing you notice about this book is its sheer size, which is a little daunting. However, you quickly realize that you don't actually need to read the entire thing cover to cover; the pattern based layout means you can leave a lot of the content for use as a reference if you choose to.

Before getting to the patterns the book covers several other topics, including test smells and principles. I thought these sections were as important as the patterns themselves, so I've written a short section on each of them.

Test Smells

The smells are put into several categories:
  1. Project - These could be around quality/cost/resources and are the sorts of things that a project manager might notice.
  2. Behavior - You see these during compiling or running the tests, for example tests that are fragile.
  3. Code - Problems with the quality of the test code, such as duplication.

Each section has multiple smells in it and each of the smells is discussed in a good level of detail.

Testing Philosophy

This is covered, superbly, in chapter 4. Meszaros describes how your testing philosophy will affect the way you test, and that it is thus important to understand different approaches to testing. In particular he identifies some philosophical differences:

  1. Test First or Last - Whether to do true TDD, test last, or something in between.
  2. Tests or Examples - Do we view our unit tests primarily as specifications or as tests?
  3. Test-by-Test or Test All-at-Once - Do we write a test then the code to fulfill it, work out all the tests for a class before writing the code, or do something in between?
  4. Outside-In or Inside-Out - Highest level testing leading inwards to lower level testing, or vice versa; this can affect the way you test with dependencies.
  5. State or Behavior Verification - The same sort of thing as covered by Martin Fowler in his article Mocks Aren't Stubs. His view is that behavior (mock/interaction) testing can lead to tests that do a better job of isolating the system under test (SUT), but at the cost of more difficult refactoring.
  6. Fixture Design Upfront or Test-by-Test - How to handle your test fixtures, an important topic covered in superb detail in the book.

Meszaros then lists his personal philosophy:

  1. Test first
  2. Tests are examples
  3. Write the tests one at a time, sometimes listing all the tests to help his thinking.
  4. Outside in development helps clarify the tests to write at each level.
  5. State verification is his preferred option, going for behavior verification (interaction or mocking based testing) where he needs to in order to increase coverage.
  6. Performs fixture design on a test-by-test basis.

I personally found this chapter of the book excellent and it led nicely onto the section on principles.


Meszaros uses the term principles because they are not things that everyone will agree with and are too high level to be patterns. The list is discussed at this link, but I've chosen to list the first three here:

  1. Write the Tests First
  2. Design For Testability :- Decouple ya hear.
  3. Use the Front Door First :- Overuse of backdoor verification or behavior verification and mocking can result in overspecified software (fragile tests). The author recommends using behavior verification where appropriate, such as when doing layer-crossing tests.


Subsequent sections cover test smells and patterns in more detail. I would say that some of the test patterns are less useful than others, but in general these sections are very comprehensive and will be useful whether you choose to read them in detail or just use them as a reference.

How It Ties Together

What I love about this is that as you get to this chapter you begin to see how things tie together.

For example if your philosophy leads you to prefer state based verification (as the author does) then the principle of using the front door first makes sense to you.

This decision is then backed up by the later section on Interaction Styles and Testability Patterns which discusses why and when round-trip tests are preferable to layer-crossing tests (not violating encapsulation being one advantage, with the fact that you don't get overspecified software being another).

In turn you may then hit the Behavior Sensitivity smell, which can lead you to the Creation Method and Custom Assertion patterns.

It is truly superb: you can go from one section to another and see how it all ties together, which matters because if we don't understand why we do what we do, and what the alternatives are, then we can't possibly choose the best path.

Even better, the author forces you to see that there are multiple ways to test and each is valid, reminding you that you must be open minded about other people's approaches.

This really is an excellent book, and I'm not alone in thinking so: Sam Gentile has posted about the book in the past and the reviews on Amazon are excellent.

I think I've learnt a lot from the book; it's made me question some things and confirmed some of the ways we test. For example we currently use state based testing of our domain using the front door, which the author confirms is his preferred approach (it's also the approach most DDD practitioners go for). However when someone joined our team recently she pointed out that our testing of our coordination layer could be improved if we focused more on interaction testing, especially as they are usually layer-crossing tests.

If I have one issue with the book it's that in some places it doesn't link together too well. For example smells/principles/philosophies/patterns are related, but sometimes the relationships are not made clear enough. A table linking each philosophy to principles and then to patterns and strategies might be worth coming up with...


Friday, September 21, 2007

Good Articles - DDD/NHibernate/Resharper

These posts aren't necessarily new but I did find them interesting:

  • Getter Eradicator - Anything Martin Fowler writes is worth reading but this one in particular is good. I've tried reading Holub's patterns book, but the bit at the start about encapsulation annoyed me enough that I stopped. As this article makes clear, layering and the need to keep things that change together in one place (SRP) sometimes do affect the way you design, meaning you do want to have getters.
  • Don't Build Domain Objects That Can't - I agree with almost all of this and thought it was a good summary.
  • Smart DTO - We don't use this approach but its always an option.

  • Fetching Strategies - Udi blogged about this a while back and has now provided more details.

  • Resharper - I'm a basic user but this guide is going to be my savior.


Friday, September 14, 2007


I've been following BDD, from a distance, for a while and was reading a couple of good articles about it, including one by AgileJoe and one by Dan North. Scott Bellware also has a lot of posts on BDD but they're mainly quite abstract; at this stage I'm more interested in people showing how BDD can work and how to do it well rather than hearing about how great it is, but you may want to give them a look.

The idea interests me but I have reservations, which I wanted to write down.

Whilst BDD and TDD go well together I don't think that user stories are the right way to specify the expected domain behavior.

To me user stories are useful for getting the high level, almost workflow, related requirements and acceptance tests. These are supposed to be at a very high level and shouldn't have any details about the user interface or below.

In fact the information you gather from a user story is quite different from the information you need when defining a domain model and in some cases your users (who specify the user stories) may not be the people specifying the required behavior of the domain (the domain experts).

In his excellent book Mike Cohn describes this, including the fact that users can provide good user stories whilst it's the domain expert you want to speak to when defining the domain model.

Anyway I'm thinking this is just terminology, Dan North does not really refer to the stories as user stories (though from my reading Scott Bellware does). For some reason if I just consider them as stories I'm far happier, though it is also worth noting that Dan points out that they can be user stories.

User Involvement
Personally I'm 100% convinced that where I work there is no chance of us getting our business experts to work with FIT or something like NBehave, and to be honest I'm not sure I like the idea of them having to specify their tests inside an IDE.

Conclusions

As usual, none. I like the idea and am going to look at it and the FIT-style approach, to see what happens. StoryTeller in particular is another interesting piece of software and I'm not yet sure how it overlaps with BDD, particularly as some of the work on BDD is to try and blur the lines between different types of tests.


Thursday, August 23, 2007

NHibernate Gotchas - Living With Legacy Databases

As we've used NHibernate more and more we've begun to hit against issues, particularly when working against a "legacy" database. So if you are working against a greenfield database then you're probably fine, you can design from the domain down, but if you're not then I think two issues in particular are worth bearing in mind:

  1. Inability to map inheritance at the component level.
  2. Inability to map unidirectional many-one in a satisfactory way in some cases.

Before I continue I should say two things. The first is that the class names are unimportant, I made them up as I don't want to use real class names or examples from our project. Secondly I am not going to provide solutions, I don't know if it’s even possible for an ORM solution to handle these issues better than NHibernate does but you should still know about them.

Components And Inheritance

NHibernate component mappings are great and can improve your domain model but there are limits.

Take the example of a Customer table that has 50+ columns; don't ask me why, but it does. You want to redesign the table but that's something to be planned and executed carefully: DB refactoring is painful, especially when you have multiple legacy systems and reports/DTSs working against the database.

So for now you want to design your domain model to have a core Customer class and then lots of little inheritance hierarchies off it. For example if the Customer came directly from your Web site then maybe 5 fields in the database are used, and if they came indirectly it's a different 10.

In each of the two cases you have different behavior and certainly different validation requirements, so you decide to have an ICustomerSource interface (making this up as I go) with two subclasses called DirectCustomerSource and ExternalCustomerSource. The Customer will then have a reference to an ICustomerSource which it will be passed in its constructor.
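A minimal sketch of the shape described above; the names are, as noted, invented for this post:

```csharp
// Sketch only: each kind of source carries its own data and validation.
public interface ICustomerSource
{
    bool IsValid();
}

public class DirectCustomerSource : ICustomerSource
{
    public string CampaignCode { get; set; }

    public bool IsValid()
    {
        return !string.IsNullOrEmpty(CampaignCode); // direct-signup rules
    }
}

public class ExternalCustomerSource : ICustomerSource
{
    public string PartnerName { get; set; }

    public bool IsValid()
    {
        return !string.IsNullOrEmpty(PartnerName); // partner-feed rules
    }
}

public class Customer
{
    private readonly ICustomerSource source;

    public Customer(ICustomerSource source)
    {
        this.source = source;
    }

    public ICustomerSource Source
    {
        get { return source; }
    }
}
```

A component mapping has to name one concrete class for Source, which is exactly where this model hits the wall.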

The problem is you cannot do this because your mapping would have each subclass of ICustomerSource mapped as a component, meaning that when the Customer is saved so is the ICustomerSource. However, the component mapping does not support inheritance so, as far as I know, you are slightly stuffed.

You could certainly subclass Customer and get around it that way, but that doesn't always work and it certainly wouldn't work for us because we actually want to have multiple little class hierarchies hanging off Customer.

I've logged this "issue" in the NHibernate JIRA.

Unidirectional Many-One Associations

Let’s say we have Order and OrderItems. The Order is the root of an aggregate containing OrderItems. In this case we might actually want to model this in the domain by having the Order hold a collection of OrderItems; maybe not for this exact situation, but we often do. In the domain we want to make the association unidirectional though: we can go from an Order to an OrderItem but we don't want the reverse association.

Our database design has the ID of the Order as a foreign key in the OrderItem table. Many people see this as a valid database design, so it’s quite possible this is the way you have done it.

The problem is this isn't going to work unless we do one of two things:

  1. Make the foreign key in the OrderItem table nullable.
  2. Add the association from OrderItem to Order.
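For concreteness, the aggregate shape in question, sketched with hypothetical members:

```csharp
using System.Collections.Generic;

// Sketch of the aggregate described above: Order knows its items, but an
// OrderItem has no reference back to its Order.
public class OrderItem
{
    private readonly string productCode;
    private readonly int quantity;

    public OrderItem(string productCode, int quantity)
    {
        this.productCode = productCode;
        this.quantity = quantity;
    }

    public string ProductCode { get { return productCode; } }
    public int Quantity { get { return quantity; } }
    // Deliberately no Order property here.
}

public class Order
{
    private readonly IList<OrderItem> items = new List<OrderItem>();

    public void AddItem(OrderItem item)
    {
        items.Add(item);
    }

    public IEnumerable<OrderItem> Items
    {
        get { return items; }
    }
}
```

The OrderItem table still carries the Order's foreign key, so the ORM has to either write that column in a second statement (hence the nullable requirement) or see the association from the OrderItem side.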

This topic is discussed in depth in the forums:

  1. Describes why Hibernate works that way.
  2. Another post with more context.
  3. Yet another post on it.

Of course you can just use one of the workarounds, but it's worth knowing that this issue exists as it can catch you out. I've logged an issue in the NHibernate Jira about it.

Neither issue should convince you that NHibernate is a dead loss, it is still magic but it does have limits and you need to be aware of them.


TypeMock v IoC (Round 1)

Lots of discussion has kicked off about TypeMock v IoC:

  1. Pro TypeMock - I don't agree with some of this, though I can see the author's point. For me the arguments for dependency inversion are all in Robert Martin's excellent book.
  2. Good Summary - Agreeing with both camps seems sensible :)
  3. Pro DI - Good summary of where DI is useful.
  4. Pro DI 2 - Another discussion on where DI can help.
  5. DI In Context - Really good post with the start of a (hopefully) long and important discussion of how DI relates to different layers.
  6. DI and other patterns - Interface programming, SRP and the rest.
  7. More Pro DI - Another good article on it.

Ultimately I think both have their place and some of the comments are also insightful; in particular I liked Mats Helander's comments on Ayende's post:

When writing an application, you may well be able to analyze in advance to see where you need pluggability points and provide interfaces there. Adding interfaces where you don't anticipate any use for them would of course be a violation of YAGNI.

I really like interface based programming and buy into it. In particular Robert Martin's book (one of my favorite programming books) really goes into depth about how to decouple systems. However there is a cost, and if you apply the principles too widely I think you get a mess, in particular when you are talking about your core domain/business classes.

I also don't fully agree with the vociferous arguments against TypeMock; I think it can be quite useful where you've looked at your design and decided you're happy with it but you also want to do a bit of mocking.


Testing the Domain

We've been having a lot of discussion in our company about how to test the domain and we've learned a lot from the discussion that I thought was worth sharing.

State Testing
My general view is that testing your domain classes against their public interfaces using state testing is a valid choice. The way we have chosen to do this is:

  1. Only use real domain objects in the tests, no mocking.
  2. In general test against the public interfaces of public domain classes.

Eric Evans contributed to a DDD forum thread about this topic and I have to say I agree completely.
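As a sketch, a state test of the kind described above, NUnit-style; the domain types and members here are hypothetical:

```csharp
// Hypothetical state test: real objects only, asserting on observable state
// through the public interface, with no mocks anywhere.
[TestFixture]
public class OrderTests
{
    [Test]
    public void Placing_an_order_moves_it_to_pending()
    {
        // Arrange: build collaborators through their public API.
        Customer customer = new Customer("Acme Ltd");
        Order order = new Order(customer);
        order.AddItem(new OrderItem("WIDGET-1", 2));

        // Act.
        order.Place();

        // Assert: verify state, not which internal calls were made.
        Assert.AreEqual(OrderStatus.Pending, order.Status);
    }
}
```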

Where it does get interesting is where we are using state (as in state pattern) based validation in our domain objects. Nilsson summarizes the problem:

"In reality, adding rules also creates problems with testing. You need to setup instances to correct states. You can write test helpers (such as private methods) for dealing with that, but it's still a hindrance."

The solution we've gone for up till now is an ObjectMother based approach, so for instance we have a CustomerObjectMother that can produce Customers in all sorts of different states. So if you are testing an Order and you need a Customer in the Active state then you can call into the CustomerObjectMother to get it. The object mother classes have all sorts of different creation/attachment methods.
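A minimal ObjectMother sketch; the Customer members used here (Activate, Suspend) are hypothetical stand-ins for whatever public API walks the object into a state:

```csharp
// Hypothetical ObjectMother: centralizes the fiddly setup needed to get a
// Customer into a particular state, so individual tests don't repeat it.
public static class CustomerObjectMother
{
    public static Customer CreateActiveCustomer()
    {
        Customer customer = new Customer("Test Customer");
        customer.Activate(); // walk the object into the Active state
        return customer;
    }

    public static Customer CreateSuspendedCustomer()
    {
        Customer customer = CreateActiveCustomer();
        customer.Suspend("payment overdue");
        return customer;
    }
}

// In a test that needs a collaborator, not the SUT itself:
// Order order = new Order(CustomerObjectMother.CreateActiveCustomer());
```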

This approach is definitely not without its flaws, including:
  1. The domain should be enforcing/encapsulating quite a lot so testing it can be more complex than testing the individual (encapsulated) classes within the domain.
  2. If we modify the Customer code and break it then it's possible lots of tests will break because they use the Customer class in the SUT setup phase (e.g. we need a Customer even though we're testing the Order).

The main advantages of the approach are:

  1. Tests at the public interface level mean that we can refactor within the domain safely because they act as a good safety net.
  2. They point out places where the design is a bit overly complex (e.g. if we have lots of Customer states then the ObjectMother has to handle them all).

Decision Time

We've decided to have two levels of tests within the domain, we haven't got good names but essentially they will be:

  1. Domain Tests - Testing at the public interface with real objects, mainly state testing.
  2. Interaction Tests - Detailed white box tests going right into the internal classes within the domain, mocking and doing interaction testing as appropriate.

This seems a good compromise, you are testing the public interface so all us TDDers can refactor happily below the public interface level but we've got detailed tests for the smaller classes.

The next question is how many of each test to have; hopefully good coverage from both and a lot of reuse, but we'll have to see how that goes.

Interaction Tests

However we have to decide how to mock the domain classes. I'm not at all in favor of covering my domain classes in interfaces; some people do it, but there are valid reasons not to, not least that the interfaces we put in purely for mocking hide the real, meaningful abstractions within the domain.

Plus in this case we'll often be mocking classes internal to the domain (as in the internal accessibility in .NET). We can do this because we use InternalsVisibleToAttribute to give our unit tests access to the internals of the domain, but do we really want to design our internal classes so that they are more easily mockable, and then use dependency injection?
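The attribute itself is standard .NET; only the assembly name below is an example:

```csharp
// In the domain assembly (e.g. AssemblyInfo.cs). The named test assembly
// gets access to internal types; "MyCompany.Domain.Tests" is hypothetical.
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyCompany.Domain.Tests")]
```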

An example might help here. Let's say our Customer class uses a custom collection called CustomerAddressCollection. This collection has complex logic to track the temporal nature of addresses and the fact that addresses have different purposes (home/work). Users of the Customer don't even know about this custom collection though, as we've encapsulated its very existence.

We've tested CustomerAddressCollection but we want to test that Customer works correctly with it, and in this case we want to write an interaction/mocking test. It seems to me we have two choices:

  1. Give CustomerAddressCollection an interface then allow it to be injected into Customer and use Rhino Mocks for the mocking.
  2. Use TypeMock to mock CustomerAddressCollection, mock the methods that we expect Customer to access and then check Customer accesses it correctly.
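To see the cost of the first option, here is roughly what it would force on the domain (all names hypothetical):

```csharp
// Roughly what option 1 forces on the domain: an interface and constructor
// injection that exist purely so the collection can be mocked.
public class Address { }

public interface ICustomerAddresses
{
    void Add(Address address);
}

public class Customer
{
    private readonly ICustomerAddresses addresses;

    // The test-only seam: callers must now supply the collection.
    public Customer(ICustomerAddresses addresses)
    {
        this.addresses = addresses;
    }

    public void MoveTo(Address newHome)
    {
        addresses.Add(newHome); // the interaction a mocking test would verify
    }
}
```

In the test you would hand Customer a Rhino Mocks mock of ICustomerAddresses and verify Add is called; the price is an interface whose only client is the test.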

For now I'm thinking the second option is better.


Tuesday, August 21, 2007

The Domain And Testing Dependent Layers

I've blogged separately about the arguments for mocking domain classes when testing the domain itself.

However we also need to consider how to deal with situations where we want to be able to mock out the domain when testing another layer, in particular we need to consider it because Ayende has just posted a good blog article that relates to the topic.

I'll take the example that I put in Ayende's blog. Let's say I have an application layer service that calls a repository, then an infrastructure service, then a domain service/entity, and then returns. I want to test the application layer service. In that case I can inject the repository and infrastructure service to do interaction testing, but would you also inject the domain service/entity?

I think there are a couple of options:

  1. Interfaces - On any domain objects we need to mock.
  2. Virtual members - Any member we need to mock would be virtual.
  3. No Domain Mocking - Don't mock domain classes when testing other layers.
  4. TypeMock - Oh dear, into the lions den....

Interfaces

An interface on every domain object that we need to mock: sure, it's an option that some people go for, but I'm with the group that argues you should only use interfaces in your domain if they are useful. I've blogged about this before, but do I really want to be able to replace one Customer with another? Probably not, so it's not a useful extension point to introduce and it ends up introducing complexity I don't need.

We've also gone for a pure (or as pure as we can get) persistence ignorant approach, so I don't want to then have to mess with my domain to get it to fit in with IoC and mocking frameworks.

Virtual Members

Interfaces are not the only way to mock domain objects, Rhino Mocks can mock virtual members too and since most of our aggregate roots are lazy loadable their members are virtual (NHibernate requires this).

This would seem to me to be a half-baked solution; if we're going to mock, why not use TypeMock?

TypeMock

It is a hugely powerful tool and you can misuse it. However in this case we'd have chosen to use it in a way that lets us maintain the design we want, which to me is an acceptable usage.

No Domain Mocking

This should be an option: why not let the application layer service call straight into the domain service/entity?

Our domain is POCO and in fact very rarely calls out to the repositories directly. This isn't the way everyone goes as can be seen here and here.

We can get away with this thanks to NHibernate; in fact the only time our domain classes go to repositories is to get reference data, and when we go with that approach we hide it behind a Registry/Service Locator. By and large this works fine; where it doesn't we use a special coordination layer, part of the domain but with different responsibilities.
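The registry idea can be sketched as follows; the repository interface and names are hypothetical:

```csharp
// Hypothetical Registry: domain code asks a well-known static class for the
// few repositories it needs, instead of having them injected.
public interface ICountryRepository
{
    // e.g. Country FindByCode(string code); omitted to keep the sketch small.
}

public static class DomainRegistry
{
    private static ICountryRepository countries;

    public static ICountryRepository Countries
    {
        get { return countries; }
    }

    // Called once at startup; tests register a fake instead.
    public static void RegisterCountries(ICountryRepository repository)
    {
        countries = repository;
    }
}
```

Tests stay simple because they register a fake implementation up front, with no container involved.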

So using the domain should be simple.

Conclusions

So far we haven't needed to mock the domain classes when testing at a higher level, if we do I'm thinking that using TypeMock is a good option.


Friday, August 10, 2007

NHibernate Gotchas

Although I love NHibernate we do seem to be hitting up against some of its problems these days, I've discussed some of them here but I wanted to document a few more.

Reference Data

You cannot easily map lists of essentially static data.

Dictionaries/Collections Containing Other Collections

We are currently restructuring our domain model massively, one of the main reasons is so that we can exploit the Party archetype pattern. We were able to redesign the DB at the same time as the domain so we didn't hit up against the normal set of issues that stop us redesigning our domain without changing the DB. However we did meet other issues.

For example in the party pattern a particular role (Customer) could be involved in multiple relationships. We need to associate each role with its relationships, and it would seem sensible to have a dictionary keyed by PartyRelationshipKind (an enum) where the value was an IList of PartyRoles.

The performance would be good, it'd be clean domain-wise and we'd all be winners. The only slight problem is that it isn't supported, as is discussed in the forum entry. We ended up basically mapping the relationships into one big IList and then iterating through it when we wanted those that related to a particular PartyRelationshipKind.
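The workaround we ended up with, sketched with hypothetical names (the enum values are invented):

```csharp
using System.Collections.Generic;

// One big mapped IList of relationships, filtered by kind on demand,
// instead of a Dictionary<PartyRelationshipKind, IList<PartyRole>>.
public enum PartyRelationshipKind { CustomerSupplier, Employment }

public class PartyRelationship
{
    private readonly PartyRelationshipKind kind;

    public PartyRelationship(PartyRelationshipKind kind)
    {
        this.kind = kind;
    }

    public PartyRelationshipKind Kind { get { return kind; } }
}

public class PartyRole
{
    private readonly IList<PartyRelationship> relationships =
        new List<PartyRelationship>();

    public void Add(PartyRelationship relationship)
    {
        relationships.Add(relationship);
    }

    public IList<PartyRelationship> RelationshipsOfKind(PartyRelationshipKind kind)
    {
        IList<PartyRelationship> matches = new List<PartyRelationship>();
        foreach (PartyRelationship relationship in relationships)
        {
            if (relationship.Kind == kind)
                matches.Add(relationship);
        }
        return matches;
    }
}
```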
Generics

Although NHibernate supports generics the support is limited, in particular you may come upon the following issues:

  1. Mapping a generic class - Currently not supported. You can work around this by mapping a concrete class that inherits from the generic base class.
  2. Mapping classes inheriting from a generic baseclass - A JIRA article can be found here; the solution is to map each of the subclasses separately.
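The first workaround can be sketched as follows (hypothetical types):

```csharp
// NHibernate (at the time) couldn't map an open generic type, but it can
// map a closed, concrete subclass of one.
public abstract class Entity<TId>
{
    public TId Id { get; set; }
}

// Map this concrete class in the .hbm.xml instead of Entity<TId>:
public class Customer : Entity<int>
{
    public string Name { get; set; }
}
```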


Tuesday, August 07, 2007

Linq And Specifications

We've been puzzling about how to improve our use of the specification pattern.

In particular, currently if we have a specification CustomerNameSpecification that acts over objects in memory then we want it to be joined up to the associated GetCustomersWithName method on the CustomerRepository.

Up till now this has been difficult, we don't want NHibernate to leak into the domain and so we've just kept the specifications as operating over in-memory objects and the associated queries on the repositories use ICriteria/HQL.
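To make the split concrete, here is roughly what the in-memory half looks like (a sketch with a minimal hypothetical Customer):

```csharp
// In-memory specification, kept free of NHibernate types.
public class Customer
{
    private readonly string name;

    public Customer(string name)
    {
        this.name = name;
    }

    public string Name { get { return name; } }
}

public class CustomerNameSpecification
{
    private readonly string name;

    public CustomerNameSpecification(string name)
    {
        this.name = name;
    }

    public bool IsSatisfiedBy(Customer candidate)
    {
        return candidate.Name == name;
    }
}

// Meanwhile the repository exposes a parallel GetCustomersWithName method
// implemented with ICriteria/HQL; keeping the two in sync is the problem.
```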

Then Linq came along. Now I'm no expert on it (I tried it out an age ago and since then I've only been reading about it), but it does look interesting and there have been a few interesting articles about it, both from a single blog:

  1. Persistence Ignorance And Linq - Good introduction to how Linq can support DDD in .NET.
  2. Linq and Specifications - The section "Specifications and Repositories" is interesting. There is obviously a tradeoff as the specification gets more complex/abstract, but the advantages could make it worthwhile.

Anyway its interesting stuff.

The specification approach is great, and the idea of having in-memory repositories using Linq is covered in detail too. Neither is a new idea, but both could be very important in the future and it's good to have articles on them. Also, with Linq coming to NHibernate it'll be interesting to see what is possible.

Part way down one of the articles is a link to an example app, will have to look at it at some stage.


Friday, June 22, 2007

Specification Pattern Implementation

We've been discussing specifications as a way of expressing what objects we need to get back from repositories and we found lots of interesting articles:

  1. In Memory Filtering Of Repository Results - This approach seems to get all the objects then use the specification to filter in memory.
  2. Linq - There must be loads of other posts on this sort of thing but this one's very good.
  3. .NET Example - I personally have a few issues with this approach, it's not high enough in abstraction, but I presume you could build quite abstract specifications on top of it.
  4. DDD Discussion 1 - Mainly focusing on what Linq brings us.
  5. DDD Discussion 2 - The reality of Query Object v Specification using current technologies; it's a mother of a discussion but interesting nonetheless.
  6. Ayende - Proposes creating ICriteria based queries from the domain. Personally I minimize calls from the domain to the repositories, just to make testing easy. In general though, although we use NHibernate, our repositories don't expose NHibernate-specific features on the interface, partly for simplicity but mainly so that if we did need to walk away from NHibernate we'd have some chance of managing it.
  7. Ayende 2 - Doesn't really give much away.

My reading of the whole thing is that specifications are great for validation rules (we use this approach), great for selecting things from in memory collections and great when constructing new objects in factories.

However trying to use a single specification that can evaluate in memory objects and that can spit out the necessary HQL/ICriteria/SQL for use by the repository is not going to be simple. The best I think most people can do is to have a method to evaluate in-memory objects and an HQL or query object to do the equivalent against the DB. Or wait for Linq and see how that all fits in.
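One way Linq could eventually join the two halves up is to express the rule once as an expression tree, compile it for in-memory checks, and hand the expression to a Linq-capable repository for querying. This is a sketch of the idea, not our current code, and the Customer type here is a minimal stand-in:

```csharp
using System;
using System.Linq.Expressions;

public class Customer
{
    private readonly string name;

    public Customer(string name)
    {
        this.name = name;
    }

    public string Name { get { return name; } }
}

public class CustomerNameSpecification
{
    private readonly string name;

    public CustomerNameSpecification(string name)
    {
        this.name = name;
    }

    // A Linq repository could translate this expression to SQL/HQL.
    public Expression<Func<Customer, bool>> AsExpression()
    {
        return customer => customer.Name == name;
    }

    // The same rule, compiled, evaluates in-memory objects.
    public bool IsSatisfiedBy(Customer candidate)
    {
        return AsExpression().Compile()(candidate);
    }
}
```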

Anyway on the more general topic of the specification pattern there are some great links, two of which are:

  1. Fowler / Evans - Two experts at work. I just love the way they write, with the +/- approach, every time I read one of their documents or a section in their books I learn something.
  2. In Memory - Very nice approach to using specifications to make decisions, .NET specific. Great post.


Wednesday, June 20, 2007

Titan - Another DI Approach

Another DI approach called Titan is now in beta, this time with a focus on simplicity.


More Dependency Injection Stuff

There seems to be increasing talk about dependency injection so I've been reading about it; this in particular is a superb article on it. I'm still not 100% sold on it for our application though, for a few reasons:

  1. I find the service locator and registry approaches just as testable and a good deal simpler.
  2. We don't really want to be doing some of the clever things that DI would allow, such as using the decorator pattern for repositories and then adding in functionality by reconfiguring.

Still its a great topic and anything Oren writes is worth reading.

Fowlers original article on it is also obviously worth a look.


Wednesday, June 13, 2007

Interfaces For Domain Classes

It's amazing what you find on the Web. I did a search to see how people handle mocking out dependencies within the domain (for example mocking a Customer when testing an Order) and came upon loads of interesting DDD articles.

The first one covers whether domain classes should use interfaces, I agree with the author and view interfaces as something we should only add when they really add value.

For example if I create an IOrder interface is that really improving my design? It might make mocking easier, but in my view it's only worth creating the interface if it has been defined for a real client of the class. So if we have multiple types of order, or if we want to break the complex interface of an Order down into multiple separate (and targeted) interfaces, then great. However if it is just so I can use mocking to get in mock orders, or because I believe that interfaces really serve as good contracts, nah. Not when it complicates the domain so much and hides the places where interfaces really are useful.

As someone correctly says in the comments for the post, the need to version interfaces does not necessarily apply to the sorts of interfaces you define in the domain. Sure the interfaces are published, but if you only have 1-3 applications using the interface then surely it's easier to change them.

I'm also not 100% sure I agree about intention-revealing exceptions. In our case when you ask a domain object if it's invalid it will do some validation and return a collection of BrokenDomainRules (or something like that). Inside the collection the rules are intention revealing (e.g. DateIsDuringWeekRule), but the exception that you get when you try to do something without meeting the preconditions (maybe make a state change that the object is not ready for) is just a DomainValidationException. The exception does contain the broken rules though; for me that's enough as it's the rules (not the exceptions) that I think are part of the domain language.
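The scheme just described can be sketched like this (the names approximate the ones used above, and the details are invented):

```csharp
using System;
using System.Collections.Generic;

// Intention-revealing rules; the exception stays generic but carries them.
public abstract class BrokenDomainRule
{
    public abstract string Description { get; }
}

public class DateIsDuringWeekRule : BrokenDomainRule
{
    public override string Description
    {
        get { return "The date must fall on a weekday."; }
    }
}

public class DomainValidationException : Exception
{
    private readonly IList<BrokenDomainRule> brokenRules;

    public DomainValidationException(IList<BrokenDomainRule> brokenRules)
        : base("The operation's preconditions were not met.")
    {
        this.brokenRules = brokenRules;
    }

    public IList<BrokenDomainRule> BrokenRules
    {
        get { return brokenRules; }
    }
}
```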

Having said all that, this is also a good post, pushing for using interfaces in the domain. The truth is I think interfaces are useful if they're cut down and designed from the client's perspective, as always.


Tuesday, June 12, 2007

Dependency Injection And The Domain

I've always been slightly against using DI to inject repositories into the domain on the code base I work on, because I think it increases complexity and obviously you are then coupled to the DI framework. I thus use a custom approach based on Fowler's Registry pattern.

Having said that there is an interesting discussion of the topic in an excellent blog entry by one of the contributors to the DDD forum, and the auto mocking container approach would make using DI and integrating it into testing easier.

I still think the registry based approach was the correct one for us, its not particularly new or fancy but it works well.

Having said that I guess it is worth considering Windsor, or something like Microsoft's DI approach, as an alternative... though I still prefer that my domain and its unit tests have no association with something like Windsor, as I want to make the domain and its tests as simple as possible.


Tuesday, June 05, 2007

Team Project - Areas And Iterations

Interesting blog entry about how to structure your team projects.

Good on Eric Lee for bothering to publish about it but the lack of guidance from Microsoft about how to use Team System is a major issue, and even now I don't really know how to apply the structure he is suggesting.

In general I think that Microsoft are making it difficult for the average team trying to use Team System well because they just haven't told us how.


Saturday, June 02, 2007

Ruby On Rails Screencasts

Finally got round to listening to the Ruby on Rails screencasts. The one about database migration was very interesting. You have to put up with the noise of the keys on the keyboard being hammered and a bit of sniffing, but it is worth it to see how it works.

In general although Ruby on Rails interests me I'm not going to be giving up on DDD any time soon. I still think viewing your domain model as "wrapping the database" (the description on the Ruby on Rails front page) and using the database as your starting point is in many cases the wrong way to go. We also couldn't use it on the project I am on because database schema changes are going to be expensive/difficult, so we want the domain to vary independently from the database (as far as possible). Ruby on Rails still interests me, but I think so far I agree with the author of this post about Ruby on Rails being a different mentality from DDD.

The question of learning Ruby the language is more difficult. It is an attractive language and I like the idea of using it for DSLs, but I think the author of this post on whether it's worth learning another language is spot on: I'll be better served spending time on TDD/OOP/SOA/DDD instead.


Friday, May 25, 2007

Domain Driven Design - Why we need to keep it simple

I'm really happy to see how much discussion there is of DDD these days and it really helps that technologies like NHibernate make it work. I'm also hopeful that ADO.NET Entity Framework will support this style of development whilst also addressing lingering issues, including how your domain model can live happily in a system that has lots of reports or a legacy database.

However the content of some of the discussions is starting to surprise me as instead of discussing real domain modeling issues many seem to want to focus on technology.

The first thing I should say is that discussing DDD without discussing persistence, usually ORM, is silly. Persistence is pervasive in many systems so you need to discuss how you will handle it.

However the same does not go for things like AOP; it's perfectly possible to discuss DDD without going near AOP. In fact I would say that it is preferable to avoid discussing these complex options every time we discuss DDD, for a few reasons:

  1. It will turn off too many readers.
  2. The more complex technologies are sometimes not required.
  3. There is a risk that many people will end up with overly complex solutions if they start out with the view that they have to use something like Spring.NET to make DDD a reality.

Having said that, it's obviously good that we have the option to use things like AOP, but I personally prefer to avoid them until I really feel they are needed.


Thursday, May 24, 2007

Sandcastle
Just tried out Sandcastle and it seems to have progressed a long way. After downloading it I'd recommend you look at the wiki, not least because it lists some GUIs for it, which you'll need as the normal procedure for generating the help is a bit complex. I downloaded the NDoc-style one, which seemed quite good.

The help file it produces is also quite professional but it was fairly slow considering how small the codebase is, though I seem to remember NDoc wasn't too rapid either.


Tuesday, May 22, 2007

Spec#
Great podcast about Spec#; there is also a link you can use to ask MS to add the features to C#.

Until the features are in C# I don't think I'll be looking at using them but they do sound useful.


Sunday, May 20, 2007

Problems With Team System

The Cost

I don't mind Microsoft charging large quantities for VSTS for developers, it's par for the course. However there are some things I do resent:

  1. To do basic things like maintain the (AWFUL) VSMDI file or view code coverage results you seem to need to have the tester edition.
  2. If your users want to interact with Team System, using the portal or TeamPlain, they need expensive CALs
  3. There is no longer any database support, so you need to have Team System for database developers.

To me it's the first two that are an issue, and they really should be dealt with.

The Quality
I've found that the quality of Team System isn't that great, but I guess you expect that from the first version of such a large project. Features that we consider key, such as unit testing and check in policies, just do not work as well as we'd expect.

However what bothers me is that when I log these issues using Microsoft Connect all I get is "cannot reproduce". I can't argue with that, but there seem to be a few problems:

  1. Far from fixing the issues I've reported, SP1 seems to have introduced new ones.
  2. When you do get problems you get very vague error messages that give you no chance to fix the problem yourself and don't give Microsoft the information they need to fix it either. Take the date/time related error message that I get when my testing check-in policy fails (which it does all the time).
  3. Very old issues like this and this have been reported by several people, but Microsoft make no attempt to indicate whether they are doing anything about them.
As an example of an error message...

...needless to say the related issue was immediately marked as not reproducible.

The VSMDI File
I hate it: it's a maintenance nightmare that we need to keep around because the check-in policies and code coverage functionality need it.

Release Cycle
With open source software you tend to get releases relatively regularly and on a good project each release improves the quality. I'm not sure if I would say this happens with VSTS.

The online documentation is quite often very poor and you end up looking at the odd blog entry to get information. As an example we wanted to have one of our team system builds do the following:
  1. Get the database scripts
  2. Build them
  3. Run the integration tests

Simple, you might think. So how do we get the database scripts? Well, we quickly identified that we needed to use CoreGet. Do a search in MSDN and you get this page with this example:

From that, we thought we couldn't specify a path within the team project to get, so we couldn't say that we only wanted the "CrmDatabase" within the "Database" team project.

Our search continued and eventually we found this blog entry. Superb, just what we needed. So why didn't MSDN provide us with this information?

To be honest this is no isolated case; in general I'd say that 80% of the time we need Team System information we find it in blogs rather than in MSDN. Not perfect.

Open Source
Since we have so many problems getting testing and code coverage to work well in Team System I'm wondering why any company evaluating VSTS would choose it over open source.

I guess the whole integrated environment idea is a good one but when many of the integrated features don't work particularly well, and when each of them is expensive, and when the open source alternatives are in most cases at least as good...


Tuesday, May 08, 2007


Brian Button's blog had a link to an excellent video on BDD by Dave Astels.

I wasn't familiar with the term, but I really enjoyed Astels' book on TDD so hearing about BDD was very interesting. BDD just seems to be a refocusing on doing TDD as it was supposed to be done. I have a few issues with the idea though:

  1. We already have a lot of acronyms, I'm not entirely sure a new one will help especially because the difference between TDD and BDD is quite subtle.
  2. TDD serves several purposes, one of which is testing. Sure, it's testing the contracts, but the tests themselves are very useful when you redesign and refactor.

I definitely agree with a lot of it, especially things like Should/Can based test naming and not mapping your test classes one to one with the classes being tested.

Having said that, it's a big step from doing TDD well to coming up with a completely different concept. Anyway, the video itself is superb.


Monday, May 07, 2007

NHibernate and Coarse Grained Locking

One of the requirements of the aggregate pattern is that the aggregate has a root and a boundary and that the aggregate as a whole is the unit that we consider when looking at concurrency.

This has bothered me for a while, as NHibernate's concurrency is based on a version column at the row level, which is a completely different way of doing things.

For example if a Customer has a CustomerIdentification and both are in one row then we're OK: modifying either and saving the Customer will cause NHibernate to check that the row hasn't changed since we loaded it.

But let's say that the CustomerIdentification is in its own table. Now if we modify the CustomerIdentification and save, the save will pass even if someone else has modified the Customer. That can cause problems (see Eric Evans' book on DDD for concrete examples of the issues it can cause).
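To make the problem concrete, here is a minimal sketch (in Java rather than C#, with made-up names; this is not NHibernate's API) of row-level optimistic locking: because each row carries its own version, a save of the child row passes even though the aggregate was changed by someone else.

```java
// Hypothetical model of per-row optimistic locking.
class Row {
    int version;

    // Saving succeeds only if nobody bumped this row's version since we loaded it.
    boolean save(int versionWhenLoaded) {
        if (version != versionWhenLoaded) return false; // conflict detected
        version++;
        return true;
    }
}

public class RowLevelLocking {
    public static void main(String[] args) {
        Row customer = new Row();               // aggregate root, its own row
        Row customerIdentification = new Row(); // aggregate part, its own row

        // User A loads the aggregate and remembers both row versions.
        int aIdentificationVersion = customerIdentification.version;

        // User B modifies the Customer row and saves successfully.
        customer.save(customer.version);

        // User A modifies only the CustomerIdentification. Its row version is
        // unchanged, so the save passes even though the aggregate as a whole
        // was modified by user B in the meantime.
        System.out.println(customerIdentification.save(aIdentificationVersion)); // true
    }
}
```

The conflict user A should have seen goes completely undetected, which is exactly the aggregate-level problem described above.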

I looked it up in Nilsson's "Applying Domain-Driven Design and Patterns" and was in luck, as page 127 had the following:

" Order including its OrderLines is a concurrency conflict detection unit of it's own or - more conceptually - an Aggregate...".

He then goes on to say that because the aggregate is the unit as far as concurrency goes, we don't need to worry about other users interfering. Exactly what we need. I have read the book before, so I was surprised that it had a solution to this and I'd missed it. I looked on; page 309 indicates two things:

  1. We want to use coarse-grained locks at the aggregate root level.
  2. We want to use a version field on the aggregate roots

The second bit confused me, but it's only for optimistic offline locking (though I would have thought you'd use that for all objects or for none). It doesn't seem to get me any closer to actually getting coarse-grained locking working though, so I skipped to page 341, which covers how NHibernate works with concurrency; it tells me that NHibernate doesn't give us coarse-grained locking. And that's all: no more reference to how we can get this important capability, even though the earlier examples assumed that we had a way of doing it. Back on the shelf for you.

Having said that, I was particularly interested in the discussion of coarse-grained locking of aggregates in Fowler's excellent "Patterns Of Enterprise Application Architecture". On page 439 the idea of a "root lock" is introduced: the root of the aggregate would have a lock and all parts of the aggregate would use it.

The book points out that for this to work we need to be able to navigate from any part of the aggregate back to the root, indirectly (with each one having a reference to its "parent") or directly (each has a reference to the root).

For me, for now, this is too much. Coupling each part of the aggregate directly to the root is too much, and even coupling each part to its parent is troublesome, not least because to ensure that relationship is maintained when reloading from persistence I'm going to need to store it in the database. Unless I'm missing something obvious, which is quite possible.
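For illustration, here is a rough sketch (again in Java with invented names, not a real NHibernate solution) of the root-lock idea: every part holds a reference back to its root, and any save routes the version check through that shared root, so a concurrent change anywhere in the aggregate is detected.

```java
// Hypothetical root-lock: the aggregate root carries the only version.
class AggregateRoot {
    int version;

    boolean checkAndBump(int versionWhenLoaded) {
        if (version != versionWhenLoaded) return false; // aggregate changed by someone else
        version++;
        return true;
    }
}

class AggregatePart {
    final AggregateRoot root; // the coupling the pattern requires

    AggregatePart(AggregateRoot root) { this.root = root; }

    // Saving any part performs the concurrency check against the root.
    boolean save(int rootVersionWhenLoaded) {
        return root.checkAndBump(rootVersionWhenLoaded);
    }
}

public class RootLock {
    public static void main(String[] args) {
        AggregateRoot customer = new AggregateRoot();
        AggregatePart identification = new AggregatePart(customer);

        int loadedVersion = customer.version;    // user A loads the aggregate
        customer.checkAndBump(customer.version); // user B saves the root first

        // User A's save of the part is now rejected, because the check
        // happens against the shared root version.
        System.out.println(identification.save(loadedVersion)); // false
    }
}
```

This shows why the pattern demands navigability back to the root, which is exactly the coupling (and extra persisted relationship) that puts me off.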

For now I'll forget it :)


Saturday, May 05, 2007

TypeMock / Rhino.Mocks - Designing for testability

On the project I work on we use a combination of Rhino.Mocks and TypeMock.

I'd used TypeMock a lot in the past and found it to be superb, and I have always been skeptical about the advantages of designing for testability when dealing with domain classes. TypeMock also helped me avoid having to look at IOC, simplifying my domain and the related infrastructure code.

However I decided to give Rhino.Mocks a shot and I did find it useful, if you are prepared to accept the limitation that it can only mock interfaces/virtual methods/delegates.

In general I decided to favor TypeMock and only use Rhino.Mocks where it fitted the design I wanted, for example I use Rhino Mocks when mocking out dependencies to infrastructure.

There's a lot of debate about the use of TypeMock though, some people seem to disagree with it:

There are also a few people who seem to think it is worth using though:

It’s an interesting discussion and I guess your take on it is affected by many factors including the types of systems you work on and where in the code base you’re working.

As an example, in our application we have persistence-ignorant domain assemblies (Customer/Order) and separate persistence assemblies containing the repositories (CustomerRepository/OrderRepository). Nine times out of ten we can ensure our domain assemblies never need to access repositories by careful use of NHibernate's features, making the domain classes very easy to unit test.

However in some cases the domain objects do need to call out to the repositories; what should we do then? We need to make sure that we maintain the ability to test the domain simply. We could have looked at IOC, but instead I went for a simpler Registry pattern based approach. You pass a RepositoryFactory into this registry during initialization (constructor injection), and the interfaces of the repositories (though not the implementations) are in the domain assembly (separated interface).

This works fine, and because the repositories and the factory implement interfaces I can use Rhino.Mocks: I can create a mock repository factory, pass it to the registry, and that factory will create mock repositories.
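The registry/separated-interface arrangement described above can be sketched roughly as follows (in Java rather than our C#, and all the names here are illustrative, not our actual types). The domain assembly sees only the interfaces; a test passes in a fake factory producing fake repositories.

```java
// Separated interface: lives alongside the domain, implementations live elsewhere.
interface CustomerRepository {
    String findName(int customerId);
}

interface RepositoryFactory {
    CustomerRepository customerRepository();
}

// Registry pattern: a well-known object through which the domain reaches repositories.
final class DomainRegistry {
    private static RepositoryFactory factory;

    static void initialize(RepositoryFactory f) { factory = f; } // injected at startup

    static CustomerRepository customers() { return factory.customerRepository(); }
}

public class RegistryExample {
    public static void main(String[] args) {
        // In a unit test we initialize the registry with a fake factory,
        // which is what keeps the domain easy to test without real persistence.
        CustomerRepository fakeRepository = id -> "Test Customer " + id;
        DomainRegistry.initialize(() -> fakeRepository);

        System.out.println(DomainRegistry.customers().findName(42)); // Test Customer 42
    }
}
```

In the real system the factory handed to the registry at initialization would create the NHibernate-backed implementations instead of fakes.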

This is fine and is a good example of designing for testability. However we also have cases where one domain object (A) uses another complex domain object (B). When testing A we want to mock B but we're happy with the design as it is so we don't want to start shoving interfaces onto B or using dependency injection. In those cases I turn to TypeMock.


NDepend Podcast

Excellent podcast about NDepend.

It definitely seems like NDepend has moved on a long way from when I originally used it, since it was good then I can only imagine what you can do with it now.

One unfortunate thing is that Robert Martin's .NET book, on which a lot of the package dependency analysis is based, does not seem to mention NDepend, and the NDepend website makes little mention of Robert Martin.

It kind of seems like the book and the tool could both do with referencing each other, to help get their existence out to the masses.


Thursday, May 03, 2007

First Impressions Of Validation Application Block with DDD

I would highly recommend the Enterprise Library drill down webcasts:

I've just watched the Validation Application Block one and it left me with a few questions. It looks good, but I'm not sure how well it holds up in practice on proper domain model based development. In particular a few things worried me:

  1. Primitive Validation - The view seems to be that the primitive validation in the application block will deal with most cases, but I'm not sure this is true... I think I'll end up creating a lot of custom rules, just as I do now.
  2. Configuration Flexibility - The configuration flexibility might be useful in some cases: associate the rules to the domain objects in XML configuration where appropriate. However the video was slightly confusing on how they were advising we use this. David was saying start with attributes and then move to the configuration based approach once your domain model is "set". Tom was saying use configuration if the rules are likely to be flexible or you don't have the code that you want to apply the rules to. Personally I think Tom's approach seemed more appropriate for the type of development I do.
  3. Attributes v Configuration Files - The whole Ruleset approach seems to be a bit basic. It seems odd that rulesets are not considered the default; lots of companies are applying domain models/layers where that layer is written in C#/Java and the rules are written in those languages. Business rules engines are not the only way to do it, and the offhand way it was discussed left me a little worried. In our project most of our domain objects do have some inbuilt states and the validation rules are based on the state, and I'm not sure Rulesets give us enough flexibility.
  4. Self validation - Where it's necessary you can put validation methods into the classes being validated. Evans deals with the issues with such an approach on p. 224 of his book, the obvious one being that if you go down that path your domain objects will be full of rule code. To me a Specification based approach is far better, so I would imagine we'd be using CustomValidators instead.
  5. Custom Validators - Getting more into the Specification style approach. Your rules must derive from a supplied base class; these classes look quite like the implementation of our IDomainRule interface, all very simple. You also need to write an attribute. Again it's simple enough, and if you don't use this approach you'd need to write your own code to ensure validation rules are applied.
  6. Policy Injection App Block Integration - The example integrating the Policy Injection block seemed to me to result in very ugly code; attributes like that on every argument would not be nice at all. I like the declarative approach, but having a one line call to something like ArgumentValidation.ThrowIfOutsideRange(....) in the method would seem to me to be much nicer.
  7. Integration Adapters - An interesting way to integrate your domain validation into the UI; it'll be interesting to see how it works in practice and whether it's flexible enough for us. In particular I'm not sure how this will work with the ASP.NET proxy validators, as you need to specify the RulesetName but in our case which ruleset to use depends on the context. Having said that, this could prove very useful.
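To show what I mean by a Specification based approach, here is a tiny sketch (in Java with invented names; our real code uses an IDomainRule interface in C#). Rules live outside the domain object, and which combination applies can depend on the object's state or context rather than on one fixed attribute-driven ruleset.

```java
// Minimal Specification pattern: a predicate object that can be combined.
interface Specification<T> {
    boolean isSatisfiedBy(T candidate);

    default Specification<T> and(Specification<T> other) {
        return candidate -> isSatisfiedBy(candidate) && other.isSatisfiedBy(candidate);
    }
}

class Order {
    final double total;
    final boolean approved;

    Order(double total, boolean approved) {
        this.total = total;
        this.approved = approved;
    }
}

public class SpecificationExample {
    public static void main(String[] args) {
        Specification<Order> hasPositiveTotal = order -> order.total > 0;
        Specification<Order> isApproved = order -> order.approved;

        // The context (here, "ready to ship") decides which rules to combine,
        // without the Order itself containing any rule code.
        Specification<Order> readyToShip = hasPositiveTotal.and(isApproved);

        System.out.println(readyToShip.isSatisfiedBy(new Order(100.0, true)));  // true
        System.out.println(readyToShip.isSatisfiedBy(new Order(100.0, false))); // false
    }
}
```

The attraction over self-validation is clear here: the Order stays free of rule code, and different contexts can assemble different rule combinations from the same small specifications.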

On the Ruleset point I added a topic to CodePlex about it so hopefully I'll get some feedback.
