Thursday, March 27, 2008

Resharper Plugins - NHibernate and log4net

Looks like Stefan Lieser has created a Resharper plugin that will help ensure your HBMs are kept up to date when you refactor. It seems it's still at an early stage but it should prove very useful.

Achmed has also put together a log4net Resharper plugin, which could be equally useful.

There is also a thread on both in the Resharper forum.


Domain Role Interfaces - Optimising Fetching

Ayende has a post about how Udi Dahan uses role interfaces in the domain, specifically to allow optimizations of fetching.

It's an interesting enough idea, though I'm not really sure about it, but it did also lead to someone linking to Fowler discussing role interfaces. I'd never seen that particular page before but it was very relevant, and I liked the term for the alternative approach: header interface.


Tuesday, March 25, 2008

Trying For Readable Tests

Whilst playing with using BDD-style specifications to drive the granular design of my classes, I've also tried to ensure my tests are as readable as possible, and I thought I should document my current thoughts on the results.

In this description I'll use the words specification and test interchangeably...I'm just not ready to talk about state or interaction specifications.

State Tests

I've been following the approach of having small test fixtures (contexts), for example:

using Concerning = System.ComponentModel.CategoryAttribute;
using Specification = NUnit.Framework.TestAttribute;
using Context = NUnit.Framework.TestFixtureAttribute;
using NUnit.Framework;

namespace AddressMapperSpecifications
{
    [Context]
    [Concerning("AddressMapper")]
    public class When_mapping_addresses
    {
        #region Fields

        private AddressDetails _mappingFrom;
        private Address _mappingTo;

        #endregion

        #region Context

        [SetUp]
        public void SetupContext()
        {
            _mappingFrom = AddressObjectMother.CreateAddress();

            _mappingTo = new AddressMapper().Map(_mappingFrom);
        }

        #endregion

        #region Specifications

        [Specification]
        public void Street_one_is_mapped()
        {
            Assert.AreEqual(_mappingFrom.StreetOne, _mappingTo.StreetOne);
        }

        [Specification]
        public void Street_two_is_mapped()
        {
            Assert.AreEqual(_mappingFrom.StreetTwo, _mappingTo.StreetTwo);
        }

        #endregion
    }
}

I like this style because it leads me to have small fixture/context classes; only a very few specifications are likely to want to share exactly the same fixture, so I avoid the issue of my classes getting too big.

I also think that the small fixture size and the class/test names together result in an approach that I find very readable.

Interaction Tests

As I see it you use mocking for a few reasons, one of the most obvious being convenience. Say A calls B which calls C: if I write my tests for C but my clients use A, then I need to show that when I call A we can expect C to be called in turn (and also show the arguments/return values). I see this as mocking to make granular testing easier; there are other reasons to use mocking/stubbing, but this discussion doesn't necessarily relate to them so well.

The previous example showed a state test/specification where just having an assertion (or assertions) in the specification method worked. Arguably moving the setup and the interaction with the system under test (SUT) into the SetupContext method helped make it all more readable. We can't do that so easily with mocking tests, because in a mocking test you have to set up your expectations up front, so the tests take (basically) this form:

  1. Setup including setting expectations on the mock.
  2. Exercise the SUT.
  3. Verify that the expectations on the mock were met and optionally do other verifications.
  4. Cleanup (if required).

If we moved the setting up of the mocks into the SetupContext method then we'd end up with a situation where each class had one specification (because it's unlikely we'll want to set up the same expectations multiple times). Even worse, looking at the specification method would tell you nothing, because to understand a mocking test you need to start by looking in detail at the expectations.

I've thus found that so far my interaction tests are taking this form:

namespace AccountMessageMapperSpecifications
{
    [Context]
    [Concerning("AccountMessageMapper")]
    public class Interactions_when_mapping_an_account : RhinoMockBase
    {
        #region Fields

        private Account _subjectOfMessage;
        private DomainMessage _messageToBeMapped;

        private AccountMessageMapper _underTest;

        #endregion

        #region Context

        [SetUp]
        public void SetupContext()
        {
            _subjectOfMessage = AccountObjectMother.CreateAccount();
            _messageToBeMapped = new TestDomainMessageBuilder().WithDomainEntity(_subjectOfMessage).Build();

            _underTest = new AccountMessageMapper();
        }

        #endregion

        #region Specifications

        [Specification]
        public void Primary_address_is_mapped()
        {
            IMapper<IAddress, Address> mapper = Mocks.CreateMock<IMapper<IAddress, Address>>();

            using (Record)
            {
                Expect.Call(mapper.Map(_subjectOfMessage.Owner.PrimaryAddress)).Return(new Address());
            }

            using (Playback)
            {
                _underTest.Map(_messageToBeMapped);
            }

            Mocks.VerifyAll();
        }

        #endregion
    }
}

DISCLAIMER: I've just bodged this together so it isn't great. I particularly don't like the way it looks when I pass a generic interface in to get a mock for it!

Most of the code is in the actual specification method, including all of the setup, the interaction with the SUT and finally the call to verify that the expectations were met. I've moved as much as I think I sensibly can into the SetupContext method, but the fixture is still quite lightweight and reusable; for example I could easily put another method in that showed the interaction with another mapper (e.g. Telephone_number_is_mapped, sketched below).
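Such an extra specification might look something like this (just a sketch: the IMapper<ITelephoneNumber, TelephoneNumber> type and the Owner.Telephone property are invented for illustration):

[Specification]
public void Telephone_number_is_mapped()
{
    // Hypothetical telephone mapper, mirroring the address specification above
    IMapper<ITelephoneNumber, TelephoneNumber> mapper = Mocks.CreateMock<IMapper<ITelephoneNumber, TelephoneNumber>>();

    using (Record)
    {
        Expect.Call(mapper.Map(_subjectOfMessage.Owner.Telephone)).Return(new TelephoneNumber());
    }

    using (Playback)
    {
        _underTest.Map(_messageToBeMapped);
    }

    Mocks.VerifyAll();
}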

Overall though I think this style of interaction test is quite readable, and to me the combination of the two tests explains very well the behaviour of a low-level component (AddressMapper) and also the interaction that a higher level component (AccountMessageMapper) has with it.


Monday, March 24, 2008

NHibernate - Working Around Lack of Component Inheritance

Let's say we want to map this class hierarchy:

[Class diagram: MainClass with a Kind association to MainClassKind, which has subclasses FirstKind and SecondKind]

Table-wise we only have one table, called MultipleClassesToOneTable:

[Table diagram: the MultipleClassesToOneTable table]

So we're trying to map MainClass and MainClassKind to a single table, and we want it to handle the fact that MainClassKind has subclasses. Why are we doing this? Because we're working with a pre-existing database that is hard to work with.

Ideally we want MainClassKind to behave like a <component>, but we can't map it that way because <component> does not support inheritance, so the question is: how do we map it?
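For reference, the classes being mapped look roughly like this (a sketch reconstructed from the mappings below; the property types are my guesses):

public class MainClass
{
    public int Id { get; set; }
    public string Name { get; set; }
    public MainClassKind Kind { get; set; }      // the association we'd like to treat as a component
}

public abstract class MainClassKind
{
    public int Id { get; set; }                  // shares MainClass's Id (same row)
    public int Kind { get; set; }                // the discriminator value
    public string Description { get; set; }
    public MainClass MainClass { get; set; }     // back-reference used by the foreign id generator
}

public class FirstKind : MainClassKind
{
    public string FirstClassesExtraValue { get; set; }
}

public class SecondKind : MainClassKind
{
}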

MainClass

This is the class that would act as our aggregate root and which will manage the ID; its mapping is quite simple:

<class name="MainClass"
table="MultipleClassesToOneTable" lazy="false">

<
id name="Id" column="Id">
<
generator class="identity" />
</
id>

<
property name="Name" column="Name" />

<
one-to-one name="Kind" access="property" cascade="all-delete-orphan" />
</
class>

Note that this class is mapped to MultipleClassesToOneTable and it's generating the identity value, which is stored in Id. The class also has an association to a MainClassKind, but otherwise it's a pretty dull, ordinary mapping file.

MainClassKind

If we could map this class as a component of MainClass then we would; since we can't, we have to do something a bit more interesting:

<class name="MainClassKind" table="MultpleClassesToOneTable" lazy="false">

<
id name="Id" column="Id" >
<
generator class="foreign">
<
param name="property">MainClass</param>
</
generator>
</
id>

<
discriminator column="Kind" insert="false" />
<
property name="Kind" access="property"/>

<
property name="Description" column="Description" />

<
one-to-one constrained="true" name="MainClass" access="property"/>

<
subclass discriminator-value="0" name="FirstKind" >
<
property name="FirstClassesExtraValue" access="property"/>
</
subclass>

<
subclass discriminator-value="1" name="SecondKind"/>

</
class>

So what is notable about this mapping:

  1. Generator - The Id comes from the associated MainClass object.
  2. Discriminator Mapped Twice - The Kind column is mapped twice, once as the discriminator and once as a property. We only have the property mapping because for some reason when we use <discriminator> like this it doesn't update the associated column; I believe this is a bug. I've thus set insert="false" on the discriminator and mapped the property.
  3. Shared Row - As discussed, we need a relationship back to the MainClass to allow us to use its Id. We could map this as a field but here I've included it as a property. Anyway, it's mapped as a <one-to-one>, and since both classes map to the same row they do technically have the same ID.

Hrm...

Result

Well, I'm not exactly happy with this mapping, it seems like a hacky way of getting the behaviour I want, but I guess it's an option if you really want to redesign your model in such a way without redesigning your database. Is there a better way to handle this though?


Friday, March 21, 2008

Windsor Configuration Files - Fluent Interface

I've been using Windsor on a Windows service for the last couple of weeks and the nastiness of its XML files quickly became a pain.

I don't tend to mind XML too much, for example I think the XML configuration option with NHibernate is pretty good, but Windsor's is just way too wordy and repetitive.

Anyway, I had another look at Binsor, but having to learn Boo and the DSL itself is hard enough, and without IDE support (other than SharpDevelop) I thought it was too much.

The good news is it looks like there is already a fluent interface project for Windsor. You can read about it at Hammett's blog (or in this dev thread); there is also a blog entry about an approach that allows automatically registering services from an assembly. All very good, and with examples.


Tuesday, March 18, 2008

.NET Readonly Collections & LSP

Chad Myers has a post on LSP; the example isn't technically of LSP (see the comments) but it did make me think of one of my pet peeves in the .NET framework.

As you may know, you can call List<T>.AsReadOnly and get back a read-only wrapper; since the object you get back supports IList<T>, you try to use it:

List<string> dinosaurs = new List<string>();
ReadOnlyCollection<string> asReadOnly = dinosaurs.AsReadOnly();

((IList<string>)dinosaurs).Add("Tyrannosaurus");
((IList<string>)asReadOnly).Add("Bob"); // kablam, NotSupportedException

Now, looking at the documentation the exceptions are made clear, for example on ICollection<T>.Add. This means that it's not an LSP violation, but it does annoy me, because it means that most of the methods on IList<T> will raise exceptions when they are not suitable for use with the actual type behind the interface.

And remember this could be a real problem. For example, say your method had been accepting List<T> and calling Add on it. You decide that it's nicer to use base types where possible, so you change the method to accept IList<T>, and now you're open to being passed a ReadOnlyCollection which will immediately blow up.

So why is the design the way it is? Dunno. It does seem like it was discussed, but it's not a decision I like. I'd probably have had a specific IReadonlyCollection<T> interface. In fact we have that interface in our code base and we've found it very useful indeed, particularly in the domain, where we want to make clear when things are read-only without having to resort to exceptions.
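Something along these lines (a sketch of the idea rather than our actual code; the members shown are illustrative):

using System.Collections.Generic;

// Only read operations, so there's nothing here to throw NotSupportedException
public interface IReadonlyCollection<T> : IEnumerable<T>
{
    int Count { get; }
    bool Contains(T item);
    T this[int index] { get; }
}

A method that accepts such an interface advertises at compile time that it won't mutate the collection, which is exactly the guarantee IList<T> fails to give.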

If you don't know what LSP is then read this PDF or even this link (which I found whilst searching for the PDF) but really to get your head around a lot of this stuff you need to go to the books, in particular Agile Principles.


Monday, March 17, 2008

Dependency Inversion (DIP) - Where I draw the line

I was reading James Kovacs' MSDN Magazine article Tame Your Software Dependencies for More Flexible Apps. I spent some time writing some comments to put in the related blog entry but I thought I should write up my thoughts in a little more detail because I do think that my views on dependency inversion principle (DIP) differ from other peoples.

What Does DIP Give Me

First off, what is DIP? If you don't know then go read Robert Martin's description and/or do a search on Google, because there is lots of stuff out there (including this article by Jeremy Miller).

Testing is often the underlying reason that people push for DIP. That's a big pity because it leaves people thinking that DIP is only useful for testing, which is nonsense.

So let's go back to some of the reasons Robert Martin introduced DIP:

  1. Coupling Important Logic To Details - High-level modules should not depend on low-level modules; you don't want important business code coupled to implementation details.
  2. Reuse - You want to be able to reuse your important policy/business logic in different contexts.

So how do we violate DIP? Well, we can easily do it by making our domain logic depend on implementation details. Put your repository implementations in the same project as the domain logic and call those repositories from the domain and you've got it (though the repository classes do provide some encapsulation of the details). Likewise coupling your domain logic to a specific logging framework, or maybe to a specific vendor's API.
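To make that concrete, here's a minimal sketch of the inverted arrangement (all names are invented for the example): the domain owns the abstraction, and the data-access detail depends on the domain rather than the other way round.

// Domain assembly: the high-level policy owns the abstraction it needs
public class Customer
{
    public int Id { get; set; }
    public bool IsActive { get; set; }
}

public interface ICustomerRepository
{
    Customer FindById(int id);
}

public class CustomerDeactivationService
{
    private readonly ICustomerRepository _customers;

    public CustomerDeactivationService(ICustomerRepository customers)
    {
        _customers = customers;
    }

    public void Deactivate(int customerId)
    {
        // Domain logic only; no reference to NHibernate, log4net or any vendor API
        Customer customer = _customers.FindById(customerId);
        customer.IsActive = false;
    }
}

// Data access assembly: the low-level detail depends on the domain's interface
public class NHibernateCustomerRepository : ICustomerRepository
{
    public Customer FindById(int id)
    {
        // The ISession lookup would live here, out of the domain's sight
        throw new System.NotImplementedException();
    }
}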

So DIP is great, no doubt, and a sensible principle. However, I disagree with some of the forum entries/blog posts and so on about DIP...

Issues I Have

So let me make clear again: I don't have a problem with DIP, in fact I think it is very useful. However I do have an issue with the way people describe DIP, because in many cases it differs from my experience. Here are the issues I have with some of the writing about it:

  1. Layering - Robert Martin's got something on layering in his post Layers, Levels & DIP. For me the key layer is the domain, so I focus more effort on its dependencies because I consider it a "higher" layer: DIP all the way. However I'm not so worried when it comes to the service layer. Why? Well, because our service layer is lightweight (as little domain logic as possible) and shallow (not likely to have a deep call stack), so deciding to inject later is an easier change than in the domain (where you might decide you need a service six layers deep in a call chain). The point I'm making is that DIP relates to coupling between layers, but how much I worry about the coupling depends on which layers it's to and from.
  2. Mocking - I've blogged about it many times before, but I've come to the view that decoupling to allow me to get in Test Doubles is not always improving my design. It depends, but I think many people assume that extracting all sorts of interfaces to mock makes for a great design, and I'm not sure it does.
  3. Coupling - So I extract an interface from each repository and now my other layers can be nice and decoupled from the repository. Sure, but you have to remember that if you do change the repository it's probably not going to be by creating another subclass (an aggregate is unlikely to have two repositories even if you do switch ORM) but by modifying the existing repository's interface. In these situations the interface will need to change, so the interface hasn't really helped hide the change from the dependent layers. Of course the discussion on interfaces is a big one and I am of course a fan of interfaces/ABCs, but I think they work best when you target their usage and take some time to design the interfaces (to suit the clients).
  4. Flexibility - The argument goes that if every service-style class supports an interface then it's easy to decorate them, e.g. to add auditing. It is. However you have to weigh it up: the chance of you needing to decorate all your repositories is small; if you do, you may opt for doing it a different way (e.g. PostSharp); and even if you do decide to do it using interfaces, you can extract those interfaces and inject at that time. Of course if your code has many clients then putting the interfaces in up-front might be a good idea, but in general I'm a big fan of extracting interfaces when you need them, just like I'm a big fan of refactoring to patterns instead of just using design patterns.
  5. Not All Dependencies Are Equal - How worried I am about a dependency is based on where it is going from/to.
  6. Mixed Up With DI/IoC - Some people love the fact that when they use an IoC container they get to see all the dependencies in the constructor; that sort of argument can be had separately from a discussion of DIP though.
  7. Inverting Ownership - In Agile Principles Robert Martin makes it clear that a key change is to invert the ownership of the interface, so that the interface is designed specifically for the client(s). Now I've tended to find that my repositories/services are simple enough that they only have one real interface (the one they get from being a concrete class), but we have had cases where we've had them implement interfaces specifically designed for their client(s).
  8. Swapping Implementation - The obvious advantage. However, am I really going to remove one repository implementation and plug in another, maybe one using another ORM? Nope, that'd be a massive change and I couldn't just do it for one repository.

Note that when I discuss interfaces above I'm talking about naive interfaces: OrderRepository having an IOrderRepository interface (an interface-implementation pair). If you are creating small role interfaces (ISP) and are giving the resulting interfaces more domain meaning (IOrderBook) then you are absolutely doing a good thing. Even better if the interfaces you define are customized for the specific clients. However I think that few, very few, articles on IoC/DI/DIP focus enough on these things.
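To illustrate the distinction (the members are invented for the example):

public class Order { }

// Naive "header interface": a one-to-one mirror of OrderRepository
public interface IOrderRepository
{
    Order FindById(int id);
    void Save(Order order);
    void Delete(Order order);
}

// Role interface: small, named for the role the client sees (ISP in action)
public interface IOrderBook
{
    void Place(Order order);
}

// The concrete repository can implement both; clients that only place orders
// depend on IOrderBook and never see the rest
public class OrderRepository : IOrderRepository, IOrderBook
{
    public Order FindById(int id) { /* ... */ return null; }
    public void Save(Order order) { /* ... */ }
    public void Delete(Order order) { /* ... */ }
    public void Place(Order order) { Save(order); }
}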

So What Point Am I Making

Personally I think a lot of the talk about DIP/DI/IoC misses the key point: design. Too many examples focus on details like how you configure your container and assume the natural pre-eminence of the interface approach.

I think this is a bit of a mistake; like I say, I like interfaces, but I am usually skeptical about the ICustomerRepository-style approach because in many places I think people use that style without thinking about what real benefits they are getting.

In our case our service layer directly creates repositories/domain services when it needs them. This could backfire, for sure, but so far I'm not convinced that it's a big mistake, and if we find that it was wrong it will be relatively easy to change. Am I violating DIP when my service layer contacts a repository directly? Yes. However I'm doing it with my eyes open and (I hope) with a good understanding of the consequences.

My advice would be: before going out and learning about Windsor/Binsor and the rest, find out a bit more about DIP and why it's useful; that way you can judge for yourself how to proceed.


DDD Links

Really just posting this to give myself an easy link to this blog entry with tonnes of DDD links.


Friday, March 14, 2008

BDD - Mock then replace

Just re-reading Dan North's article Introducing BDD and realized that I'd forgotten about this section:

At first, the fragments are implemented using mocks to set an account to be in credit or a card to be valid. These form the starting points for implementing behaviour. As you implement the application, the givens and outcomes are changed to use the actual classes you have implemented, so that by the time the scenario is completed, they have become proper end-to-end functional tests.

I must admit I haven't tried this approach yet, but it makes sense to me, so I think I'll give it a go; having end-to-end functional tests is very attractive.


BDD - Uncle Bob's Stable State

In a TDD thread on tests as specifications there was a link to Uncle Bob talking about Stable State: An Emergent Rule, an excerpt is:

If you look carefully at the specification of the Bowling Game you will see that the state of the Game is changed only by the setup block within the context blocks. The specify blocks simply interrogate and verify state. This is in stark contrast to the JUnit tests in which the test methods both change and verify the state of the Game.

Really it's the same idea that some people practicing BDD have: keep the test/specification methods small, preferably just doing assertions (or mock verifications, I guess).


Tuesday, March 11, 2008

BDD in Practice - Mocking

Some of the articles on BDD put a heavy emphasis on mocking and outside-in development.

However so far the discussions on the BDD group have left me feeling that when to mock is just as much of an issue in BDD as it is in TDD. Having said that, as I try BDD I'm making sure I also spend the time to think about the ways I use mocking, just to formalize my thoughts.

These are just my unordered thoughts, stuff I'm thinking as I write the specifications using BDD. I've broken it down into sections based on the reasons I'm mocking.

Mocking As A Design Technique

I've been trying to do some outside-in style mocking as I go. I've discussed this in an earlier post, but I am finding that thinking about collaborations (for services at least) is quite an interesting technique. I've tried it in the past and didn't like it too much, but I think I'm now at a stage where I'll try to work it into my normal practices.

I think I'm using the technique naively though. For example I ended up writing up-front mocking tests for an XmlFileBasedSender, which takes in a DTO and ensures it's packaged as XML and put in a suitable file. I blindly followed the technique but ended up with what I consider to be a pretty silly design:

public XmlFileBasedSender(IXmlFileNameProvider fileNameProvider, IConfigurationService xmlConfigurationService,
                          IFileSystem fileSystem, INotificationToXmlConverter notificationMapper)

To me INotificationToXmlConverter and IXmlFileNameProvider are just small lightweight strategy/helper/policy classes, whereas the other two are meaningful abstractions (especially as with them I'm wrapping built-in .NET framework functionality). I'd think this style of constructor is more meaningful:

public XmlFileBasedSender(IConfigurationService xmlConfigurationService, IFileSystem fileSystem)

To be fair this isn't a problem with the technique; I've probably just gone overboard, and at mockobjects.com Steve Freeman was good enough to point out that he might have just written an integration test for the service and encapsulated the little policy classes.

Of course even if I do just have XmlFileBasedSender create the instance of NotificationToXmlConverter in the constructor I might want to stub out NotificationToXmlConverter. In that case I'm stubbing for convenience rather than to use it as a design technique, but I think that's OK (especially since we use TypeMock).

I'm also not planning to use this technique all the time. I think state-based testing is still often the way to go, and in many cases I'll stub rather than mock, but that's not to say that this technique doesn't have its place. I especially liked using it in combination with integration tests. So after writing interaction tests for XmlFileBasedSender I wrote tests for each of the dependencies and then wrote a single integration test for XmlFileBasedSender (at which point I discovered I'd forgotten to create a class implementing IClock).

Finding the balance between mocking/state tests and between integration/unit tests is something I'll continue to think about, though presumably never coming to a conclusion on.

Mocking/Stubbing Out Dependencies

I needed to mock out calls to DateTime.Now and File.Create; using TypeMock to mock these directly would not be a good idea, and anyway I'm happy enough to wrap in these cases (though wrapping DateTime.Now is a little irritating). There's lots out there about wrapping like this, but essentially I ended up with tests like this (using string-based mocking because I'm using the free version of TypeMock for the example):

[TestMethod]
public void Current_time_retrieved_from_clock()
{
    #region Setup Expectations

    Mock mockClock = MockManager.MockObject(typeof(IClock));
    mockClock.ExpectGet("Now", DateTime.Now);

    IClock clock = (IClock)mockClock.MockedInstance;

    #endregion

    new ClassUnderTest(clock).DoSomething();
}

Technically I guess this is mocking for design, but even if I wasn't using mocking as a design technique I'd still need to wrap the file system and clock just to allow me to stub them out.
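
For completeness, the wrappers themselves are trivial; here's the sort of thing I mean for the clock (a sketch, not necessarily the exact shape of my IClock):

using System;

public interface IClock
{
    DateTime Now { get; }
}

// Production implementation just delegates to the real clock
public class SystemClock : IClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}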

Mocking To Make Granular Testing Easier

ClientMapper uses a ClientFeeDestinationMapper, so I've written tests for the ClientFeeDestinationMapper and then written one test that proves that the ClientMapper interacts with it. This test was not about working out role interfaces, so this is not me using mocking as a design technique; I just used TypeMock to mock out the concrete ClientFeeDestinationMapper.
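That test looks roughly like this with TypeMock's reflective API (a sketch: the mapper method names, the return type and ClientObjectMother are invented, and I'm using string-based expectations as in the earlier example):

[TestMethod]
public void Fee_destination_mapper_is_used()
{
    MockManager.Init();

    // The next ClientFeeDestinationMapper instantiated (inside ClientMapper) is mocked
    Mock mockFeeMapper = MockManager.Mock(typeof(ClientFeeDestinationMapper));
    mockFeeMapper.ExpectAndReturn("Map", new ClientFeeDestination());

    new ClientMapper().Map(ClientObjectMother.CreateClient());

    MockManager.Verify();
}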

One problem with this is that the tests (including the state ones) are very granular, so any change means re-writing them. For example I plan to try using Otis (XML-based object-to-object mapping) again, at which point ClientFeeDestinationMapper will probably disappear and ClientMapper will do everything (everything being kicking off the mapping process). Now in order to do that I first need to re-write my tests, which is a pain but not really related to the fact that I'm mocking, or to BDD.


NHibernate Gotchas - Orphans and one-to-one

I'd previously posted about NHibernate Gotchas but we just came across a big one: cascade options on <one-to-one>.

Essentially we'd used the cascade option "all-delete-orphan" on a <one-to-one>, which we thought was fine. NHibernate didn't complain and the schema indicated it was a valid option.

Unfortunately the cascade option was totally ignored, and indeed after quite a bit of effort we found that the docs do indicate it's not supported and it's a known issue.

Workarounds? Not sure. Deleting the one-to-one from the ISession directly is not an option for us in our POCO domain, so we may need to look at other mappings.

Worth knowing anyway!


Monday, March 10, 2008

NHibernate - Changing Inheritance Strategy

We've recently been putting in the Party archetype for managing information about People/Organizations and their roles in regard to our systems. When mapping the roles we had two good choices:

  1. Table-per-subclass - Give each role its own table as well as having a table for the basic role data.
  2. Table-per-hierarchy with join tables - One table to manage all the basic role data (shared by all roles) and one table for each role that required extra data (using the join-table approach).

Our database expert(s) preferred us to choose the latter, particularly because some of our roles do not have extra information and if we'd gone for table-per-subclass we'd have given those roles their own (pretty much empty) tables.

Unfortunately the join-table approach has issues, not least that the join-tables cannot support their own components, so we changed our minds and went for the table-per-subclass approach instead. The amazing thing is that one of my colleagues, Kathryn, managed to make this change and have our tests passing in no more than a couple of hours.

This is brilliant: all it required was a few changes to the mappings, the creation of a couple of tables for the roles that weren't (currently) adding any data, and a couple of update scripts to ensure those tables were populated and that the discriminator column was removed from the party role table.


Wednesday, March 05, 2008

BDD in Practice

As I'm trying BDD I plan to write about what I'm finding, more as a way of organizing and recording my (current) thoughts than anything else.

Stories

For now I'm ignoring the story-based approach to BDD. I still think it's probably going to be useful, but I'm not sure I'm yet in a place where I can take advantage of it, so I'm just going to focus on the specification style. Doing it this way is also simpler because in my view the specification-based approach is just "better TDD", whereas the story-based approach seems to me to be a bit more fundamental and affects more than the developers.

Naming Style

I've gone for this style of test class/method naming:

class When_mapping_a_customer
    Name_is_mapped()
    ...

class When_grouping_related_domain_changes
    All_changes_for_an_entity_are_grouped()
    ...

I'm enjoying naming tests in this sort of style. I've toyed with a few other approaches but this one has so far kept me happy, as the class/test names are not too long but are quite expressive.

I've taken out all the "should_" from the tests because it felt like noise, especially when I viewed the list of test names in the IDE.

Of course this is very similar to the approach others have specified, including Agile Joe, and there will certainly be better naming schemes around.

BDD (Astels Style)

Some seem to see BDD as primarily being about making it easy to trace a specification (test) back to its context. You start out with a context (a small test fixture for a particular case) and then write your specifications; most of the code ends up in the test class setup method and the tests themselves are nice and small. You don't just end up with one big CustomerTest class full of tests that require different fixtures or which are for different features.

I do find that BDD's style of having lots of test classes (small fixtures) and attempting to have one line of code in the specification (test) method can make it easy to understand what is being specified.

It's not all good though; one of my colleagues points out that he likes to see all of the code in the test method.

We were also slightly worried by the fact that the specifications for a class end up fragmented across quite a few files, which is fine, but you might have some fun finding the specification of a particular class's behaviour in a large code base. I'm getting around this at the minute by tagging the fixture classes with [Concerning("AccountMapper")] as discussed by Agile Joe; you can then search for the class name and find all the specifications. Not very advanced, but then again it will improve if tools come along that support the BDD style.

Oh, and Brian Donahue has a post on the same sorts of issues, so I'm not alone in thinking about this, and I'm sure they are issues that many other people are dealing with.

I'm not using any special framework for these specifications at the moment, which I think is a valid choice (for now). I'll probably look again at the specification side of NBehave and at SpecUnit.NET, but it seems too early to be using either on commercial projects.

Initially my reaction was that this is really just TDD with a test-class-per-fixture policy that focuses on really small fixtures. It hardly seems like a fundamental change, but having done it I think the combination of naming and small test fixtures is quite useful and could (perhaps) lead to people doing TDD "properly".

Team System

This is probably only interesting to a few people, but there are issues with using these approaches in Team System. Nothing major, just annoyances.

For example you cannot (as far as I can see) re-alias [TestMethod] to be [Specification]. Also the test list window is ridiculously constricting and there is (so far) no support for integrating other test runners into Team System.

There may be workarounds for some of these issues, for now I'm ignoring these topics though and just doing the best I can.
