Thursday, February 28, 2008
Sunday, February 24, 2008
After seeing it on Greg Young's blog, and after getting sick of my code snippets in Blogger not working, I decided to try SyntaxHighlighter. To find out how to use it, read the usage guidelines and, more importantly, this page, which explains how to get set up to use it with Blogger.
Anyway here is an example:
public class DomainChangeTrackingService : IDomainChangeTrackingService
{
    public void ProcessMessage(DomainChangeMessage message)
    {
        // ...
    }
}
Wednesday, February 20, 2008
I've started using NBehave specifications to try and work out how BDD can affect the way I work.
As I touched on in my previous post, I'm hoping to find answers to my questions, and the first one I wanted to deal with is how best to use BDD as a way of getting outside-in design.
I'm going to give an example of two approaches to using outside-in design with BDD; first, though, I'll explain what I'm testing:
I'm writing a Windows Service that will do the following (initially):
- Pick up unprocessed messages describing events related to domain classes.
- Go through each message in turn.
- Take the message and convert it to an XML document
- Send the XML message to our integration platform (BizTalk).
- Mark the original message as processed.
- Repeat steps 3-5 until all messages are processed.
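In rough C# terms, and purely as an illustrative sketch (none of these class or member names are from the real code), the steps above amount to:

```csharp
public void ProcessUnprocessedMessages()
{
    // Step 1: pick up the unprocessed domain event messages.
    IList<DomainEventMessage> messages = new DomainEventMessageRepository().GetUnprocessed();

    // Steps 2-6: go through each message in turn.
    foreach (DomainEventMessage message in messages)
    {
        // Convert the message to an XML document using a message-type-specific mapper.
        IDomainEventMessageMapper mapper = DomainEventMessageMapperFactory.GetMapper(message);
        XmlDocument xml = mapper.ConvertToXmlMessage(message);

        // Send it to the integration platform, then mark the original as processed.
        new XmlMessageSender().Send(xml);
        message.MarkProcessed();
    }
}
```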
I'll define this overall story in NBehave syntax as:
story = new Story("processing domain event messages"); // story name illustrative; this line was missing from the post
story.AsA("maintainer of the integration") // role illustrative
    .IWant("all domain event messages to be processed")
    .SoThat("we send them to the integration platform");
To get going I want to start with a single simple test:
- Load the single unprocessed domain event message
- Create the XmlDocument describing the event and domain object.
- Send the XmlDocument.
The first way I think you can use BDD for high-level testing is something like the approach from Mock Roles, Not Objects:
It's worth noting here that I'm using TypeMock (and only features from the free community edition) and not injecting the dependencies, so I'm not strictly sticking to the approach that the mock objects guys push for. Having said that, I am thinking that this is certainly a situation where I will be bringing in DI/IoC.
[Test]
public void We_process_change_X_to_Y()
{
    DomainEventMessage message = CreateMessageForAccountActivation();

    #region Setup Expectations
    XmlDocument emptyDocument = new XmlDocument();
    Mock mockMapper = MockManager.Mock(typeof(AccountEventMessageMapper));
    mockMapper.ExpectAndReturn("ConvertToXmlMessage", emptyDocument); // expectation reconstructed; this line was lost from the post
    Mock mockMapperFactory = MockManager.Mock(typeof(DomainEventMessageMapperFactory));
    mockMapperFactory.ExpectAndReturn("GetMapper", new AccountEventMessageMapper()).Args(message);
    Mock mockMessageSender = MockManager.Mock(typeof(XmlMessageSender));
    mockMessageSender.ExpectCall("Send"); // expectation reconstructed; this line was lost from the post
    #endregion

    story.WithScenario("processing single event message")
        .Given("there is a single unprocessed message relating to an X", MockRepositoryToReturnMessages)
        .When("we process the event messages", ProcessMessages)
        .Then("an appropriate Xml message is sent", VerifyExpectedCallsWereMade);
}
So what do I think of it? I guess this test is useful; it definitely made me think about the collaborators. To get the test to pass I had to put the following code into the SUT:
This seems like a good initial design (ignoring the names of classes/members and the possibility of DI). Having said that, I'm not convinced that I wouldn't have come up with a design as good or better if I'd just relied on normal state-based testing. Mind you, at least with this approach I have come up with a list of required collaborators early, which is useful.
public void ProcessMessages()
{
    IList unprocessedMessages = new DomainEventMessageRepository().GetAll();
    AccountEventMessageMapper mapper = DomainEventMessageMapperFactory.GetMapper(unprocessedMessages);
    XmlDocument xmlDocument = mapper.ConvertToXmlMessage(unprocessedMessages);
    new XmlMessageSender().Send(xmlDocument); // send step reconstructed to match the expectations in the test
}
Now down to a problem though: although this test now passes, my SUT does nothing, because the collaborators all have do-nothing implementations. This means that we're out of luck if we're expecting this specification to act as our acceptance criteria.
Let's look at what I might do for a state-based test:
In this case the methods being called would probably be real implementations or, at worst, would use test stubs/spies. So I'd be thinking that CreateAndSaveMessage would create a real DomainEventMessage and save it to the database.
[Test]
public void We_process_change_X_to_Y()
{
    story.WithScenario("processing single event message")
        .Given("there is a single unprocessed message relating to an X", CreateAndSaveMessage)
        .When("we process the event messages", ProcessMessages)
        .Then("an appropriate Xml message is sent", AssertCorrectXmlDocumentProduced);
}
So how do I get this test to pass? The truth is it will take me a while. I'll need to implement all the collaborators fully enough to handle this simple case (a single message being processed), and I need to define the expected XML document before starting the work.
The good bit is that once it passes I'll know that the system is handling the one simple case, which makes it a useful acceptance test.
I actually think that for the problem at hand both tests are good.
I don't rate the way most people use mocks; it often just ends up in unreadable, over-coupled tests that are difficult to follow. However, in this case we're showing high-level collaborations that are indeed meaningful. I question the value of always thinking about all of the collaborators up front, but in this case it could be useful.
How do I apply both types of tests, though? The options I'm going to try are:
- Write Both - Since I'm not sure I need that many tests that show the collaborations this could work.
- Write Mock Ones And As I Implement Remove The Mocks - I'd write a mocking test and as I write the real collaborators I'll go back and replace the mock implementation with the real one. Once there is no mocking in the test I'm done.
- Avoid Mocking - This is the approach discussed in this NBehave thread.
Initially I might go for the first approach, writing a few mock/behavior style tests but mainly writing state ones.
Monday, February 18, 2008
Although I have now begun to play with writing high level specifications with NBehave I must admit that I'm not getting much further with understanding BDD in general.
I thought I should put my questions into a blog post; I guess it's more of a brain dump than anything else though.
Is BDD just better TDD?
It's tempting to view BDD as simply doing TDD well. In reality, though, there are lots of ways to practice TDD, whereas BDD is slightly more specific, such as in the way it drives for an outside-in approach.
Having said that, I think you can use BDD without buying into the whole Mock Roles, Not Objects approach, but it is interesting that BDD pushes that style of working. I've tried it out and it can have advantages, but unsurprisingly it also has some serious problems.
Outside-in can mean many things, but I like the definition from xUnit Test Patterns. Even if you do practice outside-in, you don't necessarily buy fully into the approach where you start with the GUI (or slightly below) and mock your way to all your collaborators (as described on Wikipedia or in Mock Roles, Not Objects).
If you practice DDD then you probably focus on the domain early, often using classicist-style testing practices (I've blogged about this before). If you do use mocking it's probably when testing services, and you're probably not using it to define ISP-compliant role interfaces (or maybe you are; if so, I'd be interested to hear whether it's working for you).
I think you can also find value in outside-in testing that starts from the public interface to the domain/application. You can start with a high-level test and then use TDD for the details, as discussed here, until all the code is written. That doesn't preclude you using Test Spies or Stubs where appropriate, but it does mean that you don't need to jump directly into mocking, which can lead to fragile tests.
If you do believe in mockist-style testing, defining collaborations and going from there, then the high-level BDD tests are a good place to do it, because that's where you will be thinking about high-level interactions between entities/services/factories and so on. Showing those interactions in the tests could have some value, though whether you extract role interfaces is another issue.
Having said all this, I do think you need to be thinking about/defining the GUI at the same time as working on the domain model, to avoid overcomplication. For example, you may implement a complex object hierarchy in the domain when, for this version of the software, something simple would have done. I've been bitten by this before, but I also think that starting from the GUI and working downwards is not the way to define a domain model. In my experience you usually need to do both GUI-based work and domain work early on.
There are smart people who use outside-in testing in ways that I have no experience of (see this TDD thread which was a real eye opener for me, emphasizing just how differently people tackle testing/design).
High Level or All Levels?
For me the most exciting idea is writing high level BDD tests for two primary reasons:
- Stories - Lower level tests are less likely to be driven by stories from the stakeholder.
- Refactoring - High level tests are the ones that will provide benefits because they will be more immune to refactoring.
listStory
    .IWant("my list to behave correctly when items are added")
    .SoThat("I can use it in my software");
This obviously isn't how you'd do it, but I'd be thinking that if you do use BDD at this low level then you are perhaps better off just writing BDD tests without using something like NBehave (as shown in the Wikipedia article's ListTest and by Jimmy Bogard in Converting tests to specs is a bad idea).
Can you use BDD for infrastructure/integration work?
NOTE: Here I'm talking about higher level (Scenarios, or Application Examples) specifications.
Ultimately everything you do is user driven but some problems make it difficult to tie your work back to a story.
For example my current project is an integration with an external system. This is quite a lot of work and defining user stories can be tricky as the users are not going to be interested till the integration is complete. Thanks to some brilliant help on the XP forum, not least Simon Jones' post, I've managed to get user stories to work for such a task but it does involve a little bit of work.
However, for such a project do BDD tests make sense? Take the current piece of this project, a Windows Service that the users don't even directly use. I'm trying to use NBehave specifications but it can get a little odd; for example, using the NBehave syntax, who do I specify in the "As A" part of the story? Really the users don't even care that the service exists, but for this sort of work I'm happy to think outside the box a little, and I think that's fine.
So yes I do think high level specifications can work for infrastructure/integration tasks.
Based On User Stories?
We use user stories and so the idea of writing the high level specifications based on user stories is attractive and I think it is a valid approach.
It is worth noting that whilst behaviour-driven.org seems to indicate that use cases are a good source, What's in a Story does explain that it's just as applicable where other requirements techniques are used (which is to be expected).
BDD and the Ubiquitous Language
I originally thought that BDD aimed to define, or help in defining, the (DDD) ubiquitous language. It's hard to know, though, so let's look at how BDD is linked to a ubiquitous language in some of the main articles:
- Wikipedia - "Behavior-driven developers use their native language in combination with the ubiquitous language of Domain Driven Design". I have no idea what this means: is it using the DDD ubiquitous language in the tests, or are we saying that BDD forms a ubiquitous language for writing specifications?
- Introducing BDD - Here BDD is the "ubiquitous language for the analysis process itself".
- What's in a Story - No mention.
- behaviour-driven.org - "It aims to help focus development on the delivery of prioritised, verifiable business value by providing a common vocabulary (also referred to as a UbiquitousLanguage) that spans the divide between Business and Technology." This seems to indicate that your BDD specifications are supposed to use the ubiquitous language.
As far as I can see there is absolutely no consistency in what the different sources of BDD mean when they talk about the ubiquitous language. I also think that taking the term ubiquitous language and using it outside of the context that DDD provides is unnecessarily confusing.
So what if the specifications are written in the ubiquitous language?
NOTE: This discussion only really applies to the higher level tests against your domain model, I don't think lower level (implementation detail) or infrastructure tests are going to be written in the ubiquitous language.
As discussed before, according to behaviour-driven.org, BDD:
...aims to help focus development on the delivery of prioritised, verifiable business value by providing a common vocabulary (also referred to as a UbiquitousLanguage) that spans the divide between Business and Technology."
This sounds good, and if I'm writing the tests then I will use the ubiquitous language. However, if I'm doing BDD properly then I'll have others involved:
- BDD Process - "A SubjectMatterExpert (typically a business user) works with a BusinessAnalyst to identify a business requirement"
- Whats in a story - "the stories are the result of conversations between the project stakeholders, business analysts, testers and developers. BDD is as much about the interactions between the various people in the project as it is about the outputs of the development process."
As discussed above, my own view is that in many cases user stories will feed into the BDD specifications. So do I expect user stories to be written in the ubiquitous language? Not really, for these reasons:
- Different Audiences - Our users are not necessarily our domain experts and even if some of them are we probably have some who are not. Having domain experts write the user stories is not good (been there, done that) and having the users define the domain model is no better (too simplified). I thus think that expecting the users to understand the ubiquitous language, which probably has little to do with their day to day job, is unreasonable.
- Clutter - Stories are there to specify the behavior the user wants. With this in mind, I'm not sure that cluttering them with domain model details is any more useful than including GUI details in them. Does a user care what model you put in place? Probably not, and I'm not sure we should expect them to.
- Evolving Language - You are also likely to evolve your ubiquitous language as you learn more about the domain, probably resulting in it becoming more complex. The question would then be whether you build this complexity into the user stories; I'd argue that doing so will just confuse things.
That's the way I see it for user stories, and offhand I can't think why this wouldn't apply equally to BDD. So I'm not seeing BDD affecting our ubiquitous language all that much; I think instead that when writing user stories we should aim to use the language of the users.
Obviously if your domain experts are involved in writing the BDD specifications then using the ubiquitous language will be more attractive.
So In Conclusion...
BDD is many things to many people and although some people are trying to tie it down I'm not sure it will work.
Although I think it is positive that there are so many ways to describe BDD, I unfortunately find that in some cases (the ubiquitous language) the result is unnecessarily confusing.
Friday, February 15, 2008
I just had a situation today that emphasized to me why TypeMock can be so useful.
We have an IDomainChangeTrackingService interface in a domain assembly, implemented by an infrastructure service class (DomainChangeTrackingService) in another assembly. We use DI with a ServiceLocator to get the implementation of the service into the domain.
Initially the DomainChangeTrackingService just persists the DomainChangeMessages passed to it, so all it does is call out to the DomainChangeMessageRepository:
public class DomainChangeTrackingService : IDomainChangeTrackingService
{
    public void ProcessMessage(DomainChangeMessage message)
    {
        new DomainChangeMessageRepository().Save(message); // body reconstructed from the description above
    }
}
I've already written integration tests for the DomainChangeMessageRepository so when testing the DomainChangeTrackingService a single interaction test that just checks it calls the DomainChangeMessageRepository would be enough.
The problem was that DomainChangeTrackingService creates and uses the DomainChangeMessageRepository.
Without TypeMock I'd probably have handled this with more decoupling: I'd have made the repository implement an interface and injected the implementation of that interface into DomainChangeTrackingService (or had it get the implementation from the service locator).
That could be useful in the future, but I certainly don't want to worry about it right now, and it certainly isn't decoupling I'd otherwise be looking for. For me this is one of the situations where TypeMock is great: I can interaction test DomainChangeTrackingService without having to change my design.
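A sketch of such a test, using the reflective TypeMock API shown elsewhere in this post (the member names here are assumptions, not the real code):

```csharp
[Test]
public void ProcessMessage_saves_the_message_via_the_repository()
{
    MockManager.Init();

    // TypeMock intercepts the repository even though the service news it up
    // internally, so no interface extraction or injection is needed.
    Mock mockRepository = MockManager.Mock(typeof(DomainChangeMessageRepository));
    mockRepository.ExpectCall("Save");

    new DomainChangeTrackingService().ProcessMessage(new DomainChangeMessage());

    MockManager.Verify();
}
```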
Wednesday, February 13, 2008
Greg Young has a great post titled Mocks are a code smell.
As he explains, the title is there to grab your attention. Although he does see mocking being overused, which is definitely my view and seemed to be the general feeling at the mocking session at ALT.NET UK, the post itself covers a very interesting way of handling communication within the domain.
To be honest, Greg's ideas and implementations of this pattern are more advanced than mine, and I know he uses it a lot more in his designs than I do, so I'm looking forward to reading his other posts on this topic.
Trying It Out - Start Simple
If you're daunted by the idea of going to a messaging approach then you could always start simple.
As an example I would say that what Greg is suggesting is just a more advanced version of the approach that I intend to use for dirty tracking within our domain. Messages are generated when domain events happen and these will be registered with a service that you get from a service locator.
This makes testing simple as you can just use a test spy (same approach as Greg seems to be using) but is also a design that I like in terms of lowering coupling.
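As a sketch (all names here are illustrative rather than taken from our actual code), the simple version looks something like this:

```csharp
// Separated interface living in the domain assembly; the real implementation
// is an infrastructure service registered with the service locator.
public interface IDomainChangeTrackingService
{
    void ProcessMessage(DomainChangeMessage message);
}

// Domain code generates a message when a domain event happens and hands it
// to whatever implementation the locator returns.
public class Account
{
    public void Activate()
    {
        // ... domain logic ...
        IDomainChangeTrackingService tracker =
            ServiceLocator.Get<IDomainChangeTrackingService>();
        tracker.ProcessMessage(new DomainChangeMessage("AccountActivated"));
    }
}

// Test spy: records the messages so tests can assert on what was generated.
public class SpyChangeTrackingService : IDomainChangeTrackingService
{
    public readonly List<DomainChangeMessage> Received = new List<DomainChangeMessage>();

    public void ProcessMessage(DomainChangeMessage message)
    {
        Received.Add(message);
    }
}
```

A test just registers the spy with the locator, runs the domain behavior, and asserts on Received.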
The AMC affecting design thread on ALT.NET has triggered a few other threads. Ayende and Jeremy Miller have their own posts discussing why your business code should be ignorant of IoC, something I fully agree with.
Tuesday, February 12, 2008
There was an interesting thread on the ALT.NET forum called "AMC: Changes to the way we think".
Now I don't use the auto-mocking container so it hasn't changed the way I think but I did want to comment on some of the ideas on the thread.
Should We See All Dependencies In The Constructor?
A lot of people seem to think that seeing a class's dependencies in its constructor is important, the argument being that this tells you a lot about the design of the class.
This is a compelling argument and to some extent I agree with it but it misses one key point, not all dependencies are created equal. A dependency on, for example, a domain service or a repository tells me more about how a class works than a dependency on a logging/dirty tracking service. The first is a meaningful part of the design, the second is just an aspect of the implementation.
You could also say that seeing all of a class's dependencies doesn't necessarily tell you much about how it behaves; to know that, you need to see its own behavior and the way it uses those dependencies.
I'd also say that this is a case where people argue about the improvements in design when in some cases we are only doing what we are doing to fit in with the implementation constraints of the tools we use. We need to pass the dependencies in, so we use constructor injection; to mock, we need interfaces (or virtual members), so we end up injecting interfaces. The end result is very decoupled, but is it useful decoupling? If we started from scratch and did the simplest thing that could work (YAGNI), would this be the design we'd come up with? Probably not...
Dependencies From The Domain
Ayende indicates that he prefers his domain classes not to depend on non-domain services.
I buy into this too, but for things like dirty tracking of domain classes it can get difficult. This is where AOP can prove useful: in those cases your domain classes might have a run-time dependency on non-domain services, but I think that's perfectly acceptable. We can also probably use test spies for these sorts of dependencies, which makes for convenient testing.
To me the auto-mocking container is a good idea but two things worry me about it.
The first is that it couples your tests to the IoC container, which, back when I first read about IoC, was considered a bad thing. I guess you can put up with this though.
The second issue I have with it is that it hides the dependencies that aren't important to the test. It seems like we've exposed the dependencies in the constructor to allow IoC and to allow replacing them in tests; however, this makes the tests harder to read, so we introduce a component to fix that issue. It just seems like it might be worth taking a step back and re-evaluating before you use the AMC; after doing this you might want to go ahead and use it of course :)
"Bad Designs" Can Work
So our domain classes have few dependencies other than on other domain classes (not on any services outside the domain). Where there are dependencies, they go through an interface to the service, and you get the service from a Service Locator.
However, we have layers above this, including a coordination-style layer that talks to repositories and the rest.
So how do we get the repositories and infrastructure services into the services in the coordination layer? The choices would be:
- Extracted interfaces passed in to the constructor of the service - Pass in an ICustomerRepository (or rename the interface to give it domain meaning).
- Pass concrete classes into the constructor of the service - Pass a CustomerRepository in.
- Make the service methods static but pass in the dependencies.
- Get the required services from a Service Locator.
Which do we do? None of them. The service methods are all static, and when one of these domain coordination services needs a repository/infrastructure service it just creates it.
It's not as decoupled as we could make it, but it's clear and simple, and the layer in question is quite thin. If we needed to decouple we could without much effort, so I'm quite comfortable with the design we have.
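As a sketch, with invented names, one of these coordination services is nothing fancier than:

```csharp
// Coordination-layer service: static methods, dependencies created inline.
public static class CustomerExportService
{
    public static void ExportCustomer(int customerId)
    {
        // Just new up the repository; no injected interface, no container.
        CustomerRepository repository = new CustomerRepository();
        Customer customer = repository.GetById(customerId);
        // ... map the customer and hand it to the infrastructure service ...
    }
}
```

If we ever do need the decoupling, extracting an interface and injecting it is a mechanical refactoring.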
What I'm trying to say is that I sometimes think people take things too far and that sometimes you can couple things safely. Maybe tomorrow you will need to rethink, but maybe not.
This doesn't mean that I don't rate IoC/DI or decoupling in general, I do. However I like to be able to decide for myself how far to take it.
Coupling Code To IoC
This was one of the original suggestions and I don't particularly like it. If you don't want to see the dependencies passed in to the constructor then I'd say you should use a Service Locator (which could in turn call out to a container) or use the hub service style approach.
Wednesday, February 06, 2008
We're starting to do a lot of mapping between domain classes and other forms, so far mainly so that we can export representations of our domain objects to external systems.
Performing the mapping quickly becomes a real pain and testing the mappings is even worse.
I don't think there is much you can do about the dullness of the testing, but I have been looking for a framework that will make the mapping easier and on an ALT.NET thread on the topic someone suggested I look at a little library called Otis.
So far I'm very impressed, so here's what I've found. If you want to know more, download the binaries or the source code; the advantage of the source code is that it comes with a sample. The wiki also has good information, but I wanted to write up what I've found so far, mainly to remind myself.
I think I'm going to use the XML file approach as it's cleaner. I've named the files "*.otis.xml" and made them "Embedded Resources". I also set up the XSD that comes with the binaries to give me IntelliSense, which is very useful.
Let's look at a simple example that maps the following:
- UserEntity.Id -> UserDTO.Id
- UserEntity.UserName -> UserDTO.TheUserName
- UserEntity.Advisor -> UserDTO.Advisor.Name
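Reconstructed from the list above, the mapping file looks roughly like this; I'm writing it from memory, so treat the element and attribute names as approximate rather than as the authoritative Otis schema:

```xml
<otis-mapping xmlns="urn:otis-mapping">
  <class name="UserDTO, MyApp" source="UserEntity, MyApp">
    <!-- same name on both sides -->
    <member name="Id" />
    <!-- target member on the left, source expression on the right -->
    <member name="TheUserName" expression="UserName" />
    <member name="Advisor" expression="Advisor.Name" />
  </class>
</otis-mapping>
```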
As you can see the mapping is written from the standpoint of the source class, once you get this its quite easy to follow.
You can read more about the mappings here but the sample with the source code is also good.
To get this to work I had to use the following C# code:
Configuration cfg = new Configuration();
dtoFromEntityAssembler = cfg.GetAssembler ();
UserEntity entity = new UserEntity(5, "Bob Dole", "bdole");
entity.Advisor.Name = "sdaddds232";
UserDTO dto = dtoFromEntityAssembler.AssembleFrom(entity);
This allows me to do a one-way mapping from the UserEntity to the UserDTO.
Tuesday, February 05, 2008
I finally got round to reading Mats Helander's InfoQ article Aspects of Domain Model Management.
The article is very long, with a lot of the content being about hacky solutions to the problems the author is dealing with (attaching non-business logic to the domain).
I read all of the article but if you get bored you can skip most of the middle bit right up to Using "Aspect Oriented Programming".
The bit on proxies is interesting, if you're using NHibernate then you are already using the Infrastructural Proxy Subclass approach for lazy-loading (and collections) so you should be familiar with it. However trying to use that approach to handle your own requirements is not going to be clean or transparent.
Back to AOP. The author uses his own NAspect framework for run-time AOP; however, we were thinking of using PostSharp, as compile-time weaving seemed good enough and it is more transparent. Having to use abstract factories, or just factories, for all object creation just does not appeal though.
The attribute-based approach is cool though, and it's similar to the way you attach behavior with PostSharp. All in all I think I'm going to plow on with PostSharp and then see how it goes (as described in my post AOP and the Domain - Dirty Tracking).
I blogged about this a while ago, but we've been trying to introduce a user story based requirements process; our user stories would be stored in TFS and the users would access them using the Web Access Power Tool (previously called TeamPlain).
Sounds good, especially as the tool is free. The problem is that every single user who wants to use the web portal to access TFS has to have a CAL. I've complained about this before, and even logged a bug with Microsoft about it, but it really doesn't seem like they understand how big a problem this is.
Given the situation, I would truly question whether any company should consider using Team System unless they are also prepared to hand over large sums of money to get their users involved. If you decide not to use TFS for artifacts such as user stories then it's really just an expensive source control system.
Jimmy Bogard, who is becoming one of my favorite bloggers, has two excellent posts on Scrummerfall here and here. If you are "doing agile" but finding its only really changing the development team then they may be relevant.
To be honest we don't do Scrummerfall, we didn't start out with Scrum.
Monday, February 04, 2008
I thought I'd blog about some of the topics that were discussed at ALT.NET UK, or more correctly my views on those topics.
Styles Of Testing
I've blogged about the fact that I tend not to use interaction testing much. I find interaction testing can result in overspecified software and sometimes in tests that are hard to read. To be fair, some of this could be countered by better use of mocking, for example differentiating between mocks and stubs (or Stubs and Expectations if that's the terminology you prefer), but state-based testing is still my preferred approach, especially for the domain model.
In the discussion of mocking at ALT.NET UK it seemed that most of the attendees agreed that we are now overusing mocking. Whilst there was little disagreement with the idea of stubbing/mocking in some situations (e.g. between layers or domain modules), most people did seem to think it was dangerous if not used carefully.
Ian Cooper has blogged about this topic too.
What granularity to test at is something that constantly vexes me. To some people unit testing always means testing a single class; I've always thought that you can use unit tests for (small) groups of classes. In particular I do this when I have a helper class that I've extracted out of the class I was previously testing.
There are advantages and disadvantages to testing groups of closely related classes together, though. The main advantage (especially early in design) is that the tests can withstand refactoring; the main disadvantages are that they can be more complex and don't give such good defect localization.
Anyway, Ian Cooper's blog entry sums up my views on this entire topic. We also discussed whether to move/copy the tests down to the extracted class when you use the extract class refactoring.
I'm totally inconsistent on this, I sometimes test against the extracted class and sometimes leave the tests at the level of the class I extracted it from.
Design For Test or Design For Design
The topic of design for testability came up a lot, though not in the way you might expect!
I don't believe there is any good substitute for thinking about your design. Designing for testability, especially if you favor lots of mocking using a traditional mocking tool, does bring lots of decoupling, but it's not necessarily as good as focused decoupling.
Anyway, I knew Roy Osherove was no longer recommending designing for testability, and so I was expecting a lot of disagreement around this topic. In actual fact there wasn't much disagreement at all.
End To End Testing
At one session Tana Isaac discussed WatiN, which can be used to write tests for your GUI. The discussion covered whether GUI testing is a good idea, because it can be more trouble than it's worth. However, Tana pointed out that the tests they are writing are not only acting as good specifications but are not at all flaky, and indeed are rarely being modified.
Others also mentioned the Web testing functionality in VS 2008, which apparently is far better than what was in previous versions.
The general discussion fitted in with the contributions that Nat Pryce made to an interesting TDD thread. Nat sums up the way he tests in this post.
All in all, John and I both came away thinking that we needed to look more at WatiN and end-to-end testing in general.
Nick Hines from Thoughtworks led a really good session on BDD at ALT.NET UK. The session clarified a lot of things for me, though as always after a bit of thought I am left with plenty more questions so I thought a blog entry was in order.
What Is BDD
My initial experience with BDD was Dave Astels doing a videocast about it. Better TDD and focusing on design were the name of the game. This made sense, but when I started looking into it more I got a bit lost. The documentation seems to focus on the influence of DDD and the use of interaction testing. This confused me a bit, as unless you practice need-driven development it's questionable whether you are using interaction testing as a domain design technique. I was also confused as to how BDD fits in with DDD, other than both sharing the idea of a ubiquitous language.
Anyway, the discussion clarified that it's fair to view BDD as better TDD and that it has little to do with need-driven development (though I guess you could use them together). It also doesn't specify that you must be using interaction testing.
Where to use BDD
We also came to the conclusion that these tests were quite high level, more influenced by users (user story acceptance tests) or a business analyst. I'm thus not clear that they would influence the domain design other than at a shallow level, as your users are normally not your domain experts. This is probably what Greg Young means when he talks about BDD being used for a shared language which may be distinct from the ubiquitous language.
Practically this probably means writing these tests against high-level domain entities or your application/service layer. The discussion did cover whether you then take the BDD tests into the domain, and indeed right down to hidden implementation details. I'm still not clear on this, but if we are using BDD tests as specifications then starting out with detailed high-level specifications isn't a bad idea: it gives us a good way of specifying acceptance criteria.
The question is then how detailed to make these high-level BDD tests. They have to be quite detailed and comprehensive if they are to act as acceptance tests, but if you make them too comprehensive then you end up testing the same things at multiple levels: first with a high-level BDD test and then again with more detailed "unit" tests.
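As an illustration, a high-level NBehave specification written against the service layer for my message-processing story might look something like this. The scenario wording and the `store`, `service` and `integrationSpy` objects are invented, and the exact NBehave overloads are from memory, so treat it as a sketch:

```csharp
// A high-level NBehave story exercising the service layer end to end.
Story story = new Story("Process domain event messages");

story.AsA("maintainer of the integration platform")
     .IWant("all domain event messages to be processed")
     .SoThat("we send them to the integration platform");

story.WithScenario("a single unprocessed message")
     .Given("one unprocessed domain event message in the store",
            () => store.Add(CreateUnprocessedMessage()))
     .When("the service processes pending messages",
            () => service.ProcessPendingMessages())
     .Then("an XML document describing the event is sent to BizTalk",
            () => Assert.IsTrue(integrationSpy.ReceivedXmlDocument))
     .And("the original message is marked as processed",
            () => Assert.IsTrue(store.AllMessagesProcessed));
```

Notice that this only pins down the externally observable outcomes, which is what leaves room for the more detailed "unit" tests underneath.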
Avoiding that duplication seems attractive to me, and Jimmy Bogard has a great post about how he tried to make sure his BDD tests are somewhat immune to refactoring (changing implementation).
Ray Houston is one of many people blogging about learning BDD, including this really interesting post.
Saturday, February 02, 2008
At the recent discussion of mocking at ALT.NET UK we discussed using test spies.
We actually make good use of test spies now and they have some advantages. Let me give you an example of where they come in handy. Our domain classes occasionally need to contact external services; the interfaces implemented by these services are put in the domain (separated interface) with the implementations made available to the domain using a Service Locator.
What to do when testing though? Maybe 1% of the tests need to set up a mock version of the service; the rest don't care about the service, so they can run with a do-nothing stub.
However, if we register a mock version of the service with the Service Locator and forget to clean up properly, then that mock service will affect other tests. It's also painful having to put the stub service back in after you are done using the mock.
The solution, suggested by one of my colleagues, is simple. We never mock the service; instead we use a Test Spy. We register the Test Spy with the Service Locator in a method tagged with AssemblyInitializeAttribute, so it gets run before any tests in that assembly.
All the normal tests run normally, and if they in some way cause the system to interact with the Test Spy then the interaction happens totally silently; for example, the Test Spy might log the interaction by adding an item to an internal collection.
So what about the 1% of tests that really want to test against the Test Spy (the tests that might otherwise have used a mock)? Well, in the test fixture initialization we reset the Test Spy, then at the end of each test we ask it what calls it received and verify that they were what we expected.
This works a treat: a really simple solution that makes the tests very easy to write.
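A minimal sketch of the whole arrangement. The `INotificationService` interface, the spy and the locator are all invented stand-ins for our real types, but the shape is the same:

```csharp
using System;
using System.Collections.Generic;

// Separated interface living in the domain; the real implementation
// talks to the external service.
public interface INotificationService
{
    void Notify(string message);
}

// The Test Spy: acts as a do-nothing stub for the tests that don't
// care, but silently records every call for the few tests that do.
public class NotificationServiceSpy : INotificationService
{
    private readonly List<string> received = new List<string>();

    public void Notify(string message) { received.Add(message); }

    public void Reset() { received.Clear(); }

    public IList<string> ReceivedMessages { get { return received; } }
}

// Minimal Service Locator stand-in.
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> services =
        new Dictionary<Type, object>();

    public static void Register<T>(T service) { services[typeof(T)] = service; }
    public static T Resolve<T>() { return (T)services[typeof(T)]; }
}
```

The spy is registered once, in the [AssemblyInitialize] method, and the 1% of tests that care simply call `Reset()` in their setup and assert against `ReceivedMessages` at the end.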
NOTE - When to call services from the domain
The comment from Andreas made me realize that I didn't say when I think a domain class should talk to a service.
Normally I avoid these sorts of dependencies in order to keep the domain code clean, simple and easily testable. However, for cross-cutting concerns like logging/dirty tracking, an AOP-based approach (see my PostSharp posts), where we introduce the code that calls the service, is very clean.
In particular this works because for those cases we can have the domain class contact the (infrastructure) Service but we don't care about any return values, hence the applicability of a Test Spy.
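To show why the void return matters, here is a hypothetical dirty-tracking example. The interface and `Customer` class are invented, and the service call is written inline where PostSharp would normally weave it in:

```csharp
using System.Collections.Generic;

// Hypothetical infrastructure interface. Note the void return:
// that is what lets a silent Test Spy absorb the call.
public interface IDirtyTrackingService
{
    void MarkDirty(object entity);
}

public class DirtyTrackingSpy : IDirtyTrackingService
{
    public readonly List<object> DirtyEntities = new List<object>();

    public void MarkDirty(object entity) { DirtyEntities.Add(entity); }
}

public class Customer
{
    // In production this would be resolved through the Service Locator;
    // the spy is swapped in once for the whole test assembly.
    public static IDirtyTrackingService DirtyTracking = new DirtyTrackingSpy();

    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            // In the real code PostSharp weaves this call in;
            // it is shown inline here for clarity.
            DirtyTracking.MarkDirty(this);
        }
    }
}
```

Because the domain never inspects a return value from the service, the vast majority of tests can run against the spy without ever knowing it is there.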