BDD Forum
Agile Joe has set up a BDD group on Google Groups. There is also a thread on BDD at the ALT.NET forum right now.
Oh, and there is also a dead Yahoo BDD group.
Posted by Colin Jack at 2/28/2008 0 comments
Labels: BDD, TDD, Unit Testing
After seeing it on Greg Young's blog, and after getting sick of my code snippets in Blogger not working, I decided to try SyntaxHighlighter. To find out how to use it with Blogger, read the usage guidelines and, more importantly, this page, which tells you how to get set up to use it with Blogger.
Anyway here is an example:
public class DomainChangeTrackingService : IDomainChangeTrackingService
{
    public void ProcessMessage(DomainChangeMessage message)
    {
        new DomainChangeMessageRepository().Save(message);
        Debug.WriteLine("Nothing");
    }
}
Posted by Colin Jack at 2/24/2008 4 comments
Labels: Blogging
I've started using NBehave specifications to try to work out how BDD can affect the way I work.
As I touched on in my previous post, I'm hoping to find answers to my questions, and the first one I wanted to deal with is how best to use BDD as a way of getting outside-in design.
I'm going to give an example of two approaches to using outside-in design with BDD, first though I'll explain what I'm testing:
SUT
I'm writing a Windows Service that will do the following (initially):
Simple Test
story
    .AsA("...")
    .IWant("all domain event messages to be processed")
    .SoThat("we send them to the integration platform");
It's worth noting here that I'm using TypeMock (and only features from the free community edition) and not injecting the dependencies, so I'm not strictly sticking to the approach that the mockobjects guys push for. Having said that, I do think this is a situation where I will be bringing in DI/IoC.
[Test]
public void We_process_change_X_to_Y()
{
    DomainEventMessage message = CreateMessageForAccountActivation();
    _testMessages.Add(message);

    #region Setup Expectations
    XmlDocument emptyDocument = new XmlDocument();

    Mock mockMapper = MockManager.Mock(typeof(AccountEventMessageMapper));
    mockMapper.ExpectAndReturn("ConvertToXmlMessage", emptyDocument).Args(message);

    Mock mockMapperFactory = MockManager.Mock(typeof(DomainEventMessageMapperFactory));
    mockMapperFactory.ExpectAndReturn("GetMapper", new AccountEventMessageMapper()).Args(message);

    Mock mockMessageSender = MockManager.Mock(typeof(XmlMessageSender));
    mockMessageSender.ExpectCall("Send").Args(emptyDocument);
    #endregion

    story
        .WithScenario("processing single event message")
        .Given("there is a single unprocessed message relating to an X", MockRepositoryToReturnMessages)
        .When("we process the event messages", ProcessMessages)
        .Then("an appropriate Xml message is sent", VerifyExpectedCallsWereMade);
}
This seems like a good initial design (ignoring the names of classes/members and the possibility of DI). Having said that, I'm not convinced that I wouldn't have come up with a design as good or better if I'd just relied on normal state-based testing. Mind you, at least with this approach I have come up with a list of required collaborators early, which is useful.
IList<DomainEventMessage> unprocessedMessages = new DomainEventMessageRepository().GetAll();
AccountEventMessageMapper mapper = DomainEventMessageMapperFactory.GetMapper(unprocessedMessages[0]);
XmlDocument xmlDocument = mapper.ConvertToXmlMessage(unprocessedMessages[0]);
new XmlMessageSender().Send(xmlDocument);
In this case the methods being called would probably be real implementations or, at worst, would use test stubs/spies. So I'd be thinking that CreateAndSaveMessage would create a real DomainEventMessage and save it to the database.
[Test]
public void We_process_change_X_to_Y()
{
    story
        .WithScenario("processing single event message")
        .Given("there is a single unprocessed message relating to an X", CreateAndSaveMessage)
        .When("we process the event messages", ProcessMessages)
        .Then("an appropriate Xml message is sent", AssertCorrectXmlDocumentProduced);
}
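To make the second approach concrete, here is a rough sketch of what the Given/Then helpers might look like. Everything beyond the names that appear in the test above is my assumption, including the idea of a spy on the message sender:

```csharp
// Hypothetical sketch of the state-based helpers (names not shown in the
// test above are invented for illustration).
private void CreateAndSaveMessage()
{
    // A real DomainEventMessage, saved through the real repository.
    DomainEventMessage message = CreateMessageForAccountActivation();
    new DomainEventMessageRepository().Save(message);
}

private void AssertCorrectXmlDocumentProduced()
{
    // Assumes the XmlMessageSender was replaced with a test spy that
    // records the last document it was asked to send.
    XmlDocument sent = _spyMessageSender.LastDocumentSent;
    Assert.IsNotNull(sent);
    Assert.IsTrue(sent.OuterXml.Contains("AccountActivation"));
}
```

The point is that the Given step exercises real persistence and the Then step checks observable state, rather than verifying a scripted conversation.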
Initially I might go for the first approach, writing a few mock/behavior-style tests, but mainly writing state-based ones.
Posted by Colin Jack at 2/20/2008 1 comments
Labels: Agile, BDD, Design, Unit Testing
Although I have now begun to play with writing high-level specifications with NBehave, I must admit that I'm not getting much further with understanding BDD in general.
I thought I should put my questions into a blog post; I guess it's more of a brain dump than anything else though.
Is BDD just better TDD?
It's tempting to view BDD as just doing TDD well. In reality, though, there are lots of ways to practice TDD, while BDD is slightly more specific, such as in the way it drives for an outside-in approach.
Having said that, I think you can use BDD without buying into the whole Mock Roles, Not Objects approach, but it is interesting that BDD is pushing that style of working. I've tried it out and it can have advantages, but unsurprisingly it also has some serious problems.
Outside In?
Outside-in can mean many things, but I like the definition from xUnit Test Patterns. Anyway, even if you do practice outside-in you don't necessarily buy fully into the approach where you start with the GUI (or slightly below it) and mock your way out to all your collaborators (as described on Wikipedia or in Mock Roles, Not Objects).
If you practice DDD then you probably focus on the domain early, often using classicist-style testing practices (I've blogged about this before). If you do use mocking it's probably when testing services, and you're probably not using it to define ISP-compliant role interfaces (or maybe you are; if so, I'd be interested to hear whether it's working).
I think you can also find value in outside-in testing that starts instead from the public interface to the domain/application. You can start with a high-level test and then use TDD for the details until all the code is written, as discussed here. That doesn't preclude you using Test Spies or Stubs where appropriate, but it does mean that you don't need to jump directly into mocking, which can lead to fragile tests.
If you do believe in mockist-style testing, defining collaborations and going from there, then the high-level BDD tests are a good place to do it, because that's where you will be thinking of the high-level interactions between entities/services/factories and so on. Showing those interactions in the tests could have some value, though whether you extract role interfaces is another issue.
Having said all this, I do think you need to be thinking about/defining the GUI at the same time as working on the domain model to avoid overcomplication. For example, you may implement a complex object hierarchy in the domain when, for this version of the software, something simple would have done. I've been bitten by this before, but I also think that starting from the GUI and working downwards is not the way to define a domain model. In my experience you usually need to do both GUI-based work and domain work early on.
There are smart people who use outside-in testing in ways that I have no experience of (see this TDD thread which was a real eye opener for me, emphasizing just how differently people tackle testing/design).
High Level or All Levels?
For me the most exciting idea is writing high level BDD tests for two primary reasons:
listStory
    .AsA("developer")
    .IWant("my list to behave correctly when items are added")
    .SoThat("I can use it in my software");
This obviously isn't how you'd do it, but I'd be thinking that if you do use BDD at this low level then you are perhaps better off just writing BDD tests without using something like NBehave (as shown by ListTest on the Wikipedia article and by Jimmy Bogard in Converting tests to specs is a bad idea).
As far as I can see there is absolutely no consistency in what the different sources of BDD mean when they talk about the ubiquitous language. I also think that taking the term ubiquitous language and using it outside of the context that DDD provides is unnecessarily confusing.
So what if the specifications are written in the ubiquitous language?
NOTE: This discussion only really applies to the higher-level tests against your domain model; I don't think lower-level (implementation detail) or infrastructure tests are going to be written in the ubiquitous language.
As discussed before according to behaviour-driven.org BDD:
...aims to help focus development on the delivery of prioritised, verifiable business value by providing a common vocabulary (also referred to as a UbiquitousLanguage) that spans the divide between Business and Technology."...
This sounds good, and if I'm writing the tests then I will use the ubiquitous language. However, if I'm doing BDD properly then I'll have others involved. That's the way I see it for user stories, and offhand I can't think why this wouldn't apply equally to BDD. So I'm not seeing BDD affecting our ubiquitous language all that much; I think instead that when writing user stories we should aim to use the language of the users.
Obviously if your domain experts are involved in writing the BDD specifications then using the ubiquitous language will be more attractive.
Anyway Greg Young has two superb posts on this namely BDD and the Shared Language and BDD and the Shared Language: The Stakeholder.
So In Conclusion...
BDD is many things to many people and although some people are trying to tie it down I'm not sure it will work.
Unfortunately although I think it is positive that there are so many ways to describe BDD I do find that in some cases (ubiquitous language) it is unnecessarily confusing.
I just had a situation today that emphasized to me why TypeMock can be so useful.
Design
We have an IDomainChangeTrackingService interface in a domain assembly, implemented by an infrastructure service class (DomainChangeTrackingService) in another assembly. We use DI with a Service Locator to get the implementation of the service into the domain.
Initially the DomainChangeTrackingService just persists the DomainChangeMessages passed to it, so all it does is call out to the DomainChangeMessageRepository:
public class DomainChangeTrackingService : IDomainChangeTrackingService
{
    public void ProcessMessage(DomainChangeMessage message)
    {
        new DomainChangeMessageRepository().Save(message);
    }
}
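The payoff is that you can still verify the interaction even though the repository is created inside the method with `new`. A rough sketch of such a test, reusing the TypeMock `MockManager` calls from the NBehave test earlier (the test name and the use of `MockManager.Verify` are my assumptions, not from the post):

```csharp
// Hypothetical sketch: mock the repository without injecting it, using
// the same community-edition TypeMock API as the earlier example.
[Test]
public void ProcessMessage_saves_the_message()
{
    Mock mockRepository = MockManager.Mock(typeof(DomainChangeMessageRepository));

    DomainChangeMessage message = new DomainChangeMessage();
    mockRepository.ExpectCall("Save").Args(message);

    // The service news up the repository internally; TypeMock intercepts it.
    new DomainChangeTrackingService().ProcessMessage(message);

    MockManager.Verify();
}
```

No constructor injection or interface extraction is needed here, which is exactly why TypeMock is handy in this design.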
Posted by Colin Jack at 2/15/2008 5 comments
Greg Young has a great post titled Mocks are a code smell.
As he explains, the title is there to grab your attention. Although he does see mocking being overused, which is definitely my view too and seemed to be the general feeling at the mocking session at ALT.NET UK, the post itself covers a very interesting way of handling communication within the domain.
To be honest, Greg's ideas and implementations of this pattern are more advanced than mine, and I know he uses it a lot more in his designs than I do, so I'm looking forward to reading his other posts on this topic.
Trying It Out - Start Simple
If you're daunted by the idea of going to a messaging approach then you could always start simple.
As an example, I would say that what Greg is suggesting is just a more advanced version of the approach that I intend to use for dirty tracking within our domain: messages are generated when domain events happen, and these are registered with a service that you get from a service locator.
This makes testing simple as you can just use a test spy (same approach as Greg seems to be using) but is also a design that I like in terms of lowering coupling.
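A minimal sketch of that simple starting point, using the interface and message names from the post above but with everything else (the locator, the Account class, the spy) invented for illustration:

```csharp
using System.Collections.Generic;

// Illustrative sketch only: a domain change message, a tracking service
// resolved through a naive service locator, and a test spy.
public class DomainChangeMessage
{
    public string Description;

    public DomainChangeMessage(string description)
    {
        Description = description;
    }
}

public interface IDomainChangeTrackingService
{
    void ProcessMessage(DomainChangeMessage message);
}

// Naive locator, just enough for the sketch.
public static class ServiceLocator
{
    public static IDomainChangeTrackingService ChangeTracking;
}

public class Account
{
    private string _name;

    public void Rename(string newName)
    {
        _name = newName;
        // Domain event: report the change rather than setting a dirty flag.
        ServiceLocator.ChangeTracking.ProcessMessage(
            new DomainChangeMessage("Account renamed to " + newName));
    }
}

// In tests the spy stands in for the real service and records messages.
public class ChangeTrackingSpy : IDomainChangeTrackingService
{
    public readonly List<DomainChangeMessage> Received = new List<DomainChangeMessage>();

    public void ProcessMessage(DomainChangeMessage message)
    {
        Received.Add(message);
    }
}
```

The domain class stays free of infrastructure except for the one call to the locator, and a test can inspect `Received` instead of setting up expectations.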
Posted by Colin Jack at 2/13/2008 2 comments
Labels: ALT.NET, AOP, DDD, Design, Unit Testing
The AMC affecting design thread on ALT.NET has triggered a few other threads. Ayende and Jeremy Miller have their own posts discussing why your business code should be ignorant of IoC, something I fully agree with.
Posted by Colin Jack at 2/13/2008 2 comments
Labels: Design
There was an interesting thread on the ALT.NET forum called "AMC: Changes to the way we think".
Now I don't use the auto-mocking container so it hasn't changed the way I think but I did want to comment on some of the ideas on the thread.
Should We See All Dependencies In The Constructor?
A lot of people seem to think that seeing a class's dependencies in its constructor is important, the argument being that seeing this tells you a lot about the design of the class.
This is a compelling argument, and to some extent I agree with it, but it misses one key point: not all dependencies are created equal. A dependency on, for example, a domain service or a repository tells me more about how a class works than a dependency on a logging/dirty tracking service. The first is a meaningful part of the design; the second is just an aspect of the implementation.
You could also say that seeing all of a class's dependencies doesn't necessarily tell you much about how it behaves; to know that you need to see its own behavior and the way it uses those dependencies.
I'd also say that this is a case where people argue about the improvements in design when in some cases we are only doing what we are doing to fit in with the implementation constraints of the tools we use. We need to pass the dependencies in, so we use constructor injection; to mock we need interfaces (or virtual members), so we end up injecting interfaces. The end result is very decoupled, but is it useful decoupling? If we started from scratch and did the simplest thing that could work (YAGNI), would this be the design we'd come up with? Probably not...
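To illustrate "not all dependencies are created equal", a small hypothetical example (all names invented): both dependencies appear in the constructor, but only one really tells you about the design.

```csharp
// Hypothetical example: two constructor-injected dependencies that are
// not created equal.
public interface IOrderRepository { }
public interface ILogger { }

public class OrderCancellationService
{
    private readonly IOrderRepository _orders; // meaningful: tells you what the class works with
    private readonly ILogger _logger;          // incidental: tells you almost nothing about behavior

    public OrderCancellationService(IOrderRepository orders, ILogger logger)
    {
        _orders = orders;
        _logger = logger;
    }
}
```

Reading the constructor, the repository hints at real design intent, while the logger is the kind of aspect-of-implementation dependency the paragraph above is talking about.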
AOP
On dependencies from the domain: Ayende indicates that he prefers that his domain classes don't depend on non-domain services.
I buy into this too, but for things like dirty tracking of domain classes it can get difficult, and this is where AOP can prove useful. In those cases your domain classes might have a run-time dependency on non-domain services, but I think this is perfectly acceptable. We can also probably use test spies for these sorts of dependencies, which makes for convenient testing.
Testing
To me the auto-mocking container is a good idea but two things worry me about it.
The first is that it couples your tests to your IoC container which, back when I first read about IoC, was seen as a bad idea. I guess you can put up with this though.
The second issue I have with it is that it hides the dependencies that aren't important to the test. It seems like we've exposed the dependencies in the constructor to allow IoC and to allow replacing them in tests. However this makes the tests harder to read so we introduce a component to fix that issue. It just seems like it might be worth taking a step back and re-evaluating before you use the AMC, after doing this you might want to go ahead and use it of course :)
"Bad Designs" Can Work
So our domain classes have few dependencies other than on other domain classes (not on any services outside the domain). Where there are dependencies, they are through an interface to the service, and you get the service from a Service Locator.
However, we have layers above this, including a coordination-style layer that talks to repositories and the rest.
So how do we get the repositories and infrastructure services into the services in the coordination layer? The choices would be:
Which do we do? None of them. The service methods are all static, and when one of these domain coordination services needs a repository/infrastructure service it just creates it.
It's not as decoupled as we could make it, but it's clear and simple, and the layer in question is quite thin. If we needed to decouple we could without much effort, so I'm quite comfortable with the design we have.
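As a sketch of that style (all names invented, persistence elided), the coordination method is static and simply creates the repository it needs rather than having it injected:

```csharp
// Hypothetical sketch of a thin, static coordination layer. No DI and no
// Service Locator: the method just creates its own dependencies.
public class Account
{
    public bool Active { get; private set; }

    public Account() { Active = true; }

    public void Deactivate() { Active = false; }
}

public class AccountRepository
{
    public Account GetById(int id) { return new Account(); } // stubbed for the sketch
    public void Save(Account account) { /* persistence elided */ }
}

public static class AccountCoordinationService
{
    public static void DeactivateAccount(int accountId)
    {
        AccountRepository repository = new AccountRepository(); // created, not injected
        Account account = repository.GetById(accountId);
        account.Deactivate();
        repository.Save(account);
    }
}
```

If the layer ever did need decoupling, the `new AccountRepository()` line is the only thing that would have to change.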
What I'm trying to say is that I sometimes think people take things too far and that sometimes you can couple things safely. Maybe tomorrow you will need to rethink, but maybe not.
This doesn't mean that I don't rate IoC/DI or decoupling in general, I do. However I like to be able to decide for myself how far to take it.
Coupling Code To IoC
Posted by Colin Jack at 2/12/2008 0 comments
Labels: ALT.NET, Dependency Injection, Design, Unit Testing
We're starting to do a lot of mapping between domain classes and other forms, so far mainly so that we can export representations of our domain objects to external systems.
Performing the mapping quickly becomes a real pain and testing the mappings is even worse.
I don't think there is much you can do about the dullness of the testing, but I have been looking for a framework that will make the mapping easier and on an ALT.NET thread on the topic someone suggested I look at a little library called Otis.
So far I'm very impressed, so here's what I've found. If you want to know more, download the binaries or source code; the advantage of the source code is that it comes with a sample. The wiki also has good information, but I wanted to write down what I've found so far, mainly to remind myself.
Mapping Files
I think I'm going to use the XML file approach as it's cleaner. I've named the files "*.otis.xml" and made them "Embedded Resources". I also set up the XSD that you get with the binaries to give me IntelliSense, which is very useful.
Let's look at a simple example. This shows the following:
As you can see, the mapping is written from the standpoint of the source class; once you get this it's quite easy to follow.
You can read more about the mappings here but the sample with the source code is also good.
Configuration
To get the sample to work I had to use the following C# code, which allows me to do a one-way mapping from the UserEntity to the UserDTO:
Configuration cfg = new Configuration();
cfg.AddAssemblyResources(Assembly.GetExecutingAssembly(), "otis.xml");
IAssembler<UserDTO, UserEntity> dtoFromEntityAssembler = cfg.GetAssembler<UserDTO, UserEntity>();

UserEntity entity = new UserEntity(5, "Bob Dole", "bdole");
entity.Advisor.Name = "sdaddds232";
UserDTO dto = dtoFromEntityAssembler.AssembleFrom(entity);
Posted by Colin Jack at 2/06/2008 1 comments
Labels: DDD, Design, Object-Object Mapping
Finally got round to reading Mats Helander's InfoQ article Aspects of Domain Model Management.
The article is very long, with a lot of the content being about hacky solutions to the problems that the author is dealing with (attaching non-business logic to the domain).
I read all of the article but if you get bored you can skip most of the middle bit right up to Using "Aspect Oriented Programming".
The bit on proxies is interesting, if you're using NHibernate then you are already using the Infrastructural Proxy Subclass approach for lazy-loading (and collections) so you should be familiar with it. However trying to use that approach to handle your own requirements is not going to be clean or transparent.
Back to AOP. The author uses his own NAspect framework for run-time AOP; however, we were thinking of using PostSharp, as compile-time weaving seemed good enough and it is more transparent. Having to use an abstract factory, or just factories, for all object creation does not appeal though.
The attribute-based approach is cool though, and it's similar to the way you attach behavior with PostSharp. All in all I think I'm going to plow on with PostSharp and then see how it goes (as described in my post AOP and the Domain - Dirty Tracking).
I blogged about this a while ago, but we've been trying to introduce a user-story-based requirements process; our user stories would be stored in TFS and the users would access them using the Web Access Power Tool (previously called TeamPlain).
Sounds good, especially as the tool is free. The problem is that every single user who wants to use the web portal to access TFS is going to have to have a CAL. I've complained about this before, and even logged a bug with Microsoft about it, but it really doesn't seem like they understand how big a problem this is.
Given the situation, I would seriously question whether any company should consider using Team System unless they are also prepared to hand over large sums of money to get their users involved. If you decide not to use TFS for artifacts such as user stories then it's really just an expensive source control system.
Posted by Colin Jack at 2/05/2008 0 comments
Labels: Agile, Team System
Jimmy Bogard, who is becoming one of my favorite bloggers, has two excellent posts on Scrummerfall here and here. If you are "doing agile" but finding it's only really changing the development team then they may be relevant.
To be honest we don't do Scrummerfall, we didn't start out with Scrum.
Posted by Colin Jack at 2/05/2008 0 comments
Labels: Agile
I thought I'd blog about some of the topics that were discussed at ALT.NET UK, or more correctly my views on those topics.
Styles Of Testing
I've blogged about the fact that I tend not to use interaction testing much. I find interaction testing can result in overspecified software, and sometimes in tests that are hard to read. To be fair, some of this could be countered by better use of mocking, for example differentiating between mocks and stubs (or Stubs and Expectations, if that's the terminology you prefer), but state-based testing is still my preferred approach, especially for the domain model.
In the discussion of mocking at ALT.NET UK it seemed that most of the attendees agreed that we are now overusing mocking. Whilst there was little disagreement with the idea of stubbing/mocking in some situations (e.g. between layers or domain modules), most people did seem to think it was dangerous if not used carefully.
Ian Cooper has blogged about this topic too.
Granularity
What granularity to test at is something that constantly irks me. To some people unit testing always means testing a single class; I've always thought that you can use unit tests for (small) groups of classes. In particular, I do this when I have a helper class that I've extracted out of the class I was previously testing.
There are advantages and disadvantages to testing groups of closely related classes together though. The main advantage (especially early in design) is that the tests can withstand refactoring, and the main disadvantages are that they can be more complex and don't have such good defect localization.
Anyway, Ian Cooper's blog entry sums up my views on this entire topic. We also discussed whether to move/copy the tests down to the extracted class when you use the extract class refactoring.
I'm totally inconsistent on this, I sometimes test against the extracted class and sometimes leave the tests at the level of the class I extracted it from.
Design For Test or Design For Design
The topic of design for testability came up a lot, though not in the way you might expect!
I've never believed design for testability is necessarily a good idea; I've blogged about this topic in a couple of posts before, including here and here.
I don't believe there is any good substitute for thinking about your design. Although designing for testability, especially if you favor lots of mocking using a traditional mocking tool, does bring lots of decoupling, it's not necessarily as good as focused decoupling.
Anyway, I knew Roy Osherove was no longer recommending designing for testability and so I was expecting a lot of disagreement around this topic. In actual fact there wasn't much.
End To End Testing
Posted by Colin Jack at 2/04/2008 2 comments
Labels: ALT.NET, AltNetUK, BDD, Unit Testing
Nick Hines from Thoughtworks led a really good session on BDD at ALT.NET UK. The session clarified a lot of things for me, though as always after a bit of thought I am left with plenty more questions so I thought a blog entry was in order.
What Is BDD
My initial experience with BDD was Dave Astels doing a videocast about it. Better TDD and focusing on design were the name of the game. This made sense, but when I started looking into it more I got a bit lost. The documentation seems to focus on the influence of DDD and the use of interaction testing. This confused me a bit, as unless you practice need-driven development it's questionable whether you are using interaction testing as a domain design technique. I was also confused as to how BDD fits in with DDD, other than both sharing the idea of a ubiquitous language.
Anyway, the discussion clarified that it's fair to view BDD as better TDD and that it has little to do with need-driven development (though I guess you could use them together). It also doesn't specify that you must be using interaction testing.
Where to use BDD
We also came to the conclusion that these tests are quite high level, more influenced by users (user story acceptance tests) or a business analyst. I'm thus not clear that they would influence the domain design other than at a shallow level, as your users are normally not your domain experts. This is probably what Greg Young means when he talks about BDD being used for a shared language, which may be distinct from the ubiquitous language.
Practically, this probably means writing these tests against high-level domain entities or your application/service layer. The discussion did cover whether you then take the BDD tests into the domain, and indeed right down to hidden implementation details. I'm still not clear on this, but if we are using BDD tests as specifications then starting out with detailed high-level specifications isn't a bad idea; it gives us a good way of specifying acceptance criteria.
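As a sketch, such a high-level specification against the application/service layer might look like the following, using the same NBehave fluent API as the earlier examples (the scenario wording and handler names are invented):

```csharp
// Hypothetical high-level scenario against the application layer, in the
// NBehave style used earlier in these posts.
story
    .WithScenario("customer cancels a pending order")
    .Given("a customer with a pending order", CreatePendingOrderForCustomer)
    .When("the customer cancels the order", CancelOrder)
    .Then("the order is marked as cancelled", AssertOrderMarkedAsCancelled);
```

Nothing here reaches into hidden implementation details; the Given/When/Then handlers only touch the public surface of the application.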
More Questions
The question then is how detailed to make these high-level BDD tests. They have to be quite detailed and comprehensive if they are acceptance tests, but if you make them too comprehensive then you are going to end up testing the same things at multiple levels, initially with a high-level BDD test and then with more detailed "unit" tests.
One way around this seems attractive to me: Jimmy Bogard has a great post about how he tried to make sure his BDD tests are somewhat immune to refactoring (changing the implementation).
Other Links
Ray Houston is one of many people blogging about learning BDD, including this really interesting post.
Posted by Colin Jack at 2/04/2008 5 comments
Labels: ALT.NET, AltNetUK, BDD, Unit Testing
At the recent discussion of mocking at ALT.NET UK we discussed using test spies.
We actually make good use of test spies now and they have some advantages. Let me give you an example of where they come in handy. Our domain classes occasionally need to contact external services; the interfaces implemented by these services are put in the domain (Separated Interface) with the implementations made available to the domain using a Service Locator.
What to do when testing, though? We have maybe 1% of the tests where we need to set up a mock version of the service, and the rest don't care about the service so they can run with a do-nothing stub.
However, if we register a mock version of the service with the Service Locator and forget to clean up properly, then that mock service will affect other tests. It's also painful having to put the stub service back in after you are done using the mock.
The solution, suggested by one of my colleagues, is simple: we don't ever mock the service but instead use a test spy. We register the test spy with the Service Locator in a method tagged with AssemblyInitializeAttribute so it gets run before any tests in that assembly.
All the normal tests run normally, and if they in some way cause the system to interact with the test spy then the interaction happens totally silently, for example maybe the test spy logs the interaction by adding an item to an internal collection.
So what about the 1% of tests that really want to test against the test spy (the tests that might otherwise have used a mock)? Well, in the test fixture initialization we reset the test spy, then at the end of each test we ask the test spy what calls it received and verify that they were what we expected.
This works a treat: a really simple solution that makes the tests very easy to write.
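A sketch of the scheme described above. The service interface, the locator, and all names are invented for illustration; the attributes are the standard MSTest ones the post mentions (an `[AssemblyInitialize]` method must be static, live in a `[TestClass]`, and take a `TestContext`):

```csharp
using System;
using System.Collections.Generic;

public interface IExternalService
{
    void Notify(string message);
}

// Naive locator, just enough for the sketch.
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> Services = new Dictionary<Type, object>();

    public static void Register<T>(T implementation) { Services[typeof(T)] = implementation; }
    public static T Resolve<T>() { return (T)Services[typeof(T)]; }
}

// The spy records every interaction silently instead of complaining.
public class ServiceSpy : IExternalService
{
    private readonly List<string> _calls = new List<string>();

    public void Notify(string message) { _calls.Add(message); }

    public IList<string> ReceivedCalls { get { return _calls; } }

    public void Reset() { _calls.Clear(); }
}

[TestClass]
public class TestAssemblySetup
{
    public static readonly ServiceSpy Spy = new ServiceSpy();

    // Runs once, before any test in the assembly, so every test sees the
    // spy rather than a real (or mock) service.
    [AssemblyInitialize]
    public static void RegisterSpy(TestContext context)
    {
        ServiceLocator.Register<IExternalService>(Spy);
    }
}
```

The 99% of tests never notice the spy; the 1% that care call `Reset()` in their fixture setup and assert on `ReceivedCalls` at the end.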
NOTE - When to call services from the domain
The comment from Andreas made me realize that I didn't say when I think a domain class should talk to a service.
Normally I avoid these sorts of dependencies in order to keep the domain code clean, simple and easily testable. However, for cross-cutting concerns like logging/dirty tracking, an AOP-based approach (see my PostSharp posts) where we introduce the code that calls the service is very clean.
In particular this works because in those cases we can have the domain class contact the (infrastructure) service but we don't care about any return values, hence the applicability of a Test Spy.
Posted by Colin Jack at 2/02/2008 2 comments
Labels: DDD, Interaction Testing, Mocking, TDD, Unit Testing