I've gone through the usual evolution when it comes to writing tests. At first there was no time, so I didn't bother. This was mostly down to the fact that I was building demo apps and samples that got less use than the time it took to build them, working with senior devs who had made their name building said apps.

The next stage is reluctant acceptance. The team I was put on wrote tests, so I guessed I should too. In fact, it was more of a requirement, with code coverage metrics driving how many tests we wrote. The tests written at this stage were horrible: too many dependencies, lots of mocks, lots of setup, and code duplication. Very fragile tests; some even depended on databases, real-world time, all that crazy stuff.

Finally, after a while, the penny dropped and testing started to make sense. Some of the big lessons that had to be learned were the following:

  • Tests need to assert one atomic thing.
  • Test names need to explain what is being tested.
  • Arrange, Act, Assert.
  • Tests are code. Refactor them, remove duplication.
  • Unit Tests have no external dependencies.
  • Integration Tests complement unit tests.
  • User Automation tests are valuable and catch different issues.
  • TDD is not about writing tests.

So let's take a look at some of these points in detail.

What is a 'unit' test?

At first, I thought our tests were too big; they seemed to be testing a lot in one test. Then I realised most of the code in the test class in question was mocking, setup and preparation. Why? Well, the main reason is that the object we are testing has several dependencies, and they all needed to be mocked and faked so that we could test this class as a unit.

For example, let's say we have a class that takes four services (which should be enough to make the point).

public class CustomerService {
    ...
    public CustomerService(IEmployeeRepository employeeRepository,
                           ICustomerRepository customerRepository,
                           ISecurityProvider securityProvider,
                           IInternationalisationProvider internationalisationProvider)
    {
    ...
    }

    public SaveResponse CreateNew(Customer newCustomer) {
    ...
    }
}

We want to test this method. With mocking, we might end up with a setup like this:

[TestInitialize]
public void Setup()
{
    _employeeRepositoryMock = new Mock<IEmployeeRepository>();
    _employeeRepositoryMock
        .Setup(er => er.FindEmployeeRecord(It.Is<Guid>(id => id == TEST_USER_ID)))
        .Returns(CreateTestCustomerEmployeeRecord());
    _employeeRepositoryMock
        .Setup(er => er.LoadEmployeeTypes())
        .Returns(CreateEmployeeTypeData());
    _employeeRepositoryMock
        .Setup(er => er.CreateNewEmployeeRecord(It.Is<Guid>(id => id == TEST_USER_ID), It.IsAny<CustomerEmployeeRecord>()))
        .Returns(CreateDefaultSuccessResultForCreateEmployeeRecord());

    _customerRepositoryMock = new Mock<ICustomerRepository>();
    _customerRepositoryMock
        .Setup(cr => cr.GetCustomer(It.Is<Guid>(id => id == TEST_USER_ID)))
        .Returns(CreateTestCustomer());
     ...        
    _securityProviderMock = new Mock<ISecurityProvider>();
     ...
    _internationalisationProviderMock = new Mock<IInternationalisationProvider>();
     ...
}

And you can imagine what else could crop up with more dependencies, not to mention the code that appears inside the tests themselves.

This is where the lesson is. Well, two lessons:

  • Try to minimise your dependencies so they are easier to mock, stub and fake.
  • If several classes are heavily dependent on each other, they might together make up a single unit.

The trick seems to be defining the size of testable units, and clearly defining the unit's dependency boundaries. Sometimes the bulk of the actual logic can be pulled out completely and tested in a more functional way, with data passed in and data returned. There are far fewer internal dependencies to mock, because the retrieval of data happens through the inputs of the function or class instead.
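As a sketch of that idea (the class and rule names here are invented for illustration, and it assumes NUnit), the validation rules can live in a pure static method that takes data in and returns data out, so the test needs no mocks, stubs or fakes at all:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical example: the rules are extracted into a pure function
// with no service dependencies, so testing it requires no mocking.
public static class CustomerValidator
{
    public static IReadOnlyList<string> Validate(string name, string email)
    {
        var warnings = new List<string>();
        if (string.IsNullOrWhiteSpace(name))
            warnings.Add("Name is required.");
        if (string.IsNullOrWhiteSpace(email) || !email.Contains("@"))
            warnings.Add("Email looks invalid.");
        return warnings;
    }
}

[TestFixture]
public class CustomerValidatorTests
{
    [Test]
    public void Validate_MissingName_ReturnsSingleWarning()
    {
        var warnings = CustomerValidator.Validate("", "jo@example.com");
        Assert.AreEqual(1, warnings.Count);
    }
}
```

The service class would then call `CustomerValidator.Validate` and act on the result, while the interesting logic stays trivially testable.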

The three A's

Tests need to be made up of three steps. Each step should ideally be only one or two actions.

Arrange: coordinate the setup for your test. This might be in the test method itself, creating an instance of the unit under test, or it might be in the constructor or a per-test setup method. Having a lot of setup for a test is a code smell.

Act: perform the task that is being tested. Maybe a static method, or a method on your unit under test. This should be a single atomic action, of the kind your system code would be expected to perform.

Assert: confirm that the state of play after acting is what was expected. Check that the result of the method has the correct state, or that the state of the unit under test has been modified correctly. Again, assert one thing. It might take multiple assert statements, but they should represent a single atomic state.

[Test]
public void CustomerService_CreateNewCalledWithValidCustomer_NoProblemsOccur()
{
    //Arrange
    var sut = CreateService();
    var newCustomer = GetValidCustomer();
    
    //Act
    var saveResponse = sut.CreateNew(newCustomer);
    
    //Assert
    Assert.True(saveResponse.Successful);
    Assert.Empty(saveResponse.Warnings);
}

Personally, I don't like having the comments in there, but spacing my test methods into the three sections makes it much clearer what is happening at each stage, and which part is the Act.

Test code is code

The quality of test code is appalling. Yes, that's an over-generalisation; I should be clearer and say that the quality of the code people write when they start writing tests is appalling. I'm guilty of it, and I've seen it with other people when they start out writing tests. Now, this could be a localised issue, related to the testing and mocking frameworks 'experienced' testers use in .NET and the inherent complexity they introduce. The other possibility, and the one I'm conscious of, is that people don't treat test code with the same respect they give the rest of the code in their application. Quality, standards and best practice go out the window, it seems, when they write their test code. Follow the rules and everything gets better fast. SOLID, anyone?
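One concrete way to apply this, sketched here with invented names: refactor repeated Arrange code into a shared test data builder, exactly as you would extract duplication from production code. Each test then reads as intent rather than setup:

```csharp
// Hypothetical test data builder: the duplicated object construction
// from many Arrange steps is refactored into one reusable, readable place.
public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }
}

public class CustomerBuilder
{
    // Sensible defaults, overridden only where a test cares.
    private string _name = "Default Name";
    private string _email = "default@example.com";

    public CustomerBuilder WithName(string name) { _name = name; return this; }
    public CustomerBuilder WithEmail(string email) { _email = email; return this; }
    public Customer Build() => new Customer { Name = _name, Email = _email };
}
```

A test using it only states what matters, e.g. `var customer = new CustomerBuilder().WithEmail("").Build();` before calling `sut.CreateNew(customer)`; the rest of the object is valid by default.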

Automated UI Testing

This is a whole topic in itself, so I will have to post something more in-depth about it later, but the punchline is that automated UI tests abstract functionality from implementation detail. Yes, we still couple to design decisions, but done right, UI tests can abstract over the design and still focus on the functionality. This makes these tests less fragile and less prone to change when implementations are refactored, something unit and integration tests often struggle with (albeit partly because of the way we architect that test code in the first place). Having these tests in place gives you the most bang for your buck, since we know the UI does (or doesn't, if the tests fail) allow the user to perform the operation or task the test runs through. This covers all the moving parts involved in that action, from client application code all the way to persistence. Yes, these are integration tests in a way, but their black-box nature gives them most of their value, not to mention they do exactly what the user does, not an approximation of it. Of course, you should still complement this with unit tests over logic and integration tests at other points in your system (APIs, modules/components, etc.).
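As a rough sketch of what such a test can look like (this assumes Selenium WebDriver with NUnit, and the URL and element IDs are invented for illustration), the test drives the UI exactly as a user would and asserts on what the user would see, with no reference to how the feature is implemented underneath:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class CreateCustomerUiTests
{
    [Test]
    public void CreateCustomerPage_ValidDetails_ShowsConfirmation()
    {
        // Hypothetical page and element IDs; the test only knows
        // what the user can see and do, not the implementation.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://localhost/customers/new");

            driver.FindElement(By.Id("name")).SendKeys("Ada Lovelace");
            driver.FindElement(By.Id("email")).SendKeys("ada@example.com");
            driver.FindElement(By.Id("save")).Click();

            var message = driver.FindElement(By.Id("confirmation")).Text;
            StringAssert.Contains("Customer created", message);
        }
    }
}
```

If the save button is rewired from one backend service to another, this test keeps passing so long as the user can still create a customer, which is exactly the abstraction over implementation detail described above.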

Conclusion

So what have I learned? Well, the main thing is that there is much more to learn. And like becoming a better developer in general, it's all about practising, observing and improving. Everyone takes their own journey, but the lessons learned take us all to a similar destination in the end: write more tests, and improve the quality of your code and applications.

Thanks @saramgsilva for suggesting I add some code samples to this post.