It's a pretty common rule of thumb to avoid monoliths and write more small classes. Think about the SOLID principles.

  • Single Responsibility: one responsibility per class. That implies less in each class. Smaller.
  • Open/Closed: open for extension, closed for modification. Pull out the things that will change. Smaller.
  • Liskov Substitution: substitutability is the way to go, and it's easier to guarantee when your class is smaller.
  • Interface Segregation: smaller interfaces.
  • Dependency Inversion: dependencies being passed in means less surface area accessible or exposed. Smaller.

OK, some of that might be a stretch, but as a stand-alone rule of thumb, "more small classes" is a key tenet that helps you meet most, if not all, of the above principles.

Here is an example from work today that I think shows the simplicity of this principle, which is so often missed or overlooked.

The Problem

We have a class FooService. This class is responsible for getting us some Foo. (I really need to find a picture of what a Foo actually looks like; the internet thinks it looks like Dave Grohl.)

public class FooService : IFooService
{
    public Foo GetFoo(CustomerId id)
    {
        Foo foo;
        // ... actual implementation logic populates foo
        return foo;
    }
}

The implementation is not important. It might be a database call, a REST call, a reading from a hardware device, whatever.
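Purely for illustration, here is one shape that implementation could take. DatabaseFooService and IFooRepository are hypothetical names I'm inventing here, not part of the real code; the point is only that callers never see past IFooService.

public interface IFooRepository
{
    // Hypothetical data-access abstraction, standing in for the "actual implementation logic"
    Foo FindByCustomer(CustomerId id);
}

public class DatabaseFooService : IFooService
{
    private readonly IFooRepository _repository;

    public DatabaseFooService(IFooRepository repository)
    {
        _repository = repository;
    }

    public Foo GetFoo(CustomerId id)
    {
        // Could just as easily be a REST call or a hardware read; consumers don't care.
        return _repository.FindByCustomer(id);
    }
}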

Now we have a requirement: we are calling the service from several places (properly injected and scoped, of course), but we realise we are calling it several times per request scope (let's assume this is MVC). We want to reuse the result across the request scope, but no wider.
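To make the problem concrete, imagine two consumers somewhere down those call paths (the class names here are invented for illustration). Both receive IFooService via constructor injection and both call GetFoo for the same customer while handling a single request:

public class OrderSummaryBuilder
{
    private readonly IFooService _fooService;

    public OrderSummaryBuilder(IFooService fooService)
    {
        _fooService = fooService;
    }

    public string Build(CustomerId id)
    {
        Foo foo = _fooService.GetFoo(id); // first lookup in this request
        return foo.ToString();
    }
}

public class PricingCalculator
{
    private readonly IFooService _fooService;

    public PricingCalculator(IFooService fooService)
    {
        _fooService = fooService;
    }

    public string Describe(CustomerId id)
    {
        Foo foo = _fooService.GetFoo(id); // same lookup, same request, done again
        return foo.ToString();
    }
}

Without some form of reuse, each of those calls pays the full cost of the underlying lookup.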

Solution Decision

One thing we could do is make the call at the start of the scope and pass the result along to wherever it is needed. The problem is that we are a modular system, with 4-5 calls between us and the place the value is needed. They are also down two different use-case paths, and everything is constructor injected. Let's try another approach: caching.

Caching is a great way to reuse a result temporarily. In fact, I would argue that's exactly what it's for. So let's start with the naive version.

Solution Implementation: the wrong way

Let's add caching to our class. It should be fairly simple, something like this:

public class FooService : IFooService
{
    private readonly Dictionary<CustomerId, Foo> _cache = new Dictionary<CustomerId, Foo>();

    public Foo GetFoo(CustomerId id)
    {
        Foo foo;

        if (_cache.ContainsKey(id))
        {
            return _cache[id];
        }

        // ... actual implementation logic populates foo

        _cache[id] = foo;

        return foo;
    }
}

Looks fine, right? Sure thing, but we should take care of the scope in our container so it gets correctly set to request scope.

Bind<IFooService>().To<FooService>().InRequestScope();

Something resembling a container setup. Cool.

Why is this the wrong approach? I have two specific issues with this code.

  • Our class now has two concerns: retrieving and caching.
  • We have split two parts of our cache configuration, the Where and the Scope, across the system.

The first issue is the main point of this article. Subtle violations occur without us even realising it (I'm as guilty of this as anyone). We need to constantly ask ourselves what the concerns of a class are, to slowly build up a knowledge of known cases to be on the lookout for. This class has two concerns. We can separate them, and I'll show how easy that is to do next.

The second is almost more subtle, though. Here, we are spreading the same concern, caching, across different places. Because the cache logic is internal, you can't see it from where the service is used. That is fine, since it is not your concern at the call site. But to configure it, you rely on your IoC container being set up correctly. It is not clear from the cache code what the lifetime is, and not clear from the lifetime that it relates to caching. We need to bring these two pieces of information closer together. Luckily, by pulling the cache concern out of the service, we get that almost for free. Let's see how.

Solution Implementation: the right way

We start by going back to our original implementation.

public class FooService : IFooService
{
    public Foo GetFoo(CustomerId id)
    {
        Foo foo;
        // ... actual implementation logic populates foo
        return foo;
    }
}

Instead of writing the caching inside the existing class, we create a new wrapper class.

public class FooServiceCacheWrapper : IFooService
{
    private readonly IFooService _actual;
    private readonly Dictionary<CustomerId, Foo> _cache = new Dictionary<CustomerId, Foo>();

    public FooServiceCacheWrapper(IFooService actual)
    {
        _actual = actual;
    }

    public Foo GetFoo(CustomerId id)
    {
        Foo foo;

        if (_cache.ContainsKey(id))
        {
            return _cache[id];
        }

        foo = _actual.GetFoo(id);

        _cache[id] = foo;

        return foo;
    }
}

There are three key points to note in this implementation. First, it adds almost exactly the same code as we were going to add anyway (no extra work was required to go this alternate route). Second, we didn't have to change the existing code to do it. And third, we are implementing the same interface as the original and wrapping a caching layer around it. This is almost AOP, and is making use of the Proxy pattern.

The second piece of the solution is in our IoC config:

Bind<IFooService>()
    .To<FooServiceCacheWrapper>()
    .UsingConstructorParameter<FooService>()
    .InRequestScope();

or

Bind<IFooService>()
    .To(c => new FooServiceCacheWrapper(c.Resolve<FooService>()))
    .InRequestScope();

The syntaxes are made up, but the concept is there, and hopefully one looks familiar enough to implement in your IoC of choice. We resolve our existing interface to our new wrapper class instead, and part of resolving the wrapper is supplying the concrete service that does the actual work. This is a simplistic approach, similar in spirit to the Chain of Responsibility pattern. We are now composing our classes together to achieve the desired result.
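If the made-up container syntax is distracting, the same composition written out by hand (ignoring lifetime management) is just nesting one object inside the other. FooServiceFactory is simply an illustrative name, not something from the real codebase:

public static class FooServiceFactory
{
    // Manual wiring of the same chain, no container involved: the wrapper owns
    // the cache, the inner FooService owns the actual retrieval. A container's
    // request scope is what ties this composed object's lifetime to one request.
    public static IFooService Create()
    {
        return new FooServiceCacheWrapper(new FooService());
    }
}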

As a bonus, the place where we configure the caching around the service is now also where we define the scope of the cache class. Much nicer to see these two concepts in the same place, don't you think?

Hopefully this opens your eyes and starts you thinking. Keep looking out for places where you have been ignoring the Single Responsibility Principle, and make more small classes.