
In my consultancy practice, one of the things that I do most frequently is help teams write so-called “clean code.”  Usually, this orients around test-driven development (TDD) and writing code that is easily maintained via regression tests and risk-free refactoring.  Teams want to understand how to do this, and how to do it in their production code (as opposed to in some kind of toy “let’s build a calculator” exercise).

One of the most prominent early sticking points to rear its head tends to be application logging.  Why?  Unit testing is all about isolating objects, instantiating them, and rapidly verifying their behavior in memory.  Logging is all about dumping as much information as possible to a file (or a database, a service, etc., via appenders) from as many places as possible.  Unstoppable force, meet immovable object.

For instance, consider the following C# code from a “let’s build a calculator” exercise.

[TestMethod]
public void Adding_Numbers_Works()
{
    var target = new Calculator();

    var result = target.Add(2, 2);

    Assert.AreEqual<int>(4, result);
}

public class Calculator
{
    public int Add(int x, int y)
    {
        return x + y;
    }
}

Life here is good.  The calculator’s add method returns the sum of the integers, and we have a test method that supplies 2 and 2, and confirms a result of 4.  That’s a screaming fast, very clear test.  But, consider what happens if we amend our calculator to take advantage of logging.

public class Calculator
{
    public int Add(int x, int y)
    {
        Logger.Log($"Entering add method with parameters {x} and {y}.");
        Logger.Log($"Returning result of {x + y}");
        return x + y;
    }
}

Now, with that same test, everything goes wrong.  Instead of passing, it throws an exception about not having write access to some weird directory on your machine that you’ve never even heard of.  After digging a bit, you see that it’s the directory in which the test runner executes when running your solution’s tests.  Hmmm.

Unit tests and file I/O (or external system concerns in general) do not tend to play nicely together.  And, while you can hack your own environment to allow logging while your unit tests are running, it gets much dicier when your teammates also need to run these, and when the build machine does as well.  For this reason, I tend to steer developers away from a world in which their unit tests result in file I/O.

But that world can’t be one that eliminates logging.  Logging is far too valuable an auditing and troubleshooting tool to simply toss aside.  The question then becomes, how do you have both?  How do you have clean, unit-tested code and logging?  Well, there are a number of techniques that I’ll discuss.

Different Run Modes

Depending on language and framework, you may have the option of code that is selectively included at build/compile time.  In C#, it’s possible to decorate a method with the [Conditional] attribute, which tells the compiler to omit calls to that method unless a particular compilation symbol is defined.  Effectively, this gives you the option to turn the logging method into a no-op by default and to enable it for production deployments.
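
For a concrete sketch of the idea (the ENABLE_LOGGING symbol name and the log file path here are just placeholders for illustration), a conditional logging method might look like this:

using System;
using System.Diagnostics;
using System.IO;

public static class Logger
{
    // The compiler strips calls to this method from any build that does not
    // define the ENABLE_LOGGING symbol.  Note that a [Conditional] method
    // must return void.
    [Conditional("ENABLE_LOGGING")]
    public static void Log(string message)
    {
        File.AppendAllText("calculator.log", message + Environment.NewLine);
    }
}

With that in place, a test build that never defines ENABLE_LOGGING compiles the logging calls in Calculator.Add away to nothing, while a production build that defines the symbol keeps them.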

This is a pretty nifty way to sidestep testability issues.  When the unit tests are running, the calls to the logger just kind of magically disappear, but they exist in production.  On the downside, anything “magical” in your code can be confusing to team members, and I generally advise people to execute tests in an environment as close to production as possible.  Having flags in your codebase, however elegant, that say “if production, do this; otherwise, do that” subtly erodes the value of your tests.

Dependency Injection

If messing with compiler flags isn’t your thing (or isn’t possible in your codebase), you can always inject the dependency.  In the code snippet above, the logger is accessed through a static method (which is common).  This means there’s no avoiding it or mitigating the situation in a unit test.  But consider the following alternative snippet.

public class Calculator
{
    private readonly ILogger _logger;

    public Calculator(ILogger logger)
    {
        _logger = logger;
    }

    public int Add(int x, int y)
    {
        _logger.Log($"Entering add method with parameters {x} and {y}.");
        _logger.Log($"Returning result of {x + y}");
        return x + y;
    }
}

Here, the logger is injected into the calculator class, which gives the unit test method the ability to introduce a mock object.  Basically, since the logger is only an interface, we can supply an alternative implementation at test time that does not touch a file system.
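
To make that concrete, here is a minimal sketch.  It assumes the ILogger interface implied by the snippet above exposes a single Log(string) method, and the FakeLogger is purely illustrative; in practice you might generate the test double with a mocking library such as Moq instead.

public interface ILogger
{
    void Log(string message);
}

// A test double that satisfies the interface without touching the file system.
public class FakeLogger : ILogger
{
    public void Log(string message)
    {
        // Intentionally does nothing.
    }
}

[TestMethod]
public void Adding_Numbers_Works_With_Injected_Logger()
{
    var target = new Calculator(new FakeLogger());

    var result = target.Add(2, 2);

    Assert.AreEqual<int>(4, result);
}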

This is often the preferred approach, but it does have a notable downside: it tends to clutter the constructor of every object that needs logging.  Code that favors this approach can get kind of noisy, all for the sake of something that is not the primary focus of the object (and thus, one might argue, philosophically a violation of the Single Responsibility Principle (SRP)).

Derived Logger

Bob Martin, the author of the SRP, offers an innovative solution: create a derivative of Calculator called “LoggingCalculator.”

public class Calculator
{
    public virtual int Add(int x, int y)
    {
        return x + y;
    }
}

public class LoggingCalculator : Calculator
{
    private readonly ILogger _logger;

    public LoggingCalculator(ILogger logger)
    {
        _logger = logger;
    }

    public override int Add(int x, int y)
    {
        _logger.Log($"Entering add method with parameters {x} and {y}.");
        _logger.Log($"Returning result of {x + y}");
        return base.Add(x, y);
    }
}

With this approach, Calculator stays “pure” and (unit) testable.  And since LoggingCalculator is a derivative of Calculator, it can be swapped in anywhere Calculator is used.  All a LoggingCalculator (or a Logging* derivative of any class) does is add logging functionality, so that behavior can be tested elsewhere, such as in integration tests.

While this does remove the noise from the classes that you want to test and from a lot of your code, it does have the drawback of creating a lot of noise in the project as a whole.  You now have up to twice as many classes/files as you’d have otherwise had.
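
To make the swap concrete, the wiring at the application’s composition root might look something like this (the FileLogger type here is hypothetical; any ILogger implementation would do):

// Production wiring: anywhere a Calculator is expected, the logging
// derivative can stand in for it.
ILogger logger = new FileLogger();
Calculator calculator = new LoggingCalculator(logger);

// Logs entry and exit, then delegates the arithmetic to Calculator.
int sum = calculator.Add(2, 2);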

Aspect-Oriented Programming

If you really want to cut out as much boilerplate as possible, you can consider aspect-oriented programming (AOP).  AOP aims to consolidate all of the code for a cross-cutting concern (like logging) into a single location.  In essence, rather than having every method in your code base worry about logging, you have code in one place that handles the mechanics of logging and determines where in the rest of the code the logging should occur.

This is a powerful concept, but it is also heavy on black magic.  Generally, AOP is accomplished with a technique known as “weaving,” in which the AOP tooling transforms the code you’ve written, injecting the cross-cutting behavior into the code that actually ships.  If you’re not careful, this can produce some pretty complex bugs at runtime.
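
For a concrete flavor of the concept without getting into compile-time weaving, here is a minimal runtime-interception sketch using Castle DynamicProxy (a separate library); it pulls the logging for every intercepted call into one place.  Treat it as an illustrative sketch rather than a prescription.

using Castle.DynamicProxy;

public class LoggingInterceptor : IInterceptor
{
    private readonly ILogger _logger;

    public LoggingInterceptor(ILogger logger)
    {
        _logger = logger;
    }

    public void Intercept(IInvocation invocation)
    {
        _logger.Log($"Entering {invocation.Method.Name} with parameters {string.Join(", ", invocation.Arguments)}.");
        invocation.Proceed();
        _logger.Log($"Returning result of {invocation.ReturnValue}");
    }
}

// Wiring it up; only virtual members (like the Add method above) can be
// intercepted on a class proxy.  Here, 'logger' is any ILogger implementation.
var generator = new ProxyGenerator();
var calculator = generator.CreateClassProxy<Calculator>(new LoggingInterceptor(logger));
int sum = calculator.Add(2, 2);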

Judicious Logging

One last, more moderate point worth mentioning is that you can be judicious about where you log.  I’ve seen plenty of low-value logging code that is ubiquitous in an application and turns out to be pure noise on the operations side.  Think about logging only in certain modules of your application.  External interaction points are a great heuristic: log inbound requests and responses, or log queries to the database along with their results.
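
For example, rather than sprinkling logging through every class, you might log only in a thin wrapper around an external boundary.  The Order and IOrderRepository types below are hypothetical, purely to illustrate the shape of the idea.

public class Order
{
    public int Id { get; set; }
}

public interface IOrderRepository
{
    Order FindById(int id);
}

// Decorates the real repository and logs at the database boundary only.
public class LoggingOrderRepository : IOrderRepository
{
    private readonly IOrderRepository _inner;
    private readonly ILogger _logger;

    public LoggingOrderRepository(IOrderRepository inner, ILogger logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public Order FindById(int id)
    {
        _logger.Log($"Querying for order {id}.");
        var order = _inner.FindById(id);
        _logger.Log(order == null ? $"No order found for {id}." : $"Found order {id}.");
        return order;
    }
}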

It’s not as complete as exhaustive logging, but it can paint a pretty good picture of what’s happening.

Combine for the Win

The techniques I’ve mentioned are not mutually exclusive and they do not exist in a vacuum.  You should mix and match them as necessary and as suits your particular application, style, and preference.  You need logging for traceability, and you need to keep your code clean for maintainability.  With a little practice at these techniques, there’s no reason you can’t have both.