Functional purity is a valuable concept for writing maintainable code, though outside of functional programming languages like Haskell, it’s often treated like a nice-but-expensive luxury. But it turns out that pure functions that aren’t quite so pure can be cheap while still having concrete benefits for code in non-functional languages like C++, Java and Python. For D code, this is supported by the language itself, but there’s nothing D-specific about the overall idea.
“Functional Purity”?
For those new to functional purity, the basic idea is that a pure function in code is like a mathematical function. sin(x) just takes a value like π/2 and returns a value like 1. It doesn’t change the value of x. It doesn’t delete files from your hard disk. It doesn’t update the laws of trigonometry. It doesn’t keep any record of being called. The next time you call sin with π/2, you get the same value 1 again.
Haskell: (Almost) Perfect Purity
Haskell has a very strong stance on purity. It is possible to write functions that mutate data, but in normal, idiomatic Haskell code, all data is immutable and all functions just return values calculated directly from their arguments – just like sin(x).
This is a very strict way to program, but in return it gives a very strong guarantee: referential transparency. Take a little function that calls sin twice:
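```haskell
-- A toy example, made up for illustration
f :: Double -> Double
f x = sin x + 1 / sin x
```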
Because of Haskell’s referential transparency, this can be rewritten as
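```haskell
-- Equivalent, because sin x is guaranteed to be the same value both times
f :: Double -> Double
f x = let s = sin x in s + 1 / s
```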
This might seem trivial, but a compiler for another language couldn’t necessarily make the same optimisation. What if the programmer actually wrote a function that sends spam emails and called it “sin” out of a sense of guilt? If there’s no restriction on what side effects the sin function can have, the two code snippets could do very different things.
Purity for Not-So-Pure Languages
But what about other languages that don’t meet this ideal? Here’s where I risk sounding like a dieter hoping for a magic, 10-minute-miracle exercise gadget: even if you aren’t using a strict functional programming language, not only can you benefit from designing with pure functions, you can benefit even more from using weaker forms of purity as well.
“Weak purity” has been a concept in at least the D programming language for some years now. Like other languages, D has a pure keyword that can be used to mark a function as pure and enforce that purity at compile time. A function marked pure is essentially restricted to accessing its arguments, accessing other data only if it’s immutable, and returning values.
Nothing new so far. The big difference from the usual meaning of “pure”, though, is that a “pure” function in D can have a non-constant pointer or reference type as an argument, and it can modify that argument. Yes, it can mutate state, and this is according to the language spec. By contrast, if you try the same thing in C with GCC’s pure attribute extension, the code will compile without warning (at least in GCC 4.9), but the optimiser will generate bugs based on your false promise that the function is pure (by the usual definition).
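Here’s a tiny sketch of what that looks like in D (the function is made up, but it compiles as written):

```d
// Weakly pure by D's rules: it reads and writes nothing except what's
// reachable through its arguments, but the argument is a mutable slice,
// so it can mutate the caller's data.
void doubleAll(int[] values) pure
{
    foreach (ref v; values)
        v *= 2;
}
```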
But That’s Not Real Purity!
Well, you can get the usual (“strong”) purity guarantee just by making all pointer or reference type arguments const. (Update: return values matter, too – thanks ag0aep6g.) Otherwise, “weak” purity gives up referential transparency, but still leaves us with a very important feature:
All inputs and outputs of a weakly pure function (including any persistent state) are accessible from the call site.
This is a very simple but powerful idea. To be pedantic, it’s not even guaranteed for a strongly pure function, which can still return an object with all members marked private and no friends or equivalent, but let’s assume we design in the spirit of accessible inputs and outputs and look at some of the real-world benefits we get:
Testability
Many programmers feel guilty about not writing enough tests. But often the root problem is that they are writing code that isn’t testable in the first place. It’s obviously impossible to test a function if you can’t control the relevant inputs and can’t see the relevant outputs in your test environment. A well-known workaround is to create mocks and use dependency injection. For example, if code reads the time from the system clock, a “mock” clock object with the same interface will be written that just returns pre-recorded times when read. The code under test will be designed so that this mock can be injected somehow during testing.
In many real codebases, mocks seem to take over. Just about every class is made mockable, which means that all code paths are effectively runtime configurable. This adds complexity and makes things like debugging and code analysis harder. Also, to state the obvious, testing with mocks isn’t testing with the real thing, which makes tests weaker, so that bugs still happen even with high test “coverage”.
It helps to remember that mocking and dependency injection are just attempts to take untestable impure code and make it a bit more like pure code. Things like clocks and filesystems and networks are fundamentally impure, but trying to keep other parts of the codebase pure can help cut back on the amount of mocking needed, while making testing easier and more effective as a bonus. A surprisingly common anti-pattern is functions that do a lot of pure calculation and finish with an impure action – for example, calculating statistics and then writing them to a file. Simply splitting the pure work into its own function instantly makes it testable.
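For the statistics example, the split might look something like this in D (all names invented for illustration):

```d
import std.algorithm : sum;
import std.stdio : File;

struct Stats { double mean; double total; }

// The pure part: easy to call from a unit test with hand-made data.
Stats calculateStats(const double[] samples) pure
{
    immutable total = samples.sum;
    return Stats(total / samples.length, total);
}

// The impure part: a thin wrapper that does the file I/O.
void writeStatsReport(string path, const double[] samples)
{
    auto stats = calculateStats(samples);
    File(path, "w").writefln("mean=%s total=%s", stats.mean, stats.total);
}
```

Testing calculateStats now needs nothing more than an array literal and an assertion; no mock filesystem required.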
By the way, an important corollary of being able to control all the inputs is that tests can be made completely repeatable. A weakly pure function still gives the same output for the same input.
Error Handling
Just like with testing, a lot of programs are great at detecting errors but terrible at handling them, and frequently this isn’t because the programmer was lazy, but because the system design doesn’t support error handling. Suppose you’re using a game engine API with a function something like this (a made-up example):
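```d
// Hypothetical: reads a results file, parses it, and updates player
// rankings stored somewhere inside the engine.
void importMatchResults(string resultsFile);
```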
You run it and get an I/O exception. What do you do? Will retrying sometimes add the data twice? Will not retrying sometimes lose the data completely? What if the data was partially added? We don’t know and can’t do anything because the function has side effects that we care about but can’t access.
Pure functions don’t have this problem. One refactoring would minimise the scope of the impure code by splitting the game data loading into a separate function from the game data parsing and player ranking update. These other functions can now be made (weakly) pure.
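A sketch of that split in D (again with made-up names):

```d
struct MatchData { /* parsed results */ }
struct Rankings { /* player rankings */ }

// Still impure, but now it does nothing except the file read.
ubyte[] loadMatchData(string resultsFile)
{
    import std.file : read;
    return cast(ubyte[]) read(resultsFile);
}

// Strongly pure: const data in, new value out.
MatchData parseMatchData(const(ubyte)[] raw) pure;

// Weakly pure: it mutates the rankings it's given, and nothing else.
void updateRankings(ref Rankings rankings, const MatchData match) pure;
```

If the read throws now, we know the rankings haven’t been touched, so retrying is safe; if updateRankings throws, we can at least inspect the Rankings value we passed in.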
Code Readability
One of my little pet peeves is functions like this:
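```d
class Spaceship { bool boosterEnabled; /* position, velocity, ... */ }

// A hypothetical example of the pet peeve: the name advertises one
// specific side effect, but not when it happens or whether it happened
// on this particular call.
void updateSpaceshipAndMaybeEnableBooster(Spaceship ship, double timeStep);
```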
Apparently, it’s important for the caller to know this function has the specific effect of (lol, maybe) enabling the booster, but not when this might happen or whether it happened this time around. I.e., it’s a half-hidden side effect. This is bad encapsulation.
This is a better function:
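```d
// Hypothetical again: same signature, but the name describes the caller's
// intent and leaves the specific effects as implementation details.
void updateSpaceship(Spaceship ship, double timeStep);
```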
Although it has the same signature, the function is subtly different. The inner workings are fully hidden. It isn’t important for the caller to know what specific effects this function has, so we don’t get the bad coupling that the previous function implies.
Even better in many cases is a weakly pure interface. For example:
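```d
struct ShipMotion { double x, y, vx, vy; }
struct BoosterState { bool enabled; double charge; }

// Still hypothetical, but weakly pure: everything the function can read
// or write is spelled out at the call site.
void updateSpaceship(ref ShipMotion motion, ref BoosterState booster,
                     double timeStep) pure;
```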
Making both the inputs and outputs visible at the call site might not make the code prettier, but it makes it much easier to understand and check for bugs. If you were searching for one of those “weird” bugs in someone else’s code, which would you rather read?
Code Modularity
Some people might object that the last snippet of code is leaking implementation details. But the inputs and outputs should really just be the interface of a function. Functional purity doesn’t guarantee modular code, but it makes you treat functions more like electronic components with visible pins for inputs and outputs.
Empirically, mixing impure operations like file handling in with pure operations like data processing seems to be a common root problem in codebases that aren’t modular. Again, separating the two doesn’t guarantee modular code, but it goes a surprisingly long way.
Pure functions also don’t have hidden internal state, and hidden state causes coupling. This goes against what is frequently (badly) taught as “encapsulation” in CS101, but it’s true. If two parts of a codebase call the same function, they are coupled by whatever internal state is hidden inside it.
This is all heavily related to what John Carmack noticed, though he came to the opposite conclusion about modularity at first.
But Wait, There’s More!
So far I’ve only talked about what we haven’t lost by allowing weak purity, so it might seem like weak purity is just a compromise. But weak purity actually gives us a few things that strong purity doesn’t.
Compromise-free Integration with Imperative Code
Let’s go back to the example of calculating stats and then writing them to a file. The impure version has a performance advantage in that it can store all the data in the form of local variables on the stack. If we split the pure calculation code off from the impure file system code, the new calculation function has to output the data as a return value. We can’t return local variables directly, so now we have to copy data or return a heap-allocated data structure, which is kind of a waste. But that’s the price we have to pay, right? Only if we demand strong purity.
By the way, C++’s return value optimisation rule allows C++ compilers to do a similar refactoring, but magically and optionally. I prefer the weakly pure function approach. It’s more explicit, and it’s more flexible, too.
Also, if we’re talking about D-specific design techniques, the same code can be structured like this (without giving up the pure attribute):
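```d
import std.stdio : File;

struct Stats { double mean; double total; }

// A sketch of the idea (same made-up names as before): the caller owns the
// storage, and the pure function fills it in through a mutable reference.
// No copy, no heap allocation, and it's still pure in D's (weak) sense.
void calculateStats(const double[] samples, ref Stats result) pure
{
    result.total = 0;
    foreach (s; samples)
        result.total += s;
    result.mean = result.total / samples.length;
}

void writeStatsReport(string path, const double[] samples)
{
    Stats stats;                      // a plain local on the caller's stack
    calculateStats(samples, stats);   // weakly pure: touches only `stats`
    File(path, "w").writefln("mean=%s total=%s", stats.mean, stats.total);
}
```

The same trick works with caller-supplied buffers or output ranges when the data is bigger than a small struct.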
Helping More Functions Be Strongly Pure
If you still think that weak purity is just a cop-out compromise, consider this: Pure/impure is a transitive thing – a pure function can’t call a function with side effects, or else it effectively has side effects too. Weak/strong purity is not – a strongly pure function can call a weakly pure function and still present a strongly pure interface to the outside world. So not only is it easier to refactor impure functions to be weakly pure, doing so makes it easier to make other functions strongly pure.
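For example, here’s a minimal sketch of the pattern:

```d
// Weakly pure: scales the values it's given, in place.
void scaleAll(double[] values, double factor) pure
{
    foreach (ref v; values)
        v *= factor;
}

// Pure in the strong, const-arguments sense described earlier: the caller
// can't see any mutation. Internally it builds a mutable copy and hands it
// to the weakly pure helper.
double[] scaledCopy(const double[] values, double factor) pure
{
    auto result = values.dup;   // allocating a copy is allowed in pure code
    scaleAll(result, factor);
    return result;
}
```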
Just as importantly, strong purity (and the optimisations it allows) can easily be inferred by a compiler (and a human) just by checking whether pointer/reference arguments are const. From a type system point of view, the weak version of purity just seems to make more sense for a language with mutable data.
Do We Need to Go Weaker?
Even ignoring fundamentally impure things like computer network I/O, I still don’t make absolutely all my code pure (weakly or strongly) in practice. Sometimes there are other solutions. For example, the error handling issue with the game API can also be resolved by making the function atomic (in other cases idempotency is enough). But even if I can’t (or, for some reason, won’t) make some code completely pure, the concepts of weak and strong purity are a pretty good guide for evaluating the pros and cons of a code design.
By the way, I tried to focus on the design benefits of weak purity in this post because this other article already has a good explanation of the technical aspects of weak purity in D. There’s a lot I skipped, so go read it if you want to know more.
Update: see the programming subreddit for discussion of this article.