Note: this page is part of the “Essays on Software Engineering” series.

I discovered Scala and functional programming when I joined Twitter. Scala introduced me to a new world of pure functions and immutability. I loved it so much that I started teaching the language to all new employees (if you ever attended my class, I apologize for all the bad puns and jokes I made).

Over the years, I really embraced functional programming as a discipline. No matter what language I use, I try to always stick to a few key principles:

  • pure functions: the output of a function depends only on its inputs.
  • immutability: keep all data immutable; any modification results in a copy (see the short sketch after this list).
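
Both principles fit in a few lines of Scala. The sketch below is illustrative (Account and deposit are made-up names, not from any real codebase):

final case class Account(owner: String, balance: Int)

object PurityDemo extends App {
  // Pure function: the output depends only on the inputs; nothing else is read or written.
  def deposit(account: Account, amount: Int): Account =
    account.copy(balance = account.balance + amount) // immutability: modification = copy

  val a1 = Account("foo", 100)
  val a2 = deposit(a1, 50)
  println(a1) // Account(foo,100) -- the original value is untouched
  println(a2) // Account(foo,150)
}

Calling deposit twice with the same arguments always yields the same result, and a1 can be shared freely because nothing can ever change it.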

You do not need a functional programming language to follow these principles; you can apply them in almost any language. I use them when coding in Java and Python.

John Carmack also tried to apply these principles when building the Doom 3 engine (idTech 4), which is written in C++.

I will skip the reasons to use functional programming and focus on immutability.

Idempotence and determinism

You want your functions to be easy to understand and to test. Ideally, you want a function to be deterministic and idempotent: no matter how many times you invoke it, you get the same result as long as you provide the same inputs.

That property holds if you use immutable data. Let’s illustrate the principle with an example.

Let’s consider a class Person in a hybrid language:

case class Person(name: String, age: Int)

Then, let’s define two older functions. The first one mutates the value passed as a parameter (let’s assume parameters are mutable).

Person older(person: Person, age: Int) {
  person.age = person.age + age;
  return person;
}

And another older function that uses immutable data (and copies the argument).

Person older(person: Person, age: Int) {
  return person.copy(age = person.age + age);
}

Now, consider the following block of code:

Person p = Person("foo", 42);

Person p2 = older(p, 10);
Person p3 = older(p, 10);

Ideally, what you really want is p2 == p3, since both calls receive the same inputs. Unfortunately, this does not hold with the first function, because the first call mutates p before the second call runs: with the mutating version, p3.age == 62, whereas with the immutable version, p3.age == 52.

The problem is that such mutations spread through your codebase like the plague and make it really difficult to follow the code path. Let 10 developers follow such practices and you will end up with spaghetti code that is a nightmare to understand.

Immutability comes hand in hand with pure functions. If you want to embrace immutability, embrace pure functions.

Parallelism

One of the hardest problems in computer science is writing correct parallel programs. With multi-core machines, most programs end up multi-threaded and data is shared between cores/threads, which requires locking mechanisms (e.g. mutexes, semaphores).

The reason we use mutexes is precisely that data is being mutated and we want to guarantee that only one thread modifies it at a time. If you use immutable data structures, these problems simply do not exist for you, since you never modify a data structure in place (you always copy and create new data).

In other words, using immutable data lets you write scalable programs from the start.
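
To make the difference concrete, here is a small sketch (Config, SharingDemo and the update helper are hypothetical names, not from this article). With mutable shared state you would need a lock around every read and write; with immutable data, any number of threads can read a snapshot without locking, and only the publication of a new snapshot goes through a thread-safe reference:

import java.util.concurrent.atomic.AtomicReference

final case class Config(timeoutMs: Int, retries: Int)

object SharingDemo extends App {
  // An immutable Config can be read from any thread without a lock: no thread can ever
  // observe a half-updated object, because no object is ever updated in place.
  val current = new AtomicReference(Config(timeoutMs = 500, retries = 3))

  // "Updating" means publishing a new copy; readers always see a consistent snapshot.
  def update(f: Config => Config): Unit = current.updateAndGet(c => f(c))

  val readers = (1 to 4).map(i => new Thread(() => println(s"reader $i sees ${current.get()}")))
  readers.foreach(_.start())
  update(_.copy(retries = 5)) // readers see either the old or the new Config, never a mix
  readers.foreach(_.join())
}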

The cost of immutability

The recurring argument against immutable data is the additional runtime cost. The claim is that immutable data structures carry a heavy runtime penalty, since you need to allocate more memory to support the instantiation of many objects; using immutable data would then require faster machines with more memory to run a program.

This argument is a pure fallacy for two reasons.

The first reason is technical. There are plenty of techniques to implement immutable data without a huge runtime cost. For example, in Scala, the copy() function generated for each case class makes a shallow copy of the object, drastically reducing the runtime footprint of your program compared to a deep copy.
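
A quick way to see this in Scala (Profile and its fields are made-up names for illustration):

final case class Profile(name: String, hobbies: List[String])

object SharingCostDemo extends App {
  val hobbies = List("chess", "climbing")
  val p1 = Profile("foo", hobbies)

  // copy() is shallow: only the changed field is new; the other fields are shared by reference.
  val p2 = p1.copy(name = "bar")
  println(p2.hobbies eq p1.hobbies) // true -- the hobbies list is not duplicated

  // Persistent collections share structure as well: prepending reuses the old list as the tail.
  val more = "piano" :: hobbies
  println(more.tail eq hobbies)     // true -- no copying of the existing elements
}

Creating a "new" object therefore usually means allocating a small wrapper around data that already exists, not duplicating everything.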

The second reason is purely economic. The average salary of a computer scientist is $122,840. Tech companies such as Stripe or Google hire the best engineers, and those engineers still spend 10% to 20% of their time dealing with bad code (source: Stripe study). To put it plainly: bad code costs more than $10,000 to $20,000 per developer per year.

The reality is that the cost of the labor needed to write and maintain software is far higher than the cost of the hardware it runs on. Therefore, we should use every available method (in terms of management, programming style, or execution platform) to reduce labor costs. Immutability, and functional programming in general, is only one candidate among others.

Of course, engineers will always believe they can outsmart the system and produce a more efficient program through some shady trick (e.g. hand-writing better assembly than the C compiler emits). The reality is that 99% of the time they are dead wrong (compilers do a better job), and in the remaining 1% the code should be carefully evaluated to see whether it is worth the cost: any manual optimization will need to be maintained for the lifecycle of the software (who wants to maintain assembly code unless it is an absolute necessity?).