As a professional programmer, you work on two main types of tasks. I’ve started thinking about them as the context and the logic.
The logic is what you think this job is going to be about when you first start. How do I slice this collection up? How do I find all the paid invoices for this client and sum up their amounts? How does this date get turned into a string to be displayed on the screen? What floor should this elevator go to next? The logic is what they grill you on in interviews. The logic is algorithms. The logic is sometimes specific to your business. The logic is sometimes reusable. The logic has inputs and outputs that are testable.
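To make that concrete, here’s what the “paid invoices” question might look like as pure logic. (The Invoice type and the function name are hypothetical, just for illustration.)

```swift
import Foundation

// A hypothetical invoice record.
struct Invoice {
    let clientID: Int
    let amount: Decimal
    let isPaid: Bool
}

// Pure logic: testable inputs and outputs, no context required.
func totalPaid(for clientID: Int, in invoices: [Invoice]) -> Decimal {
    invoices
        .filter { $0.clientID == clientID && $0.isPaid }
        .map(\.amount)
        .reduce(0, +)
}
```

Notice that this function needs nothing from the outside world: no database, no network, no framework. That’s what makes it logic.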
The context is…everything else. How do I get this data from that service into this client? How do I make this code from this library talk to that code in that library? How do I make my build compile faster? What UI testing framework are we going to use? How do I fill this view controller up with the dependencies it needs? How do I talk to this nifty new e-ink display I bought? Which compile flags will give me useful stack traces when the app crashes? How do I perform this database migration? The context is everything that’s necessary to get your logic to run successfully, consistently, efficiently.
In Structure and Interpretation of Computer Programs, Abelson and the Sussmans write that an algorithm, the logic, “is not composed of matter at all. However, it is very real. It can perform intellectual work. It can answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm in a factory.” It can’t do any of that stuff without a context in which to run. The context lets it communicate to hardware over protocols, send information to a distant database, and even defines how the code is converted into an intermediate representation, CPU instructions, and finally voltage that plays across the silicon. Without context, the logic is purely abstract.
If I wanted to write a dynamic controller for my HVAC — a thermostat! — in Swift, I probably could do it. I could get some hardware like this to talk to the HVAC over 24V, a Raspberry Pi to run it on, maybe a few Raspberry Pis with thermometer sensors around the house to figure out when to turn the HVAC on and off, probably connect everything over Wi-Fi. But while this is possible, think about how much of your energy would be spent soldering hardware, connecting components, testing, writing servers, defining protocols and wire formats, and then compare that to how much time and energy you’d spend actually writing the dynamic control software. It wouldn’t be easy to write the logic, but it would take a fraction of the time that setting up the context would. (Hmm, now I just have to convince my partner that building our own custom thermostat will somehow be better than our Ecobee. It’ll at least be more fun, that’s for sure.)
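For what it’s worth, the logic half of that project might be as small as this sketch. (Every name here is hypothetical; I’m assuming a simple hysteresis band so the HVAC doesn’t rapidly cycle around the set point.)

```swift
enum HVACCommand {
    case heat, cool, off
}

struct Thermostat {
    var setPoint: Double        // target temperature, in degrees
    var tolerance: Double = 1.0 // hysteresis band to prevent rapid cycling

    // The entire "dynamic control" decision, given a sensor reading.
    func command(currentTemperature: Double) -> HVACCommand {
        if currentTemperature < setPoint - tolerance { return .heat }
        if currentTemperature > setPoint + tolerance { return .cool }
        return .off
    }
}
```

Fifteen-odd lines. Everything else — the 24V relay board, the Wi-Fi networking, the sensor servers — is context.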
How much of your time at your job is actually spent on writing the logic, and how much of it is spent preparing an environment in order for that logic to run? I wouldn’t be surprised at all if I found out that 98% of my time was spent on context.
I think a slightly different (and more familiar) way to think about this is in terms of essential versus accidental complexity, a division first suggested by Fred Brooks in 1986. Essential complexity is the logic, accidental is the context. Dan Luu writes about Brooks’s essay: “while this will vary by domain, I’ve personally never worked on a non-trivial problem that isn’t completely dominated by accidental complexity, making the concept of essential complexity meaningless on any problem I’ve worked on that’s worth discussing.”
Nonetheless, logic is not quite the same thing as essential complexity, and context is not the same as accidental complexity. One example of something that is logic but still potentially accidental complexity is reimplementing an algorithm like Rails’s #squish in Swift. It’s still logic: it behaves like something you might be asked in an interview, and you have to manipulate abstract symbols to get the right output. But it’s a total accident of history that Rails lets you use it without thinking about it, while Swift doesn’t. Another way to look at it: all context is accidental, but not all logic is essential.
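Here’s a sketch of what that port might look like: #squish trims leading and trailing whitespace and collapses internal runs of whitespace into single spaces. (The method name squished() and the exact implementation are my own choices.)

```swift
import Foundation

extension String {
    /// Trims leading/trailing whitespace and collapses internal runs of
    /// whitespace (including newlines) into single spaces.
    func squished() -> String {
        return self
            .components(separatedBy: .whitespacesAndNewlines)
            .filter { !$0.isEmpty }
            .joined(separator: " ")
    }
}

// "  hello \n  world  ".squished() == "hello world"
```

Logic through and through, but only an accident of history forces you to write it yourself here.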
Dan estimates 1% as an upper bound of his time spent on essential complexity.
Another question: how much of your code is logic, and how much of it is an environment in which that code can run? To take a quick example, let’s look at a table view in UIKit and then in SwiftUI:
class CountriesTableViewController: UITableViewController {
    var countries: [Country] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "CountryCell")
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return countries.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "CountryCell", for: indexPath)
        cell.textLabel?.text = countries[indexPath.row].name
        return cell
    }
}
I count 3 lines of real, core logic: defining the countries array, telling it the count of countries, and assigning the label’s text property.
And the same thing in SwiftUI:
struct CountriesList: View {
    let countries: [Country]

    var body: some View {
        List(countries) { country in
            Text(country.name)
        }
    }
}
Where did all of that other stuff go? The 3 lines of core logic are still there, but everything else seems to have disappeared. It was all context, non-essential for this purpose. A different tool made it vanish. This problem only gets worse as your codebase gets larger; your time becomes dominated by the context. From stem to stern, a typical feature might need a new database table on the server, some queries for that table, some endpoints that call those queries, some networking code on the client to hit those endpoints, a ton of routing code to get the user to a new view controller, and finally dozens of lines of table view controller code, all so you can put a label on the screen with the contents of a database field.
The context even has social and political elements. Who is writing the endpoint? What are their priorities? How do they earn promotions, and how will that affect the endpoint you need? Every tweet about how you “can learn the coding part while on the job, but the empathy and human components you need to have before you get there” is exactly about this.
This framing, context vs logic, illustrates two things for me:
First, that we all tell ourselves a lie: this job is primarily about the logic, interview candidates should mainly be tested on their ability to think about the logic, a “good” programmer is someone who can write the logic really well. In fact, an overwhelming amount of the job is making the context work. That’s not to say that the logic isn’t important; without the logic, the context doesn’t do anything and you won’t be able to do the job! But without the context, you still can’t do the job, and sadly there’s a lot more context than logic. I’m primarily a context programmer. I wish I weren’t — I enjoy writing the logic a lot more — but it is the reality. I should embrace that and treat the context as my job, rather than as an impediment to “my real job”.
Second, if you can make your context simpler and smaller, you can spend less time on it. Simplifying and unifying your context (where possible) is valuable, since you can recoup value by spending less time working in the context. Don’t use two technologies where one will do.
Some examples of this:
- Avoid using multiple ORMs/data access patterns. The lava layer anti-pattern hurts you specifically because it adds so much extra context to work with.
- I moved my server set-up from dynamic-sites-on-Heroku/static-sites-on-Linode to everything on Linode (using Dokku). One tool, one server, everything gets treated the same.
- Use clever tools, languages, and libraries to make the context become less and less impactful. You can see this in Dan’s essay and with the SwiftUI example. A tool like Fastlane brings code-signing, testing, deploying, and integrations all under one roof and lets you manipulate any of them with short Ruby commands. (In addition to unifying disparate things, this also lets you logic-ify your context, which is neat, too.)
You’ll always have a lot of context to wade around in. This is, sadly, your job. Try to minimize this context as much as possible and you can spend a little less time on it and more time on the good stuff.