Grand Central Dispatch, or GCD, is an extremely powerful tool. It gives you low level constructs, like queues and semaphores, that you can combine in interesting ways to get useful multithreaded effects. Unfortunately, the C-based API is a bit arcane, and it isn’t immediately obvious how to combine the low-level components into higher level behaviors. In this post, I hope to describe the behaviors that you can create with the low-level components that GCD gives you.

Work In The Background

Perhaps the simplest of behaviors, this one lets you do some work on a background queue, and then come back to the main queue to continue processing, since components like those from UIKit can (mostly) be used only from the main queue.

In this guide, I’ll use functions like doSomeExpensiveWork() to represent some long running task that returns a value.

This pattern can be set up like so:

let backgroundQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(backgroundQueue, {
	let result = doSomeExpensiveWork()
	dispatch_async(dispatch_get_main_queue(), {
		//use `result` somehow
	})
})

In practice, I never use any queue priority other than DISPATCH_QUEUE_PRIORITY_DEFAULT. This call returns a global queue, which can be backed by hundreds of threads of execution. If you need the expensive work to always happen on a specific background queue, you can create your own with dispatch_queue_create. dispatch_queue_create accepts a name for the queue and whether the queue should be concurrent or serial.

Note that each call uses dispatch_async, not dispatch_sync. dispatch_async returns before the block is executed, and dispatch_sync waits until the block is finished executing before returning. The inner call can use dispatch_sync (because it doesn’t matter when it returns), but the outer call must be dispatch_async (otherwise the main thread will be blocked).

Creating singletons

dispatch_once is an API that can be used to create singletons. It’s no longer necessary in Swift, since there is a simpler way to create singletons. For posterity, however, I’ve included it here (in Objective-C).

+ (instancetype)sharedInstance {
	static dispatch_once_t onceToken;
	static id sharedInstance;
	dispatch_once(&onceToken, ^{
		sharedInstance = [[self alloc] init];
	});
	return sharedInstance;
}

Flatten a completion block

This is where GCD starts to get interesting. Using a semaphore, we can block a thread for an arbitrary amount of time, until a signal from another thread is sent. Semaphores, like the rest of GCD, are thread-safe, and they can be triggered from anywhere.

Semaphores can be used when there’s an asynchronous API that you need to make synchronous, but you can’t modify it.

// on a background queue
let semaphore = dispatch_semaphore_create(0)
doSomeExpensiveWorkAsynchronously(completionBlock: {
	dispatch_semaphore_signal(semaphore)
})
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
//the expensive asynchronous work is now done

Calling dispatch_semaphore_wait will block the thread until dispatch_semaphore_signal is called. This means that signal must be called from a different thread, since the current thread is totally blocked. Further, you should never call wait from the main thread, only from background threads.

You can choose any timeout when calling dispatch_semaphore_wait, but I tend to always pass DISPATCH_TIME_FOREVER.
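If you ever do want a real timeout, you can build one with dispatch_time. A sketch; note that dispatch_semaphore_wait returns non-zero when it times out instead of being signaled:

```swift
// a sketch: give up after five seconds
let timeout = dispatch_time(DISPATCH_TIME_NOW, Int64(5 * Double(NSEC_PER_SEC)))
if dispatch_semaphore_wait(semaphore, timeout) != 0 {
	// we timed out before the signal arrived
}
```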

It might not be totally obvious why you would want to flatten code that already has a completion block, but it does come in handy. One case where I’ve used it recently is for performing a bunch of asynchronous tasks that must happen serially. A simple abstraction for that use case could be called AsyncSerialWorker:

typealias DoneBlock = () -> ()
typealias WorkBlock = (DoneBlock) -> ()

class AsyncSerialWorker {
    private let serialQueue = dispatch_queue_create("com.khanlou.serial.queue", DISPATCH_QUEUE_SERIAL)

    func enqueueWork(work: WorkBlock) {
        dispatch_async(serialQueue) {
            let semaphore = dispatch_semaphore_create(0)
            work({
                dispatch_semaphore_signal(semaphore)
            })
            dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
        }
    }
}

This small class creates a serial queue, and then allows you to enqueue work onto it. The WorkBlock gives you a DoneBlock to call when your work is finished, which will trip the semaphore and allow the serial queue to continue.

Limiting the number of concurrent blocks

In the previous example, the semaphore is used as a simple flag, but it can also be used as a counter for finite resources. If you want to only open a certain number of connections to a specific resource, you can use something like the code below:

class LimitedWorker {
	private let serialQueue = dispatch_queue_create("com.khanlou.serial.queue", DISPATCH_QUEUE_SERIAL)
	private let concurrentQueue = dispatch_queue_create("com.khanlou.concurrent.queue", DISPATCH_QUEUE_CONCURRENT)
	private let semaphore: dispatch_semaphore_t

	init(limit: Int) {
		semaphore = dispatch_semaphore_create(limit)
	}

	func enqueue(task: () -> ()) {
		dispatch_async(serialQueue) {
			dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER)
			dispatch_async(self.concurrentQueue) {
				task()
				dispatch_semaphore_signal(self.semaphore)
			}
		}
	}
}

This example is pulled from Apple’s Concurrency Programming Guide. They can explain what’s happening here better than I can:

When you create the semaphore, you specify the number of available resources. This value becomes the initial count variable for the semaphore. Each time you wait on the semaphore, the dispatch_semaphore_wait function decrements that count variable by 1. If the resulting value is negative, the function tells the kernel to block your thread. On the other end, the dispatch_semaphore_signal function increments the count variable by 1 to indicate that a resource has been freed up. If there are tasks blocked and waiting for a resource, one of them is subsequently unblocked and allowed to do its work.

The effect is similar to maxConcurrentOperationCount on NSOperationQueue. If you’re using raw GCD queues instead of NSOperationQueue, you can use semaphores to limit the number of blocks that execute simultaneously.
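For comparison, a sketch of the NSOperationQueue version, which enforces the limit for you:

```swift
import Foundation

let queue = NSOperationQueue()
queue.maxConcurrentOperationCount = 5

for _ in 0..<10 {
    queue.addOperationWithBlock {
        // expensive work here; at most 5 of these run at once
    }
}
```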

Thanks to Mike Rhodes, this code has been improved from its previous version. He writes:

We use a concurrent queue for executing the user’s tasks, allowing as many concurrently executing tasks as GCD will allow us in that queue. The key piece is a second GCD queue. This second queue is a serial queue and acts as a gatekeeper to the concurrent queue. We wait on the semaphore in the serial queue, which means that we’ll have at most one blocked thread when we reach maximum executing blocks on the concurrent queue. Any other tasks the user enqueues will sit inertly on the serial queue waiting to be executed, and won’t cause new threads to be started.

Wait for many concurrent tasks to finish

If you have many blocks of work to execute, and you need to be notified about their collective completion, you can use a group. dispatch_group_async lets you add work onto a queue (the work in the block should be synchronous), and it keeps track of how many items have been added. Note that the same dispatch group can add work to multiple different queues and can keep track of them all. When all of the tracked work is complete, the block passed to dispatch_group_notify is fired, kind of like a completion block.

let group = dispatch_group_create()
for item in someArray {
	dispatch_group_async(group, backgroundQueue, {
		performExpensiveWork(item: item)
	})
}

dispatch_group_notify(group, dispatch_get_main_queue(), {
	// all the work is complete
})

This is a great case for flattening a function that has a completion block. The dispatch group considers the block to be completed when it returns, so you need the block to wait until the work is complete.
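For example, wrapping the asynchronous doSomeExpensiveWorkAsynchronously function from earlier with a semaphore makes the group’s accounting work out. A sketch:

```swift
dispatch_group_async(group, backgroundQueue, {
    // block this worker thread until the completion block fires,
    // so the group doesn't consider this item done prematurely
    let semaphore = dispatch_semaphore_create(0)
    doSomeExpensiveWorkAsynchronously(completionBlock: {
        dispatch_semaphore_signal(semaphore)
    })
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
})
```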

There’s a more manual way to use dispatch groups, especially if your expensive work is already async:

// must be on a background thread
let group = dispatch_group_create()
for item in someArray {
	dispatch_group_enter(group)
	performExpensiveAsyncWork(item: item, completionBlock: {
		dispatch_group_leave(group)
	})
}

dispatch_group_wait(group, DISPATCH_TIME_FOREVER)

// all the work is complete

This snippet is more complex, but stepping through it line-by-line can help in understanding it. Like the semaphore, groups also maintain a thread-safe, internal counter that you can manipulate. You can use this counter to make sure multiple long running tasks are all completed before executing a completion block. Using “enter” increments the counter, and using “leave” decrements the counter. dispatch_group_async handles all these details for you, so I prefer to use it where possible.

The last thing in this snippet is the wait call: it blocks the thread and waits for the counter to reach 0 before continuing. Note that you can queue a block with dispatch_group_notify even if you use the enter/leave APIs. The reverse is also true: you can use dispatch_group_wait even if you use the dispatch_group_async API.

dispatch_group_wait, like dispatch_semaphore_wait, accepts a timeout. Again, I’ve never had a need for anything other than DISPATCH_TIME_FOREVER. Also similar to dispatch_semaphore_wait, never call dispatch_group_wait on the main queue.

The biggest difference between the two styles is that the example using notify can be called entirely from the main queue, whereas the example using wait must happen on a background queue (at least the wait part, because it will fully block the current queue).

Isolation Queues

Swift’s Dictionary (and Array) types are value types. When they’re modified, they are fully replaced with a new copy of the structure. However, because updating instance variables on Swift objects is not atomic, they are not thread-safe. Two threads can update a dictionary (for example, by adding a value) at the same time, and both will attempt to write to the same block of memory, which can cause memory corruption. We can use isolation queues to achieve thread safety.

Let’s build an identity map. An identity map is a dictionary that maps items from their ID property to the model object.

class IdentityMap<T: Identifiable> {
	var dictionary = Dictionary<String, T>()

	func object(forID ID: String) -> T? {
		return dictionary[ID] as T?
	}

	func addObject(object: T) {
		dictionary[object.ID] = object
	}
}

This object basically acts as a wrapper around a dictionary. If our function addObject is called from multiple threads at the same time, it could corrupt the memory, since the threads would be acting on the same reference. This is known as the readers-writers problem. In short, we can have multiple readers reading at the same time, but only one thread can be writing at any given time.

Fortunately, GCD gives us great tools for this exact scenario. We have four APIs at our disposal:

  • dispatch_sync
  • dispatch_async
  • dispatch_barrier_sync
  • dispatch_barrier_async

Our ideal case is that reads happen synchronously and concurrently, whereas writes can be asynchronous and must be the only thing happening to the reference. GCD’s barrier set of APIs do something special: they will wait until the queue is totally empty before executing the block. Using the barrier APIs for our writes will limit access to the dictionary and make sure that we can never have any writes happening at the same time as a read or another write.

class IdentityMap<T: Identifiable> {
	var dictionary = Dictionary<String, T>()
	let accessQueue = dispatch_queue_create("com.khanlou.isolation.queue", DISPATCH_QUEUE_CONCURRENT)

	func object(withID ID: String) -> T? {
		var result: T? = nil
		dispatch_sync(accessQueue, {
			result = dictionary[ID] as T?
		})
		return result
	}

	func addObject(object: T) {
		dispatch_barrier_async(accessQueue, {
			dictionary[object.ID] = object
		})
	}
}

dispatch_sync will dispatch the block to our isolation queue and wait for it to be executed before returning. This way, we will have the result of our read synchronously. (If we didn’t make it synchronous, our getter would need a completion block.) Because accessQueue is concurrent, these synchronous reads will be able to occur simultaneously.

dispatch_barrier_async will dispatch the block to the isolation queue. The async part means it will return before actually executing the block (which performs the write), which means we can continue processing.

The barrier part of dispatch_barrier_async means that it will wait until every currently running block in the queue is finished executing before it executes. Other blocks will queue up behind it and be executed when the barrier dispatch is done.

Cancelling blocks

A little known feature of GCD is that blocks can actually be cancelled. Per Matt Rajca, by wrapping a block in a dispatch_block_t and using the dispatch_block_cancel API, you can cancel it.

let work = dispatch_block_create(0) { print("Hello!") }

let delayTime = dispatch_time(DISPATCH_TIME_NOW, Int64(10 * Double(NSEC_PER_SEC)))
dispatch_after(delayTime, dispatch_get_main_queue(), work)
dispatch_block_cancel(work)


After execution of the block starts, it can’t be cancelled. This makes sense, because the queue doesn’t have a sense of what’s going on inside your block, or how to cancel it. You can write your own checks into the block, by using dispatch_block_testcancel:

let work: dispatch_block_t
work = dispatch_block_create(DISPATCH_BLOCK_INHERIT_QOS_CLASS, {
    guard dispatch_block_testcancel(work) == 0 else { return }
    print("Hello!")
})

This is similar to checking isCancelled within an NSOperation. Note that you have to declare the work variable first, even if you don’t initialize the block itself. This is because you will have to use the work reference inside the block, and Swift won’t let you do it all in one line.

(Also, dispatch_block_testcancel? Who is naming these APIs?)

Queue Specific Data

The NSThread object has a threadDictionary property. You can use this dictionary to store any interesting data. You can do the same with a dispatch queue, using the dispatch_queue_set_specific and dispatch_get_specific methods. I haven’t thought of any clever ways to use this yet, excepting Benjamin Encz’s method of determining if you’re on the main queue:

private let mainQueueKey = UnsafeMutablePointer<Void>.alloc(1)
private let mainQueueValue = UnsafeMutablePointer<Void>.alloc(1)

// somewhere early, like application(_:didFinishLaunchingWithOptions:)
dispatch_queue_set_specific(dispatch_get_main_queue(), mainQueueKey, mainQueueValue, nil)

Now, instead of using [NSThread isMainThread], you can instead check dispatch_get_specific(mainQueueKey) == mainQueueValue to determine if you’re on the main queue (as opposed to the main thread, which is subtly different).

Timer Dispatch Sources

Dispatch sources are a weird thing, and if you’ve made it this far in the handbook, you’ve reached some pretty esoteric stuff. With dispatch sources, you set up a callback when initializing the dispatch source, and that callback is triggered when specific events happen. The simplest of these events is a timed event. A simple dispatch timer could be set up like so:

class Timer {
	let timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, dispatch_get_main_queue())

	init(onFire: () -> (), interval: UInt64, leeway: UInt64 = NSEC_PER_SEC / 2) {
		dispatch_source_set_timer(timer, dispatch_walltime(nil, 0), interval, leeway)
		dispatch_source_set_event_handler(timer, onFire)
	}
}

Dispatch sources must be explicitly resumed before they will start working.
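So to actually start the Timer above, you resume its source. A sketch, assuming the Timer class from the previous snippet:

```swift
// fire once per second, starting only after dispatch_resume is called
let timer = Timer(onFire: { print("fired") }, interval: NSEC_PER_SEC)
dispatch_resume(timer.timer)
```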

Custom Dispatch Sources

Another useful type of dispatch source is a custom dispatch source. With a custom dispatch source, you can trigger it any time you want. The dispatch source will coalesce the signals that you send it, and periodically call your event handler. I couldn’t find anything in the documentation defining the policy that guides this coalescing. Here’s an example of an object that adds up data sent in from different threads:

class DataAdder {
	let source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, dispatch_get_main_queue())
	init(onFire: (UInt64) -> ()) {
		dispatch_source_set_event_handler(source, { [unowned self] in
			onFire(dispatch_source_get_data(self.source))
		})
		dispatch_resume(source)
	}
	func addData(data: UInt64) {
		dispatch_source_merge_data(source, data)
	}
}

This dispatch source is initialized with a block that will give you the result of all the data that’s been added up so far. You can call addData from any thread with some amount of data, and the source will manage adding that data up and calling the callback.

You can also use DISPATCH_SOURCE_TYPE_DATA_OR instead of DISPATCH_SOURCE_TYPE_DATA_ADD, which will apply a binary OR to the data:

class DataAdder {
	let source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_OR, 0, 0, dispatch_get_main_queue())
	init(onFire: (UInt64) -> ()) {
		dispatch_source_set_event_handler(source, { [unowned self] in
			onFire(dispatch_source_get_data(self.source))
		})
		dispatch_resume(source)
	}
	func mergeData(data: UInt64) {
		dispatch_source_merge_data(source, data)
	}
}

You could use this to trip a flag from multiple threads. Crucially, the dispatch source’s data is reset to 0 each time the block is triggered.

These are the strange depths of GCD. I don’t know how or when I’d use this stuff, but I suspect that when I need it, I’ll be glad that it exists.

Wrap Up

Grand Central Dispatch is a framework with a lot of low-level primitives. Using them, these are the higher-level behaviors I’ve been able to build. If there are any higher-level things you’ve used GCD to build that I’ve left out here, I’d love to hear about them and add them to the list.

Decoding JSON in Swift is a huge pain in the ass. You have to deal with optionality, casting, primitive types, constructed types (whose initializers can also be optional), stringly-typed keys, and a whole bevy of other issues.

Especially in a well-typed Swift world, it makes sense to use a well-typed wire format. For the next project that I start from scratch, I’ll probably use Google’s protocol buffers (great blog post about their benefits here). I hope to have a report on how well it works with Swift when I have a little bit more experience with it, but for now, this post is about the realities of parsing JSON, which is the most commonly used wire format by far.

There are a few states of the art when it comes to parsing JSON. First, there’s a library like Argo, which uses functional operators to curry an initializer:

extension User: Decodable {
  static func decode(j: JSON) -> Decoded<User> {
    return curry(User.init)
      <^> j <| "id"
      <*> j <| "name"
      <*> j <|? "email" // Use ? for parsing optional values
      <*> j <| "role" // Custom types that also conform to Decodable just work
      <*> j <| ["company", "name"] // Parse nested objects
  }
}

Argo is a very good solution. It’s concise, flexible, and expressive. The currying and strange operators, however, are somewhat opaque. (The folks at Thoughtbot have written a great post explaining it here.)

Another common solution is to manually guard let every non-optional. This is a little more manual, and results in two lines for each property: one to create a non-optional local variable in the guard statement, and a second to actually set the property. Using the same properties from above, this might look like:

class User {
  init?(dictionary: [String: AnyObject]?) {
    guard
      let dictionary = dictionary,
      let id = dictionary["id"] as? String,
      let name = dictionary["name"] as? String,
      let roleDict = dictionary["role"] as? [String: AnyObject],
      let role = Role(dictionary: roleDict),
      let company = dictionary["company"] as? [String: AnyObject],
      let companyName = company["name"] as? String
      else {
        return nil
    } = id = name
    self.role = role = dictionary["email"] as? String
    self.companyName = companyName
  }
}

This code has the benefit of being pure Swift, but it is quite a mess and very hard to read. The chains of dependent variables are not obvious from looking at it. For example, roleDict has to be defined before role, since it’s used in role’s definition, but since the code is so hairy, it’s hard to see that dependency clearly.

(I’m not even going to mention the pyramid-of-doom nested if let situation for parsing JSON from Swift 1. It was bad, and I’m glad we have multi-line if lets and the guard let construct now.)

When Swift’s error handling was announced, I was convinced it was terrible. It seemed like it was worse than the Result enum in every way.

  • You can’t use it directly: it essentially adds required language syntax around a Result type (that does exist, under the hood!), and users of the language can’t even access it.
  • You can’t chain Swift’s error model the way you can with Result. Result is a monad, allowing it to be chained with flatMap in useful ways.
  • Swift’s error model can’t be used in an asynchronous way (without hacking it, like providing an inner function that does throw that you can call to get the result), whereas Result can be.

Despite all of these seemingly obvious flaws with Swift’s error model, a blog post came out describing a use case where Swift’s error model is clearly more concise than the Objective-C version and easier to read than the Result version. What gives?

The trick here is that using Swift’s error model, with do/catch, is really good when you have lots of try calls that happen in sequence. This is because setting up something to be error-handled in Swift requires a bit of boilerplate. You need to include throws when declaring the function, or else set up the do/catch structure, and handle all your errors explicitly. For a single try, this is a frustrating amount of work. For multiple try statements, however, the up-front cost becomes worth it.

I was trying to find a way to get missing JSON keys to print out some kind of warning, when I realized that getting an error for accessing missing keys would solve the problem. Because the native Dictionary type doesn’t throw errors when keys are missing, some object is going to have to wrap that dictionary. Here’s the code I want to be able to write:

struct MyModel {
    let aString: String
    let anInt: Int

    init?(dictionary: [String: AnyObject]?) {
        let parser = Parser(dictionary: dictionary)
        do {
            self.aString = try parser.fetch("a_string")
            self.anInt = try parser.fetch("an_int")
        } catch let error {
            print(error)
            return nil
        }
    }
}

Ideally, with type inference, I won’t even have to include any types here. Let’s take a crack at writing it. Let’s start with ParserError:

struct ParserError: ErrorType {
    let message: String
}

Next, let’s start Parser. It can be a struct or a class. (It doesn’t get passed around, so its reference semantics don’t really matter.)

struct Parser {
    let dictionary: [String: AnyObject]?

    init(dictionary: [String: AnyObject]?) {
        self.dictionary = dictionary
    }
}

Our parser will have to take a dictionary and hold on to it.

Our fetch function is the first complex bit. We’ll go through it line by line. Each method can be individually type-parameterized, to take advantage of type inference. Also, this function will throw errors, which will let us get the failure data back:

    func fetch<T>(key: String) throws -> T {

The next step is to grab the object at the key, and make sure it’s not nil. If it is, we will throw.

        let fetchedOptional = dictionary?[key]
        guard let fetched = fetchedOptional else {
            throw ParserError(message: "The key \"\(key)\" was not found.")
        }

The final step is to add type information to our value.

        guard let typed = fetched as? T else {
            throw ParserError(message: "The key \"\(key)\" was not the correct type. It had value \"\(fetched).\"")
        }

Finally, return the typed, non-optional value.

        return typed

(I’ll include a gist and a playground at the end of the post with all the code.)

This works! The type inference from the type parameterization handles everything for us, and the “ideal” code that we wrote above works perfectly:

self.aString = try parser.fetch("a_string")

There are a few things that I want to add. First, a way to parse out values that are actually optional. Because this one won’t need to throw, we can write a simpler method. It unfortunately can’t have the same name as the above method, because the compiler won’t know which one to use, so let’s call it fetchOptional. This one is pretty simple.

func fetchOptional<T>(key: String) -> T? {
    return dictionary?[key] as? T
}

(You could make it throw an error if the key exists but is not the expected type, but I’ve left that out for brevity’s sake.)
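If you did want that stricter behavior, a sketch might look like this (fetchOptionalStrictly is my own name for it; it returns nil for a missing key but throws when the key exists with the wrong type):

```swift
func fetchOptionalStrictly<T>(key: String) throws -> T? {
    guard let fetched = dictionary?[key] else { return nil }
    guard let typed = fetched as? T else {
        throw ParserError(message: "The key \"\(key)\" was not the correct type. It had value \"\(fetched).\"")
    }
    return typed
}
```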

Another thing we sometimes want to do is perform an additional transformation on the object after it’s pulled out of the dictionary. We might have an enum’s rawValue that we want to build, or a nested dictionary that needs to turn into its own object. We can take a block in the fetch function that will let us process the object further, and throw an error if the transformation block fails. Adding a second type parameter U allows us to assert that the product of the dictionary fetch is the same thing that goes into the transformation function.

func fetch<T, U>(key: String, transformation: (T) -> (U?)) throws -> U {
    let fetched: T = try fetch(key)
    guard let transformed = transformation(fetched) else {
        throw ParserError(message: "The value \"\(fetched)\" at key \"\(key)\" could not be transformed.")
    }
    return transformed
}

Lastly, we want a version of fetchOptional that also takes a block.

func fetchOptional<T, U>(key: String, transformation: (T) -> (U?)) -> U? {
    return (dictionary?[key] as? T).flatMap(transformation)
}

Behold: the power of flatMap! Note that the transformation block has the same form as the block flatMap accepts: T -> U?.

We can now parse objects that have nested items or enums.

class OuterType {
    let inner: InnerType

    init?(dictionary: [String: AnyObject]?) {
        let parser = Parser(dictionary: dictionary)
        do {
            self.inner = try parser.fetch("inner") { InnerType(dictionary: $0) }
        } catch let error {
            return nil
        }
    }
}

Note again how Swift’s type inference handles everything for us magically, and doesn’t require us to write any as? logic at all!

We can also handle arrays with a similar method. For arrays of primitive types, the fetch method we already have will work fine:

let stringArray: [String]

do {
	self.stringArray = try parser.fetch("string_array")
} catch let error {
	return nil
}

For arrays of domain types that we want to construct, Swift’s type inference doesn’t seem to be able to infer the types this deep, so we’ll have to add one type annotation:

self.enums = try parser.fetch("enums") { (array: [String]) in array.flatMap( {SomeEnum(rawValue: $0) })}

Since this line is starting to get gnarly, let’s make a new method on Parser specifically for handling arrays:

func fetchArray<T, U>(key: String, transformation: T -> U?) throws -> [U] {
	let fetched: [T] = try fetch(key)
	return fetched.flatMap(transformation)

This will abuse the poorly-named-but-extremely-useful flatMap that removes nils on SequenceType, and reduce our incantation at the call site to:

self.enums = try parser.fetchArray("enums") { SomeEnum(rawValue: $0) }

The block at the end is what should be done to each element, instead of the whole array. (You could also modify fetchArray to throw an error if any value couldn’t be constructed.)
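If you’d rather fail loudly, fetchArray’s body could instead throw when any element can’t be constructed. A sketch, reusing ParserError from above:

```swift
func fetchArray<T, U>(key: String, transformation: T -> U?) throws -> [U] {
	let fetched: [T] = try fetch(key)
	return try { element in
		guard let transformed = transformation(element) else {
			throw ParserError(message: "The value \"\(element)\" at key \"\(key)\" could not be transformed.")
		}
		return transformed
	}
}
```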

I like this general pattern a lot. It’s simple, pretty easy to read, and doesn’t rely on complex dependencies (the only one is a 50-line Parser type). It uses Swifty constructs, and will give you very specific errors describing how your parsing failed, useful when trying to dredge through the morass of JSON that you’re getting back from your API server. Lastly, another benefit of parsing this way is that it works on structs as well as classes, making it easy to switch from reference types to value types or vice versa at will.

Here’s a gist with all the code, and here’s a Playground ready to tinker with.

I made a side project. I wanted to reimagine what a cookbook might look like if it were reinvented for a dynamic medium. In this world, recipes wouldn’t be fixed to a specific scale. The recipe would be well-laid out on any size screen it was rendered on.

Printed cookbooks are a static medium: the author decides at printing time how the recipes will be formatted and presented. The units, measures, font-size and even the language of the recipe is fixed. The promise computers and programming afford us is that separating the rendering and display from the data grants the user ultimate control. Because the system could understand what units and ingredients were, it could display amounts in whatever context the user wanted: metric or imperial, weight or volume.

To build this project, I took a few concepts, like Bret Victor’s work with explorable explanations, abstraction, and dynamic media, and brought them to the data in a cookbook.


The proof of concept for this idea is a website that hosts a few recipes of mine, which I hope you’ll check out. The site is built on top of pepin, which is the JavaScript library I wrote to parse and process the ingredients. You can find pepin on GitHub. The site is rendered with Jekyll, and the data for each recipe is stored in simple YML files. (Currently all of the files for pepin and the Jekyll renderer are in the same repository. In the future, they might be separated.)

What does the site do? A few things. First, it’s a home for my recipes. Cooks often tweak recipes, sometimes to work better for the altitude or humidity in their location, and sometimes to accommodate the tools they have in their kitchens, like ovens that run too hot because their thermocouples are broken.

You can use pepin to make a home for your recipes, too. If you want to host your own version, fork it on GitHub, replace my recipes with yours, build with Jekyll, and host anywhere. The files are static and all of the logic is executed client-side.

I make apps during the day, but this project is a website. Why is that? A few reasons: first, a website, especially a responsive one, works on tons of platforms out of the gate. Recipes also benefit a lot from being easily shared with URLs, which don’t work nearly as well with native apps. Lastly, when prototyping, having a platform as flexible to develop for as the web really pays off. I was able to move a lot quicker to make stuff happen, although I did feel very hampered by the lack of a type system, especially later in the game, when I was refactoring a lot more to support new features. I wrote a comprehensive test suite to account for this, which I’ll discuss soon.

The primary way the site takes advantage of its dynamic medium is that it allows you to scale recipes up and down easily. On any recipe page, grab the number next to the word “Scale” and slide it up or down. You can scale up to 10x, and down to 1/6 of a normal serving.

Scaling recipes is unit-aware. That means if you scale 1 teaspoon of salt to 2x, you’ll get 2 teaspoons of salt (note the pluralization). If you then scale to 3x, you’ll get 1 tablespoon of salt, because 3 teaspoons is one tablespoon, and nobody wants to measure out 3 teaspoons if they can just use one tablespoon. It’s needless to make that conversion in your head. This is the kind of thing computers are good at: taking mundane tasks, doing them for you, and giving you the data when you need it. As far as I can tell, no other recipe tool on any computer does unit-aware scaling. It’s a feature I’m pretty proud of.

This unit-aware scaling happens on larger units, too. 4 tablespoons becomes 1/4 cup, and so on. Each unit knows the smallest form it can be represented in: for example, there’s no such thing as a 1/2 tablespoon, but there is a 1/2 teaspoon. Teaspoons go all the way down to 1/8. Cups go down to 1/4. Gallons only go down to 1/2, because 1/4 gallon is just a quart. And so on.

To make this happen, pepin uses a unit reducer (link to code). It works by brute force: it converts an amount (say, 1/2 tablespoon) to every other unit that it could be represented by: 1 1/2 teaspoons, 1/32 of a cup, etc. It then finds the unit with the smallest corresponding number that is considered valid (bigger than the smallest acceptable amount for that unit). Since 1/2 tablespoon and 1/32 cup are invalid measures, it displays 1 1/2 teaspoons.
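As a sketch of the idea (in Swift rather than pepin’s JavaScript, with made-up Unit values and names that are not the library’s actual API):

```swift
struct Unit {
    let name: String
    let teaspoons: Double       // how many teaspoons one of this unit holds
    let smallestAmount: Double  // smallest valid measure in this unit
}

let units = [
    Unit(name: "teaspoon", teaspoons: 1, smallestAmount: 1.0 / 8.0),
    Unit(name: "tablespoon", teaspoons: 3, smallestAmount: 1),
    Unit(name: "cup", teaspoons: 48, smallestAmount: 1.0 / 4.0),
]

// Convert the amount into every unit, keep the valid representations,
// and pick the one with the smallest corresponding number.
func reduce(teaspoons teaspoons: Double) -> (amount: Double, unit: String) {
    let candidates = units
        .map { (amount: teaspoons / $0.teaspoons, unit: $0) }
        .filter { $0.amount >= $0.unit.smallestAmount }
    let best = candidates.minElement { $0.amount < $1.amount }!
    return (best.amount,
}

// 12 teaspoons (4 tablespoons) reduces to 1/4 cup
reduce(teaspoons: 12) // (0.25, "cup")
```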

pepin also scales servings and yields for a recipe; it doesn’t scale the prep time or cooking time, because it’s never clear how scaling the recipe will affect those times.

A lot of pepin’s usefulness comes from its stylesheets. I’d like to call out two particularly useful features. First, the site is responsive. Recipes need to render on my phone, my iPad, and my laptop, because I might have any of them in the kitchen with me at any time. I also want it to work well on a TV-sized screen, because if my apartment had a layout that let me see the TV from the kitchen, I’d want to work there too. pepin renders nicely in all those formats. I also blew up the font for viewports bigger than 1200px, which is useful for a laptop that’s a few feet away.

The other feature that the stylesheet provides is custom fraction rendering. Unicode provides support for what it calls vulgar fractions, like ¼ or ½. In the beginning, I started by rendering these values. As I added custom fonts to the project, I learned that many fonts don’t support these codepoints. Since making it look nice was an important piece of pepin, I rendered my own fractions, using the techniques described on this page. The final CSS I ended up with was:

.frac {
  font-size: 75%;
}

sup.frac {
  line-height: 10px;
  vertical-align: 120%;
}

Having proper fractions adds a nice bit of shine to the project. pepin also converts decimals to fractions, so if your original recipe has “0.25 cups of flour”, it’ll render as “1/4 cup of flour”.
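That decimal-to-fraction conversion can be done by testing a handful of kitchen-friendly denominators and keeping the closest match. This is a hypothetical sketch, not pepin’s source — the denominator list and function name are assumptions:

```javascript
// Denominators a cook would actually measure with; an assumption,
// not pepin's actual list.
const DENOMINATORS = [2, 3, 4, 6, 8];

// Convert a decimal amount like 0.25 into a display string like "1/4".
function toFraction(amount) {
  const whole = Math.floor(amount);
  const rest = amount - whole;
  if (rest === 0) return String(whole);
  // Start with "round up to the next whole number" as the fallback.
  let best = { n: 1, d: 1, error: Math.abs(rest - 1) };
  for (const d of DENOMINATORS) {
    const n = Math.round(rest * d);
    if (n === 0 || n === d) continue; // not a proper fraction
    const error = Math.abs(rest - n / d);
    if (error < best.error) best = { n, d, error };
  }
  if (best.d === 1) return String(whole + 1);
  const frac = `${best.n}/${best.d}`;
  return whole > 0 ? `${whole} ${frac}` : frac;
}
```

So “0.25 cups” comes out as “1/4”, and 1.5 comes out as “1 1/2”, ready for the fraction styling above.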

The last fun HTMLy component of this project was conforming to the Recipe schema, so that other sites can parse the information on pepin. The Recipe schema is what allows Google’s search results to show prep times or star ratings when linking to recipe sites like Epicurious or Allrecipes.

Conforming to the schema on this site is simple. For the HTML element that encloses the item, you can declare:

<div class="recipe" itemscope itemtype="http://schema.org/Recipe">

From there, each HTML element that holds data gets an itemprop attribute describing the data. For example, this span’s content would be the yield of the recipe:

<span id="serving-amount" itemprop="recipeYield">

To test your schema, you can use a testing tool that Google provides.

It’s not clear what conforming to a schema does for you, other than a slightly nicer display in Google’s search results, but I think these schemata represent the promise of the semantic web. Web developers have always been willing to put in the slight extra work of using semantically correct tags for their content, and these schemata seem like a natural extension of that, so I’m happy to support them.

There are also a few properties of the code that deserve mention. pepin is entirely served in static files; no code at all runs on the server. This was a useful quality of the project, since it means anyone can deploy it pretty much anywhere. It doesn’t need a database to run, or a Redis instance, or a Rabbit message queue, or anything like that. Just generate the HTML files and stick ‘em on any server. All the data for each recipe is stored in HTML. All the data for processing and converting units is stored in JavaScript. All the logic for parsing and presenting the data happens in the browser.

This frees you up in a lot of ways. There are no crazy Docker configurations, no worries about scaling limited server resources, and no expensive hosting. Another weird benefit of all the processing being done client-side is that it’s open source by default. Because I know anyone will be able to check under the hood to see how the scaling and parsing logic works, I might as well just make it open source.

A few friends asked if I was going to try to make any money off of pepin, but because a) nobody pays for stuff like this and b) all JavaScript is already open source anyway, the answer was clear. I open sourced it early on, and I had the side benefit of being able to show my friends the code easily and ask them what they thought about a particular piece of it.

The second interesting property of the code is that pepin is the first project I’ve ever made that was truly test-driven. In past projects, a lot of the logic was poorly factored out, or asynchronous, making it tough to test. In the cases where the logic was well-separated, I’d usually write the tests after the unit was more or less completed, or I would write them for a unit with particularly complex logic and lots of edge cases.

In the case of pepin, the entire domain is data-in-data-out, making it super easy to test. Also, as you make changes to support new patterns of ingredients (“1 cup milk” vs “1 cup of milk”), you have to make sure not to break existing patterns, and TDD was perfect for this case. iOS apps don’t have much logic in them, but where they do, I’m going to try to structure them to take advantage of testing.
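As a sketch of what a data-in-data-out test looks like here, consider a hypothetical parseIngredient function — a stand-in for pepin’s real parser, with an illustrative regex:

```javascript
// Hypothetical stand-in for pepin's real ingredient parser: data in
// (a string), data out (a plain object). The regex is illustrative.
function parseIngredient(line) {
  const match = line.match(/^(\d+(?:\/\d+)?)\s+(\w+)\s+(?:of\s+)?(.+)$/);
  if (!match) return null;
  return { amount: match[1], unit: match[2], name: match[3] };
}

// A test is just an assertion on plain data, which is why the whole
// suite can run in milliseconds:
console.assert(parseIngredient("1 cup milk").name === "milk");
console.assert(parseIngredient("1 cup of milk").name === "milk");
```

Supporting a new pattern means adding an assertion, and the old assertions immediately tell you if the new pattern broke an existing one.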

Because there’s no API component and no database to hit, my tests are blindingly fast. The entire test suite (50 tests) runs in 30 milliseconds. It’s very easy to run the entire suite after even the smallest change. (To be honest, the test runner should probably watch the folder and run after any file is changed, the same way that jekyll serve regenerates your site every time you save.)

I finally understand what people like Uncle Bob mean when they say that unit tests need to be fast. If your tests are hitting the API or the database, they’re going to be way too slow to run often. Isolate your logic, and run your tests a lot.

Where pepin is going

There are a few interesting problems in the domain that I would have liked to solve before launching, but they are unfortunately quite complicated.

One problem is that measures like teaspoons, tablespoons, and cups can represent both dry and wet goods, whereas quarts and gallons can only represent liquid goods. Currently, pepin doesn’t have an understanding of what the ingredients part of an amount means. Ideally, it would know that flour is a dry good, and its density is 2.1 grams per teaspoon.

With that information, the user would get to choose whether they want display in metric or imperial, and in volumetric or weight measures, and get exactly what they want to see. This is the dream of pepin: you shouldn’t have to do any conversions you don’t want to do.
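The idea can be sketched with a density table. This feature doesn’t exist yet, so everything here is hypothetical — the flour figure comes from the paragraph above, and the sugar figure is invented:

```javascript
// Hypothetical density-aware conversion. Densities in grams per
// teaspoon; the flour value is from the post, the rest is invented.
const DENSITIES = { flour: 2.1, sugar: 4.2 };
const TEASPOONS_PER_CUP = 48;

// Convert a volume measure of a known dry good into grams.
function toGrams(amount, unit, ingredient) {
  const density = DENSITIES[ingredient];
  if (density === undefined) return null; // unknown ingredient: can't convert
  const teaspoons = unit === "cup" ? amount * TEASPOONS_PER_CUP : amount;
  return teaspoons * density;
}
```

With a table like this behind the scenes, “1 cup of flour” could be displayed as roughly 100 grams for a user who prefers weighing to measuring.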

Knowing what the ingredients are and how many calories are in each gram would also let us generate a nutritional facts table for each recipe. Yummly currently does this, and it would be a great feature to support.

Another great small feature I’d like to steal is the clickable timers that Basil and Paprika have. These would detect times in the instructions, like “15 minutes” or “for an hour”, and turn them into timers that the user can activate with a tap. This is a feature that works better in an app than on the web, since an app can fire a UILocalNotification when the timer is over, and the web has no such mechanism. I will probably take advantage of HTML local storage to store the timers, so that leaving the page and returning to it won’t destroy the timer’s state.
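One way to sketch that persistence: store the absolute end time in local storage, rather than the remaining seconds, so the countdown stays accurate across a page reload. The key format and function names here are assumptions, not pepin’s code:

```javascript
// Persist the moment the timer ends, not the time remaining, so
// leaving and returning to the page keeps the countdown accurate.
function startTimer(id, durationSeconds, storage = localStorage) {
  const endsAt = Date.now() + durationSeconds * 1000;
  storage.setItem(`timer:${id}`, String(endsAt));
}

// Returns seconds left (clamped at 0), or null if no such timer.
function secondsRemaining(id, storage = localStorage) {
  const endsAt = storage.getItem(`timer:${id}`);
  if (endsAt === null) return null;
  return Math.max(0, (Number(endsAt) - Date.now()) / 1000);
}
```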

The last big feature that I’d love to build for pepin is a good way to display images of the food. To really fill the role of a cookbook, it needs to be beautiful as well as functional. This is a tough one for a few reasons: I need to have really beautiful pictures of my recipes, which are hard to get; the pictures need to go in the right places for each scale that the app supports; and the hosting of the pictures is an additional cost in complexity and hosting fees. I’m hoping to figure this out soon.

pepin isn’t done; software projects never seem to be. Nevertheless, it’s cool, stable, and fun to use. I hope you enjoy it.

A side project I’m currently working on needs an understanding of lots of different kinds of units. (I should probably be working on getting that off the ground instead of writing this blog post. Nevertheless.)

I’ve always found modeling units to be a fascinating programming problem. For time, for example, if you have an API that accepts a time, it’s probably going to accept seconds (or perhaps milliseconds! who can know!), but sometimes, you need to express a time like 2 hours. So instead of a magic number (7200, the number of seconds in two hours), you write 2 * 60 * 60, perhaps adding spaces in between the operators to aid in “readability”.

7200, though, doesn’t mean anything. If you look at it long enough, and you have the freakish knack for manipulating mathematical symbols in your head, you might recognize it as two hours in seconds. If it weren’t a round number of hours, though, you never could.

And as that 7200 winds its way through the bowels of your application, it becomes less and less clear what units that mere integer is in.

What we need is a way to associate our integer with some metadata. Types have been described as units before, but can we flip that around, describing units of measure with types? That could prevent us from adding 2 hours to 30 minutes and getting a meaningless result of 32.

(While it’s possible to handle this at the language level, most languages don’t have support for stuff like this.)

We still want to be able to add 2 hours to 30 minutes and get a meaningful result, so in our type system Time needs to be an entity, but Hours and Seconds do too.

Multiple things can be a Time, and each of those things must have a way to be represented in seconds:

protocol Time {
    var inSeconds: Double { get }
}

Each unit of time will be its own type, but it will also be a Time.

struct Hours: Time {
    let value: Double
    var inSeconds: Double {
        return value * 3600
    }
}

struct Minutes: Time {
    let value: Double
    var inSeconds: Double {
        return value * 60
    }
}

We could add similar structs for Seconds, Days, Weeks, et cetera, understanding that we’ll lose some precision as we go up in scale.

Now that we have a shared understanding of how our units of measure can be represented, we can manipulate that unit.

func + (lhs: Time, rhs: Time) -> Time {
    return Seconds(value: lhs.inSeconds + rhs.inSeconds)
}

We can also add some handy conversions for ourselves:

extension Time {
    var inMinutes: Double {
        return inSeconds / 60
    }
    var inHours: Double {
        return inMinutes / 60
    }
}

And create a DSL-like extension onto Int, helpfully cribbed from ActiveSupport:

extension Int {
    var hours: Time {
        return Hours(value: Double(self))
    }
    var minutes: Time {
        return Minutes(value: Double(self))
    }
}

Which lets us write a short, simple, expressive line of code that leverages our type system.

let total = 2.hours + 30.minutes

(This result will of course be in Seconds, so we will want some kind of presenter to reduce the units and display this value in a meaningful way to the user. My side project has affordances for this. The side project is, unfortunately, in JavaScript, so no such type system fun will be had.)
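In JavaScript, that presenter might reduce a raw number of seconds back into mixed units for display. A minimal sketch — the unit table and function name are assumptions:

```javascript
// Largest units first, so the presenter greedily peels off whole
// hours, then minutes, then seconds. The table is an assumption.
const TIME_UNITS = [
  { suffix: "h", seconds: 3600 },
  { suffix: "m", seconds: 60 },
  { suffix: "s", seconds: 1 },
];

// e.g. 9000 seconds -> "2h 30m"
function formatSeconds(total) {
  const parts = [];
  let remaining = total;
  for (const unit of TIME_UNITS) {
    const count = Math.floor(remaining / unit.seconds);
    if (count > 0) {
      parts.push(`${count}${unit.suffix}`);
      remaining -= count * unit.seconds;
    }
  }
  return parts.length > 0 ? parts.join(" ") : "0s";
}
```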

I make a lot of hay about how to break view controllers up and how view controllers are basically evil, but today I’m going to approach the problem in a slightly different way. Instead of rejecting view controllers, what if we embraced them? We could make lots and lots of small view controllers, instead of lots and lots of small plain objects. After all, Apple gives us good ways to compose view controllers. What if we “leaned in” to view controllers? What benefits could we gain from such a setup?

I know a few people who do a subset of this. Any time there’s a meaningful collection of subviews, you can create a view controller out of those, and compose those view controllers together. This is a worthwhile technique, but today’s post will use a new type of view controller — one that defines a behavior — and show you how to compose them together.

Consider analytics. Often, I’ve seen analytics handled in a BaseViewController class:

@implementation BaseViewController

- (void)viewDidAppear:(BOOL)animated {
	[super viewDidAppear:animated];
	[AnalyticsSingleton registerImpression:NSStringFromClass([self class])];
}

@end

You could have a lot of different behaviors in this base class. I’ve seen base view controllers with a few thousand lines of shared behavior and helpers. (I’ve seen it in Rails ActionControllers too.) But we won’t always need all this behavior, and sticking this code in every class breaks encapsulation, draws in tons of dependencies, and generally just grosses everyone out.

We have a general principle that we like to follow: prefer composition over inheritance. Luckily, Apple gives us a great way to compose view controllers, and we’ll get access to the view lifecycle methods too, for free! Even if your view controller’s view is totally invisible, it’ll still get the appearance callbacks, like -viewDidAppear: and -viewWillDisappear:.

To add analytics to your existing view controllers as a composed behavior rather than something in your superclass, first, set up the behavior as a view controller:

@implementation AnalyticsViewController

- (instancetype)initWithName:(NSString *)name {
    self = [super init];
    if (!self) return nil;
    _name = name;
    self.view.alpha = 0.0;
    return self;
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    // Assuming the same singleton as the base class version above.
    [AnalyticsSingleton registerImpression:_name];
}

@end

Note that the alpha of this view controller’s view is set to 0. It won’t be rendered, but it will still exist. Now that we have a simple view controller that we can add as a child, we need a way to add it easily to any view controller. Fortunately, for this, we can simply extend the UIViewController class:

@implementation UIViewController (Analytics)

- (void)configureAnalyticsWithName:(NSString *)name {
    AnalyticsViewController *analytics = [[AnalyticsViewController alloc] initWithName:name];
    [self addChildViewController:analytics];
    [self.view addSubview:analytics.view];
    [analytics didMoveToParentViewController:self];
}

@end

We can call -configureAnalyticsWithName: anywhere in our primary view controller, and we’ll instantly get our view tracking with one line of code. It’s encapsulated in a very straightforward way. It’s easily composed into any view controller, including view controllers that we don’t own! Since the method -configureAnalyticsWithName: is available on every single view controller, we can easily add behavior without actually being inside of the class in question. It’s a very powerful technique, and it’s been hiding under our noses this whole time.

Let’s look at another example: loading indicators. This is something that’s typically handled globally, with something like SVProgressHUD. Because this is a singleton, every view controller (every object!) has the ability to add and remove the single global loading indicator. The loading indicator doesn’t have any state (besides visible and not-visible), so it doesn’t know to disappear when the current view is dismissed and the context changes. Ideally, we’d like the ability to have a loading indicator whenever we need one, but not otherwise, and to be able to turn it on and off with minimal code. We can approach this problem in the same way as the analytics view controller.

@implementation LoadingViewController

- (void)loadView {
    LoadingView *loadingView = [[LoadingView alloc] init];
    loadingView.hidden = YES;
    loadingView.label.text = @"Posting...";
    self.view = loadingView;
}

- (LoadingView *)loadingView {
    return (LoadingView *)self.view;
}

- (void)show {
    self.loadingView.hidden = NO;
    [self.loadingView startAnimating];
}

- (void)hide {
    self.loadingView.hidden = YES;
    [self.loadingView stopAnimating];
}

@end

And our extension to UIViewController is a little more complex this time. Since we don’t have any configuration information, like the name in the analytics example, we can lazily add the loader the first time it needs to be used.

@implementation UIViewController (Loading)

- (LoadingViewController *)createAndAddLoader {
    LoadingViewController *loading = [[LoadingViewController alloc] init];
    [self addChildViewController:loading];
    [self.view addSubview:loading.view];
    [loading didMoveToParentViewController:self];
    return loading;
}

- (LoadingViewController *)loader {
    for (UIViewController *viewController in self.childViewControllers) {
        if ([viewController isKindOfClass:[LoadingViewController class]]) {
            return (LoadingViewController *)viewController;
        }
    }
    return [self createAndAddLoader];
}

@end

Again, we see similar benefits. The loader is no longer a global; instead, each view controller adds its own loader as needed. The loader can be shown and hidden with [self.loader show] and [self.loader hide]. You also don’t have to explicitly add the behavior (a loader) in this example.

We get the benefit of simple invocations and well-factored code. Other solutions to this problem require you to use globals or subclass from one common view controller, whereas this does not.

This example doesn’t need access to the view lifecycle methods like the other ones. It only needs access to the view, which it gets just from being a child view controller. (If you wanted, you could also add more state, like an incrementing and decrementing counter for the number of in-flight network requests.)

Another example of a common view controller behavior that we would love to factor out is error presentation. As of iOS 8, UIAlertView is deprecated in favor of UIAlertController, which requires access to a view controller. In the Backchannel SDK, I use a class called BAKErrorPresenter that is initialized with a view controller for presenting the error. Instead, what if the error presenter was a view controller?

@implementation ErrorPresenterViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.view.alpha = 0.0;
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    self.isVisible = YES;
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    self.isVisible = NO;
}

- (UIAlertAction *)okayAction {
    return [UIAlertAction actionWithTitle:@"OK" style:UIAlertActionStyleCancel handler:nil];
}

- (void)present:(NSError *)error {
    if (!self.isVisible) { return; }
    UIAlertController *alert = [UIAlertController alertControllerWithTitle:error.localizedDescription message:error.localizedFailureReason preferredStyle:UIAlertControllerStyleAlert];
    [alert addAction:self.okayAction];
    [self presentViewController:alert animated:YES completion:nil];
}

@end

Note that the error presenter can maintain any state it needs, such as isVisible from the lifecycle methods, and this state doesn’t gunk up the primary view controller.

I’ll leave out the UIViewController extension here, but it would function similarly to the loading indicator, lazily loading an error presenter when one is needed. With this code, all you need to present an error is:

[self.errorPresenter present:error];

How much simpler could it be? And we didn’t even have to sacrifice any programming principles.

For our last example, I want to look at a reusable component that’s highly dependent on the view appearance callbacks. Keyboard management is something that typically needs to know when the view is on screen. Normally, if you break this out into its own object, you have to manually invoke the appearance methods. As a child view controller, you get that for free!

@implementation KeyboardManagerViewController

- (instancetype)initWithScrollView:(UIScrollView *)scrollView {
    self = [super init];
    if (!self) return nil;
    _scrollView = scrollView;
    self.view.alpha = 0.0;
    return self;
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(keyboardAppeared:) name:UIKeyboardDidShowNotification object:nil];
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(keyboardDisappeared:) name:UIKeyboardWillHideNotification object:nil];
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [[NSNotificationCenter defaultCenter] removeObserver:self name:UIKeyboardDidShowNotification object:nil];
    [[NSNotificationCenter defaultCenter] removeObserver:self name:UIKeyboardWillHideNotification object:nil];
}

- (void)keyboardAppeared:(NSNotification *)note {
    CGRect keyboardRect = [[note.userInfo objectForKey:UIKeyboardFrameEndUserInfoKey] CGRectValue];
    self.oldInsets = self.scrollView.contentInset;
    UIEdgeInsets contentInsets = UIEdgeInsetsMake(0.0f, 0.0f, CGRectGetHeight(keyboardRect), 0.0f);
    self.scrollView.contentInset = contentInsets;
    self.scrollView.scrollIndicatorInsets = contentInsets;
}

- (void)keyboardDisappeared:(NSNotification *)note {
    self.scrollView.contentInset = self.oldInsets;
    self.scrollView.scrollIndicatorInsets = self.oldInsets;
}

@end

This is a simple and almost trivial implementation of a keyboard manager. Yours might be more robust. The principle, however, is sound. Encode your behaviors into tiny, reusable view controllers, and add them to your primary view controller as needed.

Using this technique, you can avoid the use of global objects, tangled view controllers, long inheritance hierarchies, and other code smells. What else can you make? A view controller responsible purely for refreshing network data whenever the view appears? A view controller for validating the data in a form view? The possibilities are endless.

Last week, I tweeted that “reading lots of new blog posts in rss makes me way happier than reading lots of new tweets”.

Opening my RSS reader and finding 30 unread items makes me happy. Opening Twitter and seeing 150 new tweets feels like work. I’m not sure why that is. I think Twitter has become more negative, and the ease of posting quick bursts makes posting negative stuff easy. With blogging, writing something long requires time, words, and an argument. Even the passing thought of “should I post this?” creates a filter that lets only better stuff through.

And I find myself running out of blog posts to read more quickly than tweets. Even though the content is longer-form, there are far fewer sources in total. I want to fix this.

That same day on Twitter, I put out a call for new blogs. I got a few recommendations (all great!): Priceonomics from Allen Pike, The Morning News from Patrick Gibson, and Matt Bischoff’s tour-de-force of a tweet.

I’m looking for more, though, and blogs of a specific type:

  • Written by a single person with a voice and interests of their own
  • I like programming but I’m happy with other stuff too
  • Longer-form is better than shorter, but both are good
  • Prefer original content to link blogs
  • Ideally fewer than 2 or 3 posts per week

Send me tweets and emails about the awesome blogs you love, please! And don’t be afraid to promote your own blog. I want to read it. Over the last year, while looking at my referrers, I’ve found some awesome blogs that my readers have been quietly working on, such as Christian Tietze and Benedict Cohen. I’m not kidding, I want to see your blog.

Here are some blogs of the type that I’ve found myself enjoying the most recently.

Slate Star Codex might be my favorite blog I’ve found. Scott has a contrarian angle on issues that’s not always right but is always interesting. If you email me, I’m happy to recommend my favorites of his posts.

Sometimes, it can be inspiring to read people in other programming communities writing good stuff, like Pat Shaughnessy and his blog. Zack Davis’s An Algorithmic Lucidity is great. There are blogs like Mike Caulfield’s and Manton Reece’s that read like a journal for a new project. It’s awesome to be along with them for the ride.

Erica Sadun’s, Eli Schiff’s, Ben Sandofsky’s. Blogs I’ve found because of Swift stuff, like Olivier Halligon’s, Airspeed Velocity, and Russ Bishop’s. An amazing blog from a co-worker at Genius, James Somers. He never posts, but when he does, it’s worth the wait.

I think I miss blogrolls, too. One of those will probably make an appearance on this blog soon.

Over winter break last year, I went on vacation for two weeks. I had lots of time and not as much internet. With the downtime, I wrote three posts of ideas I’d been having. I figured I would post one a week in the new year.

I posted the first one, Finite States of America, when I got back. It got a little traction and so I wrote a follow-up, State Machinery. The next two weeks saw posts about The Coordinator and Categories in Objective-C. After a month of posting, I found I really liked having a once-a-week posting schedule. I decided to see how long I could keep going.

At the end of the year, WordPress sent me a year-end statistics retrospective, and it included a graph.

Screen Shot 2015-12-29 at 5.38.18 PM

Each column is a week, and each green dot is a new post. This graph was coincidentally perfect for this project, because it clearly shows which weeks I posted and which weeks I didn’t. (I missed three weeks in March for Úll and working on the Instant Cocoa release, two for WWDC, one for Thanksgiving, and one for NSSpain. I feel very guilty about missing those weeks and I’m sorry.)

Now, with the year over, I think I’m going to move to a calmer posting schedule. Once a week, especially for the highly technical types of posts I write, is pretty extreme. I hope I can do twice a month. Time will tell.

Through the process, I learned a lot of things.

The biggest thing I learned was that I could do this at all. In a roughly-mid-year retrospective, Throw It All Away, I wrote:

I’ve published 15 posts since January. It feels like a breakneck speed. If you asked me last year how long I could sustain such a pace, I think I would have answered, “maybe 4 weeks?”.

But I’m still going. And, somehow, even though back in December the list of potential topics had as many items on it as I’ve posted already, it’s still more or less the same length. I can’t explain it.

A lot of my friends asked me how I kept up such a crazy schedule. While it helped to have more people than usual reading my stuff and sending me positive feedback, the best thing was having a strict schedule and sticking to it. Making the blog a priority each week was the key. With 168 hours in each week, I of course had time to blog; it just needed to be prioritized over work, sleep, eating, social stuff, and binge-watching The West Wing.

The second big thing I learned this year is that writing helps me figure out what I actually think. In this talk, Leslie Lamport quotes a cartoonist named Guindon in saying “Writing is nature’s way of letting you know how sloppy your thinking is.” I haven’t been able to source the quote any more specifically than that, but it’s a great quote.

When writing an argument down, it congeals into something more solid, and it’s so much easier to see the weak points and holes in the argument. For example, when I started writing A Structy Model Layer, my original intention was to show why structs didn’t make for good models. As I tried to flesh out my post and my thoughts, I realized that it was actually a more complicated issue than that, and sometimes structs are appropriate for model layers.

Writing so many posts helped me make clearer arguments and figure out what I really thought. I’m also glad that I have a repository of big, well-thought-out ideas that I can point people to. It was a great year, and since I’ve just started writing Swift for a client, more posts are just around the corner.

Over the course of the last year, I’ve blogged once a week. I’ve written about a broad range of ideas, but if there was one overriding concept, it was Massive View Controller. The idea of Massive View Controller all started from one simple tweet. It feels like the most obvious and pressing issue in terms of code quality in our industry.

Since my writing consists mostly of 1000-word chunks broken up by weeks, I wanted to assemble a compendium of different strategies I’ve written about.

8 Patterns to Help You Destroy Massive View Controller

The most important post I’ve written about the topic is 8 Patterns to Help You Destroy Massive View Controller. This contains lots of really practical advice for breaking up view controllers into composed units, like data sources, navigators, interactions, etc.

The thing that’s nice about writing code like this is not every view controller needs every one of these components. You just pick the ones you need and implement them. I also wrote about one of the patterns in its own blog post, called Smarter Views.

This post was written more than a year ago, so there’s lots of stuff I’d like to update in it, and patterns I would like to add and clarify.

Coordinators Redux

Coordinators Redux is a follow-up post to The Coordinator from earlier this year. It’s a 3000-word treatise on why the view controller system is broken and how it leads to very messy entanglement between view controllers. The talk also comes in video form, which has some nice graphics breaking down what the benefits are.

Coordinators make your view controllers simpler and more reusable. By taking over the responsibility of flow management, they make it unnecessary for view controllers to know about each other. They also centralize flow knowledge, instead of distributing it amongst many view controllers.

8 Patterns and Coordinators are my contributions to new ideas for the field. The other blog posts center around themes of how to think about this stuff.

Controllers are messed up

Mediating controllers, like Apple’s view controllers, which are sometimes known as adapters, have fundamental problems. In Model View Whatever, I examine the different overarching patterns that your app might use, such as model-view-controller (true MVC), model-view-adapter (which is more like what we call MVC today), model-view-viewmodel (MVVM), and a few others.

I went into further detail on MVVM last week in MVVM Is Not Very Good. The goal of that post isn’t to say that MVVM is bad. You’ll note the title explicitly says it’s just “not very good”. Taking a huge chunk of code out of the view controller doesn’t help all that much if you just stick it all somewhere else. The goal of the post is to suggest that we could do way, way better.

In A Controller By Any Other Name, I analyze the harm caused by naming objects “Controller”. I wrote one of my favorite paragraphs ever in that post:

The harm caused by the “Controller” suffix is subtle, too. When you call something a Controller, it absolves you of the need to separate your concerns. Nothing is out of scope, since its purpose is to control things. Your code quickly devolves into a procedure, reaching deep into other objects to query their state and manipulate them from afar. Boundless, it begins absorbing responsibilities.

Lastly, in Emergence, I wrote about how the pain that we get from view controllers doesn’t happen by any malicious force. It happens purely by natural, emergent effects that happen as we’re working in the codebase “normally”.

Small Objects

The other big piece of keeping view controllers small is keeping all your objects small. If your view controller gets small, but some other view controller takes on the weight, that’s no solution at all. This is part of my complaint in the aforelinked “MVVM is not very good”.

I had a big revelation in Keeping Your Classes Shorter Than 250 Lines:

The critical epiphany for me was that if the same amount of code is spread among much smaller classes, there will have to be a lot more classes.

Examples of small objects, like those mentioned in 8 Patterns and the Cache object in Cache Me If You Can, help to break this stuff down.

In Anatomy of a Feature: Push Notifications, I take a look at what a common feature, push notifications, looks like when broken down into many small objects. This same technique can be used with other subsystems of your app.

There are two final techniques for making small objects, covered on this blog, that I want to recap. The first is Pure Objects, which are an analog to functional programming’s pure functions. They don’t have access to any globals, don’t write to disk, and don’t access the network. Their inputs completely define their outputs, and so they’re ripe for storing logic.

The other technique for making small objects that I’ve written about was in Graduation, which is a step-by-step breakdown of the Method Object pattern, a great way to turn your long, nasty methods into beautiful, simple objects.
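The Method Object pattern in miniature might look like this (a hypothetical example, not the one from Graduation): a long method’s local variables become properties, and its body breaks apart into small, named methods.

```swift
// What was once one long method with locals for the subtotal and
// tax becomes a small object where each step has a readable name.
struct ReceiptTotaler {
    let prices: [Double]
    let taxRate: Double

    func total() -> Double {
        return subtotal() + tax()
    }

    private func subtotal() -> Double {
        return prices.reduce(0, +)
    }

    private func tax() -> Double {
        return subtotal() * taxRate
    }
}
```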

These techniques won’t solve Massive View Controller on their own, but taken together, they will take you a long way there. They also won’t do their work alone; as Smokey Bear once said, only you can prevent Massive View Controller.

I write a lot about making view controllers more digestible, and one very common pattern for doing that is called Model-View-ViewModel. I think MVVM is an anti-pattern that confuses rather than clarifies. View models are poorly-named and serve only as a stopgap on the road to better architecture. Our community would be better served by moving on from the pattern.

MVVM is poorly-named

Names are important. Ideally, a name effectively communicates what an object is for, what roles it fills, and how it’s used. “View model”, as a name, doesn’t do any of those things.

To make my case for me, the “view model”, on account of how abstract it is, actually refers to two very different patterns.

The first type of “view model” is a “model for the view”. This is a dumb object (definitely a struct in Swift) that is passed to a view to populate its subviews. It shouldn’t contain any logic or even any methods. In the same way that a UILabel takes a string, or a UIImageView takes an image, your ProfileView can take a ProfileViewModel. It’s passed directly to your ProfileView, and it crucially allows you to make your subviews private, instead of exposing them to the outside world. This is a noble and worthwhile end. I’ve also seen this pattern called “view data”, which I like, because it removes itself from the baggage of the other definition of “view model”.
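A sketch of that first kind of “view model”, or “view data” (the ProfileView and ProfileViewModel types here are hypothetical, for illustration):

```swift
import UIKit

// A "model for the view": a dumb struct with no logic or methods,
// handed to the view wholesale, just like a UILabel takes a string.
struct ProfileViewModel {
    let name: String
    let followerText: String  // already formatted for display
    let avatar: UIImage?
}

final class ProfileView: UIView {
    // Subviews stay private; the view model is the only way in.
    private let nameLabel = UILabel()
    private let followerLabel = UILabel()
    private let avatarView = UIImageView()

    func configure(with viewModel: ProfileViewModel) {
        nameLabel.text = viewModel.name
        followerLabel.text = viewModel.followerText
        avatarView.image = viewModel.avatar
    }
}
```

Because `ProfileView` exposes only `configure(with:)`, no outside code can reach in and fiddle with its subviews directly.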

“View model” is also a name for a vague, abstract object that lies in between a model object and a view controller. It performs any data transformation necessary for presentation, as well as sometimes networking and database access, sometimes form validation, and any other tasks you feel like throwing in there. This jack-of-all-trades-style object is designed to transfer weight away from your controllers, but ultimately, it creates a new kitchen sink for you to dump responsibilities into.

MVVM invites many responsibilities

The lack of concrete naming makes this class’s responsibilities grow endlessly. What functions should go in a view model? Nobody knows! Just do whatever.

Let’s look at some examples.

  • This post puts networking inside your view models, and recommends you add validations and presentation logic to it as well.
  • This post only shows how to put presentation logic in view models, raising the question of why it’s not called a Presenter instead.
  • This one says you should use them for uploading data and binding to ReactiveCocoa.
  • This one uses them for both form validation and fetching data.
  • This one takes the cake by specifically suggesting that you put “miscellaneous code” in a view model:

    The view model is an excellent place to put validation logic for user input, presentation logic for the view, kick-offs of network requests, and other miscellaneous code.

Nobody has any idea what the words “view model” mean, and so they can’t agree on what to put in them. The concept itself is too abstract. None of these writers would disagree about what goes in a Validator class, or a Presenter class, or a Fetcher class, and so on. Those are all better names with tightly defined roles.
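To illustrate with a couple of hypothetical types (not from any of the linked posts), nobody would argue about where these responsibilities live:

```swift
// Each object's name pins down exactly one responsibility.
struct EmailValidator {
    func isValid(_ email: String) -> Bool {
        // Deliberately simplistic check, for illustration only.
        return email.contains("@") && email.contains(".")
    }
}

struct FollowerCountPresenter {
    func text(for count: Int) -> String {
        return count == 1 ? "1 follower" : "\(count) followers"
    }
}
```

You could hand either of these to a stranger and they’d know what belongs inside; “view model” gives no such guidance.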

Giving the same name to a wide variety of different objects with drastically different responsibilities only serves to confuse readers. If we can’t agree on what view models should do, what’s the benefit of giving them all the same name?

Our discipline has already faced a similar challenge, and we found that “controller” was too broad of a name to contain a small set of responsibilities.

You’re totally free to give your classes whatever names you want! Pick good ones.

MVVM doesn’t change your structure

Finally, view models don’t fundamentally alter how you structure your app. What’s the difference between these two images? (source)



You don’t need an advanced graph theory class to see that these are almost completely identical.

The most charitable thing that I can say about this pattern is that it changes your kitchen sink from a view controller, which is not an object that you own (because it’s a subclass of an Apple class), to a view model, an object that you do own. The view controller is now free to focus on view-lifecycle events, and is simpler for it. Still, though, we have a kitchen sink. It’s just been moved.

Because the view model is just one, poorly-defined layer added to your app, we haven’t solved the complexity problem. If you create a view model to prevent your view controller from getting too big, then what happens when your app’s code doubles in size again? Maybe at that point we can add a controller-model.

The view model solution doesn’t scale. It’s a band-aid over a problem that will continue to come up. We need a better solution, like a heuristic that allows you to continually divide objects as they get too big, like cells undergoing mitosis. View models are just a one-time patch.

Other communities have already been through this

The Rails community went through this problem some years ago, and they came out on the other side. We could stand to learn from their story. First, they had fat controllers and almost nothing but persistence in their models. They saw how untestable this was, so they moved all the logic down into the model and ended up with a skinny controller and a fat model. The fat model, since it relied on ActiveRecord, and thus the database, was still too hard to test and needed to be broken down into many smaller components.

Blog posts like 7 Patterns to Refactor Fat ActiveRecord Models (which was the inspiration for my own 8 Patterns to Help You Destroy Massive View Controller) are an example of the product of this chain of thought. Eventually, you’re going to have to separate your concerns into small units, and moving your kitchen sink is only going to delay the reckoning.

View models are a solution that is totally unsuited to the challenges of modern programming. They are poorly-named constructs that don’t have a sense of what they should contain, which causes them to suffer the same problems as view controllers. They are only a temporary patch to a complicated problem, and if we don’t avoid them, we’ll have to deal with this problem again in the near future.

This week, I’d like to examine building your entire model layer out of Swift structs. Swift allows you to build your objects in two ways: value types and reference types.

Reference types behave more like the objects we’re used to. They’re created with the class keyword, and they are pass-by-reference. This means that multiple other pieces of code could have a handle on the same object, which, when combined with mutable properties, can lead to issues of thread safety and data in an inconsistent state.

Value types, on the other hand, are pass-by-value. They’re created with the struct keyword. Passing things by value means that, in practice, two pieces of code can’t mutate the same struct at the same time. They’re commonly used for types that are mostly a bag-of-data.
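The difference is easy to see in a toy example (Counter and Score are hypothetical types, invented for illustration):

```swift
// Reference type: two variables can point at the same instance.
final class Counter {
    var value = 0
}

// Value type: assignment makes an independent copy.
struct Score {
    var points = 0
}

let a = Counter()
let b = a          // b refers to the very same object as a
b.value = 10       // a.value is now 10 as well

var x = Score()
var y = x          // y is a copy, not a second handle on x
y.points = 10      // x.points is still 0
```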

In Objective-C, you effectively couldn’t use value types at all. In Swift, we have a new thing called a struct, which automatically copies itself every time it’s used in a new place. At first blush, this seems like exactly what we want for our model layer. It holds data and can’t be shared between threads, making it much safer. I want to know: can I write my whole model layer out of this?

Drew Crawford wrote a post called “Should I use a Swift struct or a class?”. The general idea is that with the advent of this new tool (structs), a lot of people have promoted writing as much of your code as possible in structs, rather than classes.

This leads to vaguely positive statements like Andy Matuschak’s, in which he is “emphatically not suggesting that we build everything out of inert values”, and yet we should “Think of objects as a thin, imperative layer”, which presumably leaves a thick layer of values for everything else, and “As you make more code inert, your system will become easier to test and change over time”, which is vaguely true, but when taken to its logical conclusion seems to contradict his earlier statement that not everything should be a struct.

Drew’s general recommendation is that when it’s meaningful for the type to conform to Equatable, it can be a struct. If not, it should be a class. This suggests that our model objects maybe should be represented as a struct. My model objects usually do conform to Equatable, comparing the values that represent their identity.
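Under that heuristic, a model like this hypothetical User earns struct status, because equality over its identity-defining values is meaningful:

```swift
// Equality compares the values that represent the model's identity.
// (Modern Swift synthesizes == for structs whose properties are all
// Equatable; in the Swift of this post's era you wrote it by hand.)
struct User: Equatable {
    let id: Int
    let username: String
}
```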

Drew also quotes Apple’s book on Swift:

As a general guideline, consider creating a structure when one or more of these conditions apply:

  • The structure’s primary purpose is to encapsulate a few relatively simple data values.

This, on the other hand, suggests that structs are too simplistic for the model layer. Models are usually more than “a few relatively simple data values”.

Examples of good candidates for structures include:

  • The size of a geometric shape, perhaps encapsulating a width property and a height property, both of type Double.
  • A way to refer to ranges within a series, perhaps encapsulating a start property and a length property, both of type Int.
  • A point in a 3D coordinate system, perhaps encapsulating x, y and z properties, each of type Double.

These sound like parts of a model, rather than the model layer itself.

Because model objects lie in a gray area between “things that need to be alive and responsive” and “things whose central job is to hold data”, I’m surprised the models-as-structs question hasn’t been asked yet.

Models are primarily value data (like strings and integers), and there are going to be people who try to make their whole model layer out of structs, so I think it’s worth examining this approach. After all, the “models” in a purely functional programming language like Haskell have to be pass-by-value. Can’t it be done here?

The answer is, in short, yes. But there are a lot of caveats.

Persistence is much more difficult. You can’t conform to NSCoding without being an NSObject. To use NSCoding with a struct, you have to outsource your object encoding to a reference type, as spelled out in this post. Core Data and Realm aren’t options at all, since their models have to subclass NSManagedObject and Realm’s Object, respectively. To use them, you have to define your model twice and painstakingly copy properties from your structs to your persistence objects.
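One shape that outsourcing can take, sketched with hypothetical Note and NoteBox types (and using the modern Swift spellings of the NSCoding API):

```swift
import Foundation

// The struct stays a plain value...
struct Note {
    let title: String
    let body: String
}

// ...and a thin NSObject shell does the archiving on its behalf.
final class NoteBox: NSObject, NSCoding {
    let note: Note

    init(note: Note) {
        self.note = note
    }

    required init?(coder: NSCoder) {
        guard let title = coder.decodeObject(forKey: "title") as? String,
              let body = coder.decodeObject(forKey: "body") as? String
        else { return nil }
        self.note = Note(title: title, body: body)
    }

    func encode(with coder: NSCoder) {
        coder.encode(note.title, forKey: "title")
        coder.encode(note.body, forKey: "body")
    }
}
```

Every archivable model needs one of these shells, which is exactly the duplication the paragraph above complains about.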

Changes in one model struct won’t be reflected elsewhere. In a lot of cases, changing a property of an object should be reflected in more than one place. For example, faving a tweet in a detail view should be reflected on the timeline as well. If you need behavior like that, be prepared to write a bunch of state maintenance code and notifications for shuttling those changes around to every corner of your app that needs them.
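A sketch of what that shuttling can look like with NotificationCenter (the Tweet type and notification name are hypothetical, for illustration):

```swift
import Foundation

struct Tweet {
    let id: Int
    var isFaved: Bool
}

let tweetDidChange = Notification.Name("TweetDidChangeNotification")

// Every copy of this tweet, on every screen, is now stale; the best
// we can do is broadcast the new value and hope everyone is listening.
func fave(_ tweet: Tweet) {
    var updated = tweet
    updated.isFaved = true
    NotificationCenter.default.post(name: tweetDidChange,
                                    object: nil,
                                    userInfo: ["tweet": updated])
}
```

Each screen that displays the tweet has to observe this notification and swap in the new copy itself, which is the maintenance burden described above.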

You can’t have circular references. Some people might consider this a pro rather than a con, but structs can’t describe circular relationships. If you want to get all the tags of a post, and then go find all the posts of one of those tags, you’re going to have to either duplicate the data or go through intermediate objects. Your data must be a hierarchical/tree structure, and can’t contain loops. This is what JSON looks like by default, so if your app’s model is a thin layer over a web service, this is less of a downside for you.
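One common workaround, sketched here with hypothetical Post and Tag types, is to break the loop with identifiers and a lookup table, which is the “intermediate objects” route:

```swift
// Neither type holds the other directly; each holds IDs instead.
struct Post {
    let id: Int
    let tagIDs: [Int]
}

struct Tag {
    let id: Int
    let postIDs: [Int]
}

// Traversal goes through a store keyed by ID rather than a pointer.
func posts(for tag: Tag, in allPosts: [Int: Post]) -> [Post] {
    return tag.postIDs.compactMap { allPosts[$0] }
}
```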

Not all types in your model will be representable by value types. Specifically, colors, URLs, data, fonts, and images, even though they act like values, can only be represented by class-based reference types, like UIColor, NSURL, and their companions. To make these truly value types, you’ll have to either wrap the class in a struct or define a new data structure that represents the data and can be readily converted into a Foundation- and UIKit-friendly type.
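A sketch of the wrapping approach for colors (this Color type is hypothetical, for illustration):

```swift
import UIKit

// A genuine value type that holds the color's data, convertible to
// UIColor only at the boundary where UIKit actually needs it.
struct Color: Equatable {
    let red: CGFloat
    let green: CGFloat
    let blue: CGFloat
    let alpha: CGFloat

    var uiColor: UIColor {
        return UIColor(red: red, green: green, blue: blue, alpha: alpha)
    }
}
```

The model layer stores and compares `Color` values; only the view layer ever calls `uiColor`.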

If you want to make your model layer out of structs, it’s not impossible, but the downsides can be great. As with the rest of programming, it’s a trade-off that you must weigh to make a decision.