Ruby allows developers to easily make new namespaces using the module keyword. For example, the ActiveRecord namespace contains a class called Base, like so:

module ActiveRecord
	class Base
	
	end
end

Inside the ActiveRecord module, you can refer to this class as just Base, whereas outside the module, you’d refer to it as ActiveRecord::Base. You can add multiple classes, functions, and variables to any Ruby module.

(For those that are curious, ActiveRecord::Base is ActiveRecord’s equivalent to NSManagedObject. It’s the class you subclass from to make new ORM objects.)

I want this effect in Swift. Swift has something called modules, but those are really just frameworks. I know they can give you some potential compilation speedups, but making a Swift module is pretty high ceremony and involves a lot of configuration. I just want to type some code, and isolate some classes from some other classes.

This great Natasha the Robot post from a few months ago finally gave me the tools I needed to make Swift namespaces a reality.

She used an enum with no cases (because enums with no cases crucially can’t be initialized) and added static properties to the enum to give them a fake namespace.

enum ColorPalette {
    static let Red = UIColor(red: 1.0, green: 0.1491, blue: 0.0, alpha: 1.0)
    static let Green = UIColor(red: 0.0, green: 0.5628, blue: 0.3188, alpha: 1.0)
    static let Blue = UIColor(red: 0.0, green: 0.3285, blue: 0.5749, alpha: 1.0)
}
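
At the call site, these static properties read like a namespace access (here, view is just any UIView, as a quick illustration):

view.backgroundColor = ColorPalette.Red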

I want to take advantage of this syntax because I find that the features I write have a bunch of classes that all have the same prefix. For example, the authentication flow in an app might have lots of little components: AuthenticationCoordinator, AuthenticationData, AuthenticationFormView, AuthenticationFormConfiguration, AuthenticationFormField, and AuthenticationFormFieldConfiguration, as well as other namespaced objects like SignupViewController and SignupView.

Some of these class names are starting to get pretty long! Since something just called FormField would pollute the global namespace, I have to tack on the “Authentication” prefix. I don’t want it to collide with any other form fields I might have in my app. If we wanted to take Natasha’s scoping pattern to the next level, we could make an Authentication enum.

enum Authentication { }

From there, we can extend it to add nested types:

extension Authentication {
	struct Data {
		let username: String
		let password: String
		//etc
	}

	class Coordinator {
		//implementation   	
	}
}

Inside the Coordinator class, we can refer to Authentication.Data as simply Data (since they’re both inside the Authentication module). Outside the module, we’ll refer to it as Authentication.Data. This is exactly the behavior we want.

By adding one character of code (the period) for references outside the module, we get to drop a ton of characters when working with references inside the module. Further, having to add the prefix and the period explicitly will formalize all class names inside of the module. It’ll be super obvious when we’re trying to use these types from outside the module.
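
For example, a call site outside the namespace might look something like this sketch (AppCoordinator is just a hypothetical caller, and Data’s memberwise initializer assumes only the two fields shown above):

class AppCoordinator {
	func startAuthentication() {
		// outside the namespace, the prefix is explicit
		let data = Authentication.Data(username: "soroush", password: "hunter2")
		let coordinator = Authentication.Coordinator()
		// hand the data to the coordinator, present UI, etc
	}
}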

We can add more extensions across multiple files.

extension Authentication {
	class LoginViewController {
		//implementation   	
	}
}

A good way to use this pattern would be namespacing a view controller and its smarter view together.

enum Signup { }

//SignupViewController.swift
extension Signup {
	class ViewController {
		func loadView() {
			self.view = View()
		}
		
		//etc
	}
}

//SignupView.swift
extension Signup {
	class View {
	    //etc
	}
}

You can also nest these modules, like Natasha shows in her post.

extension Authentication {
	enum Form {
		class View {
		    
		}
	}
}

When nesting, if you want to move things to a different file, you’ll have to use a slightly different syntax:

// create the new triply-nested namespace
extension Authentication.Form {
	enum Field { }
}

// in a different file
extension Authentication.Form.Field {
	struct Data {
				
	}
				
	class View {
			
	}
}

Because Swift doesn’t allow you to use the extension keyword outside of the file scope, when nesting, you have to use the dot-syntax to access your nested namespace.

There’s one big downside to abusing an enum type like this. The protocol keyword, like the extension keyword, is limited to the file scope only, so you can’t declare any protocols inside your namespace.

It’s a bit of a heretical approach to structuring code, but many new ideas seem heretical at first. Ideally, Swift would add a module keyword. But barring that, they could make protocols work in enums and I’d be happy to use the enum trick to fake namespaces.

Last year, in the run up to WWDC, I published a post called Why I don’t write Swift. Since then, I’ve shipped two major contracting projects, both in Swift, over the last six months. With this newfound experience in small to medium Swift codebases, I have some new observations and comments on writing codebases or large portions thereof in Swift.

It’s been a big year. Besides Swift 2.0, 2.1, and 2.2, we’ve also seen the open-sourcing of Swift, a roadmap with public timelines, and plenty of open discussion about the future of the language. I’m hopeful about it for sure, and I have little doubt that it’ll be the only language that we write iOS and Mac apps in.

The main thing I got wrong last year was underestimating how much plain fun Swift is. This weekend, I had the pleasure of helping a friend with code review. His project is completely in Objective-C, and our code review session reminded me how rough the rough edges of that language are, especially after spending any time in Swift. Listening to myself try to explain the difference between the NSNumber, int, and NSInteger types made me long for Swift. While helping him write new code, I kept forgetting to include semicolons, missed typed arrays greatly, and wished I’d had map and filter handy. Swift has plenty of its own quirks and complexities, but it rounds off so many of Objective-C’s rough edges that writing code in it becomes joyful. I love enums, I love protocols, and I love optionals.

Despite this newfound revelation, I still think the prudent choice is to continue writing apps in Objective-C. Those who are fully on board with Swift commonly reply: every line of Objective-C you write is legacy code. While this is true, Objective-C code will still be viable for at least another five years, and probably won’t stop working for closer to 10 or 15. This is based on the fact that every one of Apple’s OSes is written in it, as are the frameworks themselves. (How long did Carbon code work for? How long before Finder was rewritten in Cocoa?)

However, your Swift 2.2 project is also going to become legacy code, and that transition will happen this September. Your migration will be a bloodbath, and it will have to happen in one fell swoop (including your dependencies!) because of a lack of ABI compatibility. Latent bugs will creep in from the migrator, and you’ll have to keep your swift-3 branch updated with regular, messy merges. In the end, git bisect will stop working, and your compile times will still be awful.

Despite this brutal reality, every client I’ve talked to wants their app in pure Swift. I’m happy to give them my professional advice, which is that it’ll cause more problems than it’ll solve. I don’t fight that hard, though, because I know the industry is moving in a Swifty direction, and there’s not really anything that one person can do about it. When I’m done on the project, the next developer that comes on board would just scoff and overrule any restraint I brought to the project, and probably curse me while they’re at it.

My post-WWDC takeaway last year was:

My new plan is that once Swift is source compatible with previous versions, the next new app I write after that will be all Swift. I don’t want my code to break when I go back in git, and I don’t want to deal with the context switching of having Objective-C and Swift in one app.

This basically still seems right to me. As clients want it, I’m happy to write Swift. That decision isn’t hard, given that I enjoy it so much, and given that it won’t hurt to keep my skills sharp. My own projects will probably stay in Objective-C, at least until ABI compatibility, and probably until source compatibility.

Unlike last year, we already know what’s coming in Swift 3, and what else is in the pipeline. Despite my short-term hesitance and despite the language’s seemingly slow pace of advancement, I think that Swift in general is on the right track. Getting ABI resilience right is important, and shouldn’t be rushed. The features in the pipeline will solve real pain points that users have (you’ll note I’m currently ignoring the static vs dynamic brouhaha, but I would love to write a post about it soon). The long-term future of Swift is bright, and I can’t wait to get there.

In Protocol-Oriented Networking, I laid out a protocol for defining requests, and I used default implementations to add defaults and extra behavior to each Request. First, let’s take a look at that protocol.

protocol Request {
    var baseURL: NSURL? { get }
    var method: String { get }
    var path: String { get }
    var parameters: Dictionary<String, String> { get }
    var headers: Dictionary<String, String> { get }
}

I used a protocol extension with default implementations to prevent the user from having to provide the obvious defaults.

extension Request {
    var method : String { return "GET" }
    var path : String { return "" }
    var parameters : Dictionary<String, String> { return Dictionary() }
    var headers : Dictionary<String, String> { return Dictionary() }
}

The only thing that’s necessary to implement a complete Request is a baseURL. Everything else has a default. When I wrote the blog post, I (correctly) identified this as “pretty much the template method pattern”.

With Swift 2, however, while we could continue to use decoration to wrap our data with new functionality, we’ve been given a new power with protocol extensions. Protocol extensions let us add concrete methods to a protocol that are dependent on the abstract methods in that protocol. It’s a form of the template method pattern, but one that doesn’t rely on inheritance.

Protocols with default implementations are a much nicer and more compiler-friendly version of the template method pattern. Functionally, protocols with default implementations let you provide defaults, mark the “abstract” methods as abstract, and override the non-abstract ones as needed in the concrete classes. The compiler gives you an affordance for everything that you would normally use documentation or run-time errors for.

After this point, I feel like the post went awry. I added two more functions, one for building an NSURLRequest and one for initiating that request.

extension Request {
    func buildRequest() -> NSURLRequest? {
        // build a URL for the request
        // encode the parameters as JSON
        // etc
        // return the request
    }
    
    func sendRequest(success success: (result: AnyObject) -> (), failure: (error: ErrorType) -> ()) {
        // send the request
        // parse the result
        // fire the blocks on success and failure
    }
}

This extension is different and fundamentally worse than the previous extension, for a few reasons.

It locks me into building requests a specific way. What if I don’t want to encode the parameters in JSON? What if I have extra parameters to encode in the body, like in a multi-part request? What if I want to add a parameter, like in a paginatable request? What if I want to add headers for the authorization? What if I want to have the sendRequest() method return a Promise, or conditionally handle the sending through another library?

I could override the buildRequest() method for specific requests, and do custom stuff in them on a per-request basis. I don’t want to do that, primarily because of the static dispatch of the methods buildRequest() and sendRequest(). Their execution depends on which type the compiler thinks the object is at compile time. This issue is laid out well in this post. The behavior is highly counterintuitive.

I could also add more methods on Request, like buildJSONRequest(), buildURLEncodedRequest(), and buildMultipartRequest(), but this would be inelegant and unwieldy.

Ultimately, I want Request to be dumb data. It shouldn’t know how to build an NSURLRequest. That’s why the request construction code is in an extension. With Swift, however, if you put something in a protocol extension, you’re locked into it. You should only put data or behavior in a default implementation when you expect to need it every time.

Decoration in Swift

To solve our problem, we need something like decoration for Swift. We could use normal decoration, but Swift fortunately lets us do a similar thing without having to define new concrete types.

Taking inspiration from the Swift standard library (and from Olivier Halligon), we can just add a lot more protocols. The implementing object then decides which of those it wants to “conform to” and take behavior from. Let’s take a look at an example.

Let’s define a protocol called ConstructableRequest. It conforms to Request itself, and provides a method called buildRequest():

protocol ConstructableRequest: Request {
    func buildRequest() -> NSURLRequest?
}

From that, we can add another, more “concrete” protocol for constructing a request with a JSON body.

protocol JSONConstructableRequest: ConstructableRequest { }
extension JSONConstructableRequest {
    func buildRequest() -> NSURLRequest? {
        // build a URL for the request
        // encode the parameters as JSON
        // etc
        // return the request
    }
}

This protocol would actually contain an implementation of buildRequest for requests that will have a JSON body. Then, we take advantage of this new protocol in a third one called SendableRequest:

protocol SendableRequest: ConstructableRequest { }
extension SendableRequest {
    func sendRequest(success success: (string: String) -> (), failure: (error: ErrorType) -> ()) {
        // send the request
        // parse the result
        // fire the blocks on success and failure
    }
}

SendableRequest relies only on ConstructableRequest. It doesn’t know how the request will be constructed, just that it will have access to a function that will build an NSURLRequest. When we go to define our request, we just mark which protocols (and thus which behaviors) we want:

struct ZenRequest: Request, JSONConstructableRequest, SendableRequest {
    let baseURL = NSURL(string: "https://api.github.com/")
    let path: String = "zen"
}

By choosing which protocols to conform to, we can add behavior in a dynamic fashion. Protocols can also be conformed to after the fact, so if you had a different sending mechanism, you could adapt your concrete request to that protocol after its definition:

extension ZenRequest: PromisedRequest { }

where PromisedRequest is some protocol that takes a SendableRequest and makes it return a Promise instead of having completion blocks.
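
Such a protocol isn’t part of the original post, but a minimal sketch might look like the following, assuming a bare-bones Promise type (this Promise, and its fulfill/reject API, are hypothetical stand-ins, not a real library):

class Promise<T> {
    private var successBlock: ((T) -> ())?
    private var failureBlock: ((ErrorType) -> ())?
    private var result: T?
    private var error: ErrorType?

    func then(success: (T) -> (), failure: (ErrorType) -> ()) {
        successBlock = success
        failureBlock = failure
        // replay a result that arrived before the handlers were attached
        if let result = result { success(result) }
        if let error = error { failure(error) }
    }

    func fulfill(value: T) {
        result = value
        successBlock?(value)
    }

    func reject(value: ErrorType) {
        error = value
        failureBlock?(value)
    }
}

protocol PromisedRequest: SendableRequest { }

extension PromisedRequest {
    func sendRequest() -> Promise<String> {
        let promise = Promise<String>()
        // lean on SendableRequest's block-based implementation
        sendRequest(success: { string in
            promise.fulfill(string)
        }, failure: { error in
            promise.reject(error)
        })
        return promise
    }
}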

When I started this post, I thought I would end up with regular decoration again. I thought I would end up with some code like the Objective-C version:

SendableRequest(request: ZenRequest()).sendRequest(...

Writing the code this way would require us to use the SendableRequest wrapper at every call site, which is definitely worse than the protocol way: just declare it once at the definition of the request struct. Swift’s protocols let you do decoration in a weird and new way, and I think I like it.

Type Safety

If you want to add type-safe parsing to this scheme, sometimes you hit the annoying “Protocol X can only be used as a constraint because it has Self or associated type requirements” compiler error. I’m still wrapping my head around why it happens sometimes and doesn’t other times. I will have to read Russ Bishop’s post a few more times, I think.

To avoid the compiler error, make a new protocol with an associated type:

protocol ResultParsing {
    associatedtype ParsedType
    func parseData(data: NSData) -> ParsedType?
}

As with JSONConstructableRequest, you can now make a “concrete version” of this protocol, using a specific type in place of ParsedType:

protocol StringParsing: ResultParsing { }

extension StringParsing {
    func parseData(data: NSData) -> String? {
        return NSString(data: data, encoding: NSUTF8StringEncoding) as? String
    }
}

Then, make SendableRequest use the abstract version of the new protocol:

protocol SendableRequest: ConstructableRequest, ResultParsing { }
extension SendableRequest {
    func sendRequest(success success: (result: ParsedType) -> (), failure: (error: ErrorType) -> ()) {
        // send the request
        // parse the result
        // fire the blocks on success and failure
    }
}

Notice how SendableRequest returns ParsedType now. (You can take a look at the playground at the bottom of the post for an exact implementation.)

Finally, in your request, just declare which parser you want to use, as a protocol conformance.

struct ZenRequest: Request, JSONConstructableRequest, SendableRequest, StringParsing {
    let baseURL = NSURL(string: "https://api.github.com/")
    let path: String = "zen"
}

Small, reusable components.

JSON

You can make your ResultParsing types do anything. For example, given some JSONConstructable protocol:

protocol JSONConstructable {
    static func fromData(data: NSData) -> Self?
}

struct User: JSONConstructable {
    static func fromData(data: NSData) -> User? {
        return User()
    }
}

struct Tweet: JSONConstructable {
    static func fromData(data: NSData) -> Tweet? {
        return Tweet()
    }
}

You can then create a JSONParsing protocol that works much in the same way as the StringParsing protocol, but with an associated type parameter, JSONType.

protocol JSONParsing: ResultParsing {
    associatedtype JSONType: JSONConstructable
    func parseData(data: NSData) -> JSONType?
}

extension JSONParsing {
    func parseData(data: NSData) -> JSONType? {
        return JSONType.fromData(data)
    }
}

To use it, just conform to JSONParsing and add a typealias in your request:

struct UserRequest: Request, JSONConstructableRequest, SendableRequest, JSONParsing {
    let baseURL = NSURL(string: "https://api.khanlou.com/")
    let path: String = "users/1"
    
    typealias JSONType = User
}

Playground

I’ve made a playground with all the code from this post.

Protocols in Swift are very powerful, especially because they can include default implementations as of Swift 2.0. However, because of its static nature, protocol-oriented programming can still lock you in to certain patterns. Protocols should be very small, ideally containing only one responsibility each.

Swift’s standard library itself is built on a lot of these concepts, with protocols like SequenceType and IntegerLiteralConvertible. The standard library uses these protocols to manage its internals. Conforming your own structs and classes to these protocols nets you syntax features and functions like map for free. Taking inspiration from the standard library in our own protocol design helps us get to protocol nirvana.

Grand Central Dispatch, or GCD, is an extremely powerful tool. It gives you low level constructs, like queues and semaphores, that you can combine in interesting ways to get useful multithreaded effects. The C-based API used to be a bit arcane, but it’s been cleaned up as of Swift 3. It isn’t always immediately obvious how to combine the low-level components into higher level behaviors, so in this guide, I hope to describe the behaviors that you can create with the low-level components that GCD gives you.

GCD, after many years of loyal service, has started to show its age a little. In pathological cases, it can cause explosions in the number of threads it creates, and can have other, more subtle issues as well. You can read about some of these issues here and here. In this guide, I’ll call out a few things that can help avoid the larger issues, and as always, test the speed of your code before applying concurrency.

Work In The Background

Perhaps the simplest of behaviors, this one lets you do some work on a background queue, and then come back to the main queue to continue processing, since components like those from UIKit can (mostly) be used only with the main queue.

(In this guide, I’ll use functions like doSomeExpensiveWork() to represent some long running task that returns a value. You can imagine this as processing an image, or some other task that takes long enough to slow down the user interface.)

This pattern can be set up like so:

let concurrentQueue = DispatchQueue(label: "com.backgroundqueue", attributes: .concurrent)

// ...

concurrentQueue.async(execute: {
	let result = doSomeExpensiveWork()
	DispatchQueue.main.async(execute: {
		//use `result` somehow
	})
})

While DispatchQueue.global() is a quick way to get access to a concurrent queue, it can easily lead to an explosion of threads. If you create a lot of background tasks all on the .global() queue, it will create a ton of threads which can counter-intuitively slow your app down, so it is not recommended. In some cases, a serial queue can be effective as well. (If the attributes are not set, it will default the queue to serial.)

Note that each call uses async, not sync. async returns before the block is executed, and sync waits until the block is finished executing before returning. The inner call can use sync (because it doesn’t matter when it returns), but the outer call must be async (otherwise the calling thread, usually the main thread, will be blocked).

Deferring Work

Another common use of Dispatch is performing some work after a specified delay. DispatchQueue has a special method called .asyncAfter for this.

let delay = DispatchTime.now() + DispatchTimeInterval.seconds(2)
DispatchQueue.main.asyncAfter(deadline: delay, execute: { print("Hello!") })

Work is guaranteed to execute after the deadline, but not necessarily at that exact moment. In other words, execution may start later than the deadline, but it will never start before the deadline.

Erica Sadun wrote more about the DispatchTime API here.

Creating singletons

dispatch_once is an API that can be used to create singletons. It’s no longer necessary in Swift, since there is a simpler way to create singletons. For posterity, however, I’ve included it here (in Objective-C).

+ (instancetype) sharedInstance {  
	static dispatch_once_t onceToken;  
	static id sharedInstance;  
	dispatch_once(&onceToken, ^{  
		sharedInstance = [[self alloc] init];  
	});  
	return sharedInstance;  
}

To create lazily-constructed, thread-safe singletons in Swift, you can simply use static let:

class MyType {
	static let global = MyType()
}

Flatten a completion block

This is where GCD starts to get interesting. Using a semaphore, we can block a thread for an arbitrary amount of time, until a signal from another thread is sent. Semaphores, like the rest of GCD, are thread-safe, and they can be triggered from anywhere.

Semaphores can be used when there’s an asynchronous API that you need to make synchronous, but you can’t modify it.

// on a background queue
let semaphore = DispatchSemaphore(value: 0)
doSomeExpensiveWorkAsynchronously(completionBlock: {
	semaphore.signal()
})
semaphore.wait()
//the expensive asynchronous work is now done

Calling .wait() will block the thread until .signal() is called. This means that .signal() must be called from a different thread, since the current thread is totally blocked. Further, you should never call .wait() from the main thread, only from background threads.

You can also pass a timeout to wait(), but I’ve never needed to use one in practice.

It might not be totally obvious why you would want to flatten code that already has a completion block, but it does come in handy. One case where I’ve used it recently is for performing a bunch of asynchronous tasks that must happen serially. A simple abstraction for that use case could be called AsyncSerialWorker:

typealias DoneBlock = () -> ()
typealias WorkBlock = (DoneBlock) -> ()

class AsyncSerialWorker {
    private let serialQueue = DispatchQueue(label: "com.khanlou.serial.queue")

    func enqueueWork(work: @escaping WorkBlock) {
        serialQueue.async(execute: {
            let semaphore = DispatchSemaphore(value: 0)
            work({
                semaphore.signal()
            })
            semaphore.wait()
        })
    }
}

This small class creates a serial queue, and then allows you to enqueue work onto it. The WorkBlock gives you a DoneBlock to call when your work is finished, which will trip the semaphore, and allow the serial queue to continue.
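
Usage might look something like this quick sketch, where fetchThing(completion:) and fetchAnotherThing(completion:) stand in for any asynchronous calls; the second block won’t start until the first one has called its DoneBlock:

let worker = AsyncSerialWorker()

worker.enqueueWork(work: { done in
    fetchThing(completion: {
        done()
    })
})

worker.enqueueWork(work: { done in
    fetchAnotherThing(completion: {
        done()
    })
})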

Limiting the number of concurrent blocks

In the previous example, the semaphore is used as a simple flag, but it can also be used as a counter for finite resources. If you want to only open a certain number of connections to a specific resource, you can use something like the code below:

class LimitedWorker {
    private let serialQueue = DispatchQueue(label: "com.khanlou.serial.queue")
    private let concurrentQueue = DispatchQueue(label: "com.khanlou.concurrent.queue", attributes: .concurrent)
    private let semaphore: DispatchSemaphore

    init(limit: Int) {
        semaphore = DispatchSemaphore(value: limit)
    }

    func enqueue(task: @escaping () -> ()) {
        serialQueue.async(execute: {
            self.semaphore.wait()
            self.concurrentQueue.async(execute: {
                task()
                self.semaphore.signal()
            })
        })
    }
}

This example is pulled from Apple’s Concurrency Programming Guide. They can explain what’s happening here better than I can:

When you create the semaphore, you specify the number of available resources. This value becomes the initial count variable for the semaphore. Each time you wait on the semaphore, the dispatch_semaphore_wait function decrements that count variable by 1. If the resulting value is negative, the function tells the kernel to block your thread. On the other end, the dispatch_semaphore_signal function increments the count variable by 1 to indicate that a resource has been freed up. If there are tasks blocked and waiting for a resource, one of them is subsequently unblocked and allowed to do its work.

The effect is similar to maxConcurrentOperationCount on NSOperationQueue. If you’re using raw GCD queues instead of NSOperationQueue, you can use semaphores to limit the number of blocks that execute simultaneously.

Thanks to Mike Rhodes, this code has been improved from its previous version. He writes:

We use a concurrent queue for executing the user’s tasks, allowing as many concurrently executing tasks as GCD will allow us in that queue. The key piece is a second GCD queue. This second queue is a serial queue and acts as a gatekeeper to the concurrent queue. We wait on the semaphore in the serial queue, which means that we’ll have at most one blocked thread when we reach maximum executing blocks on the concurrent queue. Any other tasks the user enqueues will sit inertly on the serial queue waiting to be executed, and won’t cause new threads to be started.
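
To make the behavior concrete, a usage sketch might look like this, where downloadFile(at:) is a hypothetical synchronous task and urls is any array of URLs; at most three downloads run at once, and the rest wait their turn:

let worker = LimitedWorker(limit: 3)

for url in urls {
    worker.enqueue(task: {
        // synchronous work; the semaphore caps how many of these run at once
        downloadFile(at: url)
    })
}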

Wait for many concurrent tasks to finish

If you have many blocks of work to execute, and you need to be notified about their collective completion, you can use a group. DispatchQueue.async lets you associate a group with the work you’re adding to that queue. The group keeps track of how many items have been associated with it. The work in the block should be synchronous. Note that the same dispatch group can be used to track work on multiple different queues. When all of the tracked work is complete, it fires a block passed to .notify(), kind of like a completion block.

let backgroundQueue = DispatchQueue(label: "com.khanlou.concurrent.queue", attributes: .concurrent)
let group = DispatchGroup()
for item in someArray {
    backgroundQueue.async(group: group, execute: {
        performExpensiveWork(item: item)
    })
}
group.notify(queue: DispatchQueue.main, execute: {
    // all the work is complete
})

This is a great case for flattening a function that has a completion block. The dispatch group considers the block to be completed when it returns, so you need the block to wait until the work is complete.
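
One way to do that is to flatten the asynchronous call with a semaphore inside the group-tracked block. A sketch, where performExpensiveAsyncWork(item:completionBlock:) stands in for any async call:

for item in someArray {
    backgroundQueue.async(group: group, execute: {
        let semaphore = DispatchSemaphore(value: 0)
        performExpensiveAsyncWork(item: item, completionBlock: {
            semaphore.signal()
        })
        // keep this block alive until the async work calls back,
        // so the group doesn't consider it finished too early
        semaphore.wait()
    })
}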

There’s a more manual way to use dispatch groups, especially if your expensive work is already async:

// must be on a background thread
let group = DispatchGroup()
for item in someArray {
    group.enter()
    performExpensiveAsyncWork(item: item, completionBlock: {
        group.leave()
    })
}
group.wait()
// all the work is complete

This snippet is more complex, but stepping through it line-by-line can help in understanding it. Like the semaphore, groups also maintain a thread-safe, internal counter that you can manipulate. You can use this counter to make sure multiple long running tasks are all completed before executing a completion block. Using “enter” increments the counter, and using “leave” decrements it. Passing the group to .async() on a queue handles all these details for you, so I prefer to use that where possible.

The last thing in this snippet is the wait call: it blocks the thread and waits for the counter to reach 0 before continuing. Note that you can queue a block with .notify() even if you use the enter/leave APIs. The reverse is also true: you can use wait even if you use the async API.

DispatchGroup.wait(), like DispatchSemaphore.wait(), accepts a timeout. Again, I’ve never had a need for anything other than the default. Also, just like DispatchSemaphore.wait(), never call DispatchGroup.wait() on the main queue.

The biggest difference between the two styles is that the example using notify can be called entirely from the main queue, whereas the example using wait must happen on a background queue (at least the wait part, because it will fully block the current queue).

Isolation Queues

Swift’s Dictionary (and Array) types are value types. When they’re modified, the variable holding them is fully replaced with a new copy of the structure. However, because updating instance variables on Swift objects is not atomic, they are not thread-safe. Two threads can update a dictionary (for example, by adding a value) at the same time, and both will attempt to write to the same block of memory, which can cause memory corruption. We can use isolation queues to achieve thread-safety.

Let’s build an identity map. An identity map is a dictionary that maps items from their ID property to the model object.

class IdentityMap<T: Identifiable> {
	var dictionary = Dictionary<String, T>()
	
	func object(forID id: String) -> T? {
		return dictionary[id]
	}
	
	func addObject(object: T) {
		dictionary[object.id] = object
	}
}

This object basically acts as a wrapper around a dictionary. If our function addObject is called from multiple threads at the same time, it could corrupt the memory, since the threads would be acting on the same reference. This is known as the readers-writers problem. In short, we can have multiple readers reading at the same time, and only one thread can be writing at any given time.

Our ideal case is that reads happen synchronously and concurrently, whereas writes can be asynchronous and must be the only thing happening to the reference. Fortunately, GCD gives us great tools for this exact scenario. GCD’s .barrier flag does something special when set: it will wait until the queue is totally empty before executing the block. Using the .barrier flag for our writes will limit access to the dictionary and make sure that we can never have any writes happening at the same time as a read or another write.

class IdentityMap<T: Identifiable> {
    var dictionary = Dictionary<String, T>()
    private let accessQueue = DispatchQueue(label: "com.khanlou.isolation.queue", attributes: .concurrent)

    func object(forID id: String) -> T? {
        return accessQueue.sync(execute: {
            return dictionary[id]
        })
    }

    func addObject(object: T) {
        accessQueue.async(flags: .barrier, execute: {
            self.dictionary[object.id] = object
        })
    }
}

DispatchQueue.sync() will dispatch the block to our isolation queue and wait for it to be executed before returning. This way, we will have the result of our read synchronously. (If we didn’t make it synchronous, our getter would need a completion block.) Because accessQueue is concurrent, these synchronous reads will be able to occur simultaneously. Also, note that we’re using a special form of DispatchQueue.sync() which allows you to return a T from the inner block and will return that T from the outer function call. This is new in Swift 3.

accessQueue.async(flags: .barrier, execute: { }) will dispatch the block to the isolation queue. The async part means it will return before actually executing the block (which performs the write), which means we can continue processing. The .barrier flag means that it will wait until every currently running block in the queue is finished executing before it executes. Other blocks will queue up behind it and be executed when the barrier dispatch is done.

While this code will definitely work, using async for writes can, counter-intuitively, be less performant. async calls will create a new thread if there isn’t one immediately available to run the block. If your write call is faster than 1ms, it might be worth making it .sync, according to Pierre Habouzit, who worked on this code at Apple. As always, profile before optimizing!

Cancelling blocks

A little known feature of GCD is that blocks can actually be cancelled. Per Matt Rajca, by wrapping a block in a DispatchWorkItem and using the cancel() API, you can cancel it.

let work = DispatchWorkItem(block: { print("Hello!") })

let delay = DispatchTime.now() + DispatchTimeInterval.seconds(10)
DispatchQueue.main.asyncAfter(deadline: delay, execute: work)
work.cancel()

After execution of the block starts, it can’t be cancelled. This makes sense, because the queue doesn’t have a sense of what’s going on inside your block, or how to cancel it. You can write your own checks into the block, by using work.isCancelled:

var work: DispatchWorkItem?
work = DispatchWorkItem(block: {
    expensiveWorkPart1()
    if work?.isCancelled ?? false { return }
    expensiveWorkPart2()
})

This is similar to checking isCancelled within an NSOperation. Note that you have to declare the work variable as an optional first, even if you don’t initialize the block itself. This is because you will have to use the work reference inside the block, and Swift won’t let you do it all in one line. This, unlike the other Dispatch APIs, was cleaner in Swift 2, and now requires dealing with optionals to make it work.

Queue Specific Data

The NSThread object has a threadDictionary property. You can use this dictionary to store any interesting data. You can do the same with a dispatch queue, using the DispatchQueue.setSpecific and DispatchQueue.getSpecific methods. I haven’t thought of any clever ways to use this yet, excepting Benjamin Encz’s method of determining if you’re on the main queue:

struct MainQueueValue { }

let mainQueueKey = DispatchSpecificKey<MainQueueValue>()

DispatchQueue.main.setSpecific(key: mainQueueKey, value: MainQueueValue())

Now, instead of using [NSThread isMainThread], you can check DispatchQueue.getSpecific(key: mainQueueKey) != nil to determine if you’re on the main queue (as opposed to the main thread, which is subtly different). Note that the same key instance has to be used for both setting and getting the value. If you’re on a background queue, you’ll get nil, and if you’re on the main queue, you’ll get an instance of the MainQueueValue object.
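
Wrapped up as a tiny helper, the check might look like this sketch (using the mainQueueKey defined above):

func onMainQueue() -> Bool {
    // non-nil only when the current queue is the main queue
    return DispatchQueue.getSpecific(key: mainQueueKey) != nil
}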

Timer Dispatch Sources

Dispatch sources are a weird thing, and if you’ve made it this far in the handbook, you’ve reached some pretty esoteric stuff. With dispatch sources, you set up a callback when initializing the dispatch source, which is triggered when specific events happen. The simplest of these events is a timed event. A simple dispatch timer could be set up like so:

class Timer {
    let timer = DispatchSource.makeTimerSource(queue: .main)

    init(onFire: @escaping () -> Void, interval: DispatchTimeInterval, leeway: DispatchTimeInterval = .milliseconds(500)) {
        timer.schedule(deadline: DispatchTime.now(), repeating: interval, leeway: leeway)

        timer.setEventHandler(handler: onFire)

        timer.resume()
    }
}

Dispatch sources must be explicitly resumed before they will start working.
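
Using the Timer class above might look like this quick sketch, which prints on the main queue roughly every two seconds:

// keep a strong reference so the timer isn't deallocated
let heartbeat = Timer(onFire: { print("tick") }, interval: .seconds(2))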

Custom Dispatch Sources

Another useful type of dispatch source is a custom dispatch source. With a custom dispatch source, you can trigger it any time you want. The dispatch source will coalesce the signals that you send it, and periodically call your event handler. I couldn’t find anything in the documentation defining the policy that guides this coalescing. Here’s an example of an object that adds up data sent in from different threads:

class DataAdder {
    let source = DispatchSource.makeUserDataAddSource(queue: .main)

    init(onFire: @escaping (UInt) -> Void) {
        source.setEventHandler(handler: {
            onFire(self.source.data)
        })
        source.resume()
    }

    func mergeData(data: UInt) {
        source.add(data: data)
    }
}

This dispatch source is initialized with a block that will give you the result of all the data that’s been added up so far. You can call mergeData from any thread with some amount of data, and the source will manage adding that data up and calling the callback.
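
A quick usage sketch: no matter which threads the data comes from, the handler fires on the main queue with the sum of whatever was merged since it last fired:

let adder = DataAdder(onFire: { total in
    print("merged since last fire: \(total)")
})

adder.mergeData(data: 128)
adder.mergeData(data: 256)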

You can also use makeUserDataOrSource instead of makeUserDataAddSource, which will apply a bitwise OR to the data:

class DataOrer {
    let source = DispatchSource.makeUserDataOrSource()

    init(onFire: @escaping (UInt) -> Void) {
        source.setEventHandler(handler: {
            onFire(self.source.data)
        })
        source.resume()
    }

    func mergeData(data: UInt) {
        source.or(data: data)
    }
}

You could use this to trip a flag from multiple threads. Crucially, the dispatch source’s data is reset to 0 each time the block is triggered.

These are the strange depths of GCD. I don’t know how or when I’d use this stuff, but I suspect that when I need it, I’ll be glad that it exists.

Wrap Up

Grand Central Dispatch is a framework with a lot of low-level primitives. Using them, these are the higher-level behaviors I’ve been able to build. If there are any higher-level things you’ve used GCD to build that I’ve left out here, I’d love to hear about them and add them to the list.

This article is also available in Chinese.

Decoding JSON in Swift is a huge pain in the ass. You have to deal with optionality, casting, primitive types, constructed types (whose initializers can also be optional), stringly-typed keys, and a whole bevy of other issues.

Especially in a well-typed Swift world, it makes sense to use a well-typed wire format. For the next project that I start from scratch, I’ll probably use Google’s protocol buffers (great blog post about their benefits here). I hope to have a report on how well it works with Swift when I have a little bit more experience with it, but for now, this post is about the realities of parsing JSON, which is the most commonly used wire format by far.

There are a few states-of-the-art when it comes to JSON. First, a library like Argo, which uses functional operators to curry an initializer.

extension User: Decodable {
  static func decode(j: JSON) -> Decoded<User> {
    return curry(User.init)
      <^> j <| "id"
      <*> j <| "name"
      <*> j <|? "email" // Use ? for parsing optional values
      <*> j <| "role" // Custom types that also conform to Decodable just work
      <*> j <| ["company", "name"] // Parse nested objects
  }
}

Argo is a very good solution. It’s concise, flexible, and expressive. The currying and strange operators, however, are somewhat opaque. (The folks at Thoughtbot have written a great post explaining it here.)

Another common solution is to manually guard let every non-optional. This is a little more manual, and results in two lines for each property: one to create a non-optional local variable in the guard statement, and a second to actually set the property. Using the same properties from above, this might look like:

class User {
  init?(dictionary: [String: AnyObject]?) {
    guard
      let dictionary = dictionary,
      let id = dictionary["id"] as? String,
      let name = dictionary["name"] as? String,
      let roleDict = dictionary["role"] as? [String: AnyObject],
      let role = Role(dictionary: roleDict),
      let company = dictionary["company"] as? [String: AnyObject],
      let companyName = company["name"] as? String
      else {
        return nil
    }
    
    self.id = id
    self.name = name
    self.role = role
    self.email = dictionary["email"] as? String
    self.companyName = companyName
  }
}

This code has the benefit of being pure Swift, but it is quite a mess and very hard to read. The chain of dependent variables is not obvious from looking at it. For example, roleDict has to be defined before role, since it’s used in role’s definition, but since the code is so hairy, it’s hard to see that dependency clearly.

(I’m not even going to mention the pyramid-of-doom nested if let situation for parsing JSON from Swift 1. It was bad, and I’m glad we have multi-line if lets and the guard let construct now.)


When Swift’s error handling was announced, I was convinced it was terrible. It seemed like it was worse than the Result enum in every way.

  • You can’t use it directly: it essentially adds required language syntax around a Result type (that does exist, under the hood!), and users of the language can’t even access it.
  • You can’t chain Swift’s error model the way you can with Result. Result is a monad, allowing it to be chained with flatMap in useful ways.
  • Swift’s error model can’t be used in an asynchronous way (without hacking it, like providing an inner throwing function that you can call to get the result), whereas Result can be.

Despite all of these seemingly obvious flaws with Swift’s error model, a blog post came out describing a use case where Swift’s error model is clearly more concise than the Objective-C version and easier to read than the Result version. What gives?

The trick here is that using Swift’s error model, with do/catch, is really good when you have lots of try calls that happen in sequence. This is because setting up something to be error-handled in Swift requires a bit of boilerplate. You need to include throws when declaring the function, or else set up the do/catch structure, and handle all your errors explicitly. For a single try, this is a frustrating amount of work. For multiple try statements, however, the up-front cost becomes worth it.


I was trying to find a way to get missing JSON keys to print out some kind of warning, when I realized that getting an error for accessing missing keys would solve the problem. Because the native Dictionary type doesn’t throw errors when keys are missing, some object is going to have to wrap that dictionary. Here’s the code I want to be able to write:

struct MyModel {
    let aString: String
    let anInt: Int
    
    init?(dictionary: [String: AnyObject]?) {
        let parser = Parser(dictionary: dictionary)
        do {
            self.aString = try parser.fetch("a_string")
            self.anInt = try parser.fetch("an_int")
        } catch let error {
            print(error)
            return nil
        }
    }
}

Ideally, with type inference, I won’t even have to include any types here. Let’s take a crack at writing it. Let’s start with ParserError:

struct ParserError: ErrorType {
    let message: String
}

Next, let’s start Parser. It can be a struct or a class. (It doesn’t get passed around, so its reference semantics don’t really matter.)

struct Parser {
    let dictionary: [String: AnyObject]?
    
    init(dictionary: [String: AnyObject]?) {
        self.dictionary = dictionary
    }

Our parser will have to take a dictionary and hold on to it.

Our fetch function is the first complex bit. We’ll go through it line by line. Each method can be individually type-parameterized, to take advantage of type inference. Also, this function will throw errors, which will let us get the failure data back:

    func fetch<T>(key: String) throws -> T {

The next step is to grab the object at the key, and make sure it’s not nil. If it is, we will throw.

        let fetchedOptional = dictionary?[key]
        guard let fetched = fetchedOptional else  {
            throw ParserError(message: "The key \"\(key)\" was not found.")
        }

The final step is to add type information to our value.

        guard let typed = fetched as? T else {
            throw ParserError(message: "The key \"\(key)\" was not the correct type. It had value \"\(fetched).\"")
        }

Finally, return the typed, non-optional value.

        return typed
	}

(I’ll include a gist and a playground at the end of the post with all the code.)

This works! The type inference from the type parameterization handles everything for us, and the “ideal” code that we wrote above works perfectly:

self.aString = try parser.fetch("a_string")

There are a few things that I want to add. First, a way to parse out values that are actually optional. Because this one won’t need to throw, we can write a simpler method. It unfortunately can’t have the same name as the above method, because the compiler won’t know which one to use, so let’s call it fetchOptional. This one is pretty simple.

func fetchOptional<T>(key: String) -> T? {
    return dictionary?[key] as? T
}

(You could make it throw an error if the key exists but is not the expected type, but I’ve left that out for brevity’s sake.)
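
That stricter variant isn’t in the original post, but a sketch might look like this; it would replace the simpler fetchOptional above rather than overload it, since the two differ only by throws:

func fetchOptional<T>(key: String) throws -> T? {
    // a missing key is fine for an optional property
    guard let fetched = dictionary?[key] else {
        return nil
    }
    // a present key of the wrong type is an error worth surfacing
    guard let typed = fetched as? T else {
        throw ParserError(message: "The key \"\(key)\" was present but was not the correct type. It had value \"\(fetched).\"")
    }
    return typed
}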

Another thing we sometimes want to do is perform an additional transformation on the object after it’s pulled out of the dictionary. We might have an enum’s rawValue that we want to build, or a nested dictionary that needs to turn into its own object. We can take a block in the fetch function that will let us process the object further, and throw an error if the transformation block fails. Adding a second type parameter U allows us to assert that the product of the dictionary fetch is the same thing that goes into the transformation function.

func fetch<T, U>(key: String, transformation: (T) -> (U?)) throws -> U {
    let fetched: T = try fetch(key)
    guard let transformed = transformation(fetched) else {
        throw ParserError(message: "The value \"\(fetched)\" at key \"\(key)\" could not be transformed.")
    }
    return transformed
}

Lastly, we want a version of fetchOptional that also takes a block.

func fetchOptional<T, U>(key: String, transformation: (T) -> (U?)) -> U? {
    return (dictionary?[key] as? T).flatMap(transformation)
}

Behold: the power of flatMap! Note that the transformation block has the same form as the block flatMap accepts: T -> U?.

We can now parse objects that have nested items or enums.

class OuterType {
    let inner: InnerType
    
    init?(dictionary: [String: AnyObject]?) {
        let parser = Parser(dictionary: dictionary)
        do {
            self.inner = try parser.fetch("inner") { InnerType(dictionary: $0) }
        } catch let error {
            print(error)
            return nil
        }
    }
}

Note again how Swift’s type inference handles everything for us magically, and doesn’t require us to write any as? logic at all!

We can also handle arrays with a similar method. For arrays of primitive types, the fetch method we already have will work fine:

let stringArray: [String]

//...
do {
	self.stringArray = try parser.fetch("string_array")
//...

For arrays of domain types that we want to construct, Swift’s type inference doesn’t seem to be able to infer the types this deep, so we’ll have to add one type annotation:

self.enums = try parser.fetch("enums") { (array: [String]) in array.flatMap({ SomeEnum(rawValue: $0) }) }

Since this line is starting to get gnarly, let’s make a new method on Parser specifically for handling arrays:

func fetchArray<T, U>(key: String, transformation: T -> U?) throws -> [U] {
	let fetched: [T] = try fetch(key)
	return fetched.flatMap(transformation)
}

This will abuse the poorly-named-but-extremely-useful flatMap that removes nils on SequenceType, and reduce our incantation at the call site to:

self.enums = try parser.fetchArray("enums") { SomeEnum(rawValue: $0) }

The block at the end is what should be done to each element, instead of the whole array. (You could also modify fetchArray to throw an error if any value couldn’t be constructed.)
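
That variant isn’t in the original post either, but a sketch might look like this; it would replace the flatMap-based fetchArray above, since the two share a signature:

func fetchArray<T, U>(key: String, transformation: T -> U?) throws -> [U] {
	let fetched: [T] = try fetch(key)
	return try fetched.map({ (element: T) -> U in
		// if any single element can't be transformed, fail the whole fetch
		guard let transformed = transformation(element) else {
			throw ParserError(message: "The value \"\(element)\" in the array at key \"\(key)\" could not be transformed.")
		}
		return transformed
	})
}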

I like this general pattern a lot. It’s simple, pretty easy to read, and doesn’t rely on complex dependencies (the only one is a 50-line Parser type). It uses Swifty constructs, and will give you very specific errors describing how your parsing failed, useful when trying to dredge through the morass of JSON that you’re getting back from your API server. Lastly, another benefit of parsing this way is that it works on structs as well as classes, making it easy to switch from reference types to value types or vice versa at will.

Here’s a gist with all the code, and here’s a Playground ready to tinker with.

I made a side project. I wanted to reimagine what a cookbook might look like if it were reinvented for a dynamic medium. In this world, recipes wouldn’t be fixed to a specific scale. The recipe would be well-laid out on any size screen it was rendered on.

Printed cookbooks are a static medium: the author decides at printing time how the recipes will be formatted and presented. The units, measures, font size, and even the language of the recipe are fixed. The promise computers and programming afford us is that separating the rendering and display from the data grants the user ultimate control. Because the system could understand what units and ingredients were, it could display amounts in whatever context the user wanted: metric or imperial, weight or volume.

To build this project, I took a few concepts, like Bret Victor’s work on explorable explanations, abstraction, and dynamic media, and brought them to the data in a cookbook.

Introducing culinary.af

The proof of concept for this idea is hosted at culinary.af. It hosts a few recipes of mine, which I hope you’ll check out.

culinary.af is built on top of pepin, which is the JavaScript library I wrote to parse and process the ingredients. You can find pepin on GitHub. The site is rendered with Jekyll, and the data for each recipe is stored in simple YAML files. (Currently, all of the files for pepin and the Jekyll renderer are in the same repository. In the future, they might be separated.)

What does culinary.af do?

culinary.af does a few things. First, it’s a home for my recipes. Cooks often tweak recipes, sometimes to work better for the altitude or humidity in their location and sometimes to accommodate the tools they have in their kitchens, like ovens that run too hot because their thermocouples are broken.

You can use pepin to make a home for your recipes, too. If you want to host your own version, fork it on GitHub, replace my recipes with yours, build with Jekyll, and host anywhere. The files are static and all of the logic is executed client-side.

I make apps during the day, but culinary.af is a website. Why is that? A few reasons: first, making a website, especially a responsive one, works on tons of platforms out of the gate. Recipes benefit a lot from being easily shared with URLs, which don’t work nearly as well with native apps. Lastly, when prototyping, having a platform as flexible to develop for as the web really pays off. I was able to move a lot quicker to make stuff happen, although I did feel very hampered by the lack of a type system, especially later in the game, when I was refactoring a lot more to support new features. I wrote a comprehensive test suite to account for this, which I’ll discuss soon.

The primary way that culinary.af takes advantage of its dynamic medium is that it allows you to scale recipes up and down easily. On any recipe page, grab the number next to the word “Scale” and slide it up or down. You can scale up to 10, and down to 1/6 of a normal serving.

Scaling recipes is unit-aware. That means if you scale 1 teaspoon of salt to 2x, you’ll get 2 teaspoons of salt (note the pluralization). If you then scale to 3x, you’ll get 1 tablespoon of salt, because 3 teaspoons is one tablespoon, and nobody wants to measure out 3 teaspoons if they can just use one tablespoon. It’s needless to make that conversion in your head. This is the kind of thing computers are good at: taking mundane tasks, doing them for you, and giving you the data when you need it. As far as I can tell, no other recipe tool on any computer does unit-aware scaling. It’s a feature I’m pretty proud of.

This unit-aware scaling happens on larger units, too. 4 tablespoons becomes 1/4 cup, and so on. Each unit knows the smallest form it can be represented in: for example, there’s no such thing as a 1/2 tablespoon, but there is a 1/2 teaspoon. Teaspoons go all the way down to 1/8. Cups go down to 1/4. Gallons only go down to 1/2, because 1/4 gallon is just a quart. And so on.

To make this happen, pepin uses a unit reducer (link to code). It works by brute force: it converts an amount (say, 1/2 tablespoon) to every other unit that it could be represented by: 1 1/2 teaspoons, 1/32 of a cup, etc. It then finds the unit with the smallest corresponding number that is considered valid (bigger than the smallest acceptable amount for that unit). Since 1/2 tablespoon and 1/32 cup are invalid measures, it displays 1 1/2 teaspoons.
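
To make the brute-force idea concrete, here’s an illustrative sketch in Swift (pepin itself is JavaScript, and its real unit tables are more complete than the three units shown here):

struct Unit {
    let name: String
    let teaspoons: Double   // size of one of this unit, in teaspoons
    let minimum: Double     // smallest acceptable amount of this unit
}

let units = [
    Unit(name: "teaspoon", teaspoons: 1, minimum: 1.0 / 8.0),
    Unit(name: "tablespoon", teaspoons: 3, minimum: 1),
    Unit(name: "cup", teaspoons: 48, minimum: 1.0 / 4.0),
]

func reduce(teaspoons: Double) -> (amount: Double, unit: Unit)? {
    // convert the amount into every unit we know about
    let candidates = units.map({ (amount: teaspoons / $0.teaspoons, unit: $0) })
    // throw out representations below the unit's smallest valid measure,
    // then keep the one with the smallest number
    return candidates
        .filter({ $0.amount >= $0.unit.minimum })
        .min(by: { $0.amount < $1.amount })
}

// 1/2 tablespoon is 1.5 teaspoons; 0.5 tablespoons and 1/32 cup are both
// below their minimums, so this returns 1.5 teaspoons
let reduced = reduce(teaspoons: 1.5)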

pepin also scales servings and yields for a recipe; it doesn’t scale the prep time or cooking time, because it’s never clear how scaling the recipe will affect those times.

A lot of culinary.af’s usefulness comes from its stylesheets. I’d like to call out two particularly useful features. First, the site is responsive. Recipes need to render on my phone, my iPad, and my laptop, because I might have any of them in the kitchen with me at any time. I also want it to work well on a TV-sized screen, because if my apartment had a layout that let me see the TV from the kitchen, I’d want culinary.af to work there too. culinary.af renders nicely in all those formats. I also blew up the font for viewports bigger than 1200px, which is useful for a laptop that’s a few feet away.

The other feature that the stylesheet provides is custom fraction rendering. Unicode provides support for what they call vulgar fractions, like ¼ or ½. I started by rendering these characters directly. As I added custom fonts to the project, I learned that many fonts don’t support these codepoints. Since making it look nice was an important piece of culinary.af, I rendered my own fractions, using the techniques described on this page. The final CSS I ended up with was:

.frac {
  font-size: 75%
}

sup.frac {
  line-height: 10px;
  vertical-align: 120%
}

Having proper fractions adds a nice bit of shine to the project. pepin also converts decimals to fractions, so if your original recipe has “0.25 cups of flour”, it’ll render as “1/4 cup of flour”.

The last fun HTMLy component of this project was conforming to the Recipe schema, so that other sites can parse the information on culinary.af. The recipe schema is what allows Google’s search results to show prep times or star ratings when linking to other recipe sites, like Epicurious or Allrecipes.

Conforming to the schema on this site is simple. For the HTML element that encloses the item, you can declare:

<div class="recipe" itemscope itemtype="https://schema.org/Recipe">

From there, each HTML element that holds data gets an itemprop attribute describing the data. For example, this span’s content would be the yield of the recipe:

<span id="serving-amount" itemprop="recipeYield">

To test your schema, you can use a testing tool that Google provides.

It’s not clear what conforming to a schema does for you, other than a slightly nicer display in Google’s search results, but I think they represent the promise of the semantic web. Web developers have always been willing to put in the slight extra work of using semantically correct tags for their content, and these schemata seem like a natural extension of that, so I’m happy to support them.

There are also a few properties of the code that deserve mention. pepin is entirely served in static files; no code at all runs on the server. This was a useful quality of the project, since it means anyone can deploy it pretty much anywhere. It doesn’t need a database to run, or a Redis instance, or a Rabbit message queue, or anything like that. Just generate the HTML files and stick ‘em on any server. All the data for each recipe is stored in HTML. All the data for processing and converting units is stored in JavaScript. All the logic for parsing and presenting the data happens in the browser.

This frees you up in a lot of ways. There are no crazy Docker configurations, no worries about scaling limited server resources, and no expensive hosting. Another weird benefit of all the processing being done client-side is that it’s open source by default. Because I know anyone will be able to check under the hood to see how the scaling and parsing logic works, I might as well just make it open source.

A few friends asked if I was going to try to make any money off of culinary.af, but because a) nobody pays for stuff like this and b) all JavaScript is already open source anyway, the answer was clear. I open sourced it early on, and I had the side benefit of being able to show my friends the code easily and ask them what they thought about a particular piece of code.

The second interesting property of the code is that pepin is the first project I’ve ever made that was truly test-driven. In past projects, a lot of the logic was poorly factored out, or asynchronous, making it tough to test. In the cases where the logic was well-separated, I’d usually write the tests after the unit was more or less completed, or I would write them for a unit with particularly complex logic and lots of edge cases.

In the case of pepin, the entire domain is data-in-data-out, making it super easy to test. Also, as you make changes to support new patterns of ingredients (“1 cup milk” vs “1 cup of milk”), you have to make sure not to break existing patterns, and TDD was perfect for this case. iOS apps don’t have much logic in them, but where they do, I’m going to try to structure them to take advantage of testing.

Because there’s no API component and no database to hit, my tests are blindingly fast. The entire test suite (50 tests) runs in 30 milliseconds. It’s very easy to run the entire suite after even the smallest change. (To be honest, the test runner should probably watch the folder and run after any file is changed, the same way that jekyll serve regenerates your site every time you save.)

I finally understand what people like Uncle Bob mean when they say that unit tests need to be fast. If your tests are hitting the API or the database, they’re going to be way too slow to run often. Isolate your logic, and run your tests a lot.

Where culinary.af is going

There are a few interesting problems in the domain that I would have liked to solve before launching, but they are unfortunately quite complicated.

One problem is that measures like teaspoons, tablespoons, and cups can represent both dry and wet goods, whereas quarts and gallons can only represent liquid goods. Currently, pepin doesn’t have an understanding of what the ingredient part of an amount means. Ideally, it would know that flour is a dry good and that its density is 2.1 grams per teaspoon.

With that information, the user would get to choose whether they want display in metric or imperial, and in volumetric or weight measures, and get exactly what they want to see. This is the dream of culinary.af: you shouldn’t have to do any conversions you don’t want to do.
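As a sketch in Swift of the data that would take: each ingredient needs a density, and then volume and weight become interchangeable. The flour figure is the one above; everything else here is hypothetical.

struct Ingredient {
    let name: String
    let gramsPerTeaspoon: Double
}

// Flour's density comes from the paragraph above.
let flour = Ingredient(name: "flour", gramsPerTeaspoon: 2.1)

func grams(of ingredient: Ingredient, teaspoons: Double) -> Double {
    return teaspoons * ingredient.gramsPerTeaspoon
}

// One cup is 48 teaspoons, so "1 cup of flour" could render as roughly 101 grams.
let flourInACup = grams(of: flour, teaspoons: 48)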

Knowing what the ingredients are and how many calories are in each gram would also let us generate a nutritional facts table for each recipe. Yummly currently does this, and it would be a great feature to support.

Another great small feature I’d like to steal is the clickable timers that Basil and Paprika have. These would detect times in the instructions, like “15 minutes” or “for an hour”, and turn them into timers that the user can activate with a tap. This is a feature that works better in an app than on the web, since an app can fire a UILocalNotification when the timer is over, and the web has no such mechanism. I will probably take advantage of HTML local storage to store the timers, so that leaving the page and returning to it won’t destroy the timer’s state.

The last big feature that I’d love to build for culinary.af is a good way to display images of the food. To really fill the role of a cookbook, it needs to be beautiful as well as functional. This is a tough one for a few reasons: I need to have really beautiful pictures of my recipes, which are hard to get; the pictures need to go in the right places for each scale that the app supports; and the hosting of the pictures is an additional cost in complexity and hosting fees. I’m hoping to figure this out soon.

culinary.af isn’t done; software projects never seem to be. Nevertheless, it’s cool, stable, and fun to use. I hope you enjoy it.

This article is also available in Chinese.

A side project I’m currently working on needs an understanding of lots of different kinds of units. (I should probably be working on getting that off the ground instead of writing this blog post. Nevertheless.)

I’ve always found modeling units to be a fascinating programming problem. For time, for example, if you have an API that accepts a time, it’s probably going to accept seconds (or perhaps milliseconds! who can know!), but sometimes, you need to express a time like 2 hours. So instead of a magic number (7200, the number of seconds in two hours), you write 2 * 60 * 60, perhaps adding spaces in between the operators to aid in “readability”.

7200, though, doesn’t mean anything. If you look at it long enough and you have a freakish knack for manipulating mathematical symbols in your head, you might recognize it as two hours in seconds. If it weren’t a round number of hours, though, you never could.

And as that 7200 winds its way through the bowels of your application, it becomes less and less clear what units that mere integer is in.

A way to associate our integer with some metadata is what we need. Types have been described as units before, but can we bring that back to units of measure, describing them with types? That can prevent us from adding 2 hours to 30 minutes and getting a meaningless result of 32.

(While it’s possible to handle this at the language level, most languages don’t have support for stuff like this.)

We still want to be able to add 2 hours to 30 minutes and get a meaningful result, so in our type system Time needs to be an entity, but Hours and Seconds do too.

Multiple things can be a Time, and each of those things must have a way to be represented in seconds:

protocol Time {
    var inSeconds: Double { get }
}

Each unit of time will be its own thing, but it will also be a Time.

struct Hours: Time {
    let value: Double
    
    var inSeconds: Double {
        return value * 3600
    }
}

struct Minutes: Time {
    let value: Double
    
    var inSeconds: Double {
        return value * 60
    }
}

We could add similar structs for Seconds, Days, Weeks, et cetera, understanding that we’ll lose some precision as we go up in scale.
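Seconds is the simplest of these; here’s a minimal sketch of it (the addition operator below will return one):

struct Seconds: Time {
    let value: Double
    
    var inSeconds: Double {
        return value
    }
}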

Now that we have a shared understanding of how our units of measure can be represented, we can manipulate them generically.

func + (lhs: Time, rhs: Time) -> Time {
    return Seconds(value: lhs.inSeconds + rhs.inSeconds)
}

We can also add some handy conversions for ourselves:

extension Time {
    var inMinutes: Double {
        return inSeconds / 60
    }
    
    var inHours: Double {
        return inMinutes / 60
    }
}

And create a DSL-like extension onto Int, helpfully cribbed from ActiveSupport:

extension Int {
    var hours: Time {
        return Hours(value: Double(self))
    }
    
    var minutes: Time {
        return Minutes(value: Double(self))
    }
}

Which lets us write a short, simple, expressive line of code that leverages our type system.

let total = 2.hours + 30.minutes

(This result will, of course, be in Seconds, so we’ll want some kind of presenter to reduce the units and display the value in a meaningful way to the user. My side project has affordances for this. The side project is, unfortunately, in JavaScript, so no such type system fun will be had.)
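Still, in Swift, a tiny presenter built on the conversions above might look like this; it’s a sketch for illustration, not the side project’s code:

func describe(_ time: Time) -> String {
    // Reduce seconds into whole hours plus leftover minutes for display.
    let wholeHours = Int(time.inHours)
    let leftoverMinutes = Int(time.inMinutes) - wholeHours * 60
    return "\(wholeHours)h \(leftoverMinutes)m"
}

describe(2.hours + 30.minutes) // "2h 30m"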

I make a lot of hay about how to break view controllers up and how view controllers are basically evil, but today I’m going to approach the problem in a slightly different way. Instead of rejecting view controllers, what if we embraced them? We could make lots and lots of small view controllers, instead of lots and lots of small plain objects. After all, Apple gives us good ways to compose view controllers. What if we “leaned in” to view controllers? What benefits could we gain from such a setup?

I know a few people who do a subset of this. Any time there’s a meaningful collection of subviews, you can create a view controller out of those, and compose those view controllers together. This is a worthwhile technique, but today’s post will use a new type of view controller — one that defines a behavior — and show you how to compose them together.

Consider analytics. Often, I’ve seen analytics handled in a BaseViewController class:

@implementation BaseViewController: UIViewController

//...

- (void)viewDidAppear:(BOOL)animated {
	[super viewDidAppear:animated];
	[AnalyticsSingleton registerImpression:NSStringFromClass([self class])];
}

//...

@end

You could have a lot of different behaviors in this base class. I’ve seen base view controllers with a few thousand lines of shared behavior and helpers. (I’ve seen it in Rails ActionControllers too.) But we won’t always need all this behavior, and sticking this code in every class breaks encapsulation, draws in tons of dependencies, and generally just grosses everyone out.

We have a general principle that we like to follow: prefer composition over inheritance. Luckily, Apple gives us a great way to compose view controllers, and we’ll get access to the view lifecycle methods too, for free! Even if your view controller’s view is totally invisible, it’ll still get the appearance callbacks, like -viewDidAppear: and -viewWillDisappear:.

To add analytics to your existing view controllers as a composed behavior rather than something in your superclass, first, set up the behavior as a view controller:

@implementation AnalyticsViewController

- (instancetype)initWithName:(NSString *)name {
    self = [super init];
    if (!self) return nil;
    
    _name = name;
    self.view.alpha = 0.0;
    
    return self;
}

- (void)viewDidAppear:(BOOL)animated {
	[super viewDidAppear:animated];
	[AnalyticsSingleton registerImpression:self.name];
}

@end

Note that the alpha of this view controller’s view is set to 0. It won’t be rendered, but it will still exist. Now that we have a simple view controller that we can add as a child, we need a way to add it easily to any view controller. Fortunately, for this, we can simply extend the UIViewController class:

@implementation UIViewController (Analytics)

- (void)configureAnalyticsWithName:(NSString *)name {
    AnalyticsViewController *analytics = [[AnalyticsViewController alloc] initWithName:name];
    [self addChildViewController:analytics];
    [analytics didMoveToParentViewController:self];
    [self.view addSubview:analytics.view];
}

@end

We can call -configureAnalyticsWithName: anywhere in our primary view controller, and we’ll instantly get our view tracking with one line of code. It’s encapsulated in a very straightforward way. It’s easily composed into any view controller, including view controllers that we don’t own! Since the method -configureAnalyticsWithName: is available on every single view controller, we can easily add behavior without actually being inside of the class in question. It’s a very powerful technique, and it’s been hiding under our noses this whole time.

Let’s look at another example: loading indicators. This is something that’s typically handled globally, with something like SVProgressHUD. Because this is a singleton, every view controller (every object!) has the ability to add and remove the single global loading indicator. The loading indicator doesn’t have any state (besides visible and not-visible), so it doesn’t know to disappear when the current view is dismissed and the context changes. Ideally, we’d like the ability to have a loading indicator whenever we need one, but not otherwise, and to be able to turn it on and off with minimal code. We can approach this problem in the same way as the analytics view controller.

@implementation LoadingViewController

- (void)loadView {
    LoadingView *loadingView = [[LoadingView alloc] init];
    loadingView.alpha = 0.0;
    loadingView.label.text = @"Posting...";
    self.view = loadingView;
}

- (LoadingView *)loadingView {
    return (LoadingView *)self.view;
}

- (void)show {
    self.loadingView.alpha = 1.0;
    [self.loadingView startAnimating];
}

- (void)hide {
    self.loadingView.alpha = 0.0;
    [self.loadingView stopAnimating];
}

@end

And our extension to UIViewController is a little more complex this time. Since we don’t have any configuration information, like the name in the analytics example, we can lazily add the loader the first time it needs to be used.

@implementation UIViewController (Loading)

- (LoadingViewController *)createAndAddLoader {
    LoadingViewController *loading = [[LoadingViewController alloc] init];
    [self addChildViewController:loading];
    [loading didMoveToParentViewController:self];
    [self.view addSubview:loading.view];
    return loading;
}

- (LoadingViewController *)loader {
    for (UIViewController *viewController in self.childViewControllers) {
        if ([viewController isKindOfClass:[LoadingViewController class]]) {
            return (LoadingViewController *)viewController;
        }
    }
    return [self createAndAddLoader];
}

@end

Again, we see similar benefits. The loader is no longer a global; instead, each view controller adds its own loader as needed. The loader can be shown and hidden with [self.loader show] and [self.loader hide]. You also don’t have to explicitly add the behavior (a loader) in this example.

We get the benefit of simple invocations and well-factored code. Other solutions to this problem require you to use globals or subclass from one common view controller, whereas this does not.

This example doesn’t need access to the view lifecycle methods like the other ones. It only needs access to the view, which it gets just from being a child view controller. (If you wanted, you could also add more state, like an incrementing and decrementing counter for the number of in-flight network requests.)

Another example of a common view controller behavior that we would love to factor out is error presentation. As of iOS 8, UIAlertView is deprecated in favor of UIAlertController, which requires access to a view controller. In the Backchannel SDK, I use a class called BAKErrorPresenter that is initialized with a view controller for presenting the error. Instead, what if the error presenter was a view controller?

@implementation ErrorPresenterViewController

- (void)viewDidLoad {
	[super viewDidLoad];
    self.view.alpha = 0.0;
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    self.isVisible = YES;
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    self.isVisible = NO;
}

- (UIAlertAction *)okayAction {
    return [UIAlertAction actionWithTitle:@"OK" style:UIAlertActionStyleCancel handler:nil];
}

- (void)present:(NSError *)error {
    if (!self.isVisible) { return; }
    UIAlertController *alert = [UIAlertController alertControllerWithTitle:error.localizedDescription message:error.localizedFailureReason preferredStyle:UIAlertControllerStyleAlert];
    [alert addAction:self.okayAction];
    [self presentViewController:alert animated:YES completion:nil];
}

@end

Note that the error presenter can maintain any state it needs, such as isVisible from the lifecycle methods, and this state doesn’t gunk up the primary view controller.

I’ll leave out the UIViewController extension here, but it would function similarly to the loading indicator, lazily loading an error presenter when one is needed. With this code, all you need to present an error is:

[self.errorPresenter present:error];

How much simpler could it be? And we didn’t even have to sacrifice any programming principles.

For our last example, I want to look at a reusable component that’s highly dependent on the view appearance callbacks. Keyboard management typically needs to know when the view is on screen. Normally, if you break this out into its own object, you have to manually invoke the appearance methods. Here, you get that for free!

@implementation KeyboardManagerViewController

- (instancetype)initWithScrollView:(UIScrollView *)scrollView {
    self = [super init];
    if (!self) return nil;
    
    _scrollView = scrollView;
    self.view.alpha = 0.0;
    
    return self;
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(keyboardAppeared:) name:UIKeyboardDidShowNotification object:nil];
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(keyboardDisappeared:) name:UIKeyboardWillHideNotification object:nil];
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [[NSNotificationCenter defaultCenter] removeObserver:self name:UIKeyboardDidShowNotification object:nil];
    [[NSNotificationCenter defaultCenter] removeObserver:self name:UIKeyboardWillHideNotification object:nil];
}

- (void)keyboardAppeared:(NSNotification *)note {
    CGRect keyboardRect = [[note.userInfo objectForKey:UIKeyboardFrameEndUserInfoKey] CGRectValue];
    
    self.oldInsets = self.scrollView.contentInset;
    UIEdgeInsets contentInsets = UIEdgeInsetsMake(self.oldInsets.top, 0.0f, CGRectGetHeight(keyboardRect), 0.0f);
    self.scrollView.contentInset = contentInsets;
    self.scrollView.scrollIndicatorInsets = contentInsets;
}

- (void)keyboardDisappeared:(NSNotification *)note {
    self.scrollView.contentInset = self.oldInsets;
    self.scrollView.scrollIndicatorInsets = self.oldInsets;
}

@end

This is a simple and almost trivial implementation of a keyboard manager. Yours might be more robust. The principle, however, is sound. Encode your behaviors into tiny, reusable view controllers, and add them to your primary view controller as needed.

Using this technique, you can avoid the use of global objects, tangled view controllers, long inheritance hierarchies, and other code smells. What else can you make? A view controller responsible purely for refreshing network data whenever the view appears? A view controller for validating the data in the form view? The possibilities are endless.

Last week, I tweeted that “reading lots of new blog posts in rss makes me way happier than reading lots of new tweets”.

Opening my RSS reader and finding 30 unread items makes me happy. Opening Twitter and seeing 150 new tweets feels like work. I’m not sure why that is. I think Twitter has become more negative, and the ease of posting quick bursts makes posting negative stuff easy. With blogging, writing something long requires time, words, and an argument. Even the passing thought of “should I post this” creates a filter that lets only better stuff through.

And I find myself running out of blog posts to read more quickly than tweets. Even though the content is longer-form, there are far fewer sources in total. I want to fix this.

That same day on Twitter, I put out a call for new blogs. I got a few recommendations (all great!): Priceonomics from Allen Pike, The Morning News from Patrick Gibson, and Matt Bischoff’s tour-de-force of a tweet.

I’m looking for more, though, and blogs of a specific type:

  • Written by a single person with a voice and interests of their own
  • I like programming but I’m happy with other stuff too
  • Longer-form is better than shorter, but both are good
  • Prefer original content to link blogs
  • Ideally fewer than 2 or 3 posts per week

Send me tweets and emails about the awesome blogs you love, please! And don’t be afraid to promote your own blog. I want to read it. Over the last year, while looking at my referrers, I’ve found some awesome blogs that my readers have been quietly working on, such as Christian Tietze and Benedict Cohen. I’m not kidding, I want to see your blog.

Here are some blogs of the type that I’ve found myself enjoying the most recently.

Slate Star Codex might be my favorite blog I’ve found. Scott has a contrarian angle on issues that’s not always right but is always interesting. If you email me, I’m happy to recommend my favorites of his posts.

Sometimes, it can be inspiring to read people in other programming communities writing good stuff, like Pat Shaughnessy and his blog. Zack Davis’s An Algorithmic Lucidity is great. There are blogs like Mike Caulfield’s and Manton Reece’s that read like a journal for a new project. It’s awesome to be along with them for the ride.

There are blogs I’ve found because of Swift stuff, like Olivier Halligon’s, Erica Sadun’s, Ben Cohen’s, and Russ Bishop’s. There’s also an amazing blog from a co-worker at Genius, James Somers. He rarely posts, but when he does, it’s worth the wait.

I think I miss blogrolls, too. One of those will probably make an appearance on this blog soon.

Over winter break last year, I went on vacation for two weeks. I had lots of time and not as much internet. With the downtime, I wrote three posts of ideas I’d been having. I figured I would post one a week in the new year.

I posted the first one, Finite States of America, when I got back. It got a little traction and so I wrote a follow-up, State Machinery. The next two weeks saw posts about The Coordinator and Categories in Objective-C. After a month of posting, I found I really liked having a once-a-week posting schedule. I decided to see how long I could keep going.

At the end of the year, WordPress sent me a year-end statistics retrospective, and it included a graph.

(Screenshot: WordPress’s graph of posts per week for 2015.)

Each column is a week, and each green dot is a new post. This graph was coincidentally perfect for this project, because it clearly shows which weeks I posted and which weeks I didn’t. (I missed three weeks in March for Ull and working on the Instant Cocoa release, two for WWDC, one for Thanksgiving, and one for NSSpain. I feel very guilty about missing those weeks and I’m sorry.)

Now, with the year over, I think I’m going to move to a calmer posting schedule. Once a week, especially for the highly technical types of posts I write, is pretty extreme. I hope I can do twice a month. Time will tell.

Through the process, I learned a lot of things.

The biggest thing I learned was that I could do this at all. In a roughly-mid-year retrospective, Throw It All Away, I wrote:

I’ve published 15 posts since January. It feels like a breakneck speed. If you asked me last year how long I could sustain such a pace, I think I would have answered, “maybe 4 weeks?”.

But I’m still going. And, somehow, even though back in December the list of potential topics had as many items on it as I’ve posted already, it’s still more or less the same length. I can’t explain it.

A lot of my friends asked me how I kept up such a crazy schedule. While it helped to have more people than usual reading my stuff and sending me positive feedback, the best thing was having a strict schedule and sticking to it. Making the blog a priority each week was the key. With 168 hours in each week, I of course had time to blog; it just needed to be prioritized over work, sleep, eating, social stuff, and binge-watching The West Wing.

The second big thing I learned this year is that writing helps me figure out what I actually think. In this talk, Leslie Lamport quotes a cartoonist named Guindon in saying “Writing is nature’s way of letting you know how sloppy your thinking is.” I haven’t been able to source the quote any more specifically than that, but it’s a great quote.

When writing an argument down, it congeals into something more solid, and it’s so much easier to see the weak points and holes in the argument. For example, when I started writing A Structy Model Layer, my original intention was to show why structs didn’t make for good models. As I tried to flesh out my post and my thoughts, I realized that it was actually a more complicated issue than that, and sometimes structs are appropriate for model layers.

Writing so many posts helped me make clearer arguments and figure out what I really thought. I’m also glad that I have a repository of big, well-thought-out ideas that I can point people to. It was a great year, and since I’ve just started writing Swift for a client, more posts are just around the corner.