One of the selling points of server-side Swift is that you get to use the same tools that you’re used to on the client. One of these tools is Grand Central Dispatch. Dispatch is one of the best asynchronous toolboxes in terms of its API and abstractions, and getting to use it for the Beacon server is an absolute pleasure.

While there’s a broader discussion to be had about actors in Swift in the future, spurred by Chris Lattner’s concurrency manifesto, and perhaps in the future some of the patterns for asynchronous workers will change, for now, Dispatch is the best tool that we have.

On the client, we rely on Dispatch for a few reasons. Key among them, and notably irrelevant on the server, we use Dispatch to get expensive work off the main thread to keep our UIs responsive. While the server doesn’t have this specific need, services with faster response times (under 250ms per request) are used more often than those that are slower. (Other uses of Dispatch, like synchronization of concurrent tasks and gating access to resources, are similarly valuable on both platforms.)

To make requests faster, a lot of nonessential work can be deferred until after the consumer’s request has been responded to. Examples of this are expensive calculations or external side effects, like sending email or push notifications. Further, some code should be executed on a regular basis: hourly or daily or weekly.

Dispatch is well-suited for these types of tasks, and in this post, we’ll discuss how using Dispatch on the server compares to using it on the client. My experience here is with the framework Vapor, though I suspect much of this advice holds true for other frameworks as well.

Your server app is long running. Some web frameworks tear down the whole process between requests, to clear out any old state. Vapor doesn’t work like this. While each request is responded to in a synchronous fashion, Vapor will respond to multiple requests at the same time. The same instance of the application handles these requests. This means that if you want something to happen, but don’t want to block returning a response for the current request, you can follow your intuition and use DispatchQueue.async to kick that block to another queue for execution, and return the response immediately.
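As a minimal sketch of that pattern (my own names, not Vapor’s actual API): hand the nonessential side effect to a background queue and return the response without waiting for it.

```swift
import Dispatch

// A sketch of deferring nonessential work. `sideEffect` stands in for
// something like sending push notifications; the handler returns the
// response immediately while the side effect runs concurrently.
func respondToRequest(deferring sideEffect: @escaping () -> Void) -> String {
    DispatchQueue.global().async {
        sideEffect() // not on the request's critical path
    }
    return "201 Created"
}
```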

A concrete example of this is firing off a push notification in reaction to some request the user makes: the user creates a new event and the user’s friends need to be notified. If you don’t use Dispatch for this, then the response to the user will be delayed by however long it takes to successfully send the push notification payload to an APNS server. In particular, if you have many push notifications to send, this can greatly delay the user’s request. By deferring this work until after the user’s request is responded to, the request will return faster. Once the side effect is deferred, it can take as long as it needs to without affecting the user’s experience.

Lastly, sometimes you want to delay push notifications by a few seconds so that if the user deletes the resource in question, the user’s friends aren’t notified about an object that doesn’t exist. To accomplish this, you can swap async for asyncAfter, just as you would expect from your client-side experience.
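Sketched out, the delayed variant might look like this. The five-second window is an arbitrary choice of mine, and `body` stands in for checking that the event still exists before notifying anyone:

```swift
import Dispatch

// A sketch of delaying a deferred side effect. The delay gives a quick
// delete request a chance to cancel the notification before it fires.
func notifyFriendsLater(delay: DispatchTimeInterval = .seconds(5),
                        on queue: DispatchQueue = .global(),
                        body: @escaping () -> Void) {
    queue.asyncAfter(deadline: .now() + delay, execute: body)
}
```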

You can’t use the main queue. The “main” queue is blocked, constantly spinning, in order to prevent the program from ending. Unlike in iOS apps, there’s no real concept of a run loop, so the main thread has no way to execute blocks that are enqueued to it. Therefore, every time you want to async some code, you must dispatch it to a shared, concurrent .global() queue or to a queue of your own creation. Because there is no UI code, there’s no reason to prefer the main thread over any other thread.

Thread safety is still important. Vapor handles many requests at once, each on their own global queue. Any global, mutable data needs to be isolated behind some kind of synchronization pattern. While you can use Foundation’s locks, I find isolation queues to be an easier solution to use. They’re slightly more performant than locks, since they enable concurrent reads, and they work exactly the same way on the server as they do on iOS.
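Here’s a sketch of the isolation-queue pattern (the names are mine, not from any framework): reads go through sync on a concurrent queue so they can overlap, while writes use the .barrier flag so they run exclusively.

```swift
import Dispatch

// A sketch of an isolation queue guarding a mutable dictionary.
final class IsolatedDictionary<Key: Hashable, Value> {
    private var storage: [Key: Value] = [:]
    private let queue = DispatchQueue(label: "isolation", attributes: .concurrent)

    subscript(key: Key) -> Value? {
        get {
            // Reads can run concurrently with each other.
            return queue.sync { storage[key] }
        }
        set {
            // The barrier waits for in-flight reads, then runs alone.
            queue.async(flags: .barrier) { self.storage[key] = newValue }
        }
    }
}
```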

Semaphores are good for making async code synchronous. Other Swift server frameworks work differently, but Vapor expects the responses to requests to be synchronous. Therefore, there’s no sense in using code with completion blocks. APIs like URLSession’s dataTask(with:completionHandler:) can be made synchronous using semaphores:

extension URLSession {
    public func data(with request: URLRequest) throws -> (Data, HTTPURLResponse) {
        var error: Error?
        var result: (Data, HTTPURLResponse)?
        let semaphore = DispatchSemaphore(value: 0)

        self.dataTask(with: request, completionHandler: { data, response, innerError in
            if let data = data, let response = response as? HTTPURLResponse {
                result = (data, response)
            } else {
                error = innerError
            }
            semaphore.signal()
        }).resume()

        semaphore.wait()

        if let error = error {
            throw error
        } else if let result = result {
            return result
        } else {
            fatalError("Something went horribly wrong.")
        }
    }
}

This code kicks off a networking request and blocks the calling thread with semaphore.wait(). When the data task calls the completion block, the result or error is assigned, and semaphore.signal() allows the waiting code to continue, either returning a value or throwing an error.

Dispatch timers can perform regularly scheduled work. For work that needs to occur on a regular basis, like database cleanup, maintenance, and events that need to happen at a particular time, you can create a dispatch timer.

let timer = DispatchSource.makeTimerSource()
timer.scheduleRepeating(deadline: .now(), interval: .seconds(60))

timer.setEventHandler(handler: {
	//fired every minute
})

timer.resume()

The only thing of note here is that, like on the client, this timer won’t retain itself, so you have to store it somewhere. Because it’s pretty easy to build your own behaviors on top of something like a Dispatch timer, I think we won’t see job libraries, like Rails’s ActiveJob, have quite the uptake in Swift that they have had in other environments. Nevertheless, I think it’s worth linking to the job/worker queue libraries I’ve found on GitHub:

Dispatch is a useful library with tons of awesome behaviors that can be built with its lower-level primitives. When setting out, I wasn’t sure how it would work in a Linux/server environment, and I’m pleased to report that working with it on the server is about as straightforward as you would want it to be. It’s a real delight to use, and it makes writing server applications that much easier.

This is a post I’ve been trying to write for a long time — literally years — and have struggled for want of the perfect example. I think I’ve finally found the one, courtesy of David James, Tim Vermeulen, Dave DeLong, and Erica Sadun.

Once upon a time, Erica came up with a way for constraints to install themselves. That code was eventually obviated by isActive in UIKit, but it nevertheless made the journey from Objective-C to Swift. It wasn’t perfect or particularly efficient, but it got the job done.

The following code comes from a rote Swift migration. It calculates the nearest common ancestor between two items in a tree of views. This was an early stab at this concept, abandoned after isActive was added.

Back when I worked at Rap Genius, we would often say the first cut is the deepest. Your first attempt at something, while it might not be the cleanest or most polished, involves the most work because it provides the superstructure for what you’re building. Erica’s version is that superstructure. It’s a solution and it works, but it’s ripe for cleaning up.

public extension UIView {

	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with otherView: UIView) -> UIView? {
		
		// Two equal views are each other's NCA
		guard self != otherView else { return self }
		
		// Compute superviews
		let mySuperviews = sequence(first: self.superview, next: { $0?.superview }).flatMap({ $0 })
		let theirSuperviews = sequence(first: otherView.superview, next: { $0?.superview }).flatMap({ $0 })	 
		
		// Check for direct ancestry
		guard !mySuperviews.contains(otherView)
			else { return otherView }
		guard !theirSuperviews.contains(self)
			else { return self }
		
		// Check for indirect ancestry
		for view in mySuperviews {
			guard !theirSuperviews.contains(view)
				else { return view }
		}
		
		// No shared ancestry
		return nil
	}
}

There’s a lot wrong with this code. It’s complex. There are lots of cases to think about. It’s a simple piece of functionality, and yet there are four guards and three different lookups. Simplifying this code will make it easier to read, understand, and maintain.

Perhaps you’re fine with this code in your codebase. The old saying goes, “if it ain’t broke, don’t fix it”. However, my experience has shown me that when there’s an inelegant algorithm like this, there’s a pearl in the center of it that wants to come out. Even a function this long is too hard to keep in your brain all at once. If you can’t understand it all, things slip through the cracks. I’m not confident that there aren’t any bugs in the above code; as a friend said, “every branch is a place for bugs to hide”. (This concept is known more academically as cyclomatic complexity.) And bugs or no bugs, with the power of retrospection, I can now see a few performance enhancements hiding in there, obscured by the current state of the code.

The refactoring process helps eliminate these potential bugs and expose these enhancements by iteratively driving the complex towards the simple. Reducing the algorithm down to its barest form also helps you see how it’s similar to other algorithms in your code base. These are all second-order effects, to be sure, but second-order effects pay off.

To kick off our refactoring, let’s look at the sequence(first:next:) function. Erica’s version started with self.superview, which is an optional value. This creates a sequence of optionals, which then forced Erica to flatMap them out. If we can remove this optionality from the sequence, we can remove the flatMap too. We changed the sequence to start from self instead (which isn’t optional), and added dropFirst() to remove that self:

let mySuperviews = sequence(first: self, next: { $0.superview }).flatMap({ $0 }).dropFirst()

Next, we killed the flatMap({ $0 }), because there are no nils to remove any more:

sequence(first: self, next: { $0.superview }).dropFirst()

This change led to this intermediate state, with plenty of code still left to trim:

public extension UIView {

	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with otherView: UIView) -> UIView? {
		
		// Two equal views are each other's NCA
		guard self != otherView else { return self }
		
		// Compute superviews
		let mySuperviews = sequence(first: self, next: { $0.superview }).dropFirst()
		let theirSuperviews = sequence(first: otherView, next: { $0.superview }).dropFirst()

		// Check for direct ancestry
		guard !mySuperviews.contains(otherView)
			else { return otherView }
		guard !theirSuperviews.contains(self)
			else { return self }

		// Check for indirect ancestry
		for view in mySuperviews {
			guard !theirSuperviews.contains(view)
				else { return view }
		}
		
		// No shared ancestry
		return nil
	}
}

At this point, we looked at the indirect ancestry component.

// Check for indirect ancestry
for view in mySuperviews {
	guard !theirSuperviews.contains(view)
		else { return view }
}

A for loop with an embedded test is a signal to use first(where:). The code simplified down to this, removing the loop and test:

if let view = mySuperviews.first(where: { theirSuperviews.contains($0) }) { return view }

Function references make this more elegant, readable, and clear:

if let view = mySuperviews.first(where: theirSuperviews.contains) { return view }

Our function now looks like this:

public extension UIView {

	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with otherView: UIView) -> UIView? {
		
		// Two equal views are each other's NCA
		guard self != otherView else { return self }

		// Compute superviews
		let mySuperviews = sequence(first: self, next: { $0.superview }).dropFirst()
		let theirSuperviews = sequence(first: otherView, next: { $0.superview }).dropFirst()
		
		// Check for direct ancestry
		guard !mySuperviews.contains(otherView)
			else { return otherView }
		guard !theirSuperviews.contains(self)
			else { return self }

		// Check for indirect ancestry
		if let view = mySuperviews.first(where: theirSuperviews.contains) { return view }
		if let view = theirSuperviews.first(where: mySuperviews.contains) { return view }
		
		// No shared ancestry
		return nil
	}
}

After this point, we stepped back and looked at the algorithm as a whole. We realized that if we include self and otherView in their respective superview sequences, the “direct ancestry” check and the “two equal views” check at the top would be completely subsumed by the first(where:) “indirect ancestry” checks. To perform this step, we first dropped the dropFirst():

sequence(first: self, next: { $0.superview })

And then we could kill the “direct ancestry” check:

// Check for direct ancestry
guard !mySuperviews.contains(otherView)
	else { return otherView }
guard !theirSuperviews.contains(self)
	else { return self }

And finally we could remove the first guard as well:

guard self != otherView else { return self }

After deleting them both, the function now looked like this:

public extension UIView {

	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with otherView: UIView) -> UIView? {
		
		// Compute superviews
		let mySuperviews = sequence(first: self, next: { $0.superview })
		let theirSuperviews = sequence(first: otherView, next: { $0.superview })
		
		if let view = mySuperviews.first(where: theirSuperviews.contains) { return view }
		if let view = theirSuperviews.first(where: mySuperviews.contains) { return view }
		
		// No shared ancestry
		return nil
	}
}

That was a major turning point in our understanding of the function. At this point, this code was starting to reveal its own internal structure. Each step clarifies the next potential refactoring to perform, to get closer to the heart of the function. For the next refactoring, Tim realized we could simplify the tail end of the function by applying a nil-coalescing operator:

return mySuperviews.first(where: theirSuperviews.contains) ?? theirSuperviews.first(where: mySuperviews.contains)

But here, the first test before the nil-coalescing operator already covers every case. Because we’re looking for the first intersection between mySuperviews and theirSuperviews, and any shared view found by one check would necessarily be found by the other, the second check can never succeed where the first failed. We can drop everything after the ??:

public extension UIView {
	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with otherView: UIView) -> UIView? {
		let mySuperviews = sequence(first: self, next: { $0.superview })
		let theirSuperviews = sequence(first: otherView, next: { $0.superview })
		
		return mySuperviews.first(where: theirSuperviews.contains)
	}
}

The algorithm has revealed its beautiful internal symmetry now. Very clear intent, very clear algorithm, and each component is simple. It’s now more obvious how to tweak and modify this algorithm. For example,

  • If you don’t want the views self and otherView to be included in the calculation of ancestry, you can restore dropFirst() to the superview sequences.
  • If you want to know if the views have a common ancestor (rather than caring about which ancestor it is), you can replace the first(where:) with a contains(where:).
  • If you want to know all the common ancestors, you could replace the first(where:) with a filter(_:).

With the code in its original state, I couldn’t see before that these kinds of transformations were possible; now, they’re practically trivial.
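None of this actually depends on UIKit. Here’s the same final shape on a minimal, hypothetical Node type with a parent pointer (my own stand-in for UIView, using identity in place of UIView equality), which also makes those variants easy to experiment with:

```swift
// A hypothetical tree node, standing in for UIView and its superview chain.
final class Node {
    weak var parent: Node?
    init(parent: Node? = nil) { self.parent = parent }
}

extension Node {
    // The same algorithm as the final UIView version, using === because
    // Node has no Equatable conformance.
    func nearestCommonAncestor(with other: Node) -> Node? {
        let myAncestors = sequence(first: self, next: { $0.parent })
        let theirAncestors = sequence(first: other, next: { $0.parent })
        return myAncestors.first(where: { mine in
            theirAncestors.contains(where: { $0 === mine })
        })
    }
}
```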

From here, there are two potential routes.

First, there’s a UIView API for determining if one view is a descendant of another, which makes for a super readable solution:

extension UIView {
	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with other: UIView) -> UIView? {
		return sequence(first: self, next: { $0.superview })
			.first(where: { other.isDescendant(of: $0) })
	}
}

The second option is to explore performance. We noticed that theirSuperviews was only used for a contains check. If we wrap that sequence in a Set, existence lookup becomes O(1), and this whole algorithm gets blisteringly fast.

public extension UIView {
	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with otherView: UIView) -> UIView? {
		let mySuperviews = sequence(first: self, next: { $0.superview })
		let theirSuperviews = Set(sequence(first: otherView, next: { $0.superview }))
		return mySuperviews.first(where: theirSuperviews.contains)
	}
}

For view hierarchies that are pathologically deep (10,000 or so levels), this solution leaves the other one in the dust. Almost no view hierarchies contain that many layers, so this isn’t really a necessary optimization. However, if it were necessary, it would have been very hard to find without this refactoring process. Once we performed it, it became obvious what to tweak to speed things up.

Thomas Aquinas writes:

Properly speaking, truth resides in the intellect composing and dividing; and not in the senses; nor in the intellect knowing “what a thing is.”

This quote reflects the process of refactoring. If you’re doing it right, you don’t need to understand what the original code actually does. In the best of cases, you won’t even need to compile the code. You can operate on the code, composing and dividing, through a series of transformations that always leave the code in a correctly working state.

Perhaps you could have written the final version of this code from the very start. Perhaps it was obvious to you that this combination of APIs would yield the correct behavior in all cases. I don’t think I could have predicted that the original code would end up as an elegant one-line solution that handles all edge cases gracefully. I definitely couldn’t have predicted that there was a big performance optimization that changes this algorithm from O(n²) to O(n). Refactoring is an iterative process, and continual refinement reveals the code’s true essence.

Part of the promise of Swift is the ability to write simple, correct, and expressive code. Swift’s error system is no exception, and clever usage of it vastly improves the code on the server. Our app Beacon uses Vapor for its API. Vapor provides a lot of the fundamental components to building an API, but more importantly, it provides the extension points for adding things like good error handling yourself.

The crucial fact is that pretty much every function in your server app is marked as throws. At any point, you can throw an error, and that error will bubble all the way through any functions, through the response handler that you registered with the router, and through any registered middlewares.

Vapor typically handles errors by loading an HTML error page. Because Beacon’s server component is a JSON API, we need some middleware that will translate an AbortError (Vapor’s error type, which includes a message and a status code) into usable JSON for the consumer. This middleware is pretty boilerplate-y, so I’ll drop it here without much comment.

public final class JSONErrorMiddleware: Middleware {

    public func respond(to request: Request, chainingTo next: Responder) throws -> Response {
        do {
            return try next.respond(to: request)
        } catch let error as AbortError {
            let response = Response(status: error.status)
            response.json = try JSON(node: [
                "error": true,
                "message": error.message,
                "code": error.code,
                "metadata": error.metadata,
            ])
            return response
        }
    }
}

In Vapor 1.5, you activate this middleware by adding it to the droplet, which is an object that represents your app.

droplet.middleware.append(JSONErrorMiddleware())

Now that we have a way to present errors, we can start exploring some useful errors. Most of the time when something on the server fails, that failure is represented by a nil where there shouldn’t be one. So, the very first thing I added was the unwrap() function:

struct NilError: Error { }

extension Optional {
    func unwrap() throws -> Wrapped {
        guard let result = self else { throw NilError() }
        return result
    }
}

This function enables you to completely fail the request whenever a value is nil and you don’t want it to be. For example, let’s say you want to find an Event by some id.

let event = Event.find(id)

Unsurprisingly, the type of event is Optional<Event>. Because an event with the given ID might not exist when you call that function, it has to return an optional. However, sometimes this doesn’t make for the best code. For example, in Beacon, if you try to attend an event, there’s no meaningful work we can do if that event doesn’t exist. So, to handle this case, I call unwrap() on the value returned from that function:

let event = Event.find(id).unwrap()

The type of event is now Event, and if the event doesn’t exist, the function will end early and bubble the error up until it hits the aforementioned JSONErrorMiddleware, ultimately resulting in error JSON for our user.

The problem with unwrap() is that it lacks any context. What failed to unwrap? If this were Ruby or Java, we’d at least have a stack trace and we could figure out what series of function calls led to our error. This is Swift, however, and we don’t have that. The most we can really do is capture the file and line of the faulty unwrap, which I’ve done in this version of NilError.
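That version isn’t reproduced in this post, but a sketch of the idea (my reconstruction, not the linked code) uses #file and #line default arguments to record the call site:

```swift
// A sketch of a NilError that records where the failed unwrap happened.
// The name LocatedNilError is mine, to distinguish it from the earlier
// bare NilError.
struct LocatedNilError: Error, CustomStringConvertible {
    let file: String
    let line: Int
    var description: String { return "Unexpectedly found nil at \(file):\(line)" }
}

extension Optional {
    func unwrap(file: String = #file, line: Int = #line) throws -> Wrapped {
        // The defaults are evaluated at the call site, so the error
        // captures where the unwrap was attempted, not this file.
        guard let result = self else { throw LocatedNilError(file: file, line: line) }
        return result
    }
}
```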

In addition, because there’s no context, Vapor doesn’t have a way to figure out what status code to use. You’ll notice that our JSONErrorMiddleware pattern matches on the AbortError protocol only. What happens to other errors? They’re wrapped in AbortError-conformant objects, but the status code is assumed to be 500. This isn’t ideal. While unwrap() works great for quickly getting stuff going, it begins to fall apart once your clients start expecting correct status codes and useful error messages. To this end, we’ll be exploring a few useful custom errors that we built for this project.

Missing Resources

Let’s tackle our missing object first. This request should probably 404, especially if our ID comes from a URL parameter. Making errors in Swift is really easy:

struct ModelNotFoundError: AbortError {

    let status = Status.notFound

    var code: Int {
        return status.statusCode
    }

    let message: String

    public init<T>(type: T.Type) {
        self.message = "\(type) could not be found."
    }
}

In future examples, I’ll leave out the computed code property, since that will always just forward the statusCode of the status.

Once we have our ModelNotFoundError, we can guard and throw with it.

guard let event = Event.find(id) else {
	throw ModelNotFoundError(type: Event.self)
}

But this is kind of annoying to do every time we want to ensure that a model is found. To solve that, we package this code up into an extension on every Entity:

extension Entity {
	static func findOr404(_ id: Node) throws -> Self {
		guard let result = self.find(id) else {
			throw ModelNotFoundError(type: Self.self)
		}
		return result
	}
}

And now, at the call site, our code is simple and nice:

let event = try Event.findOr404(id)

Leveraging native errors on the server yields both more correctness (in status codes and accurate messages) and more expressiveness.

Authentication

Our API, like many others, requires authenticating the user so that some action can be performed on their behalf. To execute this cleanly, we use a middleware to fetch the user from an auth token that the client passes us, and save that user data into the request object. (Vapor includes a handy dictionary on each Request called storage that you can use to store any additional data of your own.) (Also, Vapor includes some authentication and session handling components, but it was easier to write this than to figure out how to use Vapor’s built-in machinery.)

final class CurrentSession {

	init(user: User? = nil) {
		self.user = user
	}
    
	var user: User?
    
	@discardableResult
	public func ensureUser() throws -> User {
		return user.unwrap()
	}
}

Every request will provide a Session object like the one above. If you want to ensure that a user has been authenticated (and want to work with that user), you can call:

let currentUser = try request.session.ensureUser()

However, this has the same problem as our previous code. If the user isn’t correctly authed, the consumer of this API will see a 500 with a meaningless error about nil objects, instead of a 401 Unauthorized code and a nice error message. We’re going to need another custom error.

struct AuthorizationError: AbortError {
	let status = Status.unauthorized

	var message = "Invalid credentials."
}

Vapor actually has a shorthand for this kind of simple error:

Abort.custom(status: .unauthorized, message: "Invalid credentials.")

Which I used until I needed the error to be its own object, for reasons that will become apparent later.

Our function ensureUser() now becomes:

@discardableResult
public func ensureUser() throws -> User {
	guard let user = user else {
		throw AuthorizationError()
	}
	return user
}

Bad JSON

Vapor’s JSON handling leaves much to be desired. Let’s say you want a string from the JSON body that’s keyed under the name “title”. Look at all these question marks:

let title = request.json?["title"]?.string

At the end of this chain, of course, title is an Optional<String>. Even throwing an unwrap() at the end of this chain doesn’t solve our problem: because of Swift’s optional chaining precedence rules, it will only unwrap the last component of the chain, .string. We can solve this in two ways. First, by wrapping the whole expression in parentheses:

let title = try (request.json?["title"]?.string).unwrap()

or unwrapping at each step:

let title = try request.json.unwrap()["title"].unwrap().string.unwrap()

Needless to say, this is horrible. Each unwrap represents a different error: the first represents a missing application/json Content-Type (or malformed data), the second, the absence of the key, and the third, the expectation of the key’s type. All that data is thrown away with unwrap(). Ideally, our API would have a different error message for each error.

enum JSONError: AbortError {

	var status: Status {
		return .badRequest
	}
	
	case jsonMissing
	case missingKey(keyName: String)
	case typeMismatch(keyName: String, expectedType: String, actualType: String)
}

These cases represent the three different errors from above. We need to add a function to generate a message depending on the case, but that’s really all this needs. We now have errors that are a lot more expressive, and ones that help the client debug common mistakes (like forgetting a Content-Type).
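One way to write that message generation, sketched here as a computed property in a self-contained form (AbortError conformance omitted, and the exact wording is my own, not Beacon’s):

```swift
// A self-contained sketch of per-case error messages.
enum JSONError: Error {
    case jsonMissing
    case missingKey(keyName: String)
    case typeMismatch(keyName: String, expectedType: String, actualType: String)

    var message: String {
        switch self {
        case .jsonMissing:
            return "This endpoint expects a JSON body and an application/json Content-Type."
        case .missingKey(let keyName):
            return "The JSON body is missing the key \"\(keyName)\"."
        case .typeMismatch(let keyName, let expectedType, let actualType):
            return "Expected \"\(keyName)\" to be of type \(expectedType), but it was \(actualType)."
        }
    }
}
```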

These errors, combined with NiceJSON (which you can read more about it in this post), result in code like this:

let title: String = try request.niceJSON.fetch("title")

Much easier on the eyes. title is also usually an instance variable (of a command) with a pre-set type, so the : String required for type inference can be omitted as well.

By making the “correct way” to write code the same as the “nice way” to write code, you never have to make a painful trade-off between helpful error messages or type safety, and short easy-to-read code.

Externally Visible Errors

By default, Vapor will wrap an error that fails into an AbortError. However, many (most!) errors reveal implementation details that users shouldn’t see. For example, the PostgreSQL adapter’s errors reveal details about your choice of database and the structure of your tables. Even NilError includes the file and line of the error, which reveals that the server is built on Swift and is therefore vulnerable to attacks targeted at Swift.

In order to hide some errors and allow others through to the user, I made a new protocol.

public protocol ExternallyVisibleError: Error {
    
    var status: Status { get }
    
    var externalMessage: String { get }
}

Notice that ExternallyVisibleError doesn’t inherit from AbortError. Once you conform your AbortError to this protocol, you have to provide one more property, externalMessage, which is the message that will be shown to users.

Once that’s done, we need a quick modification to our JSONErrorMiddleware to hide the details of the error if it’s not an ExternallyVisibleError:

public func respond(to request: Request, chainingTo next: Responder) throws -> Response {
    do {
        return try next.respond(to: request)
    } catch let error as ExternallyVisibleError {
        let response = Response(status: error.status)
        response.json = try JSON(node: [
            "error": true,
            "message": error.externalMessage,
            "code": error.status.statusCode,
        ])
        return response
    } catch let error as AbortError {
        let response = Response(status: error.status)
        response.json = try JSON(node: [
            "error": true,
            "message": "There was an error processing this request.",
            "code": error.code,
        ])
        return response
    }
}

I also added some code that would send down the AbortError’s message as long as the environment wasn’t .production.
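That check might look something like this, with a stand-in Environment type (not Vapor’s) and my own function name:

```swift
// A sketch: outside production, pass the real message through to aid
// debugging; in production, fall back to the generic message.
enum Environment { case development, staging, production }

func outgoingMessage(_ internalMessage: String, in environment: Environment) -> String {
    guard environment != .production else {
        return "There was an error processing this request."
    }
    return internalMessage
}
```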

Swift’s errors are a powerful tool that can store additional data, metadata, and types. A few simple extensions to Vapor’s built-in types will enable you to write better code along a number of axes. For me, the ability to write terse, expressive, and correct code is the promise that Swift offered from the beginning, and this compact is maintained on the server as much as it is on the client.

Beacon is built with Swift on the server. Since we have all of the niceties of Swift in this new environment, we can use our knowledge and experience from building iOS app to build efficient server applications. Today, we’ll look at two examples of working with sequences on the server to achieve efficiency and performance.

Over the network

For its social graph, Beacon needs to find your mutual Twitter followers — that is, the people you follow that follow you back. There’s no Twitter API for this, so we have to get the list of follower IDs and the list of following IDs, and intersect them. The Twitter API batches these IDs into groups of 5,000. While people rarely follow more than 5,000 people, some users on Beacon have a few hundred thousand Twitter followers, so these will have to be batched. Because of these constraints, this problem provides a pretty interesting case study for advanced sequence usage.

We do this on the server instead of the client, because there will be a lot of requests to the Twitter API, and it doesn’t make much sense to perform those on a user’s precarious cellular connection. For our backend, we use the Vapor framework, and Vapor’s request handling is completely synchronous. Because of this, there’s no sense in using completion blocks for network requests. You can just return the result of the network request as the result of your function (and throw if anything goes wrong). For an example, let’s fetch the IDs of the first 5,000 people that someone follows:

let following = try client.send(request: TwitterFollowingRequest())

To perform the batching, the Twitter API uses the concept of cursors. To get the first batch, you can leave off the cursor, or pass -1. Each request returns a new next_cursor, which you give back to Twitter when you want the next batch. This concept of cursors fits nicely into Swift’s free function sequence(state:next:). Let’s examine this function’s signature:

func sequence<T, State>(state: State, next: @escaping (inout State) -> T?) -> UnfoldSequence<T, State>

This function is generic over two types: T and State. We can tell from the signature that we need to provide an initial State as a parameter, and we also provide a closure that takes an inout State and returns an optional T. inout means we can mutate the state, so this is how we update the state for the next iteration of the sequence. The T that we return each time will form our sequence. Returning nil instead of some T ends the sequence.

Because the Fibonacci sequence is the gold standard for stateful sequences, let’s take a look at using sequence(state:next:) to create a Fibonacci sequence:

let fibonacci = sequence(state: (1, 1), next: { (state: inout (Int, Int)) -> Int? in
    let next = state.0 + state.1
    state = (state.1, next)
    return next
})

The state in this case has type (Int, Int) and represents the last two numbers in the sequence. First, we figure out the next number by adding the two elements in the tuple together; then, we update the state variable with the new last two values; finally, we return the next element in the sequence.

(Note that this sequence never returns nil, so it never terminates. It is lazy, however, so none of this code is actually evaluated until you ask for some elements. You can use .prefix(n) to limit to the first n values.)
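To make that laziness concrete, here’s the same Fibonacci sequence with a prefix applied; only the requested elements are ever computed:

```swift
// The Fibonacci sequence from above, built with sequence(state:next:).
let fibonacci = sequence(state: (1, 1), next: { (state: inout (Int, Int)) -> Int? in
    let next = state.0 + state.1
    state = (state.1, next)
    return next
})

// Only the first five elements are evaluated.
let firstFive = Array(fibonacci.prefix(5))
// firstFive is [2, 3, 5, 8, 13]; the seed values (1, 1) themselves are never emitted
```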

To build our sequence of Twitter IDs, we start with the state "-1", and build our sequence from there.

let lazyFollowerIDs = sequence(state: "-1", next: { (state) -> [Int]? in

})

We need to send the request in this block, and return the IDs from the result of the request. The request itself looks a lot like the TwitterFollowingRequest from above, except it’s now for followers instead.

let lazyFollowerIDs = sequence(state: "-1", next: { (state) -> [Int]? in
    let result = try? self.client.send(request: TwitterFollowersRequest(cursor: state))
    return result?.ids
})

Right now, this request never updates its state, so it fetches the same page over and over again. Let’s fix that.

let lazyFollowerIDs = sequence(state: "-1", next: { (state) -> [Int]? in
    let result = try? self.client.send(request: TwitterFollowersRequest(cursor: state))
    state = result?.nextCursor ?? "0"
    return result?.ids
})

For the last page, Twitter will return "0" for the next_cursor, so we can use that for our default value if the request fails. (If the request fails, result?.ids will also be nil, so the sequence will end anyway.)

Lastly, let’s put a guard in place to catch the case when Twitter has shown us the last page.

let lazyFollowerIDs = sequence(state: "-1", next: { (state) -> [Int]? in
    guard state != "0" else { return nil }
    let result = try? self.client.send(request: TwitterFollowersRequest(cursor: state))
    state = result?.nextCursor ?? "0"
    return result?.ids
})

(If we added a little more error handling here, it would look almost identical to the actual code that Beacon uses.)

This sequence is getting close. It’s already lazy, like our Fibonacci sequence, so it won’t fetch the second batch of 5,000 items until the 5,001st element is requested. It needs one more big thing: it’s not actually a sequence of IDs yet. It’s still a sequence of arrays of IDs. We need to flatten this into one big sequence. For this, Swift has a function called joined() that joins a sequence of sequences into a big sequence. This function (mercifully) preserves laziness, so if the sequence was lazy before, it’ll stay lazy. All we have to do is add .joined() to the end of our expression.
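Here’s a self-contained sketch of that final shape, with a hypothetical dictionary of pages standing in for the Twitter API so the cursor mechanics are visible end to end:

```swift
// Hypothetical stand-in for the Twitter API: pages keyed by cursor,
// each carrying its ids and the cursor for the next page.
let pages: [String: (ids: [Int], nextCursor: String)] = [
    "-1": (ids: [1, 2, 3], nextCursor: "abc"),
    "abc": (ids: [4, 5], nextCursor: "0"),
]

let lazyFollowerIDs = sequence(state: "-1", next: { (state: inout String) -> [Int]? in
    guard state != "0" else { return nil }
    let result = pages[state]  // stand-in for the network request
    state = result?.nextCursor ?? "0"
    return result?.ids
}).joined()

let allIDs = Array(lazyFollowerIDs)
// allIDs is [1, 2, 3, 4, 5]
```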

To get our mutual follows from this lazyFollowerIDs sequence, we need something to intersect the followers and the following. To make this operation efficient, let’s turn the following IDs into a set. This will make contains lookup really fast:

let followingIDSet = Set(following.ids)

We make sure to filter over the lazyFollowerIDs since that sequence is lazy and we’d like to iterate over it only once.

let mutuals = lazyFollowerIDs.filter({ id in followingIDSet.contains(id) })

This reads “keep only the elements from lazyFollowerIDs that can be found in followingIDSet”. Apply a little syntactic sugar magic to this, and you end up with a pretty terse statement:

let mutuals = lazyFollowerIDs.filter(followingIDSet.contains)
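As a quick illustration of that point-free filter with plain values (the numbers here are made up):

```swift
let followingIDSet: Set<Int> = [2, 4, 7]
let lazyFollowerIDs = [1, 2, 3, 4, 5].lazy  // stand-in for the real lazy sequence

// Set.contains is passed directly as the filter predicate.
let mutuals = Array(lazyFollowerIDs.filter(followingIDSet.contains))
// mutuals is [2, 4]
```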

Off the disk

A similar technique can be used for handling batches of items from the database.

Vapor’s ORM is called Fluent. In Fluent, all queries go through the Query type, which is parameterized on T, your entity type, e.g., User. Queries are chainable objects, and you can call methods like filter and sort on them to refine them. When you’re done refining them, you can call methods like first(), all(), or count() to actually execute the Query.

While Fluent doesn’t have the ability to fetch in batches, its interface allows us to build this functionality easily, and Swift’s lazy sequence mechanics let us build it efficiently.

We know we’ll need a function on every Query. We don’t know what kind of Sequence we’ll be returning, but we’ll use Sequence<T> as a placeholder for now.

extension Query {
	func inBatches(of batchSize: Int) throws -> Sequence<T> {
		
	}
}

First, we need to know how many items match our query, so we can tell how many batches we’ll be fetching. Because the object we’re inside already represents the query that we’re going to be fetching with, and it already has all the relevant filters and joins, we can just call count() on self, and get the number of objects that match the query.

extension Query {
	func inBatches(of batchSize: Int) throws -> Sequence<T> {
		let count = try self.count()
		
	}
}

Once we have the count, we can use Swift’s stride(from:to:by:) to build a sequence that will step from 0 to our count with a stride of our batchSize.

extension Query {
	func inBatches(of batchSize: Int) throws -> Sequence<T> {
		let count = try self.count()
		stride(from: 0, to: count, by: batchSize)
		
	}
}

Next, we want to transform each step of this stride (which represents one batch) into a set of the objects in question.

extension Query {
	func inBatches(of batchSize: Int) throws -> Sequence<T> {
		let count = try self.count()
		stride(from: 0, to: count, by: batchSize)
			.map({ offset in
				return (try? self.limit(batchSize, withOffset: offset).all()) ?? []
			})
	}
}

Because .all() is a throwing function, we need to handle its error somehow. This will be a lazy sequence, so the map block will get stored and executed later. It is @escaping. This means that we can’t just throw, because we can’t guarantee that we’d be in a position to catch that error. Therefore, we just discard the error and return an empty array if it fails.

If we try to execute this as-is, the map will run instantly and fetch all of our batches at once. Not ideal. We have to add a .lazy to our chain to ensure that each fetch doesn’t happen until an item from that batch is requested.

extension Query {
	func inBatches(of batchSize: Int) throws -> Sequence<T> {
		let count = try self.count()
		stride(from: 0, to: count, by: batchSize)
			.lazy
			.map({ offset in
				return (try? self.limit(batchSize, withOffset: offset).all()) ?? []
			})
	}
}

The last step here, like the Twitter example, is to call .joined() to turn our lazy sequence of arrays into one big lazy sequence.

extension Query {
	func inBatches(of batchSize: Int) throws -> Sequence<T> {
		let count = try self.count()
		return stride(from: 0, to: count, by: batchSize)
			.lazy
			.map({ offset in
				return (try? self.limit(batchSize, withOffset: offset).all()) ?? []
			})
			.joined()
	}
}

When we run this code, we see that our big Sequence chain returns a LazySequence<FlattenSequence<LazyMapSequence<StrideTo<Int>, [T]>>>. This type is absurd. We can see all the components of our sequence chain in there, but we don’t actually care about those implementation details. It would be great if we could erase the type and be left with something simple. This technique is called type erasure, and it hides exactly these details. AnySequence is a type eraser that the Swift standard library provides for this purpose, and it will also become our return type.

extension Query {
    func inBatches(of batchSize: Int) throws -> AnySequence<T> {
		let count = try self.count()
        return AnySequence(stride(from: 0, to: count, by: batchSize)
            .lazy
            .map({ (offset) -> [T] in
                return (try? self.limit(batchSize, withOffset: offset).all()) ?? []
            })
            .joined())
    }
}

We can now write the code we want at the callsite:

try User.query().sort("id", .ascending)
	.inBatches(of: 20)
	.forEach({ user in
		//do something with user
	})

This is reminiscent of Ruby’s find_in_batches or the property fetchBatchSize on NSFetchRequest, which returns a very similar lazy NSArray using the NSArray class cluster.
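The same stride/lazy/map/joined shape can be exercised without a database. Here’s a standalone sketch that batches a plain array, with array slicing standing in for Fluent’s limit(_:withOffset:):

```swift
func inBatches<T>(_ items: [T], of batchSize: Int) -> AnySequence<T> {
    return AnySequence(stride(from: 0, to: items.count, by: batchSize)
        .lazy
        .map({ (offset) -> [T] in
            // Stand-in for self.limit(batchSize, withOffset: offset).all()
            return Array(items[offset..<min(offset + batchSize, items.count)])
        })
        .joined())
}

let batched = Array(inBatches([1, 2, 3, 4, 5, 6, 7], of: 3))
// batched is [1, 2, 3, 4, 5, 6, 7], produced as [1, 2, 3], [4, 5, 6], [7]
```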

This is not the first time I’ve said this, but Swift’s sequence handling is exceptionally robust and fun to work with. Understanding the basics of Swift’s sequences enables you to compose them into solutions for bigger and more interesting problems.

This article is also available in Chinese.

When working with Swift on the server, most of the routing frameworks work by associating a route with a given closure. When we wrote Beacon, we chose the Vapor framework, which works like this. You can see this in action in the test example on their home page:

import Vapor

let droplet = try Droplet()

droplet.get("hello") { req in
    return "Hello, world."
}

try droplet.run()

Once you run this code, visiting localhost:8080/hello will display the text “Hello, world.”.

Sometimes, you also want to return a special HTTP code to signal to consumers of the API that a special action happened. Take this example endpoint:

droplet.post("devices", handler: { request in
	let apnsToken: String = try request.niceJSON.fetch("apnsToken")
	let user = try request.session.ensureUser()
    
	var device = try Device(apnsToken: apnsToken, userID: user.id.unwrap())
	try device.save()
	return try device.makeJSON()
})

(I’ve written more about NiceJSON here, if you’re curious about it.)

This is a perfectly fine request and is similar to code from the Beacon app. There is one problem: Vapor will assume a status code of 200 when you return objects like a string (in the first example in this blog post) or JSON (in the second example). However, this is a POST request and a new Device resource is being created, so it should return the HTTP status code “201 Created”. To do this, you have to create a full response object, like so:

let response = Response(status: .created)
response.json = try device.makeJSON()
return response

which is a bit annoying to have to do for every creation request.

Lastly, endpoints will often have side effects. Especially with apps written in Rails, managing and testing these is really hard, and much ink has been spilled in the Rails community about it. If signing up needs to send out a registration email, how do you stub that while still testing the rest of the logic? It’s a hard thing to do, and if everything is in one big function, it’s even harder. In Beacon’s case, we don’t have many emails to send, but we do have a lot of push notifications. Managing those side effects is important.

Generally speaking, this style of routing, where you use a closure for each route, has been used in frameworks like Flask, Sinatra, and Express. It makes for a pretty great demo, but a project in practice often has complicated endpoints, and putting everything in one big function doesn’t scale.

Going even further, the Rails style of having a giant controller which serves as a namespace for vaguely related methods for each endpoint is borderline offensive. I think we can do better than both of these. (If you want to dig into Ruby server architecture, I’ve taken a few ideas from the Trailblazer project.)

Basically, I want a better abstraction for responding to incoming requests. To this end, I’ve started using an object that I call a Command to encapsulate the work that an endpoint needs to do.

The Command pattern starts with a protocol:

public protocol Command: ResponseRepresentable {

	init(request: Request, droplet: Droplet) throws
    
	var status: Status { get }

	func execute() throws -> JSON
	
}

extension Command {
    
	public func makeResponse() throws -> Response {
		let response = Response(status: self.status)
		response.json = try execute()
		return response
	}
    
}

We’ll add more stuff to it as we go, but this is the basic shell of the Command protocol. You can see just from the basics of the protocol how this pattern is meant to be used. Let’s rewrite the “register device” endpoint with this pattern.

droplet.post("devices", handler: { request in
	return RegisterDeviceCommand(request: request, droplet: droplet)
})

Because the command is ResponseRepresentable, Vapor accepts it as a valid result from the handler block for the route. It will automatically call makeResponse() on the Command and return that Response to the consumer of the API.

public final class RegisterDeviceCommand: Command {

	let apnsToken: String
	let user: User

	public init(request: Request, droplet: Droplet) throws {
		self.apnsToken = try request.niceJSON.fetch("apnsToken")
		self.user = try request.session.ensureUser()
	}

	public let status = Status.created

	public func execute() throws -> JSON {
		var device = try Device(apnsToken: apnsToken, userID: user.id.unwrap())
		try device.save()
		return try device.makeJSON()
	}
}

There are a few advantages conferred by this pattern already.

  1. Maybe the major appeal of using a language like Swift for the server is to take advantage of things like optionals (and more pertinently, their absence) to be able to define the absolute requirements for a request to successfully complete. Because apnsToken and user are non-optional, this file will not compile if the init function ends without setting all of those values.
  2. The status code is applied in a nice declarative way.
  3. Initialization is separate from execution. You can write a test that checks the initialization of the object (e.g., the extraction of the properties from the request) completely separately from the test that checks that the actual save() works correctly.
  4. As a side benefit, using this pattern makes it easy to put each Command into its own file.

There are two more important components to add to a Command like this. First, validation. We’ll add func validate() throws to the Command protocol and give it a default implementation that does nothing. It’ll also be added to the makeResponse() function, before execute():

public func makeResponse() throws -> Response {
	let response = Response(status: self.status)
	try validate()
	response.json = try execute()
	return response
}

A typical validate() function might look like this (this comes from Beacon’s AttendEventCommand):

public func validate() throws {
	if attendees.contains(where: { $0.userID == user.id }) {
		throw ValidationError(message: "You can't join an event you've already joined.")
	}
	if attendees.count >= event.attendanceLimit {
		throw ValidationError(message: "This event is at capacity.")
	}
	if user.id == event.organizer.id {
		throw ValidationError(message: "You can't join an event you're organizing.")
	}
}

Easy to read, keeps all validations localized, and very testable as well. While you can construct your Request and Droplet objects and pass them to the prescribed initializer for the Command, you’re not obligated to. Because each Command is your own object, you can write an initializer that accepts fully fledged User, Event, etc. objects, and you don’t have to muck about with manually constructing Request objects for testing unless you’re specifically testing the initialization of the Command.

The last component that a Command needs is the ability to execute side effects. Side effects are simple:

public protocol SideEffect {
	func perform() throws
}

I added a property to the Command protocol that lists the SideEffect-conforming objects to perform once the command’s execution is done.

var sideEffects: [SideEffect] { get }

And finally, the side effects have to be added to the makeResponse() function:

public func makeResponse() throws -> Response {
	let response = Response(status: self.status)
	try validate()
	response.json = try execute()
	try sideEffects.forEach({ try $0.perform() })
	return response
}

(In a future version of this code, side effects may end up being performed asynchronously, i.e., not blocking the response being sent back to the user, but currently they’re just performed synchronously.) The primary reason to decouple side effects from the rest of the Command is to enable testing. You can create the Command and execute() it, without having to stub out the side effects, because they will never get fired.
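A sketch of that testability claim, using a hypothetical recording side effect in place of email or push notifications:

```swift
protocol SideEffect {
    func perform() throws
}

// A hypothetical side effect that only records that it ran;
// a real one would send an email or a push notification.
final class RecordingSideEffect: SideEffect {
    private(set) var performed = false
    func perform() throws { performed = true }
}

let recorder = RecordingSideEffect()
let sideEffects: [SideEffect] = [recorder]

// This is what makeResponse() does after execute().
try? sideEffects.forEach({ try $0.perform() })
// recorder.performed is now true
```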

The Command pattern is a simple abstraction, but it enables testing and correctness, and frankly, it’s pleasant to use. You can find the complete protocol in this gist. I don’t knock Vapor for not including an abstraction like this: Vapor, like the other Swift on the server frameworks, is designed to be simple, and that simplicity allows you to bring abstractions to your own taste.

There are a few more blog posts coming on server-side Swift, as well as a few more in the Coordinator series. Beacon and WWDC have kept me busy, but rest assured! More posts are coming.

Ashley Nelson-Hornstein and I built an app for hanging at WWDC. It took 5 weeks to build. It’s called Beacon, and you can get it on the App Store today.

Beacon is a way to signal to your friends that you’re down to hang out. You can set up an event, and your friends will be able to see those events, and let you know that they want to come. Beacon answers questions like “Who’s free?”, “Who likes Persian food?”, and “We have 2 spots for dinner; who would want to come?” without the messiness of having to text your entire address book. Each event has a big chat room for organizing, and honestly, goofing around in those chat rooms has been some of the most fun of the beta. Beacon is, at its heart, a very social app.

Ashley and I did a ton of work in these few weeks, trying to get this app from concept to production. I’d never built an app this fast before, and it’s been an exhilarating ride. In addition to being a stellar dev, Ashley’s got a great eye for the holes in the product and the user loop, which let us tighten up the experience before putting the app in the hands of all of our friends. This project absolutely wouldn’t have worked without her.

Linda Dong also contributed a considerable amount of design work, giving the app life and personality. Before her touch, the “design” was the output of two developers, and you can imagine what a horror show that was.

From a technical perspective, one of the things I’m most excited about is the server side of this project. Chris and I got to talk about this on the last episode of Fatal Error season 2 (Patreon link). Beacon finally gave me the chance to build an application for the server using Swift. We chose Vapor for the framework, purely for the quality of support (mostly a friendly Slack channel) and the size of the community using it.

Swift on the server is a budding project. Builds are slow, test targets are hard to set up, there’s no Xcode (which means no autocompletion or command/option-clicking), Foundation isn’t complete, there’s almost no library support, documentation is god-awful, and everything is changing extremely quickly. Nevertheless, it’s fun as hell to write Swift for the server, and I don’t regret the decision. I think it’s most comparable to writing Swift 1 or 1.1 in a production iOS app. Potentially a problematic decision, but the language was so fun that everyone who did it had no complaints. I think in 2 or 3 years, Swift on the server will be where Swift in the client is now, and that will be a great time indeed.

I’ve written web apps in Node, Rails, and various PHP frameworks, and while it’s possible to take advantage of their dynamically typed features for certain patterns, I often felt like I was programming without a safety net. I felt forced to write tests to make sure that various code paths were getting hit and the right methods were being called.

With Swift on the server, you get all the Swift niceties you’re used to: enums, generics, protocols, sequences, and everything else. All of the other tiny pieces of knowledge of Swift that you’ve built up over the last weeks and months are valid and useful.

A few scattered thoughts on Swift on the server:

  • Because you have a type system, building up little abstractions is much easier, and you can change those abstractions without worrying that protocol conformances down the line will be broken. Optionals are excellent. It’s so nice to know that you have something. For example, in the Event model, I have a non-optional User called organizer, and I have total confidence that through any code path in the app, if I have an event, I will have an organizer.
  • I definitely want Linux support for Sourcery. There’s a lot of boilerplate in model code on the server (sometimes even more than the client) and Sourcery would help with that pain a lot.
  • Because everything in Vapor is synchronous, I rewrote my networking library to simply return a value (or throw) for each request. This makes writing network code so simple, and I find it quite a shame that we can’t take advantage of this on the client as well. I hold out hope that Swift’s async/await implementation will be the answer to some of these woes.

We don’t know if Beacon is a viable product for the broader market, but we think it’ll be a lot of fun at WWDC and we look forward to organizing lots of ad hoc events with all of you awesome people. Find me on the app, and let’s hang out.

This is a post in a series on Advanced Coordinators. If you haven’t read the original post or the longer follow-up, make sure to check those out first. The series will cover a few advanced coordinator techniques, gotchas, FAQs, and other trivia.

When working with coordinators, all flow events should travel through the coordinator. Any time a view controller intends to change flow state, it informs the coordinator, and the coordinator can handle side effects and make decisions about how to proceed.

There is one glaring exception to this rule: when a navigation controller navigates “back”. That back button is not a traditional button, so you can’t add handlers to it to send messages up to the coordinator. Further, its associated behavior is performed directly by the navigation controller itself. If you need to do any work in a coordinator when a view controller is dismissed, you need some way to hook into that behavior.

While there are other less common examples, the primary use case is when you have a sub-flow that takes place entirely within the context of another navigation controller. Coordinators typically own one navigation controller exclusively, but sometimes, a subset of the flow within a navigation controller stack needs to be broken out into its own coordinator, usually for reuse purposes. That separate coordinator shares the navigation controller with its parent coordinator. If the user enters the child coordinator (entering the sub-flow) and then taps the back button, that child coordinator needs to be cleaned up. If it’s not cleaned up, that coordinator’s memory will effectively be leaked. Further, if they enter that flow a second time, we might have two of the same coordinator, potentially reacting to similar events and executing code twice.

So, we need a way to know that the navigation’s back button has been tapped. The UINavigationControllerDelegate is the easiest way to get access to this event. (You could subclass or swizzle, but let’s not.)

There are a few ways to use this delegate to solve this problem, and I’d like to highlight two of them. The first is Bryan Irace’s approach to tackling this problem. He makes a special view controller called NavigationController that allows you to push coordinators in addition to pushing view controllers.

I’ll elide some of the details and give an overview of the approach, but if you want the full details, I recommend reading his whole post. The main thing to note in his code is:

final class NavigationController: UIViewController {

	// ...

	private let navigationController: UINavigationController = //..
	
	private var viewControllersToChildCoordinators: [UIViewController: Coordinator] = [:]
  
	// ...

}

This shows the way that this class works. When you add a new coordinator to this class, it creates an entry in this dictionary. The entry maps the root view controller of a coordinator to the coordinator itself. Once you have that, you can conform to the UINavigationControllerDelegate.

extension NavigationController: UINavigationControllerDelegate {    
	func navigationController(_ navigationController: UINavigationController,
		didShow viewController: UIViewController, animated: Bool) {
		// ...
	}
}

At that point, if the popped view controller is found in the coordinator dictionary, it will remove it, allowing it to correctly deallocate.

There’s a lot to like about this approach. Coordinator deallocation is handled automatically for you, when you use this class instead of a UINavigationController. However, it comes with a few downsides, as well. My primary concern is that the NavigationController class, which is a view controller, knows about and has to deal with coordinators. This is tantamount to a view having a reference to a view controller.

I think there are some goopy bits on the inside of UIKit where views know about their view controllers. I haven’t seen the source code, but the stack trace for -viewDidLayoutSubviews suggests that there’s some voodoo going on here. Sometimes, components in a library may be coupled together more tightly, in order to make the end user’s code cleaner. This is the tradeoff that Bryan is making here.

If you don’t want to make that tradeoff, you can bring the navigation controller delegate methods to the parent coordinator, where they can live with all the other flow events. This is my preference. By making the coordinator into the delegate of the navigation controller, you can maintain the structure of the coordinator: namely that it is the parent of the navigation controller. When you get the delegate messages that a view controller was popped off, you can manually clean up any coordinators that need to be dealt with.

extension Coordinator: UINavigationControllerDelegate {    
	func navigationController(_ navigationController: UINavigationController,
		didShow viewController: UIViewController, animated: Bool) {
		
		// ensure the view controller is popping
		guard
			let fromViewController = navigationController.transitionCoordinator?.viewController(forKey: .from),
			!navigationController.viewControllers.contains(fromViewController) else {
				return
		}
		
		// and it's the right type
		if fromViewController is FirstViewControllerInCoordinator {
			// deallocate the relevant coordinator
		}
	}
}

This approach is slightly more manual, with the up- and downsides that come with that: more control and more boilerplate. If you don’t like the direct type check, you can replace it with a protocol.

You’ll also need to re-enable the interactivePopGestureRecognizer by conforming to UIGestureRecognizerDelegate and returning true for the shouldRecognizeSimultaneouslyWithGestureRecognizer delegate method.
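That conformance is small. A sketch, assuming the coordinator has been assigned as the interactivePopGestureRecognizer’s delegate:

	extension Coordinator: UIGestureRecognizerDelegate {
		func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
			shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
			return true
		}
	}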

Both approaches are good ways of handling decommissioned coordinators and ensuring that they correctly deallocate, and these techniques are crucial for breaking your subflows out into their own coordinators so they can be reused.

Update: Ian MacCallum provides another approach to this problem. He essentially provides an onPop block for a weak coupling between the coordinator and navigation controller (which he wraps up in an object called a Router). It’s a good approach.

This is a post in a series on Advanced Coordinators. If you haven’t read the original post or the longer follow-up, make sure to check those out first. The series will cover a few advanced coordinator techniques, gotchas, FAQs, and other trivia.

When splitting up the responsibilities of a view controller, I do a curious thing. While I leave reading data (for example, a GET request, or reading from a database or cache) in the view controller, I move writing data (such as POST requests, or writing to a database) up to the coordinator. In this post, I’ll explore why I separate these two tasks.

Coordinators are primarily in charge of one thing: flow. Why sully a beautiful single responsibility object with a second responsibility?

I make this distinction because I think flow is the wrong way to think about this object’s responsibility. The correct responsibility is “handle the user’s action”. The reason to draw this distinction is so that the knowledge of when to “do a thing” (mutate the model) and when to “initiate a flow step” can be removed from the view controller. I don’t want a view controller to know what happens when it passes the user’s action up to a coordinator.

You can imagine a change to your app’s requirements that would make this distinction clear. For example, let’s say you have an app with an authentication flow. The old way the app worked was that the user typed their username and password into one screen, and then the signup request could be fired. Now, the product team wants the user to be able to fill out the profile on the next screen, before firing off the signup request. If you keep model mutation in the view controller and the flow code in the coordinator, you’ll have to make a change to both the view controller and the coordinator to make this work.

It gets even worse if you’re A/B testing this change, or slowly rolling it out. The view controller would need an additional component to tell it how to behave (not just how to present its data), which means either a delegate method back up to the coordinator or another object entirely, which would help it decide if it should inform the coordinator to present the next screen or if it should just post the signup call itself.

If you keep model mutation and flow actions together, the view controller doesn’t have to change at all. The view controller gets to mostly act like it’s in the view layer, and the coordinator, with its fullness of knowledge, gets to make the decision about how to proceed.

Another example: imagine your app has a modal form for posting a message. If the “Close” button is tapped, it should dismiss the modal and delete the draft from the database (which, let’s say, is saved for crash protection). If your designer decides that they want an alert view that asks “Are you sure?” before deleting the draft, your flow and your database mutation are again intertwined. Showing the dialog is presenting a view controller, which is a flow change, and deleting an item from the database is a model mutation. Keeping these responsibilities in the same place will ease your pain when you have to make changes to your app.

One additional, slightly related note: the coordinator’s mutative effect on the model should happen via a collaborator. In other words, your coordinator shouldn’t touch URLSession directly, nor any database handle, like an NSManagedObjectContext. If you like thinking about view models, you might consider a separation between read-only view models (which you could call a Presenter) and write-only view models (which you could call an Interactor or a Gateway). Read-only view models can go down into the view controller, and write-only view models stay at the coordinator level.

The line between model mutation and flow step is thinner than you’d expect. By treating those two responsibilities as one (responding to user action), you can make your app easier to change.


This is going to be the first post in a series on Advanced Coordinators. If you haven’t read the original post or the longer follow-up, make sure to check those out first. The series will cover a few advanced coordinator techniques, gotchas, FAQs, and other trivia. Let’s dig in.

I’m often asked how to migrate an app from using storyboards or per-view controller code-based flow to an app using coordinators. When done right, this refactoring can be done piecemeal. You will continuously be able to deploy your app, even if the refactoring isn’t complete.

To achieve this, the best thing to do is start from the root, which for coordinators is called the “app coordinator”. The app delegate holds on to the app coordinator, which is the coordinator that sets up all the view controllers for your app’s launch.
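A minimal sketch of that root setup might look like this (names are illustrative). The key detail is that the app delegate keeps a strong reference to the app coordinator, so it lives as long as the app does:

```swift
import UIKit

final class AppCoordinator {
    private let window: UIWindow

    init(window: UIWindow) {
        self.window = window
    }

    func start() {
        // Set up whatever the app shows at launch.
        window.rootViewController = UINavigationController()
        window.makeKeyAndVisible()
    }
}

final class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?
    var appCoordinator: AppCoordinator? // strong reference keeps it alive

    func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
    ) -> Bool {
        let window = UIWindow(frame: UIScreen.main.bounds)
        self.window = window
        let coordinator = AppCoordinator(window: window)
        self.appCoordinator = coordinator
        coordinator.start()
        return true
    }
}
```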

To understand why we start from the root of the app, consider the opposite. If we started from some leaf flow (like, say, a CheckoutCoordinator), then something needs to maintain a strong reference to the coordinator so that it doesn’t deallocate. If the coordinator deallocates, none of its code can run. So, deep in an app, if we create a coordinator, something will have to hold on to it.

There are two ways to prevent this deallocation. The first option is to make a static reference. Because there will likely only ever be one CheckoutCoordinator, it’ll be easy to stuff it into a global variable. While this works, it isn’t an ideal choice, since globals hinder testability and flexibility. The second option is to have the presenting view controller maintain a reference to the coordinator. This will force a little complexity onto the presenting view controller, but will allow us to remove more complexity from all the view controllers that are managed by that coordinator. However, this relationship is fundamentally flawed. View controllers are usually “children” to coordinators, and when programming, children shouldn’t know who their parents are. I would liken this to a UIView having a reference to a UIViewController: it shouldn’t happen.

If you have a situation where you’ve decided that you absolutely must start with some child flow in your app, then you can make it work with one of the two methods above. However, if you have the power to start from the root, that’s my recommendation.
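For contrast, once you do start from the root, the usual retention pattern falls out naturally: each parent coordinator keeps its child coordinators alive in an array, and releases them when their flow finishes. A bare-bones sketch (illustrative, not a complete base class):

```swift
class Coordinator {
    private(set) var childCoordinators: [Coordinator] = []

    func addChild(_ coordinator: Coordinator) {
        childCoordinators.append(coordinator)
    }

    func removeChild(_ coordinator: Coordinator) {
        // Identity comparison: remove exactly this child instance.
        childCoordinators.removeAll { $0 === coordinator }
    }
}
```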

One other benefit to starting from the root is that the authentication flow is often close to the root of the app. Authentication is a great flow to isolate away into its own object, and a nice testbed for proving coordinators in your app.

Once you’ve moved the root view controller of the app to your AppCoordinator, you can commit/pull request/code review/etc the code. Because every other view controller transition continues to work, the app will still be fully functional in this halfway state. At this point, working one-by-one, you can start to move more view controller transitions over to the coordinator. After each “flow step” is moved to your coordinator, you can commit or make a pull request, since the app will continue to work. Like the best refactorings, each of these steps are mostly just moving code around, sometimes creating new coordinators as needed.

Once all of your transitions have been moved over to coordinators, you can do further refactorings, like separating iPhone and iPad coordinators into individual objects (instead of one coordinator that switches on some state), making child flows reusable, and better dependency injection, all of which are enabled by your new architecture.

Swift is commonly described as a “safe” language. Indeed, the About page of swift.org says:

Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns.

and

  • Safe. The most obvious way to write code should also behave in a safe manner. Undefined behavior is the enemy of safety, and developer mistakes should be caught before software is in production. Opting for safety sometimes means Swift will feel strict, but we believe that clarity saves time in the long run.

  • Fast. Swift is intended as a replacement for C-based languages (C, C++, and Objective-C). As such, Swift must be comparable to those languages in performance for most tasks. Performance must also be predictable and consistent, not just fast in short bursts that require clean-up later. There are lots of languages with novel features — being fast is rare.

  • Expressive. Swift benefits from decades of advancement in computer science to offer syntax that is a joy to use, with modern features developers expect. But Swift is never done. We will monitor language advancements and embrace what works, continually evolving to make Swift even better.

For example, when working with things like the Optional type, it’s clear how Swift increases safety. Before, you would never know which variables could be null and which couldn’t. With this new nullability information, you’re forced to handle the null case explicitly. When working with these “nullable” types, you can opt to crash, usually using an operator that involves an exclamation point (!). What is meant by safety here is apparent. It’s a seatbelt that you can choose to unbuckle, at your own risk.
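The seatbelt metaphor in miniature, using a made-up value:

```swift
let maybeName: String? = nil

// Buckled: handle the nil case explicitly.
if let name = maybeName {
    print("Hello, \(name)")
} else {
    print("No name provided")
}

// Unbuckled: force-unwrap with `!`, which traps at runtime if the value is nil.
// let name = maybeName! // would crash here
```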

However, in other cases, the safety seems to be lacking. Let’s take a look at an example. If we have a dictionary, grabbing the value for some given key returns an optional:

let person: [String: String] = //...
type(of: person["name"]) // => Optional<String>

But if we do the same with an array, we don’t get an optional:

let users: [User] = //...
type(of: users[0]) // => User

Why not? The array could be empty. If the users array were empty, the program would have no real option but to crash. That hardly seems safe. I want my money back!
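(Worth noting: if you do want optional-returning array access, it’s easy to add yourself. This “safe” subscript is a common community extension, not part of the standard library.)

```swift
extension Array {
    subscript(safe index: Int) -> Element? {
        // Return nil instead of trapping when the index is out of bounds.
        indices.contains(index) ? self[index] : nil
    }
}

let names = ["alice", "bob"]
names[safe: 0] // Optional("alice")
names[safe: 5] // nil
```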

Well, okay. Swift has an open development process. Perhaps we can suggest a change to the swift evolution mailing list, and—

Nope, that won’t work either. The “commonly rejected” proposals page in the swift-evolution GitHub repo says that they won’t accept such a change:

  • Make Array<T> subscript access return T? or T! instead of T: The current array behavior is intentional, as it accurately reflects the fact that out-of-bounds array access is a logic error. Changing the current behavior would slow Array accesses to an unacceptable degree. This topic has come up multiple times before but is very unlikely to be accepted.

What gives? The stated reason is that speed is too important in this particular case. But referring back to the About page linked above, “safe” is listed as a description of the language before “fast”. Shouldn’t safety be more important than speed?

There is a fundamental tension here, and the resolution lies in how the word “safe” is defined. While the common understanding of “safe” is more or less “doesn’t crash”, the Swift core team usually uses the same word to mean “will never unintentionally access incorrect memory”.

In this way, Swift’s Array subscript is “safe”. It’ll never return data in memory beyond the bounds allocated for the array itself. It will crash before giving you a handle on memory that doesn’t contain what it should. In the same way that the Optional type prevents whole classes of bugs (null dereferencing) from existing, this behavior prevents a different class of bugs (buffer overflows) from existing.

You can hear Chris Lattner make this distinction at 24:39 in his interview with ATP:

We said the only way that this can make sense in terms of the cost of the disruption to the community is if we make it a safe programming language: not “safe” as in “you can have no bugs,” but “safe” in terms of memory safety while also providing high performance and moving the programming model forward.

Perhaps “memory-safe” is a better term than just “safe”. The idea is that, while some application programmers might prefer getting back an optional instead of trapping on out-of-bounds-array access, everyone can agree that they’d prefer to crash their program rather than let it continue with a variable that contains invalid data, a variable that could potentially be exploited in a buffer overflow attack.

While this second tradeoff (crashing instead of allowing buffer overflows) may seem obvious, some languages don’t give you this guarantee. In C, accessing an array out-of-bounds gives you undefined behavior, meaning that anything could happen, depending on the implementation of the compiler that you were using. Especially in cases when the programmer can quickly tell that they made a mistake, such as with out-of-bounds array access, the Swift team has shown that they feel like this is an acceptable place to (consistently!) crash, instead of returning an optional, and definitely instead of returning junk memory.

Using this definition of “safe” also clarifies what the “unsafe” APIs are designed for. Because they muck about in memory directly, the programmer herself has to take special care to ensure that she’ll never allow access to invalid memory. This is extremely hard, and even experts get it wrong. For an interesting read on this topic, check out Matt Gallagher’s post on bridging C to Swift in a safe fashion.

Swift and the core team’s definition of “safe” may not line up 100% with yours, but they do prevent classes of bugs so that programmers like you don’t have to think about them day-to-day. It can often help to replace their usage of “safe” with “memory safe” to help understand what they mean.