Vapor’s JSON handling leaves something to be desired. Pulling something out of the request’s JSON body looks like this:

var numberOfSpots: Int?

init(request: Request) {
	self.numberOfSpots = request.json?["numberOfSpots"]?.int
}

There are a lot of things I don’t like about this code.

  1. The .int hanging off the end is extraneous: the type system already knows self.numberOfSpots is an Int; ideally, I wouldn’t have to tell it twice.
  2. The optional situation is out of control. The json property might not exist on the request, the "numberOfSpots" key might not exist in the JSON, and the value might not be an Int. Each of those branches is represented by an Optional, which is flattened using the optional chaining operator. At the end of the expression, if the value is nil, there’s no way to know which one of the components failed.
  3. At the end of the ridiculous optional chain, the resulting numberOfSpots value must be optional. If I need the numberOfSpots property to be required, I need to add an extra variable and a guard.

     guard let numberOfSpots = request.json?["numberOfSpots"]?.int else {
         throw Abort(status: .badRequest, reason: "Missing 'numberOfSpots'.")
     }
     self.numberOfSpots = numberOfSpots
    

    Needless to say, this is bad code made worse. The body of the error doesn’t contain any information besides the key “numberOfSpots”, so there’s a little more duplication there, and that error isn’t even accurate in many cases. If the json property of the request is nil, that means that either the Content-Type header was wrong or that the JSON failed to parse, neither of which are communicated by the message “Missing ‘numberOfSpots’.” If the “numberOfSpots” key was present, but stored a string (instead of an int), the .int conversion would fail, resulting in an optional, and the error message would be equally useless.

Probably more than half of the requests in the Beacon API have JSON bodies to parse and dig values out of, so this is an important thing to get right.

The broad approach here is to follow the general model for how we parse JSON on the client. We can use type inference to deal with the extraneous conversions, and errors instead of optionals.

Let’s look at the errors first. We’ve discussed three possible errors: missing JSON, missing keys, and mismatched types. Perfect for an error enum:

enum JSONError: AbortError {

	var status: Status {
		return .badRequest
	}
	
	case missingJSON
	case missingKey(keyName: String)
	case mismatchedType(keyName: String, expectedType: String, actualType: String)
	
	var reason: String {
		switch self {
		case .missingJSON:
			return "The endpoint requires a JSON body and a \"Content-Type\" of \"application/json\"."
		case let .missingKey(keyName):
			return "This endpoint expects the JSON key '\(keyName)', but it wasn't present."
		case let .mismatchedType(keyName, expectedType, actualType):
			return "This endpoint expects the JSON key '\(keyName)'. It was present, but did not have the expected type '\(expectedType)'. It had type '\(actualType)'."
		}
	}
}

Once the errors have been laid out, we can begin work on the rest of this implementation. Using a similar technique to the one laid out in Decoding JSON in Swift, we can begin to build things up. (I call it NiceJSON because Vapor provides a json property on the Request, and I’d like to not collide with that.)

class NiceJSON {
	let json: JSON?

	public init(json: JSON?) {
		self.json = json
	}

	public func fetch<T>(_ key: String) throws -> T {
		// ...
	}

}

However, here, we run into the next roadblock. I typically store JSON on the client as a [String: Any] dictionary. In Vapor, it’s stored as a StructuredData, which is an enum with one of many cases: .number, .string, .object, .bool, .date, and so on.

While this is strictly more type-safe (a JSON object can’t store any values that aren’t representable in one of those basic forms — even though .date and .bytes are cases of StructuredData, ignore them for now), it stands in the way of this technique. You need a way to bridge between compile-time types (like T and Int), and run-time types (like knowing to call the computed property .int). One way to handle this is to check the type of T precisely.

public func fetch<T>(_ key: String) throws -> T {

	guard let json = self.json else { throw JSONError.missingJSON }
	
	guard let untypedValue = json[key] else { throw JSONError.missingKey(keyName: key) }
	
	if T.self == Int.self {
		guard let value = untypedValue.int else {
			throw JSONError.mismatchedType(keyName: key, expectedType: "Int", actualType: String(describing: untypedValue))
		}
		return value as! T
	}
	// handle bools, strings, arrays, and objects
}

While this works, it has one quality that I’m not crazy about. When you access the .int computed property, if the case’s associated value isn’t an Int but can be coerced into an Int, it will be. For example, if the consumer of the API passes the string “5”, it’ll be silently converted into a number. (Strings have it even worse: numbers are converted into strings, boolean values become the strings "true" and "false", and so on. You can see the code that I’m referring to here.)

I want the typing to be a little stricter. If the consumer of the API passes me a number and I want a string, that should be a .mismatchedType error. To accomplish this, we need to destructure Vapor’s JSON into a [String: Any] dictionary. Digging around the vapor/json repo a little, we can find code that lets us do this. It’s unfortunately marked as internal, so you have to copy it into your project.

extension Node {
	var backDoorToRealValues: Any {
		return self.wrapped.backDoorToRealValues
	}
}

extension StructuredData {
	internal var backDoorToRealValues: Any {
		switch self {
		case .array(let values):
			return values.map { $0.backDoorToRealValues }
		case .bool(let value):
			return value
		case .bytes(let bytes):
			return bytes
		case .null:
			return NSNull()
		case .number(let number):
			switch number {
			case .double(let value):
				return value
			case .int(let value):
				return value
			case .uint(let value):
				return value
			}
		case .object(let values):
			var dictionary: [String: Any] = [:]
			for (key, value) in values {
				dictionary[key] = value.backDoorToRealValues
			}
			return dictionary
		case .string(let value):
			return value
		case .date(let value):
			return value
		}
	}
}

The code is pretty boring, but it essentially converts from Vapor’s JSON type to a less well-typed (but easier to work with) object. Now that we have this, we can write our fetch method:

var dictionary: [String: Any]? {
	return json?.wrapped.backDoorToRealValues as? [String: Any]
}

public func fetch<T>(_ key: String) throws -> T {
	guard let dictionary = dictionary else {
		throw JSONError.missingJSON
	}
	
	guard let fetched = dictionary[key] else {
		throw JSONError.missingKey(keyName: key)
	}
	
	guard let typed = fetched as? T else {
		throw JSONError.mismatchedType(keyName: key, expectedType: String(describing: T.self), actualType: String(describing: type(of: fetched)))
	}
	
	return typed
}

It’s pretty straightforward to write implementations of fetchOptional(_:), fetch(_, transformation:), and the other necessary functions. I’ve gone into detail on them in the Decoding JSON in Swift post and in the Parser repo for that post, so I won’t dwell on those implementations here.
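To give a sense of their shape, here is a standalone sketch of those variants. The backing store is reduced to a plain [String: Any] (so the snippet compiles without Vapor), and the exact signatures are my assumptions; see the linked post and repo for the real implementations.

```swift
enum JSONError: Error {
	case missingJSON
	case missingKey(keyName: String)
	case mismatchedType(keyName: String, expectedType: String, actualType: String)
}

// Stand-in for NiceJSON, with Vapor's JSON replaced by [String: Any].
struct SimpleJSON {
	let dictionary: [String: Any]?

	func fetch<T>(_ key: String) throws -> T {
		guard let dictionary = dictionary else { throw JSONError.missingJSON }
		guard let fetched = dictionary[key] else { throw JSONError.missingKey(keyName: key) }
		guard let typed = fetched as? T else {
			throw JSONError.mismatchedType(keyName: key, expectedType: String(describing: T.self), actualType: String(describing: type(of: fetched)))
		}
		return typed
	}

	// Absent keys become nil, but a present key with the wrong type still throws.
	func fetchOptional<T>(_ key: String) throws -> T? {
		guard let dictionary = dictionary else { throw JSONError.missingJSON }
		guard dictionary[key] != nil else { return nil }
		let value: T = try fetch(key)
		return value
	}

	// Fetch a raw value, then run it through a transformation into a richer type.
	func fetch<T, U>(_ key: String, transformation: (T) throws -> U) throws -> U {
		return try transformation(fetch(key))
	}
}
```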

For the final piece, we need a way to access our new NiceJSON on a request. For that, I added a computed property to the request:

extension Request {
    public var niceJSON: NiceJSON {
        return NiceJSON(json: json)
    }
}

This version of the code creates a new NiceJSON each time you access the property, which can be optimized a little bit by constructing it once and sticking it in the storage property of the Request.
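As a sketch of that optimization, here is the caching pattern with a stand-in Request class. Vapor 2’s real Request gets a storage dictionary for per-request state; the details here are assumptions, and NiceJSON is reduced to a [String: Any] wrapper for illustration.

```swift
// Stand-in for Vapor's Request, which exposes a storage dictionary
// for arbitrary per-request state.
final class Request {
	var storage: [String: Any] = [:]
	var json: [String: Any]? = nil
}

// Simplified stand-in for NiceJSON.
final class NiceJSON {
	let json: [String: Any]?
	init(json: [String: Any]?) { self.json = json }
}

extension Request {
	var niceJSON: NiceJSON {
		// Construct the NiceJSON once, then reuse the cached instance.
		if let cached = storage["niceJSON"] as? NiceJSON {
			return cached
		}
		let created = NiceJSON(json: json)
		storage["niceJSON"] = created
		return created
	}
}
```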

Finally, we can write the nice code that we want at the call site.

var numberOfSpots: Int

init(request: Request) throws {
	self.numberOfSpots = try request.niceJSON.fetch("numberOfSpots")
}

This code provides a non-optional value with no duplication, and it generates descriptive errors.

There’s one last gotcha that I want to go over: numbers in JSON. As of Vapor 2, all JSON numbers are stored as Double values. This means fetching numbers will only work if you fetch them as a Double. This doesn’t appear to be documented anywhere, so I’m not sure how much we should rely on it, but it appears to currently work this way. I think the reason for it is NSNumber subtyping weirdness. On Mac Foundation, numbers in JSON are stored in the NSNumber type, which can be cast to a Double, Bool, Int, or UInt. Because of the different runtime, that stuff doesn’t work the same way in Linux Foundation, so everything is stored in the least common denominator format, Double, which can (more or less) represent Int and UInt types.

I have a small workaround for this: at the top of the fetch method, add a special case for Int:

public func fetch<T>(_ key: String) throws -> T {
    if T.self == Int.self {
        return try Int(self.fetch(key) as Double) as! T
    }
    // ... the rest of the method continues as before
}

Bool works correctly without having to be caught in a special way, and you shouldn’t use UInt types regularly in your code anyway. The Swift documentation’s note on UInt is copied below for posterity and to guard against link rot:

Use UInt only when you specifically need an unsigned integer type with the same size as the platform’s native word size. If this isn’t the case, Int is preferred, even when the values to be stored are known to be nonnegative. A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference.

NiceJSON is a small abstraction that lets you work with JSON in a clean way, without having to litter your code with guards, expectations, and hand-written errors about the presence of various keys in the JSON body.

Sourcery is a code generation tool for Swift. While we’ve talked about code generation on the podcast, I haven’t really talked much about it on this blog. Today, I’d like to go in-depth on a concrete example of how we’re using Sourcery in practice for an app.

The app in question uses structs for its model layer. The app is mostly read-only, and data comes down from JSON, so structs work well. However, we do need to persist the objects so they load faster the next time, and so we need NSCoding conformance.

Swift 4 will bring Codable, a new protocol that supports JSON encoding and decoding, as well as NSCoding. Using Codable with NSKeyedArchiver is a little different than you’re used to, but it basically works. I’ve written up a small code sample here that you can drop into a playground to test. While that will obviate this specific use case for codegen eventually, the technique is still useful in the abstract. The new Codable protocol works by synthesizing encoding and decoding implementations for your types, and until we get access to this machinery directly, Sourcery is the best way to steal this power for ourselves. (Update, August 2018: We have moved away from this specific approach and towards Codable. It’s pretty good.)

To persist structs, I’m using the technique that I lay out in this blog post. Essentially, each struct that needs to be encodable will get a corresponding NSObject wrapper that conforms it to NSCoding. If you haven’t read that post, now is a good time. The background in that post is necessary for the approach detailed here.

The technique in that blog post somewhat pedantically subscribes to the single responsibility principle. One type (the struct) stores the data in memory, and another type (the NSObject wrapper) adds the conformance to NSCoding. The downside to this separation is that you have to maintain a second type: if you add a new property to one of your structs, you need to add it to the init(coder:) and encode(with:) methods manually. The upside is that the separate type can be really easily generated.

This is where Sourcery comes in. With Sourcery, you use a templating language called Stencil to define templates. When the app is built, one of the build phases “renders” these templates into actual Swift code that is then compiled into your app. Other blog posts go into detail about how to set Sourcery up, so I won’t go into detail here, except to say that we check the Sourcery binary into git (so that everyone is on the same version) and the generated source files as well. Like CocoaPods, it’s just easier if those files are checked in.

Let’s discuss the actual technique. First, you need a protocol called AutoCodable.

protocol AutoCodable { }

This protocol doesn’t have anything in its definition — it purely serves as a signal to Sourcery that conforming types should have an encoder generated for them.
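Opting a model in is then a one-line conformance. For example, with a hypothetical Coordinate model (repeating the protocol here so the snippet stands alone):

```swift
protocol AutoCodable { }

// Adding the conformance is all it takes; at build time, Sourcery
// will generate an EncodableCoordinate wrapper for this type.
struct Coordinate: AutoCodable {
	let latitude: Double
	let longitude: Double
}
```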

In a new file called AutoCodable.stencil, you can enumerate the objects that conform to this protocol.

{% for type in types.implementing.AutoCodable %}
	
// ...

{% endfor %}

Inside this for loop, we have access to a variable called type that has various properties describing the type we’re working with on this iteration of the loop.

Inside the for loop, we can begin generating our code:

class Encodable{{ type.name }}: NSObject, NSCoding {
    
    var {{ type.name | lowerFirstWord }}: {{ type.name }}?
    
    init({{ type.name | lowerFirstWord }}: {{ type.name }}?) {
        self.{{ type.name | lowerFirstWord }} = {{ type.name | lowerFirstWord }}
    }
	 
    required init?(coder decoder: NSCoder) {
        // ...
    }
    	    
    func encode(with encoder: NSCoder) {
        // ...
    }
}

Sourcery’s templates mix Swift code (regular code) with stencil code (meta code, code that writes code). Anything inside the double braces ({{ }}) will be printed out, so

class Encodable{{ type.name }}: NSObject, NSCoding

will output something like

class EncodableCoordinate: NSObject, NSCoding

Stencil and Sourcery also provide useful “filters”, like lowerFirstWord. That filter turns an upper-camel-case identifier into a lower-camel-case identifier. For example, it will convert DogHouse to dogHouse.

Thus, the line

var {{ type.name | lowerFirstWord }}: {{ type.name }}?

converts to

var coordinate: Coordinate?

which is exactly what we are after.

At this point, we can run our app, have the build phase generate our code, and take a look at the generated AutoCodable.generated.swift file to ensure everything is generating correctly.

Next, let’s take a look at the init(coder:) function that we will have to generate. This is tougher. Let’s lay out the groundwork:

required init?(coder decoder: NSCoder) {
    {% for variable in type.storedVariables %}
    
    // ...
    
    {% endfor %}
    
    {{ type.name | lowerFirstWord }} = {{ type.name }}(
        {% for variable in type.storedVariables %}
        {{ variable.name }}: {{ variable.name }}{% if not forloop.last %},{% endif %}
        {% endfor %}
     )

}

We will loop through all of the variables in order to pull something useful out of the decoder. The last 4 lines here use an initializer to actually initialize the type from all the variables we will create. It will generate code something like:

coordinate = Coordinate(
    latitude: latitude,
    longitude: longitude
)

This corresponds to the memberwise initializer that Swift provides for structs. While working on this feature, I became worried that the user of this template could become confused if the memberwise initializer disappeared or if they re-implemented it with the parameters in a different order. At the end of the post, we’ll take a look at a second Sourcery template for generating these initializers.

Because our model objects (“encodables”) can contain properties which are also encodables, we have to make sure to convert those to and from their encodable representations. For something like a coordinate, where the only values are two doubles (for latitude and longitude), we don’t need to do much. For more interesting objects, there are three cases of encodables (array, optional, and regular) to handle in a special way, so in total, we have 5 situations to handle: an encodable array, an encodable optional, a regular encodable, a regular optional, and a regular value.

required init?(coder decoder: NSCoder) {
    {% for variable in type.storedVariables %}
    {% if variable.typeName.name|hasPrefix:"[" %} // note, this doesn't support Array<T>, only `[T]`

    // handle arrays

    {% elif variable.isOptional and variable.type.implements.AutoCodable %}

    // handle encodable optionals

    {% elif variable.isOptional and not variable.type.implements.AutoCodable %}
    
    // handle regular optionals
    
    {% elif variable.type.implements.AutoCodable %}

    // handle regular encodables

    {% else %}
    
    // handle regular values

    {% endif %}
    
    {% endfor %}

}

This sets up our branching logic.

Next, let’s look at each of the 5 cases. First, arrays of encodables:

guard let encodable_{{ variable.name }} = decoder.decodeObject(forKey: "{{ variable.name }}") as? [Encodable{{ variable.typeName.name|replace:"[",""|replace:"]","" }}] else { return nil }
let {{ variable.name }} = encodable_{{ variable.name }}.flatMap({ $0.{{ variable.typeName.name|replace:"[",""|replace:"]",""| lowerFirstWord}} })

The first line decodes an array of encodables, and the second line converts the encodables (which represent the NSCoding wrappers) into the actual objects. The code that’s generated looks something like:

guard let encodable_images = decoder.decodeObject(forKey: "images") as? [EncodableImage] else { return nil }
let images = encodable_images.flatMap({ $0.image })

Frankly, the stencil code is hideous. Stencil doesn’t support things like assignment of processed data to new variables, so things like {{ variable.typeName.name|replace:"[",""|replace:"]","" }} (which extracts the element type name from the array’s type name) can’t be factored out. Stencil is designed more for presentation and less for logic, so this omission is understandable; however, it does make the code uglier.

The astute reader will note that I used underscores in a variable name, which is not the typical Swift style. I did this purely out of laziness: I didn’t want to deal with correctly capitalizing the variable name. Ideally, no one will ever look at this code, and it will work transparently in the background.

Next up, optionals. Encodable optionals first:

let encodable_{{ variable.name }} = decoder.decodeObject(forKey: "{{ variable.name }}") as? Encodable{{ variable.unwrappedTypeName }}
let {{ variable.name }} = encodable_{{variable.name}}?.{{variable.name}}

which generates something like

let encodable_image = decoder.decodeObject(forKey: "image") as? EncodableImage
let image = encodable_image?.image

And regular optionals, for things like numbers:

let {{ variable.name }} = decoder.decodeObject(forKey: "{{ variable.name }}") as? {{ variable.unwrappedTypeName }}

And its generated code:

let imageCount = decoder.decodeObject(forKey: "imageCount") as? Int

Next, regular encodables:

guard let encodable_{{ variable.name }} = decoder.decodeObject(forKey: "{{ variable.name }}") as? Encodable{{ variable.typeName }},
let {{ variable.name }} = encodable_{{ variable.name }}.{{ variable.name }} else { return nil }

which generates:

guard let encodable_image = decoder.decodeObject(forKey: "image") as? EncodableImage,
    let image = encodable_image.image else { return nil }

And finally regular values:

guard let {{ variable.name }} = decoder.decodeObject(forKey: "{{ variable.name }}") as? {{ variable.typeName }} else { return nil }

And its generated code:

guard let imageCount = decoder.decodeObject(forKey: "imageCount") as? Int else { return nil }

I won’t go into too much detail on these last few, since they work similarly to the ones above.

Next, let’s quickly look at the encode(with:) method. Because NSCoding is designed for Objective-C and everything is “optional” in Objective-C, we don’t have to handle optionals any differently. This means the number of cases we have to deal with is minimized.

func encode(with encoder: NSCoder) {
    {% for variable in type.storedVariables %}
    {% if variable.typeName.name|hasPrefix:"[" %}
    
    // array

    {% elif variable.type.implements.AutoCodable %}
    
    // encodable
    
    {% else %}
    
    // normal value
    
    {% endif %}
    {% endfor %}
}

The array handling code looks like this:

let encoded_{{ variable.name }} = {{ type.name | lowerFirstWord }}?.{{ variable.name }}.map({ return Encodable{{ variable.typeName.name|replace:"[",""|replace:"]","" }}({{ variable.typeName.name|replace:"[",""|replace:"]",""| lowerFirstWord }}: $0) })
encoder.encode(encoded_{{ variable.name }}, forKey: "{{ variable.name }}")

The encodable handling code looks like this:

encoder.encode(Encodable{{ variable.unwrappedTypeName }}({{ variable.name | lowerFirstWord }}: {{ type.name | lowerFirstWord }}?.{{ variable.name }}), forKey: "{{ variable.name }}")

And the normal properties can be encoded like so:

encoder.encode({{ type.name | lowerFirstWord }}?.{{ variable.name }}, forKey: "{{ variable.name }}")

The stencil code is a bit complex, hard to read, and messy. However, the nice thing is that if you write this repetitive code once, it will generate a lot more repetitive code for you. For example, the whole template is 86 lines, and for all of our models, it generates about 500 lines of boilerplate.

One thing I was surprised to learn is how robust the code is. We wrote this template in a day back in February. In the intervening six months, we haven’t edited this template at all. No modifications have been necessary to support any of the changes we made to our models since then.

The last thing we need is a pair of protocol conformances that show our system how to bridge between the encodables and the structs:

extension Encodable{{ type.name }}: Encoder {
    typealias Value = {{ type.name }}
    
    var value: {{ type.name }}? {
        return {{ type.name | lowerFirstWord }}
    }
}

extension {{ type.name }}: Archivable {
    typealias Encodable = Encodable{{ type.name }}
    
    var encoder: Encodable{{ type.name }} {
        return Encodable{{ type.name }}({{ type.name | lowerFirstWord }}: self)
    }
}

To learn more about the purpose of these extra conformances, you can read the original blog post.
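For reference, the generated extensions above imply protocol shapes roughly like the following. This is my reconstruction, not the exact code from that post, with the NSObject/NSCoding parts dropped for brevity, so treat the details as assumptions:

```swift
// The wrapper side: something that can hand back the wrapped value.
protocol Encoder {
	associatedtype Value
	var value: Value? { get }
}

// The model side: something that can produce its wrapper.
protocol Archivable {
	associatedtype Encodable
	var encoder: Encodable { get }
}

// Hand-written versions of what Sourcery generates for a
// hypothetical Coordinate model:
struct Coordinate {
	let latitude: Double
	let longitude: Double
}

final class EncodableCoordinate: Encoder {
	var coordinate: Coordinate?
	init(coordinate: Coordinate?) { self.coordinate = coordinate }
	var value: Coordinate? { return coordinate }
}

extension Coordinate: Archivable {
	typealias Encodable = EncodableCoordinate
	var encoder: EncodableCoordinate {
		return EncodableCoordinate(coordinate: self)
	}
}
```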

Finally, let’s take a quick look at the Sourcery template for something we could call AutoInitializable. The approach is similar to what we’ve looked at for AutoCodable, with one additional component. Because of Swift’s initialization rules, it only works with structs (classes in Swift can’t add designated initializers in extensions).

{# this template currently only works with structs (no classes) #}
{% for type in types.implementing.AutoInitializable %}

extension {{ type.name }} {
{% if type.kind == "struct" and type.initializers.count == 0 %}
    // no initializer, since there is a free memberwise initializer that Swift gives us
{% else %}
    {{ type.accessLevel }} init({% for variable in type.storedVariables %}{{ variable.name }}: {{ variable.typeName }}{% if variable.annotations.initializerDefault %} = {{ variable.annotations.initializerDefault }}{% endif %}{% if not forloop.last %}, {% endif %}{% endfor %}) {
        {% for variable in type.storedVariables %}
        self.{{ variable.name }} = {{ variable.name }}
        {% endfor %}
    }
{% endif %}
}

{% endfor %}

I won’t belabor this with a line-by-line breakdown, but I will note that it takes advantage of a Sourcery feature called “annotations”. This lets you add additional information to a particular property. In this case, we had certain cases where some properties needed to have initializerDefault values (usually nil), so we were able to add support for that. A Sourcery annotation is declared like so:

// sourcery: initializerDefault = nil
var distance: Distance?

This post lays out one of the more involved uses for code generation. Sourcery gives you the building blocks you need to build complex templates like this one, and it removes the need to maintain onerous boilerplate manually. This particular template code was designed for our needs and may not suit every app. It will ultimately be rendered obsolete by Swift 4’s Codable. However, the example serves as a case study for more complex Sourcery templates. They are flexible without any loss in robustness. I was initially worried that this meta-code would be brittle, breaking frequently, but in practice, these templates haven’t required a single change since they were first written, and they ensure that our encodable representations always stay perfectly up-to-date with any model changes.

One of the selling points of server-side Swift is that you get to use the same tools that you’re used to on the client. One of these tools is Grand Central Dispatch. Dispatch is one of the best asynchronous toolboxes in terms of its API and abstractions, and getting to use it for the Beacon server is an absolute pleasure.

While there’s a broader discussion to be had about actors in Swift in the future, spurred by Chris Lattner’s concurrency manifesto, and perhaps in the future some of the patterns for asynchronous workers will change, for now, Dispatch is the best tool that we have.

On the client, we rely on Dispatch for a few reasons. Chief among them, and notably irrelevant on the server, is getting expensive work off the main thread to keep our UIs responsive. While the server doesn’t have this specific need, services with faster response times (under 250ms per request) are used more often than those that are slower. (Other uses of Dispatch, like synchronization of concurrent tasks and gating access to resources, are similarly valuable on both platforms.)

To make requests faster, a lot of nonessential work can be deferred until after the consumer’s request has been responded to. Examples of this are expensive calculations or external side effects, like sending email or push notifications. Further, some code should be executed on a regular basis: hourly or daily or weekly.

Dispatch is well-suited for these types of tasks, and in this post, we’ll discuss how using Dispatch on the server compares to using it on the client. My experience here is with the framework Vapor, though I suspect much of this advice holds true for other frameworks as well.

Your server app is long running. Some web frameworks tear down the whole process between requests, to clear out any old state. Vapor doesn’t work like this. While each request is responded to in a synchronous fashion, Vapor will respond to multiple requests at the same time. The same instance of the application handles these requests. This means that if you want something to happen, but don’t want to block returning a response for the current request, you can follow your intuition and use DispatchQueue.async to kick that block to another queue for execution, and return the response immediately.

A concrete example of this is firing off a push notification in reaction to some request the user makes: the user makes a new event and the user’s friends need to be notified. If you don’t use Dispatch for this, then the response to the user will be delayed by however long it takes to successfully send the push notification payload to an APNS server. In particular, if you have many push notifications to send, this can greatly delay the user’s request. By deferring this until after the user’s request is responded to, the request will return faster. Once the side effect is deferred, it can take as long as it needs to without affecting the user’s experience.

Lastly, sometimes you want to delay push notifications by a few seconds so that if the user deletes the resource in question, the user’s friends aren’t notified about an object that doesn’t exist. To accomplish this, you can swap async for asyncAfter, just as you would expect from your client-side experience.
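As a sketch of that pattern (scheduleNotification and notifyFriends are hypothetical names, not Beacon’s actual code; the done callback exists only so the behavior is observable):

```swift
import Dispatch

// Hypothetical side effect: sends a push to each of the user's friends.
func notifyFriends(about eventID: Int, done: @escaping () -> Void) {
	// ... talk to APNS here ...
	done()
}

// Called while handling the request. The response can be returned
// immediately; the notification fires after a delay, giving the user
// a window to delete the event before anyone is notified about it.
func scheduleNotification(for eventID: Int,
                          delay: DispatchTimeInterval = .seconds(5),
                          done: @escaping () -> Void = {}) {
	DispatchQueue.global().asyncAfter(deadline: .now() + delay) {
		notifyFriends(about: eventID, done: done)
	}
}
```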

You can’t use the main queue. The “main” queue is blocked, constantly spinning, in order to prevent the program from ending. Unlike in iOS apps, there’s no real concept of a run loop, so the main thread has no way to execute blocks that are enqueued to it. Therefore, every time you want to async some code, you must dispatch it to a shared, concurrent .global() queue or to a queue of your own creation. Because there is no UI code, there’s no reason to prefer the main thread over any other thread.

Thread safety is still important. Vapor handles many requests at once, each on their own global queue. Any global, mutable data needs to be isolated behind some kind of synchronization pattern. While you can use Foundation’s locks, I find isolation queues to be an easier solution to use. They’re slightly more performant than locks, since they enable concurrent reads, and they work exactly the same way on the server as they do on iOS.
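An isolation queue is a concurrent queue where reads are dispatched with sync and writes with the .barrier flag, so reads can overlap but each write runs exclusively. A minimal sketch (SessionStore is a made-up example type):

```swift
import Dispatch

// A tiny thread-safe store built on an isolation queue:
// concurrent reads, exclusive (barrier) writes.
final class SessionStore {
	private var sessions: [String: String] = [:]
	private let isolationQueue = DispatchQueue(label: "session-store-isolation", attributes: .concurrent)

	subscript(token: String) -> String? {
		get {
			// sync reads can run concurrently with each other.
			return isolationQueue.sync { sessions[token] }
		}
		set {
			// A barrier write waits for in-flight reads, then runs alone.
			isolationQueue.async(flags: .barrier) {
				self.sessions[token] = newValue
			}
		}
	}
}
```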

Semaphores are good for making async code synchronous. Other Swift server frameworks work differently, but Vapor expects the responses to requests to be synchronous. Therefore, there’s no sense in using code with completion blocks. APIs like URLSession’s dataTask(with:completionHandler:) can be made synchronous using semaphores:

extension URLSession {
    public func data(with request: URLRequest) throws -> (Data, HTTPURLResponse) {
        var error: Error?
        var result: (Data, HTTPURLResponse)?
        let semaphore = DispatchSemaphore(value: 0)

        self.dataTask(with: request, completionHandler: { data, response, innerError in
            if let data = data, let response = response as? HTTPURLResponse {
                result = (data, response)
            } else {
                error = innerError
            }
            semaphore.signal()
        }).resume()

        semaphore.wait()

        if let error = error {
            throw error
        } else if let result = result {
            return result
        } else {
            fatalError("Something went horribly wrong.")
        }
    }
}

This code kicks off a networking request and blocks the calling thread with semaphore.wait(). When the data task calls the completion block, the result or error is assigned, and we can call semaphore.signal(), which allows the code to continue, either returning a value or throwing an error.

Dispatch timers can perform regularly scheduled work. For work that needs to occur on a regular basis, like database cleanup, maintenance, and events that need to happen at a particular time, you can create a dispatch timer.

let timer = DispatchSource.makeTimerSource()
timer.scheduleRepeating(deadline: .now(), interval: .seconds(60))

timer.setEventHandler(handler: {
	//fired every minute
})

timer.resume()

The only thing of note here is that, like on the client, this timer won’t retain itself, so you have to store it somewhere. Because it’s pretty easy to build your own behaviors on top of something like a Dispatch timer, I think we won’t see job libraries, like Rails’s ActiveJob, have quite the uptake in Swift that they have had in other environments. Nevertheless, there are a few job/worker queue libraries to be found on GitHub.
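Since the timer has to be stored somewhere, one pattern is a small owning wrapper. This sketch uses the newer schedule(deadline:repeating:) spelling of the timer API, and the RecurringTask name is mine:

```swift
import Dispatch

// Owns the timer so it stays retained for as long as the task lives.
final class RecurringTask {
	private let timer: DispatchSourceTimer

	init(interval: DispatchTimeInterval, handler: @escaping () -> Void) {
		timer = DispatchSource.makeTimerSource()
		timer.schedule(deadline: .now(), repeating: interval)
		timer.setEventHandler(handler: handler)
		timer.resume()
	}

	deinit {
		timer.cancel()
	}
}
```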

Dispatch is a useful library with tons of awesome behaviors that can be built with its lower-level primitives. When setting out, I wasn’t sure how it would work in a Linux/server environment, and I’m pleased to report that working with it on the server is about as straightforward as you would want it to be. It’s a real delight to use, and it makes writing server applications that much easier.

This is a post I’ve been trying to write for a long time — literally years — and have struggled for want of the perfect example. I think I’ve finally found the one, courtesy of David James, Tim Vermeulen, Dave DeLong, and Erica Sadun.

Once upon a time, Erica came up with a way for constraints to install themselves. This code was eventually obviated by isActive in UIKit, but the code moved from Objective-C to Swift. It wasn’t perfect or particularly efficient, but it got the job done.

The following code comes from a rote Swift migration. It calculates the nearest common ancestor between two items in a tree of views. This was an early stab at this concept, abandoned after isActive was added.

Back when I worked at Rap Genius, we would often say the first cut is the deepest. Your first attempt at something, while it might not be the cleanest or most polished, involves the most work because it provides the superstructure for what you’re building. Erica’s version is that superstructure. It’s a solution and it works, but it’s ripe for cleaning up.

public extension UIView {

	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with otherView: UIView) -> UIView? {
		
		// Two equal views are each other's NCA
		guard self != otherView else { return self }
		
		// Compute superviews
		let mySuperviews = sequence(first: self.superview, next: { $0?.superview }).flatMap({ $0 })
		let theirSuperviews = sequence(first: otherView.superview, next: { $0?.superview }).flatMap({ $0 })	 
		
		// Check for direct ancestry
		guard !mySuperviews.contains(otherView)
			else { return otherView }
		guard !theirSuperviews.contains(self)
			else { return self }
		
		// Check for indirect ancestry
		for view in mySuperviews {
			guard !theirSuperviews.contains(view)
				else { return view }
		}
		
		// No shared ancestry
		return nil
	}
}

There’s a lot wrong with this code. It’s complex. There are lots of cases to think about. It’s a simple piece of functionality, and yet there are four guards and three different lookups. Simplifying this code will make it easier to read, understand, and maintain.

Perhaps you’re fine with this code in your codebase. The old saying goes, “if it ain’t broke, don’t fix it”. However, my experience has shown me that when there’s an inelegant algorithm like this, there’s a pearl in the center of it that wants to come out. Even a function this long is too hard to keep in your brain all at once. If you can’t understand it all, things slip through the cracks. I’m not confident that there aren’t any bugs in the above code; as a friend said, “every branch is a place for bugs to hide”. (This concept is known more academically as cyclomatic complexity.) And bugs or no bugs, with the power of retrospection, I can now see a few performance enhancements hiding in there, obscured by the current state of the code.

The refactoring process helps eliminate these potential bugs and expose these enhancements by iteratively driving the complex towards the simple. Reducing the algorithm down to its barest form also helps you see how it’s similar to other algorithms in your code base. These are all second-order effects, to be sure, but second-order effects pay off.

To kick off our refactoring, let’s look at the sequence(first:next:) function. Erica’s version started with self.superview, which is an optional value. This creates a sequence of optionals, which then forced Erica to flatMap them out. If we can remove this optionality from the sequence, we can remove the flatMap too. We changed the sequence to start from self instead (which isn’t optional), and added dropFirst() to remove that self:

let mySuperviews = sequence(first: self, next: { $0.superview }).flatMap({ $0 }).dropFirst()

Next, we killed the flatMap({ $0 }), because there are no nils to remove any more:

sequence(first: self, next: { $0.superview }).dropFirst()

This change led to this intermediate state, with plenty of code still left to trim:

public extension UIView {

	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with otherView: UIView) -> UIView? {
		
		// Two equal views are each other's NCA
		guard self != otherView else { return self }
		
		// Compute superviews
		let mySuperviews = sequence(first: self, next: { $0.superview }).dropFirst()
		let theirSuperviews = sequence(first: otherView, next: { $0.superview }).dropFirst()

		// Check for direct ancestry
		guard !mySuperviews.contains(otherView)
			else { return otherView }
		guard !theirSuperviews.contains(self)
			else { return self }

		// Check for indirect ancestry
		for view in mySuperviews {
			guard !theirSuperviews.contains(view)
				else { return view }
		}
		
		// No shared ancestry
		return nil
	}
}
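As an aside, the behavior of sequence(first:next:) and dropFirst() here is easy to verify outside UIKit with a minimal, hypothetical Node class:

```swift
// A minimal stand-in for a view hierarchy (hypothetical; UIKit-free).
final class Node {
    let name: String
    var parent: Node?
    init(_ name: String, parent: Node? = nil) {
        self.name = name
        self.parent = parent
    }
}

let root = Node("root")
let child = Node("child", parent: root)
let leaf = Node("leaf", parent: child)

// Starting the sequence from the node itself keeps every element non-optional...
let chain = sequence(first: leaf, next: { $0.parent }).map({ $0.name })
// ...and dropFirst() then removes the starting node, leaving only the ancestors.
let ancestors = sequence(first: leaf, next: { $0.parent }).dropFirst().map({ $0.name })
```

The first sequence yields the node and all of its ancestors; the second yields the ancestors only, with no optionals and no flatMap in sight.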

At this point, we looked at the indirect ancestry component.

// Check for indirect ancestry
for view in mySuperviews {
	guard !theirSuperviews.contains(view)
		else { return view }
}

A for loop with an embedded test is a signal to use first(where:). The code simplified down to this, removing the loop and test:

if let view = mySuperviews.first(where: { theirSuperviews.contains($0) }) { return view }

Function references make this more elegant, readable, and clear:

if let view = mySuperviews.first(where: theirSuperviews.contains) { return view }

Our function now looks like this:

public extension UIView {

	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with otherView: UIView) -> UIView? {
		
		// Two equal views are each other's NCA
		guard self != otherView else { return self }

		// Compute superviews
		let mySuperviews = sequence(first: self, next: { $0.superview }).dropFirst()
		let theirSuperviews = sequence(first: otherView, next: { $0.superview }).dropFirst()
		
		// Check for direct ancestry
		guard !mySuperviews.contains(otherView)
			else { return otherView }
		guard !theirSuperviews.contains(self)
			else { return self }

		// Check for indirect ancestry
		if let view = mySuperviews.first(where: theirSuperviews.contains) { return view }
		if let view = theirSuperviews.first(where: mySuperviews.contains) { return view }
		
		// No shared ancestry
		return nil
	}
}

After this point, we stepped back and looked at the algorithm as a whole. We realized that if we include self and otherView in their respective superview sequences, the “direct ancestry” check and the “two equal views” check at the top would be completely subsumed by the first(where:) “indirect ancestry” checks. To perform this step, we first dropped the dropFirst():

sequence(first: self, next: { $0.superview })

And then we could kill the “direct ancestry” check:

// Check for direct ancestry
guard !mySuperviews.contains(otherView)
	else { return otherView }
guard !theirSuperviews.contains(self)
	else { return self }

And finally we could remove the first guard as well:

guard self != otherView else { return self }

After deleting them both, the function now looked like this:

public extension UIView {

	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with otherView: UIView) -> UIView? {
		
		// Compute superviews
		let mySuperviews = sequence(first: self, next: { $0.superview })
		let theirSuperviews = sequence(first: otherView, next: { $0.superview })
		
		if let view = mySuperviews.first(where: theirSuperviews.contains) { return view }
		if let view = theirSuperviews.first(where: mySuperviews.contains) { return view }
		
		// No shared ancestry
		return nil
	}
}

That was a major turning point in our understanding of the function. At this point, this code was starting to reveal its own internal structure. Each step clarifies the next potential refactoring to perform, to get closer to the heart of the function. For the next refactoring, Tim realized we could simplify the tail end of the function by applying a nil-coalescing operator:

return mySuperviews.first(where: theirSuperviews.contains) ?? theirSuperviews.first(where: mySuperviews.contains)

But the first test before the nil-coalescing operator already covers all the views in both hierarchies: if mySuperviews shares no view with theirSuperviews, the reverse check can’t find anything either. Because we’re looking for the first intersection between mySuperviews and theirSuperviews, there’s no reason to test both it and its opposite. We can drop everything after the ??:

public extension UIView {
	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with otherView: UIView) -> UIView? {
		let mySuperviews = sequence(first: self, next: { $0.superview })
		let theirSuperviews = sequence(first: otherView, next: { $0.superview })
		
		return mySuperviews.first(where: theirSuperviews.contains)
	}
}

The algorithm has revealed its beautiful internal symmetry now. Very clear intent, very clear algorithm, and each component is simple. It’s now more obvious how to tweak and modify this algorithm. For example,

  • If you don’t want the views self and otherView to be included in the calculation of ancestry, you can restore dropFirst() to the superview sequences.
  • If you want to know if the views have a common ancestor (rather than caring about which ancestor it is), you can replace the first(where:) with a contains(where:).
  • If you want to know all the common ancestors, you could replace the first(where:) with a filter(_:).

With the code in its original state, I couldn’t see before that these kinds of transformations were possible; now, they’re practically trivial.
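For instance, the contains(where:) variant is a one-line change; sketched against a hypothetical, UIKit-free Node type:

```swift
// A minimal stand-in for a view hierarchy (hypothetical; UIKit-free).
final class Node {
    var parent: Node?
    init(parent: Node? = nil) { self.parent = parent }
}

// Replacing first(where:) with contains(where:) answers "is there any
// common ancestor?" rather than "which ancestor is nearest?".
func haveCommonAncestor(_ a: Node, _ b: Node) -> Bool {
    let bChain = Array(sequence(first: b, next: { $0.parent }))
    return sequence(first: a, next: { $0.parent })
        .contains(where: { node in bChain.contains(where: { $0 === node }) })
}
```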

From here, there are two potential routes.

First, there’s a UIView API for determining if one view is a descendant of another, which makes for a super readable solution:

extension UIView {
	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with other: UIView) -> UIView? {
		return sequence(first: self, next: { $0.superview })
			.first(where: { other.isDescendant(of: $0) })
	}
}

The second option is to explore performance. We noticed that theirSuperviews was only used for a contains check. If we wrap that sequence in a Set, existence lookup becomes O(1), and this whole algorithm gets blisteringly fast.

public extension UIView {
	// Return nearest common ancestor between two views
	public func nearestCommonAncestor(with otherView: UIView) -> UIView? {
		let mySuperviews = sequence(first: self, next: { $0.superview })
		let theirSuperviews = Set(sequence(first: otherView, next: { $0.superview }))
		return mySuperviews.first(where: theirSuperviews.contains)
	}
}
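The Set-backed version is easy to verify outside UIKit with a hypothetical Node type, made Hashable (by identity) so it can live in a Set:

```swift
// A minimal stand-in for a view hierarchy (hypothetical; UIKit-free).
final class Node: Hashable {
    var parent: Node?
    init(parent: Node? = nil) { self.parent = parent }
    static func == (lhs: Node, rhs: Node) -> Bool { lhs === rhs }
    func hash(into hasher: inout Hasher) { hasher.combine(ObjectIdentifier(self)) }
}

func nearestCommonAncestor(_ a: Node, _ b: Node) -> Node? {
    let aChain = sequence(first: a, next: { $0.parent })
    // Wrapping the second chain in a Set makes each contains check O(1).
    let bChain = Set(sequence(first: b, next: { $0.parent }))
    return aChain.first(where: bChain.contains)
}
```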

For view hierarchies that are pathologically deep (10,000 or so levels), this solution leaves the other one in the dust. Almost no view hierarchies contain that many layers, so this isn’t really a necessary optimization. However, if it were necessary, it would have been very hard to find without this refactoring process. Once we performed it, it became obvious what to tweak to speed things up.

Thomas Aquinas writes:

Properly speaking, truth resides in the intellect composing and dividing; and not in the senses; nor in the intellect knowing “what a thing is.”

This quote reflects the process of refactoring. If you’re doing it right, you don’t need to understand what the original code actually does. In the best of cases, you won’t even need to compile the code. You can operate on the code, composing and dividing, through a series of transformations that always leave the code in a correctly working state.

Perhaps you could have written the final version of this code from the very start. Perhaps it was obvious to you that this combination of APIs would yield the correct behavior in all cases. I don’t think I could have predicted that the original code would end up as an elegant one-line solution that handles all edge cases gracefully. I definitely couldn’t have predicted that there was a big performance optimization that changes this algorithm from O(n²) to O(n). Refactoring is an iterative process, and continual refinement reveals the code’s true essence.

This article is also available in Chinese.

Part of the promise of Swift is the ability to write simple, correct, and expressive code. Swift’s error system is no exception, and clever usage of it vastly improves the code on the server. Our app Beacon uses Vapor for its API. Vapor provides a lot of the fundamental components to building an API, but more importantly, it provides the extension points for adding things like good error handling yourself.

The crucial fact is that pretty much every function in your server app is marked as throws. At any point, you can throw an error, and that error will bubble all the way through any functions, through the response handler that you registered with the router, and through any registered middlewares.

Vapor typically handles errors by loading an HTML error page. Because Beacon’s server component is a JSON API, we need some middleware that will translate an AbortError (Vapor’s error type, which includes a message and a status code) into usable JSON for the consumer. This middleware is pretty boilerplate-y, so I’ll drop it here without much comment.

public final class JSONErrorMiddleware: Middleware {
    	    
    public func respond(to request: Request, chainingTo next: Responder) throws -> Response {
        do {
            return try next.respond(to: request)
        } catch let error as AbortError {
            let response = Response(status: error.status)
            response.json = try JSON(node: [
                "error": true,
                "message": error.message,
                "code": error.code,
                "metadata": error.metadata,
            ])
            return response
        }
    }
}

In Vapor 1.5, you activate this middleware by adding it to the droplet, which is an object that represents your app.

droplet.middleware.append(JSONErrorMiddleware())

Now that we have a way to present errors, we can start exploring some useful errors. Most of the time when something on the server fails, that failure is represented by a nil where there shouldn’t be one. So, the very first thing I added was the unwrap() function:

struct NilError: Error { }

extension Optional {
    func unwrap() throws -> Wrapped {
        guard let result = self else { throw NilError() }
        return result
    }
}

This function enables you to completely fail the request whenever a value is nil and you don’t want it to be. For example, let’s say you want to find an Event by some id.

let event = Event.find(id)

Unsurprisingly, the type of event is Optional<Event>. Because an event with the given ID might not exist when you call that function, it has to return an optional. However, sometimes this doesn’t make for the best code. For example, in Beacon, if you try to attend an event, there’s no meaningful work we can do if that event doesn’t exist. So, to handle this case, I call unwrap() on the value returned from that function:

let event = try Event.find(id).unwrap()

The type of event is now Event, and if the event doesn’t exist, the function will end early and bubble the error up until it hits the aforementioned JSONErrorMiddleware, ultimately resulting in error JSON for our user.

The problem with unwrap() is that it lacks any context. What failed to unwrap? If this were Ruby or Java, we’d at least have a stack trace and we could figure out what series of function calls led to our error. This is Swift, however, and we don’t have that. The most we can really do is capture the file and line of the faulty unwrap, which I’ve done in this version of NilError.
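One way to capture that context is with #file and #line default arguments, which are filled in at the call site; a sketch of such a NilError (the exact shape of the linked version may differ):

```swift
struct NilError: Error, CustomStringConvertible {
    let file: String
    let line: Int

    init(file: String = #file, line: Int = #line) {
        self.file = file
        self.line = line
    }

    var description: String {
        return "Unexpectedly found nil at \(file):\(line)"
    }
}

extension Optional {
    // Default arguments using #file/#line are evaluated at the call site,
    // so the error records where the failing unwrap() was written.
    func unwrap(file: String = #file, line: Int = #line) throws -> Wrapped {
        guard let result = self else { throw NilError(file: file, line: line) }
        return result
    }
}
```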

In addition, because there’s no context, Vapor doesn’t have a way to figure out what status code to use. You’ll notice that our JSONErrorMiddleware pattern matches on the AbortError protocol only. What happens to other errors? They’re wrapped in AbortError-conformant objects, but the status code is assumed to be 500. This isn’t ideal. While unwrap() works great for quickly getting stuff going, it quickly begins to fall apart once your clients start expecting correct status codes and useful error messages. To this end, we’ll be exploring a few useful custom errors that we built for this project.

Missing Resources

Let’s tackle our missing object first. This request should probably 404, especially if our ID comes from a URL parameter. Making errors in Swift is really easy:

struct ModelNotFoundError: AbortError {
    
    let status = Status.notFound
    	    
    var code: Int {
        return status.statusCode
    }
    	    
    let message: String
    
    public init<T>(type: T.Type) {
        self.message = "\(type) could not be found."
    }
}

In future examples, I’ll leave out the computed code property, since that will always just forward the statusCode of the status.

Once we have our ModelNotFoundError, we can guard and throw with it.

guard let event = Event.find(id) else {
	throw ModelNotFoundError(type: Event.self)
}

But this is kind of annoying to do every time we want to ensure that a model is found. To solve that, we package this code up into an extension on every Entity:

extension Entity {
	static func findOr404(_ id: Node) throws -> Self {
		guard let result = self.find(id) else {
			throw ModelNotFoundError(type: Self.self)
		}
		return result
	}
}

And now, at the call site, our code is simple and nice:

let event = try Event.findOr404(id)

Leveraging native errors on the server yields both more correctness (in status codes and accurate messages) and more expressiveness.

Authentication

Our API, like many others, requires authenticating the user so that some action can be performed on their behalf. To execute this cleanly, we use a middleware to fetch the user from some auth token that the client passes us, and save that user data into the request object. (Vapor includes a handy dictionary on each Request called storage that you can use to store any additional data of your own.) (Also, Vapor includes some authentication and session handling components, but it was easier to write this than to try to figure out how to use Vapor’s built-in thing.)

final class CurrentSession {

	init(user: User? = nil) {
		self.user = user
	}
    
	var user: User?
    
	@discardableResult
	public func ensureUser() throws -> User {
		return user.unwrap()
	}
}

Every request will provide a Session object like the one above. If you want to ensure that a user has been authenticated (and want to work with that user), you can call:

let currentUser = try request.session.ensureUser()

However, this has the same problem as our previous code. If the user isn’t correctly authed, the consumer of this API will see a 500 with a meaningless error about nil objects, instead of a 401 Unauthorized code and a nice error message. We’re going to need another custom error.

struct AuthorizationError: AbortError {
	let status = Status.unauthorized

	var message = "Invalid credentials."
}

Vapor actually has a shorthand for this kind of simple error:

Abort.custom(status: .unauthorized, message: "Invalid credentials.")

Which I used until I needed the error to be its own object, for reasons that will become apparent later.

Our function ensureUser() now becomes:

@discardableResult
public func ensureUser() throws -> User {
	guard let user = user else {
		throw AuthorizationError()
	}
	return user
}

Bad JSON

Vapor’s JSON handling leaves much to be desired. Let’s say you want a string from the JSON body that’s keyed under the name “title”. Look at all these question marks:

let title = request.json?["title"]?.string

At the end of this chain, of course, title is an Optional<String>. Even throwing an unwrap() at the end of this chain doesn’t solve our problem: because of Swift’s optional chaining precedence rules, it will only unwrap the last component of the chain, .string. We can solve this in two ways. First, by wrapping the whole expression in parentheses:

let title = try (request.json?["title"]?.string).unwrap()

or unwrapping at each step:

let title = try request.json.unwrap()["title"].unwrap().string.unwrap()

Needless to say, this is horrible. Each unwrap represents a different error: the first represents a missing application/json Content-Type (or malformed data), the second, the absence of the key, and the third, the expectation of the key’s type. All that data is thrown away with unwrap(). Ideally, our API would have a different error message for each error.

enum JSONError: AbortError {

	var status: Status {
		return .badRequest
	}
	
	case jsonMissing
	case missingKey(keyName: String)
	case typeMismatch(keyName: String, expectedType: String, actualType: String)
}

These cases represent the three different errors from above. We need to add a function to generate a message depending on the case, but that’s really all this needs. We have errors that are a lot more expressive, and ones that help the client debug common errors (like forgetting a Content-Type).
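That message generation might look something like this (a sketch, conforming to plain Error here so it stands alone; the wording of each message is illustrative):

```swift
enum JSONError: Error {
    case jsonMissing
    case missingKey(keyName: String)
    case typeMismatch(keyName: String, expectedType: String, actualType: String)

    // One human-readable message per failure mode.
    var message: String {
        switch self {
        case .jsonMissing:
            return "The request is missing a JSON body. Is the Content-Type header set to application/json?"
        case .missingKey(let keyName):
            return "The JSON body is missing the key '\(keyName)'."
        case .typeMismatch(let keyName, let expectedType, let actualType):
            return "Expected the key '\(keyName)' to be \(expectedType), but found \(actualType)."
        }
    }
}
```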

These errors, combined with NiceJSON (which you can read more about in this post), result in code like this:

let title: String = try request.niceJSON.fetch("title")

Much easier on the eyes. title is also usually an instance variable (of a command) with a pre-set type, so the : String required for type inference can be omitted as well.
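NiceJSON itself isn’t reproduced here, but the shape of such a fetch can be sketched over a plain [String: Any] dictionary (entirely illustrative; the real type wraps Vapor’s JSON):

```swift
enum FetchError: Error {
    case missingKey(String)
    case typeMismatch(key: String, expected: String)
}

extension Dictionary where Key == String, Value == Any {
    // The return type drives the cast, so callers never repeat the type,
    // and each failure mode throws a distinct error.
    func fetch<T>(_ key: String) throws -> T {
        guard let raw = self[key] else {
            throw FetchError.missingKey(key)
        }
        guard let typed = raw as? T else {
            throw FetchError.typeMismatch(key: key, expected: "\(T.self)")
        }
        return typed
    }
}

let body: [String: Any] = ["title": "Picnic", "numberOfSpots": 4]
let title: String = try! body.fetch("title")
let numberOfSpots: Int = try! body.fetch("numberOfSpots")
```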

By making the “correct way” to write code the same as the “nice way” to write code, you never have to make a painful trade-off between helpful error messages or type safety, and short easy-to-read code.

Externally Visible Errors

By default, Vapor will wrap any thrown error in an AbortError. However, many (most!) errors reveal implementation details that users shouldn’t see. For example, the PostgreSQL adapter’s errors reveal details about your choice of database and the structure of your tables. Even NilError includes the file and line of the error, which reveals that the server is built on Swift and is therefore vulnerable to attacks targeted at Swift.

In order to hide some errors and allow others to make it through to the user, I made a new protocol.

public protocol ExternallyVisibleError: Error {
    
    var status: Status { get }
    
    var externalMessage: String { get }
}

Notice that ExternallyVisibleError doesn’t inherit from AbortError. Once you conform your AbortError to this protocol, you have to provide one more property, externalMessage, which is the message that will be shown to users.

Once that’s done, we need a quick modification to our JSONErrorMiddleware to hide the details of the error if it’s not an ExternallyVisibleError:

public func respond(to request: Request, chainingTo next: Responder) throws -> Response {
    do {
        return try next.respond(to: request)
    } catch let error as ExternallyVisibleError {
        let response = Response(status: error.status)
        response.json = try JSON(node: [
            "error": true,
            "message": error.externalMessage,
            "code": error.status.statusCode,
        ])
        return response
    } catch let error as AbortError {
        let response = Response(status: error.status)
        response.json = try JSON(node: [
            "error": true,
            "message": "There was an error processing this request.",
            "code": error.code,
        ])
        return response
    }
}

I also added some code that would send down the AbortError’s message as long as the environment wasn’t .production.

Swift’s errors are a powerful tool that can store additional data, metadata, and types. A few simple extensions to Vapor’s built-in types will enable you to write better code along a number of axes. For me, the ability to write terse, expressive, and correct code is the promise that Swift offered from the beginning, and this compact is maintained on the server as much as it is on the client.

Beacon is built with Swift on the server. Since we have all of the niceties of Swift in this new environment, we can use our knowledge and experience from building iOS apps to build efficient server applications. Today, we’ll look at two examples of working with sequences on the server to achieve efficiency and performance.

Over the network

For its social graph, Beacon needs to find your mutual Twitter followers — that is, the people you follow that follow you back. There’s no Twitter API for this, so we have to get the list of follower IDs and the list of following IDs, and intersect them. The Twitter API batches these IDs into groups of 5,000. While people rarely follow more than 5,000 people, some users on Beacon have a few hundred thousand Twitter followers, so these will have to be batched. Because of these constraints, this problem provides a pretty interesting case study for advanced sequence usage.

We do this on the server instead of the client, because there will be a lot of requests to the Twitter API, and it doesn’t make much sense to perform those on a user’s precarious cellular connection. For our backend, we use the Vapor framework, and Vapor’s request handling is completely synchronous. Because of this, there’s no sense in using completion blocks for network requests. You can just return the result of the network request as the result of your function (and throw if anything goes wrong). For an example, let’s fetch the IDs of the first 5,000 people that someone follows:

let following = try client.send(request: TwitterFollowingRequest())

To perform the batching, the Twitter API uses the concept of cursors. To get the first batch, you can leave off the cursor, or pass -1. Each request returns a new next_cursor, which you give back to Twitter when you want the next batch. This concept of cursors fits nicely into Swift’s free function sequence(state:next:). Let’s examine this function’s signature:

func sequence<T, State>(state: State, next: @escaping (inout State) -> T?) -> UnfoldSequence<T, State>

This function is generic over two types: T and State. We can tell from the signature that we need to provide an initial State as a parameter, and we also provide a closure that takes an inout State and returns an optional T. inout means we can mutate the state, so this is how we update the state for the next iteration of the sequence. The T that we return each time will form our sequence. Returning nil instead of some T ends the sequence.

Because the Fibonacci sequence is the gold standard for stateful sequences, let’s take a look at using sequence(state:next:) to create a Fibonacci sequence:

let fibonacci = sequence(state: (1, 1), next: { (state: inout (Int, Int)) -> Int? in
    let next = state.0 + state.1
    state = (state.1, next)
    return next
})

The state in this case has type (Int, Int) and represents the last two numbers in the sequence. First, we figure out the next number by adding the two elements in the tuple together; then, we update the state variable with the new last two values; finally, we return the next element in the sequence.

(Note that this sequence never returns nil, so it never terminates. It is lazy, however, so none of this code is actually evaluated until you ask for some elements. You can use .prefix(n) to limit to the first n values.)
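For example (repeating the definition so this stands alone), prefix(_:) pulls just the first few values:

```swift
let fibonacci = sequence(state: (1, 1), next: { (state: inout (Int, Int)) -> Int? in
    let next = state.0 + state.1
    state = (state.1, next)
    return next
})

// Nothing runs until elements are requested; prefix(_:) bounds the pull.
let firstFive = Array(fibonacci.prefix(5))
```

Note that because the state starts at (1, 1), this particular formulation begins the sequence at 2 rather than with the leading 1s.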

To build our sequence of Twitter IDs, we start with the state "-1", and build our sequence from there.

let lazyFollowerIDs = sequence(state: "-1", next: { (state) -> [Int]? in

})

We need to send the request in this block, and return the IDs from the result of the request. The request itself looks a lot like the TwitterFollowingRequest from above, except it’s now for followers instead.

let lazyFollowerIDs = sequence(state: "-1", next: { (state) -> [Int]? in
    let result = try? self.client.send(request: TwitterFollowersRequest(cursor: state))
    return result?.ids
})

Right now, this request never updates its state, so it fetches the same page over and over again. Let’s fix that.

let lazyFollowerIDs = sequence(state: "-1", next: { (state) -> [Int]? in
    let result = try? self.client.send(request: TwitterFollowersRequest(cursor: state))
    state = result?.nextCursor ?? "0"
    return result?.ids
})

For the last page, Twitter will return "0" for the next_cursor, so we can use that for our default value if the request fails. (If the request fails, result?.ids will also be nil, so the sequence will end anyway.)

Lastly, let’s put a guard in place to catch the case when Twitter has shown us the last page.

let lazyFollowerIDs = sequence(state: "-1", next: { (state) -> [Int]? in
    guard state != "0" else { return nil }
    let result = try? self.client.send(request: TwitterFollowersRequest(cursor: state))
    state = result?.nextCursor ?? "0"
    return result?.ids
})

(If we added a little more error handling here, it would look almost identical to the actual code that Beacon uses.)

This sequence is getting close. It’s already lazy, like our Fibonacci sequence, so it won’t fetch the second batch of 5,000 items until the 5,001st element is requested. It needs one more big thing: it’s not actually a sequence of IDs yet. It’s still a sequence of arrays of IDs. We need to flatten this into one big sequence. For this, Swift has a function called joined() that joins a sequence of sequences into a big sequence. This function (mercifully) preserves laziness, so if the sequence was lazy before, it’ll stay lazy. All we have to do is add .joined() to the end of our expression.
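The whole shape — cursored state, a terminating guard, and joined() — can be exercised against a hypothetical in-memory “API” standing in for Twitter (all data invented):

```swift
// Pages keyed by cursor; each page carries its ids and the next cursor.
let pages: [String: (ids: [Int], nextCursor: String)] = [
    "-1": (ids: [1, 2, 3], nextCursor: "a"),
    "a": (ids: [4, 5], nextCursor: "0"),
]

let lazyFollowerIDs = sequence(state: "-1", next: { (state: inout String) -> [Int]? in
    // "0" marks the last page, so stop the sequence there.
    guard state != "0" else { return nil }
    let result = pages[state]
    state = result?.nextCursor ?? "0"
    return result?.ids
}).joined()

let allIDs = Array(lazyFollowerIDs)
```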

To get our mutual follows from this lazyFollowerIDs sequence, we need something to intersect the followers and the following. To make this operation efficient, let’s turn the following IDs into a set. This will make contains lookup really fast:

let followingIDSet = Set(following.ids)

We make sure to filter over the lazyFollowerIDs since that sequence is lazy and we’d like to iterate over it only once.

let mutuals = lazyFollowerIDs.filter({ id in followingIDSet.contains(id) })

This reads “keep only the elements from lazyFollowerIDs that can be found in followingIDSet”. Apply a little syntactic sugar magic to this, and you end up with a pretty terse statement:

let mutuals = lazyFollowerIDs.filter(followingIDSet.contains)

Off the disk

A similar technique can be used for handling batches of items from the database.

Vapor’s ORM is called Fluent. In Fluent, all queries go through the Query type, which is type parameterized on T, your entity, e.g., User. Queries are chainable objects, and you can call methods like filter and sort on them to refine them. When you’re done refining them, you can call methods like first(), all(), or count() to actually execute the Query.

While Fluent doesn’t have the ability to fetch in batches, its interface allows us to build this functionality easily, and Swift’s lazy sequence mechanics let us build it efficiently.

We know we’ll need a function on every Query. We don’t know what kind of Sequence we’ll be returning, but we’ll use Sequence<T> as a placeholder for now.

extension Query {
	func inBatches(of batchSize: Int) throws -> Sequence<T> {
		
	}
}

First, we need to know how many items match our query, so we can tell how many batches we’ll be fetching. Because the object we’re inside already represents the query that we’re going to be fetching with, and it already has all the relevant filters and joins, we can just call count() on self, and get the number of objects that match the query.

extension Query {
	func inBatches(of batchSize: Int) throws -> Sequence<T> {
		let count = try self.count()
		
	}
}

Once we have the count, we can use Swift’s stride(from:to:by:) to build a sequence that will step from 0 to our count with a stride of our batchSize.

extension Query {
	func inBatches(of batchSize: Int) throws -> Sequence<T> {
		let count = try self.count()
		stride(from: 0, to: count, by: batchSize)
		
	}
}

Next, we want to transform each step of this stride (which represents one batch) into a set of the objects in question.

extension Query {
	func inBatches(of batchSize: Int) throws -> Sequence<T> {
		let count = try self.count()
		stride(from: 0, to: count, by: batchSize)
			.map({ offset in
				return (try? self.limit(batchSize, withOffset: offset).all()) ?? []
			})
	}
}

Because .all() is a throwing function, we need to handle its error somehow. This will be a lazy sequence, so the map block will get stored and executed later. It is @escaping. This means that we can’t just throw, because we can’t guarantee that we’d be in a position to catch that error. Therefore, we just discard the error and return an empty array if it fails.

If we try to execute this as-is, the map will run instantly and fetch all of our batches at once. Not ideal. We have to add a .lazy to our chain to ensure that that each fetch doesn’t happen until an item from that batch is requested.

extension Query {
	func inBatches(of batchSize: Int) throws -> Sequence<T> {
		let count = try self.count()
		stride(from: 0, to: count, by: batchSize)
			.lazy
			.map({ offset in
				return (try? self.limit(batchSize, withOffset: offset).all()) ?? []
			})
	}
}

The last step here, like the Twitter example, is to call .joined() to turn our lazy sequence of arrays into one big lazy sequence.

extension Query {
	func inBatches(of batchSize: Int) throws -> Sequence<T> {
		let count = try self.count()
		return stride(from: 0, to: count, by: batchSize)
			.lazy
			.map({ offset in
				return (try? self.limit(batchSize, withOffset: offset).all()) ?? []
			})
			.joined()
	}
}

When we run this code, we see that our big sequence chain returns a LazySequence<FlattenSequence<LazyMapSequence<StrideTo<Int>, [T]>>>. This type is absurd. We can see all the components of our sequence chain in there, but we don’t actually care about those implementation details. It would be great if we could erase the type and be left with something simple. This technique is called type erasure, and AnySequence is the type eraser that the Swift standard library provides for exactly this purpose. AnySequence will also become our return type.

extension Query {
    func inBatches(of batchSize: Int) throws -> AnySequence<T> {
		let count = try self.count()
        return AnySequence(stride(from: 0, to: count, by: batchSize)
            .lazy
            .map({ (offset) -> [T] in
                return (try? self.limit(batchSize, withOffset: offset).all()) ?? []
            })
            .joined())
    }
}

We can now write the code we want at the callsite:

try User.query().sort("id", .ascending)
	.inBatches(of: 20)
	.forEach({ user in
		//do something with user
	})

This is reminiscent of Ruby’s find_in_batches or the property fetchBatchSize on NSFetchRequest, which returns a very similar lazy NSArray using the NSArray class cluster.
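You can watch the laziness in action with a self-contained miniature, where a plain array and a fetch counter stand in for the database. Nothing here is Fluent API; it’s just the same stride/lazy/map/joined shape:

```swift
let items = Array(1...10)
var fetches = 0

// Same shape as inBatches(of:), with an array slice standing in for the query.
let batched = AnySequence(stride(from: 0, to: items.count, by: 4)
	.lazy
	.map({ offset -> [Int] in
		fetches += 1  // stands in for one database round-trip
		return Array(items[offset..<min(offset + 4, items.count)])
	})
	.joined())

let first = batched.first(where: { _ in true })
// Only the first batch has been "fetched" at this point: fetches == 1.
// Iterating the rest of the sequence triggers the remaining batches.
```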

This is not the first time I’ve said this, but Swift’s sequence handling is exceptionally robust and fun to work with. Understanding the basics of Swift’s sequences enables you to compose them into solutions for bigger and more interesting problems.

This article is also available in Chinese.

When working with Swift on the server, most of the routing frameworks work by associating a route with a given closure. When we wrote Beacon, we chose the Vapor framework, which works like this. You can see this in action in the test example on their home page:

import Vapor

let droplet = try Droplet()

droplet.get("hello") { req in
    return "Hello, world."
}

try droplet.run()

Once you run this code, visiting localhost:8080/hello will display the text “Hello, world.”.

Sometimes, you also want to return a special HTTP code to signal to consumers of the API that a special action happened. Take this example endpoint:

droplet.post("devices", handler: { request in
	let apnsToken: String = try request.niceJSON.fetch("apnsToken")
	let user = try request.session.ensureUser()
    
	var device = try Device(apnsToken: apnsToken, userID: user.id.unwrap())
	try device.save()
	return try device.makeJSON()
})

(I’ve written more about NiceJSON here, if you’re curious about it.)

This is a perfectly fine request and is similar to code from the Beacon app. There is one problem: Vapor will assume a status code of 200 when you return objects like a string (in the first example in this blog post) or JSON (in the second example). However, this is a POST request and a new Device resource is being created, so it should return the HTTP status code “201 Created”. To do this, you have to create a full response object, like so:

let response = Response(status: .created)
response.json = try device.makeJSON()
return response

which is a bit annoying to have to do for every creation request.

Lastly, endpoints will often have side effects. Especially with apps written in Rails, managing and testing these is really hard, and much ink has been spilled in the Rails community about it. If signing up needs to send out a registration email, how do you stub that while still testing the rest of the logic? It’s a hard thing to do, and if everything is in one big function, it’s even harder. In Beacon’s case, we don’t have many emails to send, but we do have a lot of push notifications. Managing those side effects is important.

Generally speaking, this style of routing, where you use a closure for each route, has been used in frameworks like Flask, Sinatra, and Express. It makes for a pretty great demo, but a project in practice often has complicated endpoints, and putting everything in one big function doesn’t scale.

Going even further, the Rails style of having a giant controller which serves as a namespace for vaguely related methods for each endpoint is borderline offensive. I think we can do better than both of these. (If you want to dig into Ruby server architecture, I’ve taken a few ideas from the Trailblazer project.)

Basically, I want a better abstraction for responding to incoming requests. To this end, I’ve started using an object that I call a Command to encapsulate the work that an endpoint needs to do.

The Command pattern starts with a protocol:

public protocol Command: ResponseRepresentable {

	init(request: Request, droplet: Droplet) throws

	var status: Status { get }

	func execute() throws -> JSON

}

extension Command {

	public func makeResponse() throws -> Response {
		let response = Response(status: self.status)
		response.json = try execute()
		return response
	}

}

We’ll add more stuff to it as we go, but this is the basic shell of the Command protocol. You can see just from the basics of the protocol how this pattern is meant to be used. Let’s rewrite the “register device” endpoint with this pattern.

droplet.post("devices", handler: { request in
	return RegisterDeviceCommand(request: request, droplet: droplet)
})

Because the command is ResponseRepresentable, Vapor accepts it as a valid result from the handler block for the route. It will automatically call makeResponse() on the Command and return that Response to the consumer of the API.

public final class RegisterDeviceCommand: Command {

	let apnsToken: String
	let user: User

	public init(request: Request, droplet: Droplet) throws {
		self.apnsToken = try request.niceJSON.fetch("apnsToken")
		self.user = try request.session.ensureUser()
	}

	public let status = Status.created

	public func execute() throws -> JSON {
		var device = try Device(apnsToken: apnsToken, userID: user.id.unwrap())
		try device.save()
		return try device.makeJSON()
	}
}

There are a few advantages conferred by this pattern already.

  1. Maybe the major appeal of using a language like Swift for the server is to take advantage of things like optionals (and more pertinently, their absence) to be able to define the absolute requirements for a request to successfully complete. Because apnsToken and user are non-optional, this file will not compile if the init function ends without setting all of those values.
  2. The status code is applied in a nice declarative way.
  3. Initialization is separate from execution. You can write a test that checks the initialization of the object (e.g., the extraction of properties from the request) completely separately from the test that checks that the actual save() works correctly.
  4. As a side benefit, using this pattern makes it easy to put each Command into its own file.

There are two more important components to add to a Command like this. First, validation. We’ll add func validate() throws to the Command protocol and give it a default implementation that does nothing. It’ll also be added to the makeResponse() function, before execute():

public func makeResponse() throws -> Response {
	let response = Response(status: self.status)
	try validate()
	response.json = try execute()
	return response
}
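The do-nothing default implementation is a one-liner in a protocol extension, so commands that have no validations don’t need to declare anything:

```swift
extension Command {
	// By default, a command has no validations and always passes.
	public func validate() throws { }
}
```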

A typical validate() function might look like this (this comes from Beacon’s AttendEventCommand):

public func validate() throws {
	if attendees.contains(where: { $0.userID == user.id }) {
		throw ValidationError(message: "You can't join an event you've already joined.")
	}
	if attendees.count >= event.attendanceLimit {
		throw ValidationError(message: "This event is at capacity.")
	}
	if user.id == event.organizer.id {
		throw ValidationError(message: "You can't join an event you're organizing.")
	}
}

Easy to read, keeps all validations localized, and very testable as well. While you can construct your Request and Droplet objects and pass them to the prescribed initializer for the Command, you’re not obligated to. Because each Command is your own object, you can write an initializer that accepts fully fledged User, Event, etc. objects, and you don’t have to muck about with manually constructing Request objects for testing unless you’re specifically testing the initialization of the Command.
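For instance, a validation test might look something like this. This is a sketch only: the domain-object initializer and these property values are hypothetical, not code from Beacon.

```swift
import XCTest

final class AttendEventCommandTests: XCTestCase {
	func testCannotJoinAnEventYouOrganize() {
		// Hypothetical convenience initializer taking domain objects
		// directly, so no Request or Droplet needs to be constructed.
		let organizer = User(name: "Ava")
		let event = Event(organizer: organizer, attendanceLimit: 10)
		let command = AttendEventCommand(user: organizer, event: event, attendees: [])

		// The third validation in the example above should fire.
		XCTAssertThrowsError(try command.validate())
	}
}
```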

The last component that a Command needs is the ability to execute side effects. Side effects are simple:

public protocol SideEffect {
	func perform() throws
}

I added a property to the Command protocol that lists the SideEffect-conforming objects to perform once the command’s execution is done.

var sideEffects: [SideEffect] { get }

And finally, the side effects have to be added to the makeResponse() function:

public func makeResponse() throws -> Response {
	let response = Response(status: self.status)
	try validate()
	response.json = try execute()
	try sideEffects.forEach({ try $0.perform() })
	return response
}

(In a future version of this code, side effects may end up being performed asynchronously, i.e., not blocking the response being sent back to the user, but currently they’re just performed synchronously.) The primary reason to decouple side effects from the rest of the Command is to enable testing. You can create the Command and execute() it, without having to stub out the side effects, because they will never get fired.
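For illustration, a push-notification side effect might look something like this. Every name below is hypothetical; Beacon’s actual push code isn’t shown in this post.

```swift
// Hypothetical sketch: a side effect that fans a message out to
// everyone attending an event.
struct NotifyAttendeesSideEffect: SideEffect {
	let event: Event
	let message: String
	let pushService: PushService

	func perform() throws {
		for attendee in try event.attendees() {
			try pushService.send(message, to: attendee)
		}
	}
}
```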

The Command pattern is a simple abstraction, but it enables testing and correctness, and frankly, it’s pleasant to use. You can find the complete protocol in this gist. I don’t knock Vapor for not including an abstraction like this: Vapor, like the other Swift-on-the-server frameworks, is designed to be simple, and that simplicity allows you to bring abstractions to your own taste.

There are a few more blog posts coming on server-side Swift, as well as a few more in the Coordinator series. Beacon and WWDC have kept me busy, but rest assured! More posts are coming.

Ashley Nelson-Hornstein and I built an app for hanging at WWDC. It took 5 weeks to build. It’s called Beacon, and you can get it on the App Store today.

Beacon is a way to signal to your friends that you’re down to hang out. You can set up an event, and your friends will be able to see those events and let you know that they want to come. Beacon answers questions like “Who’s free?”, “Who likes Persian food?”, and “We have two spots for dinner; who wants to come?” without the messiness of having to text your entire address book. Each event has a big chat room for organizing, and honestly, goofing around in those chat rooms has been some of the most fun of the beta. Beacon is, at its heart, a very social app.

Ashley and I did a ton of work in these few weeks, trying to get this app from concept to production. I never built an app this fast before, and it’s been an exhilarating ride. In addition to being a stellar dev, Ashley’s got a great eye for the holes in the product and the user loop, which let us tighten up the experience before putting the app in the hands of all of our friends. This project absolutely wouldn’t have worked without her.

Linda Dong also contributed a considerable amount of design work, giving the app life and personality. Before her touch, the “design” was the output of two developers, and you can imagine what a horror show that was.

From a technical perspective, one of the things I’m most excited about is the server side of this project. Chris and I got to talk about this on the last episode of Fatal Error season 2 (Patreon link). Beacon finally gave me the chance to build an application for the server using Swift. We chose Vapor for the framework, purely for the quality of support (mostly a friendly Slack channel) and the size of the community using it.

Swift on the server is a budding project. Builds are slow, test targets are hard to set up, there’s no Xcode (which means no autocompletion or command/option-clicking), Foundation isn’t complete, there’s almost no library support, documentation is god-awful, and everything is changing extremely quickly. Nevertheless, it’s fun as hell to write Swift for the server, and I don’t regret the decision. I think it’s most comparable to writing Swift 1 or 1.1 in a production iOS app. Potentially a problematic decision, but the language was so fun that everyone who did it had no complaints. I think in 2 or 3 years, Swift on the server will be where Swift in the client is now, and that will be a great time indeed.

I’ve written web apps in Node, Rails, and various PHP frameworks, and while it’s possible to take advantage of their dynamically typed features for certain patterns, I often felt like I was programming without a safety net. I felt forced to write tests to make sure that various code paths were getting hit and the right methods were being called.

With Swift on the server, you get all the Swift niceties you’re used to: enums, generics, protocols, sequences, and everything else. All of the other tiny pieces of knowledge of Swift that you’ve built up over the last weeks and months are valid and useful.

A few scattered thoughts on Swift on the server:

  • Because you have a type system, building up little abstractions is much easier, and you can change those abstractions without worrying that protocol conformances down the line will be broken. Optionals are excellent. It’s so nice to know that you have something. For example, in the Event model, I have a non-optional User called organizer, and I have total confidence that through any code path in the app, if I have an event, I will have an organizer.
  • I definitely want Linux support for Sourcery. There’s a lot of boilerplate in model code on the server (sometimes even more than the client) and Sourcery would help with that pain a lot.
  • Because everything in Vapor is synchronous, I rewrote my networking library to simply return a value (or throw) for each request. This makes writing network code so simple, and I find it quite a shame that we can’t take advantage of this on the client as well. I hold out hope that Swift’s async/await implementation will be the answer to some of these woes.

We don’t know if Beacon is a viable product for the broader market, but we think it’ll be a lot of fun at WWDC and we look forward to organizing lots of ad hoc events with all of you awesome people. Find me on the app, and let’s hang out.

This is a post in a series on Advanced Coordinators. If you haven’t read the original post or the longer follow-up, make sure to check those out first. The series will cover a few advanced coordinator techniques, gotchas, FAQs, and other trivia.

When working with coordinators, all flow events should travel through the coordinator. Any time a view controller intends to change flow state, it informs the coordinator, and the coordinator can handle side effects and make decisions about how to proceed.

There is one glaring exception to this rule: when a navigation controller navigates “back”. That back button is not a traditional button, so you can’t add handlers to it to send messages up to the coordinator. Further, its associated behavior is performed directly by the navigation controller itself. If you need to do any work in a coordinator when a view controller is dismissed, you need some way to hook into that behavior.

While there are other less common examples, the primary use case is when you have a sub-flow that takes place entirely within the context of another navigation controller. Coordinators typically own one navigation controller exclusively, but sometimes, a subset of the flow within a navigation controller’s stack needs to be broken out into its own coordinator, usually for reuse purposes. That separate coordinator shares the navigation controller with its parent coordinator. If the user enters the child coordinator (entering the sub-flow) and then taps the back button, that child coordinator needs to be cleaned up. If it’s not cleaned up, that coordinator’s memory is effectively leaked. Further, if the user enters that flow a second time, we might have two instances of the same coordinator, potentially reacting to the same events and executing code twice.

So, we need a way to know that the navigation’s back button has been tapped. The UINavigationControllerDelegate is the easiest way to get access to this event. (You could subclass or swizzle, but let’s not.)

There are a few ways to use this delegate to solve this problem, and I’d like to highlight two of them. The first is Bryan Irace’s approach to tackling this problem. He makes a special view controller called NavigationController that allows you to push coordinators in addition to pushing view controllers.

I’ll elide some of the details and give an overview of the approach, but if you want the full details, I recommend reading his whole post. The main thing to note in his code is:

final class NavigationController: UIViewController {

	// ...

	private let navigationController: UINavigationController = //..
	
	private var viewControllersToChildCoordinators: [UIViewController: Coordinator] = [:]
  
	// ...

}

This shows the way that this class works. When you add a new coordinator to this class, it creates an entry in this dictionary. The entry maps the root view controller of a coordinator to the coordinator itself. Once you have that, you can conform to the UINavigationControllerDelegate.

extension NavigationController: UINavigationControllerDelegate {    
	func navigationController(navigationController: UINavigationController,
		didShowViewController viewController: UIViewController, animated: Bool) {
		// ...
	}
}

At that point, if the popped view controller is found in the coordinator dictionary, it will remove it, allowing it to correctly deallocate.

There’s a lot to like about this approach. Coordinator deallocation is handled automatically for you when you use this class instead of a UINavigationController. However, it comes with a few downsides as well. My primary concern is that the NavigationController class, which is a view controller, knows about and has to deal with coordinators. This is tantamount to a view having a reference to a view controller.

I think there are some goopy bits on the inside of UIKit where views know about their view controllers. I haven’t seen the source code, but the stack trace for -viewDidLayoutSubviews suggests that there’s some voodoo going on here. Sometimes, components in a library may be coupled together more tightly, in order to make the end user’s code cleaner. This is the tradeoff that Bryan is making here.

If you don’t want to make that tradeoff, you can bring the navigation controller delegate methods to the parent coordinator, where they can live with all the other flow events. This is my preference. By making the coordinator into the delegate of the navigation controller, you can maintain the structure of the coordinator: namely that it is the parent of the navigation controller. When you get the delegate messages that a view controller was popped off, you can manually clean up any coordinators that need to be dealt with.

extension Coordinator: UINavigationControllerDelegate {
	func navigationController(_ navigationController: UINavigationController,
		didShow viewController: UIViewController, animated: Bool) {

		// ensure the view controller is popping
		guard
			let fromViewController = navigationController.transitionCoordinator?.viewController(forKey: .from),
			!navigationController.viewControllers.contains(fromViewController) else {
				return
		}

		// and it's the right type
		if fromViewController is FirstViewControllerInCoordinator {
			// deallocate the relevant coordinator
		}
	}
}

This approach is slightly more manual, with the up- and downsides that come with that: more control and more boilerplate. If you don’t like the direct type check, you can replace it with a protocol.

You’ll also need to re-enable the interactivePopGestureRecognizer by conforming to UIGestureRecognizerDelegate and returning true for the shouldRecognizeSimultaneouslyWithGestureRecognizer delegate method.
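That conformance might look like this. It’s a sketch, and it assumes the coordinator has been assigned as the pop gesture recognizer’s delegate during setup:

```swift
extension Coordinator: UIGestureRecognizerDelegate {
	// Assumes, somewhere during setup:
	// navigationController.interactivePopGestureRecognizer?.delegate = self
	func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
		shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
		return true
	}
}
```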

Both approaches are good ways of handling decommissioned coordinators and ensuring that they correctly deallocate, and these techniques are crucial for breaking your subflows out into their own coordinators so they can be reused.

Update: Ian MacCallum provides another approach to this problem. He essentially provides a onPop block for a weak coupling between the coordinator and navigation controller (which he wraps up in an object called a Router). It’s a good approach.

This is a post in a series on Advanced Coordinators. If you haven’t read the original post or the longer follow-up, make sure to check those out first. The series will cover a few advanced coordinator techniques, gotchas, FAQs, and other trivia.

When splitting up the responsibilities of a view controller, I do a curious thing. While I leave reading data (for example, a GET request, or reading from a database or cache) in the view controller, I move writing data (such as POST requests, or writing to a database) up to the coordinator. In this post, I’ll explore why I separate these two tasks.

Coordinators are primarily in charge of one thing: flow. Why sully a beautiful single responsibility object with a second responsibility?

I make this distinction because I think flow is the wrong way to think about this object’s responsibility. The correct responsibility is “handle the user’s action”. The reason to draw this distinction is so that the knowledge of when to “do a thing” (mutate the model) and when to “initiate a flow step” can be removed from the view controller. I don’t want a view controller to know what happens when it passes the user’s action up to a coordinator.

You can imagine a change to your app’s requirements that would make this distinction clear. For example, let’s say you have an app with an authentication flow. The old way the app worked was that the user typed their username and password into one screen, and then the signup request could be fired. Now, the product team wants the user to be able to fill out the profile on the next screen, before firing off the signup request. If you keep model mutation in the view controller and the flow code in the coordinator, you’ll have to make a change to both the view controller and the coordinator to make this work.

It gets even worse if you’re A/B testing this change, or rolling it out slowly. The view controller would need an additional component to tell it how to behave (not just how to present its data), which means either a delegate method back up to the coordinator or another object entirely, to help it decide whether it should inform the coordinator to present the next screen or just post the signup call itself.

If you keep model mutation and flow actions together in the coordinator, the view controller doesn’t have to change at all. The view controller gets to act mostly like it’s in the view layer, and the coordinator, with its fullness of knowledge, gets to make the decision about how to proceed.

Another example: imagine your app has a modal form for posting a message. If the “Close” button is tapped, it should dismiss the modal and delete the draft from the database (which, let’s say, is saved for crash protection). If your designer decides that they want an alert view that asks “Are you sure?” before deleting the draft, your flow and your database mutation are again intertwined. Showing the dialog is presenting a view controller, which is a flow change, and deleting an item from the database is a model mutation. Keeping these responsibilities in the same place will ease your pain when you have to make changes to your app.

One additional, slightly related note: the coordinator’s mutative effect on the model should happen via a collaborator. In other words, your coordinator shouldn’t touch URLSession directly, nor any database handle, like an NSManagedObjectContext. If you like thinking about view models, you might consider a separation between read-only view models (which you could call a Presenter) and write-only view models (which you could call an Interactor or a Gateway). Read-only view models can go down into the view controller, and write-only view models stay at the coordinator level.
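A minimal sketch of that split, with illustrative names (none of this is Beacon code):

```swift
// Read-only: handed down into the view controller.
protocol DraftPresenter {
	var draftText: String { get }
}

// Write-only: stays up at the coordinator level.
protocol DraftGateway {
	func saveDraft(_ text: String)
	func deleteDraft()
}

// One store can play both roles; each layer sees only its half.
final class DraftStore: DraftPresenter, DraftGateway {
	private(set) var draftText: String = ""
	func saveDraft(_ text: String) { draftText = text }
	func deleteDraft() { draftText = "" }
}
```

The coordinator holds the DraftGateway and decides when to save or delete; the view controller only ever sees a DraftPresenter, so it physically can’t mutate the model.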

The line between model mutation and flow step is thinner than you’d expect. By treating those two responsibilities as one (responding to user action), you can make your app easier to change.