Apple’s documentation suggests you use a pattern called MVC to structure your apps. However, the pattern it describes isn’t true MVC in the original sense of the term. I’ve touched on this point here before, but MVC was a design pattern created for Smalltalk. In Smalltalk’s formulation, the 3 components (model, view, and controller) all talked directly to each other. This means that either the view knows how to apply a model to itself, or the model knows how to apply itself to a view.

When we write iOS apps, we consider models and views that talk directly to each other as an anti-pattern. What we call MVC is more accurately described as model-view-adapter. Our “view controllers” (the “adapters”) sit in between the model and view and mediate their interactions. In general, I think this is a good modification to MVC — the model and view not being directly coupled together and instead connected via an intermediary seems like a positive step. However, I will caveat this by saying that I haven’t worked with many systems that don’t maintain this separation.

So, that’s why we have view controllers in iOS. They serve to glue the model and view together. Now, there are downstream problems with this style of coding: code that doesn’t obviously belong in models or views ends up in the view controller, and you end up with gigantic view controllers. I’ve discussed that particular problem on this blog many times, but it’s not exactly what I want to talk about today.


I’ve heard whispers through the grapevine of what’s going on under the hood with UIViewController. I think the longer you’ve been working with UIKit, the more obvious this is, but the UIViewController base class is not pretty. I’ve heard that, in terms of lines of code, it’s on the higher end of 10,000 to 20,000 lines (and this was a few years ago, so they’ve maybe broken past the 20 kloc mark at this point).

When we want the benefits of an object that glues a UIView and a model object (or collection thereof) together, we typically subclass UIViewController, and we use view controller containment to break big view controllers up into smaller pieces and compose them back together.

However, containment can be finicky. It subtly breaks things if you don’t do it right, with no indication of how to fix the issues. Then, when you finally do find your bug, which was probably a misordering of calls to didMove or willMove or whatever, everything magically starts working. In fact, the very presence of willMove and didMove suggests that containment has some invisible internal state that needs to be cleaned up.

I’ve seen this firsthand in two particular situations. First, I’ve seen this issue pop up with putting view controllers in cells. When I first did this, I had a bug in the app where some content in the table view would randomly disappear. This bug went on for months, until I realized I misunderstood the lifecycle of table view cells, and I wasn’t correctly respecting containment. Once I added the correct -addChildViewController calls, everything started working great.
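For reference, here’s a minimal sketch of what respecting containment in a cell can look like. The cell type and method names are my own; the important part is pairing the addChildViewController/didMove calls on the way in and the willMove/removeFromParentViewController calls on the way out:

import UIKit

final class HostingCell: UITableViewCell {

	private(set) var childViewController: UIViewController?

	// Embed the child, then tell it the move is complete.
	func embed(_ child: UIViewController, in parent: UIViewController) {
		parent.addChildViewController(child)
		contentView.addSubview(child.view)
		child.view.frame = contentView.bounds
		child.didMove(toParentViewController: parent)
		childViewController = child
	}

	// Tell the child it's leaving, then tear it down, e.g. in prepareForReuse.
	func removeChildViewController() {
		guard let child = childViewController else { return }
		child.willMove(toParentViewController: nil)
		child.view.removeFromSuperview()
		child.removeFromParentViewController()
		childViewController = nil
	}
}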

To me, this showed a big thing: a view controller’s view isn’t just a dumb view. It knows that it’s not just a regular view, but that it’s a view controller’s view, and its qualities change in order to accommodate that. In retrospect, it should have been obvious. How does UIViewController know when to call -viewDidLayoutSubviews? The view must be telling it, which means the view has some knowledge of the view controller.

The second case where I’ve more recently run into this is trying to use a view controller’s view as a text field’s inputAccessoryView. Getting this behavior to play nicely with the behavior from messaging apps (like iMessage) of having the textField stick to the bottom was very frustrating. I spent over a day trying to get this to work, with minimal success to show for it. I ended up reverting to a plain view.

I think at a point like that, when you’ve spent over a day wrestling with UIKit, it’s time to ask: is it really worth it to subclass from UIViewController here? Do you really need 20,000 lines of dead weight to make an object that binds a view and a model? Do you really need viewWillAppear and rotation callbacks that badly?


So, what does UIViewController do that we always want?

  • Hold a view.
  • Bind a model to a view.

What does it do that we usually don’t care about?

  • Provide storage for child view controllers.
  • Forward appearance (-viewWillAppear:, etc) and transition coordination to children.
  • Be presented in container view controllers like UINavigationController.
  • Notify on low memory.
  • Handle status bars.
  • Preserve and restore state.

So, with this knowledge, we now know what to build in order to replace view controllers for the strange edge cases where we don’t necessarily want all their baggage. I like this pattern because it tickles my “just build it yourself” bone and solves real problems quickly, at the exact same time.

There is one open question, which is what to name it. I don’t think it should be named a view controller, because it might be easy to confuse for a UIViewController subclass. We could just call it a regular Controller? I don’t hate this solution (despite any writings in the past) because it serves the same purpose as a controller in iOS’s MVC (binding a view and model together), but there are other options as well: Binder, Binding, Pair, Mediator, Concierge.

The other nice thing about this pattern is how easy it is to build.

class DestinationTextFieldController {

    var destination: Destination?

    weak var delegate: DestinationTextFieldControllerDelegate?

    let textField = UITextField().configure({
        $0.autocorrectionType = .no
        $0.clearButtonMode = .always
    })
    
}
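The configure method isn’t part of UIKit; I’m assuming a small helper along these lines:

import Foundation

protocol Configurable { }

extension Configurable {
	// Apply the block to self and return self, enabling inline setup.
	func configure(_ block: (Self) -> Void) -> Self {
		block(self)
		return self
	}
}

extension NSObject: Configurable { }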

It almost seems like heresy to create an object like this and not subclass UIViewController, but when UIViewController isn’t pulling its weight, it’s gotta go.

You already know how to add functionality to your new object. In this case, the controller ends up being the delegate of the textField, emitting events (and domain metadata) when the text changes, and providing hooks into updating its view (the textField in this case).

extension DestinationTextFieldController {
	var isActive: Bool {
		return self.textField.isFirstResponder
	}

	func update(with destination: Destination) {
		self.destination = destination
		configureView()
	}
	
	private func configureView() {
		textField.text = destination?.descriptionForDisplay
	}
}

There are a few new things you’re responsible for with this new type of controller:

  • you have to make an instance variable to store it
  • you’re responsible for triggering events on it — because it’s not a real view controller, there’s no more -viewDidAppear:
  • you’re not in UIKit anymore, so you can’t directly rely on things like trait collections or safe area insets or the responder chain — you have to pass those things to your controller explicitly

Using this new object isn’t too hard, even though you do have to explicitly store it so it doesn’t deallocate:

class MyViewController: UIViewController, DestinationTextFieldControllerDelegate {


	let destinationViewController = DestinationTextFieldController()
	
	override func viewDidLoad() {
		super.viewDidLoad()
		destinationViewController.delegate = self
		view.addSubview(destinationViewController.textField)
	}
	
	//handle any delegate methods

}

Even if you use this pattern, most of your view controllers will still be view controllers and subclass from UIViewController. However, in those special cases where integrating a view controller causes you hours and hours of pain, this can be a perfect way to simply opt out of the torment that UIKit brings to your life daily.

If you want some extreme bikeshedding on a very simple topic, this is the post for you.

For the folks who have been at any of the recent conferences I’ve presented at, you’ve probably seen my talk, You Deserve Nice Things. The talk is about Apple’s reticence to provide conveniences for its various libraries. It draws a lot on an old post of mine, Categorical, as well as bringing in a lot of new Swift stuff.

One part of the standard library that’s eminently extendable is Sequence and friends. Because of the number of different operations we want to perform on these components, even the large set of affordances that the standard library does provide (standard stuff like map and filter, as well as more abstruse stuff like partition(by:) and lexicographicallyPrecedes) doesn’t cover the breadth of operations that are useful in our day-to-day programming life.

In the talk, I propose a few extensions that I think are useful, including any, all, none, count(where:), eachPair, chunking, and a few others. None of these ideas are original to me. They’re ideas lifted wholesale from other languages, primarily Ruby and its Enumerable module. Enumerable is a useful case study because anything that’s even marginally useful gets added to this module, and Ruby developers reap the benefits.

Swift’s standard library is a bit more conservative, but its ability to extend types and specifically protocols makes the point mostly moot. You can bring your own methods as needed. (In Categorical, I make the case that this is the responsibility of the vendor, but until they’re willing to step up, we’re on our own.)


I have an app where I need to break up a gallery of images into groups based on date. The groups could be based on individual days, but we thought grouping into “sessions” would be better. Each session is defined by starting more than an hour after the last session ended. More simply, if there’s a gap of more than an hour between two photos, that should signal the start of a new session.

Since we’re splitting a Sequence, the first thought is to use something built in: there’s a function called split, which takes a bunch of parameters (maxSplits, omittingEmptySubsequences, and a block called isSeparator that determines whether an element defines a split). However, we can already see from the type signature that this function isn’t quite going to do what we want. The isSeparator function yields only one element, which makes it hard to tell if there should be a split between two consecutive elements, which is what we’re actually trying to do. In point of fact, this function consumes the separators, because it’s a more generic version of the String function split, which is useful in its own right.
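To make that concrete, here’s split run on the kind of data we’re working with. The elements matched as separators don’t appear in the output:

let numbers = [1, 1, 2, 3, 3, 2, 3, 3, 4]
let pieces = numbers.split(whereSeparator: { $0 == 2 })
// [[1, 1], [3, 3], [3, 3, 4]]; the 2s are consumed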

No, we need something different.


In her post, Erica Sadun has some answers for us: she’s got a function that works sort of like what we want. In this case, the block takes the current element and the current partition, and you can determine if the current element fits in the current partition.

extension Sequence {
    public func partitioned(at predicate: (_ element: Iterator.Element, _ currentPartition: [Iterator.Element]) -> Bool ) -> [[Iterator.Element]] {
        var current: [Iterator.Element] = []
        var results: [[Iterator.Element]] = []
        for element in self {
            guard !current.isEmpty else { current = [element]; continue }
            switch predicate(element, current) {
            case true: results.append(current); current = [element]
            case false: current.append(element)
            }
        }
        guard !current.isEmpty else { return results }
        results.append(current)
        return results
    }
}

I’ve got a few small gripes with this function as implemented:

The name. Partitioning in the standard library currently refers to changing the order of a MutableCollection and returning a pivot where the collection switches from one partition to another. I wouldn’t mind calling it something with split in the name, but as mentioned before, that usually consumes the elements. I think calling this slicing is the best thing to do. Slicing is also a concept that already exists in the standard library (taking a small part of an existing Collection) but in a lot of ways, that’s actually what we’re doing in this case. We’re just generating more than one slice.

The name again. Specifically, the at in partitioned(at:) doesn’t seem exactly right. Since we’re not consuming elements anymore, the new partition has to happen either before the given element or after it. The function’s name doesn’t tell us which. If we change the signature of the function so that the block receives a consecutive pair of elements, we can change its name to slice(between:).

This has the added benefit of no longer requiring a force unwrap in usage. Erica’s whole problem with this code is that passing back a collection that is known to be non-empty (*COUGH*) means that she has to force-unwrap access to the last element:

testArray.partitioned(at: { $0 != $1.last! })

If we change the function to slice(between:), we can simplify this way down, to:

testArray.slice(between: !=)

Way nicer.

Now that we’ve beaten the name into the ground, let’s move on to the implementation. Specifically, there is some duplication in the code that I find problematic. Two expressions in particular:

First,

current = [element]

Second,

results.append(current)

This code may not look like much — only a few characters, you protest! — but it represents a duplication in concept. If these expressions needed to expand to do more, they’d have to be expanded in two places. This was excruciatingly highlighted when I ported this code over to my gallery app (which is in Objective-C) and wanted to wrap the image collections in a class called PhotoSection. Creating the PhotoSection in two places made it painfully obvious that this algorithm duplicates code.

This code is ugly and frustrating: the important part of the code, the actual logic of what’s happening, is tucked away in the middle there.

[photosForCurrentSection.lastObject.date timeIntervalSinceDate:photo.date] > 60*60

The entire rest of the code could be abstracted away. Part of this is Objective-C’s fault, but this version of the code really does highlight the duplication that happens when modifying the code that creates a new section.


The last piece of the puzzle here is how Ruby’s Enumerable comes into play. When exploring that module, you can see three functions that look like they could be related: slice_when, slice_before, and slice_after. What is the relationship between these functions?

It seems obvious now that I know, but I did have to do some searching around for blog posts to explain it. Essentially, slice_when is like our slice(between:). It gives you two elements and you tell it if there should be a division there. slice_before and slice_after each yield only one element in their block, and slice before or after that element, respectively. This clears up the confusion with Erica’s original function — we now know whether it’ll slice before or after our element, based on the name of the function.


As for implementation, you could use a for loop and the slight duplication of Erica’s code, but I’ve recently been trying to express more code in terms of chains of functions that express a whole operation that happens to the collection all at once. I find this kind of code more elegant, easier to read, and less prone to error, even though it does pass over the relevant collection multiple times.

To write our slice(between:) function in that style, we first need two helpers:

extension Collection {
	func eachPair() -> Zip2Sequence<Self, Self.SubSequence> {
		return zip(self, self.dropFirst())
	}

	func indexed() -> [(index: Index, element: Element)] {
		return zip(indices, self).map({ (index: $0, element: $1) })
	}
}

I’m a big fan of composing these operations together, from smaller, simpler concepts into bigger, more complex ones. There are slightly more performant ways to write both of those functions, but these simple implementations will do for now.
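For instance, here’s what these helpers produce on a small array:

let letters = ["a", "b", "c"]
Array(letters.eachPair()) // [("a", "b"), ("b", "c")]
letters.indexed()         // [(index: 0, element: "a"), (index: 1, element: "b"), (index: 2, element: "c")]

With those in hand, here’s slice(between:) itself: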

extension Collection {
	func slice(between predicate: (Element, Element) -> Bool) -> [SubSequence] {
		let innerSlicingPoints = self
			.indexed()
			.eachPair()
			.map({ (indexAfter: $0.1.index, shouldSlice: predicate($0.0.element, $0.1.element)) })
			.filter({ $0.shouldSlice })
			.map({ $0.indexAfter })

		let slicingPoints = [self.startIndex] + innerSlicingPoints + [self.endIndex]

		return slicingPoints
			.eachPair()
			.map({ self[$0..<$1] })
	}
}

This function is broken up into 3 main sections:

  1. Find the indexes after any point where the caller wants to divide the sequence.
  2. Add the startIndex and the endIndex.
  3. Create slices for each consecutive pair of those indexes.

Again, this isn’t the most performant implementation. Namely, it does quite a few iterations, and requires a Collection where Erica’s solution required just a Sequence. But I do think it’s easier to read and understand. As I discussed in the post about refactoring, distilling an algorithm into straightforward, simple forms, operating at the highest possible level of abstraction, enables you to see the structure of your algorithm and the potential transforms it can undergo to create other algorithms.

Erica’s original example is a lot simpler now:

let groups = [1, 1, 2, 3, 3, 2, 3, 3, 4].slice(between: !=)

To close this post out, the implementations for slice(before:) and slice(after:) practically fall out of the ether, now that we have slice(between:):

extension Collection {
	func slice(before predicate: (Element) -> Bool) -> [SubSequence] {
		return self.slice(between: { left, right in
			return predicate(right)
		})
	}

	func slice(after predicate: (Element) -> Bool) -> [SubSequence] {
		return self.slice(between: { left, right in
			return predicate(left)
		})
	}
}

This enables weird things to become easy, like splitting a CamelCased type name:

let components = "GalleryDataSource"
	.slice(before: { ("A"..."Z").contains($0) })
	.map({ String($0) })

(Don’t try it with a type name with an initialism in it, like URLSession.)
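Since a new slice begins before every capital letter, the initialism gets shredded into single-character strings:

let shredded = "URLSession"
	.slice(before: { ("A"..."Z").contains($0) })
	.map({ String($0) })
// ["U", "R", "L", "Session"]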

A few months ago, Alex Cox put out a call on Twitter asking for someone to help her find a backpack that would suit her needs. She’d tried a bunch, and found all of them lacking.

I have a small obsession with bags, suitcases, and backpacks. For probably 15 years now, I’ve been hunting for “the perfect bag”. Along this journey, I’ve attained bag enlightenment, and I’m here to impart that knowledge to you today.

Here’s the deal. You won’t find a perfect bag. This is for one simple reason: the perfect bag doesn’t exist. I know this because, at the time of writing, I own 4 rolling suitcases, 3 messenger bags, 3 daily carry backpacks, 2 travel backpacks, and 3 duffel bags/weekenders. They’re all stuffed inside each other, stacked in a small closet in my apartment, my Russian nesting travel bags.

And every last one is a compromise in some way or another.

These compromises are very apparent if you troll Kickstarter for travel backpacks. You’ll find tons: the Minaal, NOMATIC, PAKT One, Brevitē, Hexad, Allpa, RuitBag, Numi, and the aptly named DoucheBags. Their videos all start exactly the same: “We looked at every bag on the market, and found them lacking. We then spent 2 years designing our bag and sourcing the highest quality zippers, and we’re finally ready to start producing this bag.” They then go through the 5 features that they think makes their bag different from the rest, talk about a manufacturer they’ve found, and then they hit you with the “and that’s where you come in” and ask you for money.

Once you watch the first of these videos, the argument is somewhat compelling. Wow, they’ve really figured it out! But once you’ve seen five of them, you know there must be something else going on. How could all these different bag startups claim that each other’s products are clearly inferior and that only they have discovered the secret to bag excellence?

To understand what makes bags so unsatisfying, here’s a quote from Joel Spolsky:

When you design a trash can for the corner, you have to make choices between conflicting requirements. It needs to be heavy so it won’t blow away. It needs to be light so the trash collector can dump it out. It needs to be large so it can hold a lot of trash. It needs to be small so it doesn’t get in peoples’ way on the sidewalk.

It’s the same with bags. You need a big bag so that it can fit all your stuff, but you need a small bag to avoid hitting people on public transit. You need a bag with wheels so you can roll it through the airport, but you need a bag with backpack straps so that you can get over rougher terrain with it. You need a bag that’s soft so you can squish it into places, but you need a bag that’s hard so that it can protect what’s inside.

These competing requirements can’t be simultaneously satisfied. You have to choose one or the other. You can’t have it both ways.

In Buddhism, they say “there is no path, there are only paths.” It’s the same with bags. Once you accept that all bags are some form of compromise, you can get a few different bags and use the right one for the right trip.

Sometimes I want to look nice and need to carry a laptop: leather messenger bag. Sometimes I want to have lots of space and flexibility even if I look kind of dorky: my high school backpack. Sometimes I have a bunch of camera gear and I’m going to be gone for 3 weeks: checked rolling suitcase. Picking the right bag for the trip lets me graduate from “man, I wish I’d brought a bigger bag” to “bringing a bigger bag would have been great, but then I wouldn’t have been able to jump on this scooter to the airport”. Less regret, more acceptance.

It’s worth thinking about how you like to travel, what types of trips you take, what qualities you value having in your bags, and getting a few bags that meet those requirements. Of the 15 bags I mentioned earlier, I think I’ve used 12 of them this year for different trips/events.

Finding the right bags takes time. I’ve had the oldest of my 15 bags for some 15 years, and got the newest of them in the last year. Some bags of note:

  • Kathmandu Shuttle 40L — This is my current favorite travel backpack. It’s pretty much one giant pocket, so it doesn’t prescribe much in terms of how you should pack, and that flexibility is really nice. Kathmandu is a New Zealand company, so getting their stuff can be kind of tough, but I’m pretty happy with my pack.
  • I also really like the Osprey Stratos 34. It’s a bit on the small side, but a good addition to a rolling suitcase for a longer trip. It’s great for day hikes and has excellent ventilation.
  • Packable day packs: This backpack and this duffel bag are great. They add very little extra weight to your stuff, and they allow you to expand a little bit when you get wherever you’re going.
  • Wirecutter’s recommendation for travel backpacks at the time of writing is the Osprey Farpoint 55. It used to be the Tortuga Outbreaker, which I think is absolutely hideous.

Accept that no one bag will work in every situation. The truth will set you free.

Over the course of this week, some bloggers have written about a problem — analytics events — and proposed multiple solutions to it: enums, structs, and protocols. I also chimed in with a cheeky post about using inheritance and subclassing to solve this problem. I happened to have a post in my drafts about a few problems very similar to the analytics events that John proposed, and I think there’s no better time to post it than now. Without further ado:

If a Swift programmer wants to bring two values together, like an Int and a String, they have two options. They can either use a “product type”, the construction of which requires you to have both values; or they can use a “sum type”, the construction of which requires you to have one value or the other.
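As a quick illustration of the difference:

// A product type: constructing it requires both values.
struct IntAndString {
	let number: Int
	let text: String
}

// A sum type: constructing it requires one value or the other.
enum IntOrString {
	case number(Int)
	case text(String)
}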

Swift is bountiful, however, and has 3 ways to express a product type and 3 ways to express a sum type. While plenty of ink has been spilled about when to use a struct, a class, or a tuple, there isn’t as much guidance on how to choose between an enum, a protocol, or a subclass. I personally haven’t found subclassing all that useful in Swift, as my post from earlier today implies, since protocols and enums are so powerful. In my year and a half writing Swift, I haven’t written an intentionally subclassable thing. So, for the purposes of this discussion, I’ll broadly ignore subclassing.

Enums and protocols vary in a few key ways:

  • Completeness. Every case for an enum has to be declared when you declare the enum itself. You can’t add more cases in an extension or in a different module. On the other hand, protocols allow you to add new conformances anywhere in the codebase, even adding them across module boundaries. If that kind of flexibility is required, you have to use protocols.
  • Destructuring. To get data out of an enum, you have to pattern match. This requires either a switch or an if case statement. These are a bit unwieldy to use (who can even remember the case let syntax?) but are better than adding a method in the protocol for each type of variant data or casting to the concrete type and accessing it directly.

Based on these differences, my hard-won advice on this is to use enums when you care which case you have, and use protocols when you don’t — when all the cases can be treated the same.

I want to take a look at two examples, and decide whether enums or protocols are a better fit.

Errors

Errors are frankly a bit of a toss up. On the one hand, errors are absolutely a situation where we care which case we have; also, catch pattern matching is very powerful when it comes to enums. Let’s take a look at an example error.

enum NetworkError: Error {
   case noInternet
   case statusCode(Int)
   case apiError(message: String)
}

If something throws an error, we can switch in the catch statement:

do {
	// something that produces a network error
} catch NetworkError.noInternet {
	// handle no internet
} catch let NetworkError.statusCode(statusCode) {
	// use `statusCode` here to handle this error
} catch {
	// catch any other errors
}

While this matching syntax is really nice, using enums for your errors comes with a few downsides. First, it hamstrings you if you’re maintaining an external library. If you, the writer of a library, add a new enum case to an error, and a consumer of your library updates, they’ll have to change any code which exhaustively switches on your error enum. While this is desirable in some cases, it means that, per semantic versioning, you’ll have to bump the major version number of your library: adding a new case to an external library’s enum is currently a breaking change. Swift 5 should be bringing nonexhaustive enums, which will ameliorate this problem.

The second issue with enums is that this type gets muddy fast. Let’s say you want to provide conformance to the LocalizedError protocol to get good bridging to NSError. Because each case has its own scheme for how to convert its associated data into the userInfo dictionary, you’ll end up with a giant switch statement.

When examining this error, it becomes apparent that the cases of the enum don’t really have anything to do with each other. NetworkError is really only acting as a convenient namespace for these errors.

One approach here is to just use structs instead.

struct NoInternetError: Error { }

struct StatusCodeError: Error {
	let statusCode: Int
}

struct APIError: Error {
	let message: String
}

If each of these network error cases become their own type, we get a few cool things: a nice breakdown between types, custom initializers, easier conformance to things like LocalizedError, and it’s just as easy to pattern match:

do {
	// something that produces a network error
} catch let error as StatusCodeError {
	
} catch let error as NoInternetError {
	
} catch {
	// catch any other errors
}

You could even make all of the different structs conform to a protocol, called NetworkError. However, there is one downside to making each error case into its own type. Swift’s generic system requires a concrete type for all generic parameters; you can’t use a protocol there. Put another way, if the type’s signature is Result<T, E: Error>, you have to use a single concrete error type, like an enum. If the type’s signature is Result<T>, then the error is untyped and you can use anything that conforms to Swift.Error.

API Endpoints

Because an API has a fixed number of endpoints, it can be tempting to model each of those endpoints as an enum case. However, all requests more or less have the same data: method, path, parameters, etc.

If you implement your network requests as an enum, you’ll have methods with giant switch statements in them — one each for the method, path, and so on. If you think about how this data is broken up, it’s exactly flipped. When you look at a chunk of code, do you want to see all of the paths for all the endpoints in one place, or do you want to see the method, path, parameters, and so on for one request in one place? How do you want to colocate your data? For me, I definitely want to have all the data for each request in one place. This is the locality problem that Matt mentioned in his post.

Protocols shine when the interfaces to all of the different cases are similar, so they work well for describing network requests.

protocol Request {
	var method: Method { get }
	var path: String { get }
	// etc
}

Now, you can actually conform to this protocol with either a struct or enum (Inception gong), if it does happen to be the right time to use an enum for a subset of your requests. This is Dave’s point about flexibility.

More importantly, however, your network architecture won’t care. It’ll just get an object conforming to the protocol and request the right data from it.
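For example, a hypothetical endpoint could be described like this, with all of its data colocated (Method is assumed to be a simple enum of HTTP methods):

enum Method {
	case get, post, put, delete
}

struct UserTimelineRequest: Request {

	let userID: String

	var method: Method {
		return .get
	}

	var path: String {
		return "users/\(userID)/timeline"
	}
}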

Protocols confer a few other benefits here as well:

  1. You might find that it makes life easier to conform URL to your Request protocol, and you can easily do that.
  2. Associating a type with a protocol is possible; associating a type with an enum case is meaningfully impossible. This means you can get well-typed results back from your network architecture.
  3. Implementations of the protocol are so flexible that you can bring your own sub-abstractions as you need to. For example, in the Beacon API, we needed to be able to get Twitter followers and following. These requests are nearly identical in their parameters, results, and everything save for the path.

     struct FollowersRequest: TypedRequest {
    
         typealias OutputType = IDsResult
        
         enum Kind {
             case followers, following
            
             var pathComponent: String {
                 return self == .followers ? "followers" : "friends"
             }
         }
    
         let path: String
    
         init(kind: Kind) {
             self.path = kind.pathComponent + "/ids.json"
         }
     }
    

    Being able to bring your own abstractions to the protocol is just one more reason protocols are the right tool for the job here.

There are other cases that are useful for exploring when to use enums and protocols, but these two I think shine the most light on the problem. Use enums when you care which case you have, and use protocols when you don’t.

This is a response to Dave DeLong’s article, which is itself a response to Matt Diephouse’s article, which is itself a response to John Sundell’s article. You should go read these first.

Dave starts off by saying:

Matt starts off by saying:

Earlier this week, John Sundell wrote a nice article about building an enum-based analytics system in Swift. He included many fine suggestions, but I believe he’s wrong about one point: that enums are the right choice.

Therefore, I’ll start similarly:

Earlier this week, Matt Diephouse wrote a nice article about building a struct-based analytics system in Swift. He included many fine suggestions, but I believe he’s wrong about one point: that structs are the right choice.

Therefore, I’ll start similarly:

Earlier this week, Dave DeLong wrote a nice article about building a protocol-based analytics system in Swift. He included many fine suggestions, but I believe he’s wrong about one point: that protocols are the right choice.

As examples in both articles show, analytic events can have different payloads of information that they’re going to capture. While you can use many different approaches to solve this problem, I believe creating a deep inheritance hierarchy is the best solution.

Using reference types (class in Swift) with an inheritance hierarchy yields all the upsides of the other solutions. Like John’s solution of enums, they can store data on a per-event basis. Like Matt’s solution, you can create new analytics events across module boundaries. And like Dave’s solution, you can build a hierarchy of categorization for your analytics events.

However, in addition to all these benefits, subclassing brings a few extra benefits the other solutions don’t have. Subclassing allows you to store new information with each layer of your hierarchy. Let’s take a look at the metadata specifically. If you have a base class called AnalyticsEvent, a subclass for NetworkEvent, a subclass for NetworkErrorEvent, and a subclass for NoInternetNetworkErrorEvent, each subclass can bring its own components to the metadata. For example:

open class AnalyticsEvent {

	var name: String {
		fatalError("name must be provided by a subclass")
	}

	var metadata: [String: Any] {
	    // `version` is assumed to be defined elsewhere, e.g. read from the app's bundle
	    return ["AppVersion": version]
	}
	
}

open class NetworkEvent: AnalyticsEvent {

	var urlSessionConfiguration: URLSessionConfiguration = .default
	
	override var metadata: [String: Any] {
	    // `userAgent` is assumed to be derived elsewhere from the session configuration
	    return super
	    	.metadata
	    	.merging(["UserAgent": userAgent]) { (_, new) in new }
	}
}

open class NetworkErrorEvent: NetworkEvent {
	
	var error: Error
	
	init(error: Error) {
		self.error = error
	}
	
	override var metadata: [String: Any] {
	    // Error has no `code` property; bridge to NSError to read one
	    return super
	    	.metadata
	    	.merging(["ErrorCode": (error as NSError).code]) { (_, new) in new }
	}
}

open class NoInternetNetworkErrorEvent: NetworkErrorEvent {

	override var name: String {
		return "NoInternetNetworkErrorEvent"
	}
	
	override var metadata: [String: Any] {
	    return super
	    	.metadata
	    	.merging(["Message": "No Internet"]) { (_, new) in new }
	}
}

As you can see, this reduces duplication between various types of analytics events. Each layer refers to the layers above it, in an almost recursive style.

I hope this article has convinced you to try subclassing to solve your next problem. While Swift gives us many different ways to solve a problem, I’m confident that a deep inheritance hierarchy is the solution for this one.

Swift structs can provide mutating and nonmutating versions of their functions. However, the Swift standard library is inconsistent about when it provides one form of a function or the other. Some APIs come in only mutating forms, some come only in nonmutating, and some come in both. Because both types of functions are useful in different situations, I argue that almost all such functions should come in both versions. In this post, we’ll examine when these different forms are useful, look at example APIs that provide one form but not the other, and explore solutions for this problem.

(Quick sidebar here: I’ll be referring to functions that return a new struct with some field changed as nonmutating functions. Unfortunately, Swift also includes a keyword called nonmutating, which is designed for property setters that don’t require the struct to be copied when they’re set. Because I think that nonmutating is the best word to describe functions that return an altered copy of the original, I’ll keep using that language here. Apologies for the confusion.)

Mutating and Nonmutating Forms

These two forms, mutating and nonmutating, are useful in different cases. First, let’s look at when the nonmutating version is more useful.

This code, taken from Beacon’s OAuthSignatureGenerator, uses a long chain of immutable functions to make its code cleaner and more uniform:

public var authorizationHeader: String {
	let headerComponents = authorizationParameters
		.dictionaryByAdding("oauth_signature", value: self.oauthSignature)
		.urlEncodedQueryPairs(using: self.dataEncoding)
		.map({ pair in "\(pair.0)=\"\(pair.1)\"" })
		.sorted()
		.joined(separator: ", ")
	return "OAuth " + headerComponents
}

However, it wouldn’t be possible without an extension adding dictionaryByAdding(_:value:), the nonmutating version of the subscript setter.
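That extension isn’t in the standard library; a minimal version could look like this:

extension Dictionary {
	// A nonmutating version of the subscript setter: copy, mutate, return.
	func dictionaryByAdding(_ key: Key, value: Value) -> [Key: Value] {
		var copy = self
		copy[key] = value
		return copy
	}
}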

Here’s another, slightly more academic example. Inverting a binary tree is more commonly a punchline for bad interviews than practically useful, but the algorithm illustrates this point nicely. To start, let’s define a binary tree as an enum, like so:

indirect enum TreeNode<Element> {
	case children(left: TreeNode<Element>, right: TreeNode<Element>, value: Element)
	case leaf(value: Element)
}

To invert this binary tree, the tree on the left goes on the right, and the tree on the right goes on the left. The recursive form of the nonmutating version is simple and elegant:

extension TreeNode {
	func inverted() -> TreeNode<Element> {
		switch self {
		case let .children(left, right, value):
			return .children(left: right.inverted(), right: left.inverted(), value: value)
		case let .leaf(value: value):
			return .leaf(value: value)
		}
	}
}

Whereas the recursive mutating version contains lots of extra noise and nonsense:

extension TreeNode {
	mutating func invert() {
		switch self {
		case let .children(left, right, value):
			var rightCopy = right
			rightCopy.invert()
			var leftCopy = left
			leftCopy.invert()
			self = .children(left: rightCopy, right: leftCopy, value: value)
		case .leaf(value: _):
			break
		}
	}
}

Mutating functions are also useful, albeit in different contexts. We primarily work in apps that have a lot of state, and sometimes that state is represented by data in structs. When mutating that state, we often want to do it in place. Swift gives us lots of first-class affordances for this: the mutating keyword was added especially for this behavior, and setting any property on a struct acts as a mutating function as well, reassigning the variable to a new copy of the struct with the value changed.

There are concrete examples as well. If you’re animating a CGAffineTransform on a view, that code currently has to look something like this.

UIView.animate(withDuration: 0.25, animations: {
	view.transform = view.transform.rotated(by: .pi)
})

Because the transformation APIs are all nonmutating, you have to manually assign the struct back to the original reference, causing duplicated, ugly code. If there were a mutating rotate(by:) function, then this code would be much cleaner:

UIView.animate(withDuration: 0.25, animations: {
	view.transform.rotate(by: .pi)
})

The APIs aren’t consistent

The primary problem here is that while both mutating functions and nonmutating functions are useful, not all APIs provide versions for both.

Some APIs include only mutating APIs. This is common with collections. Both the APIs for adding items to collections, like Array.append, Dictionary.subscript, Set.insert, and APIs for removing items from collections, like Array.remove(at:), Dictionary.removeValue(forKey:), and Set.remove, have this issue.

Some APIs include only nonmutating APIs. filter and map are defined on Sequence, and they return arrays, so they can’t be mutating (because Sequence objects can’t necessarily be mutated). However, we could have a mutating filterInPlace function on Array. The aforementioned CGAffineTransform functions also fall in this category. They only include nonmutating versions, which is great for representing a CGAffineTransform as a chain of transformations, but not so great for mutating an existing transform, say, on a view.

Some APIs provide both. I think sorting is a great example of getting this right. Sequence includes sorted(by:), which is a nonmutating function that returns a sorted array, whereas MutableCollection (the first protocol in the Sequence and Collection hierarchy that allows mutation) includes sort(by:). This way, users of the API can choose whether they want a mutating sort or a nonmutating sort, and they’re available in the API where they’re first possible. Array, of course, conforms to MutableCollection and Sequence, so it gets both of them.

Another example of an API that gets this right: Set includes union (nonmutating) and formUnion (mutating). (I could quibble with these names, but I’m happy that both version of the API exist.) Swift 4’s new Dictionary merging APIs also include both merge(_:uniquingKeysWith:) and merging(_:uniquingKeysWith:).
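Having both forms lets the call site pick whichever reads better:

var evens: Set = [2, 4, 6]
let combined = evens.union([1, 3, 5]) // nonmutating: returns a new set
evens.formUnion([1, 3, 5])            // mutating: changes evens in place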

Bridging Between the Two

The interesting thing with this problem is that, because of the way Swift is designed, it’s really easy to bridge from one form to the other. If you have the mutating version of any function:

mutating func mutatingVersion() { ... }

You can synthesize a nonmutating version:

func nonMutatingVersion() -> Self {
	var copy = self
	copy.mutatingVersion()
	return copy
}

And vice versa, if you already have a nonmutating version:

func nonMutatingVersion() -> Self { ... }

mutating func mutatingVersion() {
	self = self.nonMutatingVersion()
}

As long as the type returned by the nonmutating version is the same as Self, this trick works for any API, which is awesome. The only thing you really need is the new name for the alternate version.
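For example, Array.append has no nonmutating counterpart in the standard library, but the same trick produces one (appending(_:) is my name, not an official API):

extension Array {
	func appending(_ newElement: Element) -> [Element] {
		var copy = self
		copy.append(newElement)
		return copy
	}
}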

Leaning on the Compiler

With this, I think it should be possible to have the compiler trivially synthesize one version from the other for us. Imagine something like this:

extension Array {

	@synthesizeNonmutating(appending(_:))
	mutating func append(_ newElement: Element) {
		// ...
	}
	
}

The compiler would use the same trick above — create a copy, mutate the copy, and return it — to synthesize a nonmutating version of this function. You could also have a @synthesizeMutating keyword. Types could of course choose not to use this shorthand, which they might do in instances where there are optimizations for one form or the other. However, tooling like this means that API designers no longer have to guess whether their APIs are likely to be used in mutating or nonmutating ways; they can easily add both forms. Because each form is useful in different contexts, providing both allows the consumer of an API to choose which form they want and when.

Very early this year, I posted about request behaviors. These simple objects can help you factor out small bits of reused UI, persistence, and validation code related to your network. If you haven’t read the post, it lays the groundwork for this post.

Request behaviors are objects that can execute arbitrary code at various points during some request’s execution: before sending, after success, and after failure. They also contain hooks to add any necessary headers and arbitrarily modify the URL request. If you come from the server world, you can think of request behaviors as sort of reverse middleware. It’s a simple pattern, but there are lots of very powerful behaviors that can be built on top of them.

In the original post, I proposed 3 behaviors that, because of their access to some piece of global state, were particularly hard to test: BackgroundTaskBehavior, NetworkActivityIndicatorBehavior, and AuthTokenHeaderBehavior. Those are useful behaviors, but in this post, I want to show a few more, maybe less obvious, behaviors that I’ve used across a few apps.

API Verification

One of the apps where I’ve employed this pattern needs a very special behavior. It relies on receiving 2XX status codes from sync API requests. When a request returns a 200, it assumes that sync request was successfully executed, and it can remove it from the queue.

The problem is that captive portals, like those used at hotels or coffee shops, will sometimes redirect any request to their special login page, which also returns a 200. There are a few ways to handle this, but the solution we opted for was to send a special header that the server would turn around and return completely unmodified. No returned header? Probably a captive portal or some other tomfoolery. To implement this, the server used a very straightforward middleware, and the client needed some code to handle it as well. Perfect for a request behavior.

class APIVerificationBehavior: RequestBehavior {

    let nonce = UUID().uuidString

    var additionalHeaders: [String: String] {
        return ["X-API-Nonce": nonce]
    }

    func afterSuccess(response: AnyResponse) throws {
        // Check that the server echoed our nonce back unmodified.
        guard let returnedNonce = response.httpResponse.allHeaderFields["X-API-Nonce"] as? String,
            returnedNonce == nonce else {
                throw APIError(message: "Sync request intercepted.")
        }
    }
}

It requires a small change: making the afterSuccess method a throwing method. This lets the request behavior check conditions and fail the request if they’re not met, and it’s a straightforward compiler-driven change. Also, because the request behavior architecture is so testable, making changes like this to the network code can be reliably tested, making changes much easier.

OAuth Behavior

Beacon relies on Twitter, which uses OAuth for authentication and user identification. At the time I wrote the code, none of the Swift OAuth libraries worked correctly on Linux, so there was a process of extracting, refactoring, and, for some components, rewriting the libraries to make them work right. While I was working on this, I was hoping to test out one of the reasons I wanted to create request behaviors in the first place: to decouple the authentication protocol (OAuth, in this case) from the data that any given request requires. You should be able to transparently add OAuth to a request without having to modify the request struct or the network client at all.

Extracting the code to generate the OAuth signature was a decent amount of work. Debugging in particular is hard for OAuth, and I recommend this page on Twitter’s API docs which walks you through the whole process and shows you what your data should look like at each step. (I hope this link doesn’t just break when Twitter inevitably changes its documentation format.) I also added tests for each step, so that if anything failed, it would be obvious which steps succeeded and which steps failed.

Once you have something to generate the OAuth signature (called OAuthSignatureGenerator here), the request behavior for adding OAuth to a request turns out to not be so bad.

struct Credentials {
    let key: String
    let secret: String
}

class OAuthRequestBehavior: RequestBehavior  {

    var consumerCredentials: Credentials

    var credentials: Credentials
    
    init(consumerCredentials: Credentials, credentials: Credentials) {
        self.consumerCredentials = consumerCredentials
        self.credentials = credentials
    }
    
    func modify(request: URLRequest) -> URLRequest {
        let generator = OAuthSignatureGenerator(consumerCredentials: consumerCredentials, credentials: credentials, request: request)
        var mutableRequest = request
        mutableRequest.setValue(generator.authorizationHeader, forHTTPHeaderField: "Authorization")
        return mutableRequest
    }
    
}

Using modify(request:) to perform some mutation of the request, we can add the header for the OAuth signature from OAuthSignatureGenerator. Digging into the nitty-gritty of OAuth is out of the scope of this post, but you can find the code for the signature generator here. The only thing of note is that this code relies on Vapor’s SHA1 and base 64 encoding, which you’ll have to swap out for implementations more friendly to your particular environment.

When it’s time to use this behavior, you can create a client specific to the Twitter API, and then you’re good to go:

let twitterClient = NetworkClient(
	configuration: RequestConfiguration(
		baseURLString: "https://api.twitter.com/1.1/",
		defaultRequestBehavior: oAuthBehavior
	)
)

Persistence

I also explored saving things to Core Data via request behaviors, without having to trouble the code that sends the request with that responsibility. This was another promise of the request behavior pattern: if you could write reusable and parameterizable behaviors for saving things to Core Data, you could cut down on a lot of boilerplate.

However, when implementing this, we ran into a wrinkle. Each request needs to finish saving to Core Data before the request’s promise is fulfilled. However, the current afterSuccess(result:) and afterFailure(error:) methods are synchronous and called on the main thread. Saving lots of data to Core Data can take seconds, and the UI can’t be locked up for that long. We need to change these methods to allow asynchronous work. If we define a function that takes a Promise and returns a Promise, we can completely subsume all three methods: beforeSend, afterSuccess, and afterFailure.

func handleRequest(promise: Promise<AnyResponse>) -> Promise<AnyResponse> {
	// before request
	return promise
		.then({ response in
			// after success
		})
		.catch({ error in
			// after failure
		})
}

Now, we can do asynchronous work when the request succeeds or fails, and we can also cause a succeeding request to fail if some condition isn’t met (like in the first example in this post) by throwing from the then block.

Core Data architecture varies greatly from app to app, and I’m not here to prescribe any particular pattern. In this case, we have a foreground context (for reading) and a background context (for writing). We wanted to simplify the creation of a Core Data request behavior; all you should have to provide is a context and a method that will be performed by that context. Building a protocol around that, we ended up with something like this:

protocol CoreDataRequestBehavior: RequestBehavior {

	var context: NSManagedObjectContext { get }
	
	func performBeforeSave(in context: NSManagedObjectContext, withResponse response: AnyResponse) throws
	
}

And that protocol is extended to handle all of the boilerplate mapping to and from the Promise:

extension CoreDataRequestBehavior {

	func handleRequest(promise: Promise<AnyResponse>) -> Promise<AnyResponse> {
		return promise.then({ response in
			return Promise<AnyResponse>(work: { fulfill, reject in
				self.context.perform({
					do {
						try self.performBeforeSave(in: self.context, withResponse: response)
						try self.context.save()
						fulfill(response)
					} catch let error {
						reject(error)
					}
				})
			})
		})
	}

}

Creating a type that conforms to CoreDataRequestBehavior means that you provide a context and a function to modify that context before saving; that function will be called on the right thread and will delay the completion of the request until the work in the Core Data context is complete. As an added bonus, performBeforeSave is a throwing function, so it’ll handle errors for you by failing the request.

On top of CoreDataRequestBehavior you can build more complex behaviors, such as a behavior that is parameterized on a managed object type and can save an array of objects of that type to Core Data.
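Here’s a sketch of what such a behavior could look like. Everything in it other than CoreDataRequestBehavior, including how models are pulled out of the response, is an assumption:

import CoreData

struct SaveModelsBehavior<Model>: CoreDataRequestBehavior {

	let context: NSManagedObjectContext

	// Pulls the models of interest out of the response; supplied by the caller.
	let models: (AnyResponse) throws -> [Model]

	// Imports a single model into the context; supplied by the caller.
	let importModel: (Model, NSManagedObjectContext) throws -> Void

	func performBeforeSave(in context: NSManagedObjectContext, withResponse response: AnyResponse) throws {
		for model in try models(response) {
			try importModel(model, context)
		}
	}
}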

Request behaviors provide the hooks to attach complex behavior to a request. Any side effect that needs to happen during a network request is a great candidate for a request behavior. (If these side effects occur for more than one request, all the better.) These three examples highlight more advanced usage of request behaviors.

The model layer of a client is a tough nut to crack. Because it’s not the canonical representation of the data (that representation lives on the server), the data must live in a fundamentally transient form. The version of the data that the app has is essentially a cache of what lives on the network (whether that cache is in memory or on disk), and if there’s anything that’s true about caches, it’s that they will always end up with stale data. This truth is multiplied by the number of caches you have in your app. Reducing the number of cached versions of any given object decreases the likelihood that it will be out of date.

Core Data has an internal feature that ensures that there is never more than one instance of an object for a given identifier (within the same managed object context). They call this feature “uniquing”, but it is known more broadly as the identity map pattern. I want to steal it without adopting the rest of Core Data’s baggage.

I think of this concept as a “flat cache”. A flat cache is basically just a big dictionary. The keys are a composite of an object’s type name and the object’s ID, and the value is the object. A flat cache normalizes the data in it, like a relational database, and all object-object relationships go through the flat cache. A flat cache confers several interesting benefits.

  • Normalizing the data means that you’ll use less bandwidth while the data is in-flight, and less memory while the data is at rest.
  • Because data is normalized, modifying or updating a resource in one place modifies it everywhere.
  • Because relationships to other entities go through the flat cache, back references with structs are now possible. Back references with classes don’t have to be weak.
  • With a flat cache of structs, any mutation deep in a nested struct only requires the object in question to change, instead of the object and all of its parents.

In this post, we’ll discuss how to make this pattern work in Swift. First, you’ll need the composite key.

struct FlatCacheKey: Equatable, Hashable {

    let typeName: String
    let id: String
    
    static func == (lhs: FlatCacheKey, rhs: FlatCacheKey) -> Bool {
        return lhs.typeName == rhs.typeName && lhs.id == rhs.id
    }
    
    var hashValue: Int {
        return typeName.hashValue ^ id.hashValue
    }
}

We can use a protocol to make the generation of flat cache keys easier:

protocol Identifiable {
    var id: String { get }	
}

protocol Cachable: Identifiable { }

extension Cachable {
    static var typeName: String {
        return String(describing: self)
    }
    
    var flatCacheKey: FlatCacheKey {
        return FlatCacheKey(typeName: Self.typeName, id: id)
    }
}

All of our model objects already have IDs so they can trivially conform to Cachable and Identifiable.

Next, let’s get to the basic flat cache.

class FlatCache {
    
    static let shared = FlatCache()
    
    private var storage: [FlatCacheKey: Any] = [:]
    
    func set<T: Cachable>(value: T) {
        storage[value.flatCacheKey] = value
    }
    
    func get<T: Cachable>(id: String) -> T? {
        let key = FlatCacheKey(typeName: T.typeName, id: id)
        return storage[key] as? T
    }
    
    func clearCache() {
        storage = [:]
    }
    
}

Not too much to say here, just a private dictionary and get and set methods. Notably, the set method only takes one parameter, since the key for the flat cache can be derived from the value.

Here’s where the interesting stuff happens. Let’s say you have a Post with an Author. Typically, the Author would be a child of the Post:

struct Author {
	let name: String
}

struct Post {
	let author: Author
	let content: String
}

However, if you wanted back references (so that you could get all the posts by an author, let’s say), this isn’t possible with value types. This kind of relationship would cause a reference cycle, which can never happen with Swift structs. If you gave the Author a list of Post objects, each Post would be a full copy, including the Author which would have to include the author’s posts, et cetera. You could switch to classes, but the back reference would cause a retain cycle, so the reference would need to be weak. You’d have to manage these weak relationships back to the parent manually.

Neither of these solutions is ideal. A flat cache treats relationships a little differently. With a flat cache, each relationship is fetched from the centralized identity map. In this case, the Post has an authorID and the Author would have a list of postIDs:

struct Author: Identifiable, Cachable {
	let id: String
	let name: String
	let postIDs: [String]
}

struct Post: Identifiable, Cachable {
	let id: String
	let authorID: String
	let content: String
}

Now, you still have to do some work to fetch the object itself. To get the author for a post, you would write something like:

FlatCache.shared.get(id: post.authorID) as Author?

You could put this into an extension on the Post to make it a little cleaner:

extension Post {
	var author: Author? {
		return FlatCache.shared.get(id: authorID)
	}
}

But this is pretty painful to do for every single relationship in your model layer. Fortunately, it’s something that can be generated! By adding an annotation to the ID property, you can tell a tool like Sourcery to generate the computed accessors for you. I won’t belabor the explanation of the template, but you can find it here. If you have trouble reading it or understanding how it works, you can read the Sourcery in Practice post.

It will let you write Swift code like this:

struct Author: Identifiable, Cachable {
	let id: String
	let name: String
	// sourcery: relationshipType = Post
	let postIDs: [String]
}

struct Post: Identifiable, Cachable {
	let id: String
	// sourcery: relationshipType = Author
	let authorID: String
	let content: String
}

Which will generate a file that looks like this:

// Generated using Sourcery 0.8.0 — https://github.com/krzysztofzablocki/Sourcery
// DO NOT EDIT
	
extension Author {
	var posts: [Post] {
		return postIDs.flatMap({ id -> Post? in
			return FlatCache.shared.get(id: id)
		})
	}
}

extension Post {	
	var author: Author? {
		return FlatCache.shared.get(id: authorID)
	}	
}

This is the bulk of the pattern. However, there are a few considerations to examine.

JSON

Building this structure from a tree of nested JSON is messy and tough. The system works a lot better if the structure of the JSON mirrors the structure of the flat cache: all the objects exist in a big dictionary at the top level (one key for each type of object), and the relationships are defined by IDs. When a new JSON payload comes in, you can iterate over this top level, create all your local objects, and store them in the flat cache. Inform the requester of the JSON that the new objects have been downloaded, and then it can fetch relevant objects directly from the flat cache. The ideal structure of the JSON looks a lot like JSON API, although I'm not surpassingly familiar with JSON API.
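
As a sketch, that hydration step might look something like this, assuming the payload has already been deserialized into a [String: Any] dictionary and that the models have failable init(json:) initializers (both of those are assumptions for illustration, not code from the app):

func hydrate(from topLevel: [String: Any]) {
    // One key per type of object at the top level, e.g. "authors" and "posts".
    // Author(json:) and Post(json:) are hypothetical failable initializers.
    for json in topLevel["authors"] as? [[String: Any]] ?? [] {
        if let author = Author(json: json) {
            FlatCache.shared.set(value: author)
        }
    }
    for json in topLevel["posts"] as? [[String: Any]] ?? [] {
        if let post = Post(json: json) {
            FlatCache.shared.set(value: post)
        }
    }
    // Once everything is stored, inform the requester that the cache is hydrated.
}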

Missing Values

One big difference between managing relationships directly and managing them through the flat cache is that with the flat cache, there is a (small) chance that the relationship won’t be there. This might happen because of a bug on the server side, or it might happen because of a consistency error when mutating the data in the flat cache (we’ll discuss mutation more in a moment). There are a few ways to handle this:

  • Return an Optional. What we chose to do for this app is return an optional. There are a lot of ways of handling missing values with optionals, including optional chaining, force-unwrapping, if let, and flatmapping, so it isn’t too painful to have to deal with an optional, and there aren’t any seriously deleterious effects to your app if a value is missing.
  • Force unwrap. You could choose to force-unwrap the relationship. That’s putting a lot of trust in the source of the data (JSON in our case). If a relationship is missing because of a bug on your server, your app will crash. This is really bad, but on the bright side, you’ll get a crash report for missing relationships, and you can fix it on the server-side quickly.
  • Return a Promise. While a promise is the most complex of these three solutions to deal with at the call site, the benefit is that if the relationship doesn’t exist, you can fetch it fresh from the server and fulfill the promise a few seconds later.

Each choice has its downsides. However, one benefit to code generation is that you can support more than one option. You can synthesize both a promise and an optional getter for each relationship, and use whichever one you want at the call site.

Mutability

So far I’ve only really discussed immutable relationships and read-only data. The app where we’re using the pattern has entirely immutable data in its flat cache. All the data comes down in one giant blob of JSON, and then the flat cache is fully hydrated. We never write back up to the server, and all user-generated data is stored locally on the device.

If you want the same pattern to work with mutable data, a few things change, depending on if your model objects are classes or structs.

If they’re classes, they’ll need to be thread safe, since each instance will be shared across the whole app. If they’re classes, mutating any one reference to an entity will mutate them all, since they’re all the same instance.

If they’re structs, your flat cache will need to be thread safe. The Sourcery template will have to synthesize a setter as well as a getter. In addition, anything long-lived (like a VC) that relies on the data being updated regularly should draw its value directly from the flat cache.

final class PostVC {
    let cache = FlatCache.shared

    var postID: String

    init(postID: String) {
        self.postID = postID
    }

    var post: Post? {
        get {
            return cache.get(id: postID)
        }
        set {
            guard let newValue = newValue else { return }
            cache.set(value: newValue)
        }
    }
}
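
And here's that thread-safety sketch. It funnels all reads and writes through a private serial queue; a concurrent queue with barrier writes, or a lock, would work just as well. The ThreadSafeFlatCache name is mine, for illustration only:

import Foundation

final class ThreadSafeFlatCache {

    static let shared = ThreadSafeFlatCache()

    private var storage: [FlatCacheKey: Any] = [:]

    // Every read and write goes through this one queue, so they can never race.
    private let queue = DispatchQueue(label: "FlatCache.storage")

    func set<T: Cachable>(value: T) {
        queue.sync { storage[value.flatCacheKey] = value }
    }

    func get<T: Cachable>(id: String) -> T? {
        let key = FlatCacheKey(typeName: T.typeName, id: id)
        return queue.sync { storage[key] as? T }
    }
}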

To make this property pattern even better, we can pull a page from Chris Eidhof’s most recent blog post about struct references, and roll it into a type.

class Cached<T: Cachable> {
	
	var cache = FlatCache.shared
	
	let id: String
	
	init(id: String) {
		self.id = id
	}
	
	var value: T? {
		get {
			return cache.get(id: id)
		}
		set {
			guard let newValue = newValue else { return }
			cache.set(value: newValue)
		}
	}
}

And in your VC, the post property would be replaced with something like:

lazy var cachedPost = Cached<Post>(id: postID)

Lastly, if your system has mutations, you need a way to inform objects if a mutation occurs. NSNotificationCenter could be a good system for this, or some kind of reactive implementation where you filter out irrelevant notifications and subscribe only to the ones you care about (a specific post with a given post ID, for example).
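
Here's a rough sketch of the notification route. The notification name, the userInfo key, and the update(value:) helper are all invented for illustration:

import Foundation

extension Notification.Name {
    // Hypothetical notification posted whenever an entity in the cache changes.
    static let flatCacheDidUpdate = Notification.Name("FlatCacheDidUpdate")
}

extension FlatCache {
    // A setter that also announces the change to any interested observers.
    func update<T: Cachable>(value: T) {
        set(value: value)
        NotificationCenter.default.post(
            name: .flatCacheDidUpdate,
            object: self,
            userInfo: ["key": value.flatCacheKey])
    }
}

// An observer filters out irrelevant updates by inspecting the key:
let interestingPostID = "some-post-id"
let token = NotificationCenter.default.addObserver(
    forName: .flatCacheDidUpdate, object: nil, queue: .main) { note in
        guard let key = note.userInfo?["key"] as? FlatCacheKey,
            key.id == interestingPostID else { return }
        // re-fetch the post from the cache and refresh the UI here
}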

Singletons and Testing

This pattern relies on a singleton to fetch the relationships, which has a chance of hampering testability. There are a few different options to handle this, including injecting the flat cache into the various objects that use it, or having the flat cache be a weak var property on each model object instead of a static let. In the latter case, any flat cache would be responsible for ensuring the integrity of its child objects’ references back to it. This is definitely added complexity, and it’s a tradeoff that comes along with this pattern.
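
As a sketch of the injection route, the Cached type from above needs only one extra init parameter (InjectableCached is my name for this variant):

final class InjectableCached<T: Cachable> {

    let cache: FlatCache
    let id: String

    // Tests can pass in a fresh, isolated FlatCache; production code
    // keeps the terse call site via the default argument.
    init(id: String, cache: FlatCache = .shared) {
        self.id = id
        self.cache = cache
    }

    var value: T? {
        get { return cache.get(id: id) }
        set {
            guard let newValue = newValue else { return }
            cache.set(value: newValue)
        }
    }
}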

References and other systems

  • Redux recommends a pattern similar to this. They have a post describing how to structure your data called Normalizing State Shape.
  • Relay’s documentation discusses it a bit in this post as well:

    The solution to caching GraphQL is to normalize the hierarchical response into a flat collection of records. Relay implements this cache as a map from IDs to records. Each record is a map from field names to field values. Records may also link to other records (allowing it to describe a cyclic graph), and these links are stored as a special value type that references back into the top-level map. With this approach each server record is stored once regardless of how it is fetched.

Vapor’s JSON handling leaves something to be desired. Pulling something out of the request’s JSON body looks like this:

var numberOfSpots: Int?

init(request: Request) {
	self.numberOfSpots = request.json?["numberOfSpots"]?.int
}

There’s a lot of things I don’t like about this code.

  1. The int hanging off the end is extraneous: the type system already knows self.numberOfSpots is an Int; ideally, I wouldn’t have to tell it twice.
  2. The optional situation is out of control. The json property might not exist on the request, the “numberOfSpots” key might not exist in the JSON, and the value might not be an Int. Each of those branches is represented by an Optional, which is flattened using the optional chaining operator. At the end of the expression, if the value is nil, there’s no way to know which one of the components failed.
  3. At the end of the ridiculous optional chain, the resulting numberOfSpots value must be optional. If I need the numberOfSpots property to be required, I need to add an extra variable and a guard.

     guard let numberOfSpots = request.json?["numberOfSpots"]?.int else {
         throw Abort(status: .badRequest, reason: "Missing 'numberOfSpots'.")
     }
     self.numberOfSpots = numberOfSpots
    

    Needless to say, this is bad code made worse. The body of the error doesn’t contain any information besides the key “numberOfSpots”, so there’s a little more duplication there, and that error isn’t even accurate in many cases. If the json property of the request is nil, that means that either the Content-Type header was wrong or that the JSON failed to parse, neither of which is communicated by the message “Missing ‘numberOfSpots’.” If the “numberOfSpots” key was present, but stored a string (instead of an int), the .int conversion would fail and return nil, and the error message would be equally useless.

Probably more than half of the requests in the Beacon API have JSON bodies to parse and dig values out of, so this is an important thing to get right.

The broad approach here is to follow the general model for how we parse JSON on the client. We can use type inference to deal with the extraneous conversions, and use errors instead of optionals.

Let’s look at the errors first. We’ve discussed three possible errors: missing JSON, missing keys, and mismatched types. Perfect for an error enum:

enum JSONError: AbortError {

	var status: Status {
		return .badRequest
	}
	
	case missingJSON
	case missingKey(keyName: String)
	case mismatchedType(keyName: String, expectedType: String, actualType: String)
	
	var reason: String {
		switch self {
		case .missingJSON:
			return "The endpoint requires a JSON body and a \"Content-Type\" of \"application/json\"."
		case let .missingKey(keyName):
			return "This endpoint expects the JSON key '\(keyName)', but it wasn't present."
		case let .mismatchedType(keyName, expectedType, actualType):
			return "This endpoint expects the JSON key '\(keyName)'. It was present, but did not have the expected type '\(expectedType)'. It had type '\(actualType)'."
		}
	}
}

Once the errors have been laid out, we can begin work on the rest of this implementation. Using a similar technique to the one laid out in Decoding JSON in Swift, we can begin to build things up. (I call it NiceJSON because Vapor provides a json property on the Request, and I’d like to not collide with that.)

class NiceJSON {
	let json: JSON?

	public init(json: JSON?) {
		self.json = json
	}

	public func fetch<T>(_ key: String) throws -> T {
		// ...
	}

}

However, here, we run into the next roadblock. I typically store JSON on the client as a [String: Any] dictionary. In Vapor, it’s stored as a StructuredData, an enum whose value is one of many cases: .number, .string, .object, .bool, .date, and so on.

While this is strictly more type-safe (a JSON object can’t store any values that aren’t representable in one of those basic forms — even though .date and .bytes are cases of StructuredData, ignore them for now), it stands in the way of this technique. You need a way to bridge between compile-time types (like T and Int) and run-time types (like knowing to call the computed property .string). One way to handle this is to check the type of T precisely.

public func fetch<T>(_ key: String) throws -> T {

	guard let json = self.json else { throw JSONError.missingJSON }
	
	guard let untypedValue = json[key] else { throw JSONError.missingKey(keyName: key) }
	
	if T.self == Int.self {
		guard let value = untypedValue.int else {
			throw JSONError.mismatchedType(keyName: key, expectedType: "Int", actualType: String(describing: untypedValue))
		}
		return value as! T
	}
	// handle bools, strings, arrays, and objects
}

While this works, it has one quality that I’m not crazy about. When you access the .int computed property, if the case’s associated value isn’t an int but can be coerced into an int, it will be. For example, if the consumer of the API passes the string “5”, it’ll be silently converted into a number. (Strings have it even worse: numbers are converted into strings, boolean values become the strings "true" and "false", and so on. You can see the code that I’m referring to here.)

I want the typing to be a little stricter. If the consumer of the API passes me a number and I want a string, that should be a .mismatchedType error. To accomplish this, we need to destructure Vapor’s JSON into a [String: Any] dictionary. Digging around the vapor/json repo a little, we can find code that lets us do this. It’s unfortunately marked as internal, so you have to copy it into your project.

extension Node {
	var backDoorToRealValues: Any {
		return self.wrapped.backDoorToRealValues
	}
}

extension StructuredData {
	internal var backDoorToRealValues: Any {
		switch self {
		case .array(let values):
			return values.map { $0.backDoorToRealValues }
		case .bool(let value):
			return value
		case .bytes(let bytes):
			return bytes
		case .null:
			return NSNull()
		case .number(let number):
			switch number {
			case .double(let value):
				return value
			case .int(let value):
				return value
			case .uint(let value):
				return value
			}
		case .object(let values):
			var dictionary: [String: Any] = [:]
			for (key, value) in values {
				dictionary[key] = value.backDoorToRealValues
			}
			return dictionary
		case .string(let value):
			return value
		case .date(let value):
			return value
		}
	}
}

The code is pretty boring, but it essentially converts from Vapor’s JSON type to a less well-typed (but easier to work with) object. Now that we have this, we can write our fetch method:

var dictionary: [String: Any]? {
	return json?.wrapped.backDoorToRealValues as? [String: Any]
}

public func fetch<T>(_ key: String) throws -> T {
	guard let dictionary = dictionary else {
		throw JSONError.missingJSON
	}
	
	guard let fetched = dictionary[key] else {
		throw JSONError.missingKey(keyName: key)
	}
	
	guard let typed = fetched as? T else {
		throw JSONError.mismatchedType(keyName: key, expectedType: String(describing: T.self), actualType: String(describing: type(of: fetched)))
	}
	
	return typed
}

It’s pretty straightforward to write implementations of fetchOptional(_:), fetch(_, transformation:), and the other necessary functions. I’ve gone into detail on them in the Decoding JSON in Swift post and in the Parser repo for that post, so I won’t dwell on those implementations here.

For the final piece, we need a way to access our new NiceJSON on a request. For that, I added a computed property to the request:

extension Request {
    public var niceJSON: NiceJSON {
        return NiceJSON(json: json)
    }
}

This version of the code creates a new NiceJSON each time you access the property, which can be optimized a little bit by constructing it once and sticking it in the storage property of the Request.
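
That optimization might look something like this, assuming Vapor 2’s per-request storage dictionary (the "niceJSON" key is arbitrary):

extension Request {
    public var niceJSON: NiceJSON {
        // Return the cached instance if we've already built one for this request.
        if let cached = storage["niceJSON"] as? NiceJSON {
            return cached
        }
        let niceJSON = NiceJSON(json: json)
        storage["niceJSON"] = niceJSON
        return niceJSON
    }
}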

Finally, we can write the nice code that we want at the call site.

var numberOfSpots: Int

init(request: Request) throws {
	self.numberOfSpots = try request.niceJSON.fetch("numberOfSpots")
}

This code provides a non-optional value, has no duplication, and generates descriptive errors.

There’s on last gotcha that I want to go over: numbers in JSON. As of Vapor 2, all JSON numbers are stored as Double values. This means fetching numbers will only work if you fetch them as a Double. This doesn’t appear to be documented anywhere, so I’m not sure how much we should rely on it, but it appears to currently work this way. I think the reason for it is NSNumber subtyping weirdness. On Mac Foundation, numbers in JSON are stored in the NSNumber type, which can be as casted to a Double, Bool, Int, or UInt. Because of the different runtime, that stuff doesn’t work the same way in Linux Foundation, so everything is stored in the least common denominator format, Double, which can (more or less) represent Int and UInt types.

I have a small workaround for this: at the top of the fetch method, add a special case for Int:

public func fetch<T>(_ key: String) throws -> T {
    if T.self == Int.self {
        return try Int(self.fetch(key) as Double) as! T
    }

Bool works correctly without having to be caught in a special way, and, per the note in The Swift Programming Language book, you shouldn’t use UInt regularly in your code anyway. The note is copied below for posterity and as insurance against link rot:

Use UInt only when you specifically need an unsigned integer type with the same size as the platform’s native word size. If this isn’t the case, Int is preferred, even when the values to be stored are known to be nonnegative. A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference.

NiceJSON is a small abstraction that lets you work with JSON in a clean way, without having to litter your code with guards, expectations, and hand-written errors about the presence of various keys in the JSON body.

Sourcery is a code generation tool for Swift. While we’ve talked about code generation on the podcast, I haven’t really talked much about it on this blog. Today, I’d like to go in-depth on a concrete example of how we’re using Sourcery in practice for an app.

The app in question uses structs for its model layer. The app is mostly read-only, and data comes down from JSON, so structs work well. However, we do need to persist the objects so they load faster the next time, and so we need NSCoding conformance.

Swift 4 will bring Codable, a new protocol that supports JSON encoding and decoding, as well as NSCoding. Using Codable with NSKeyedArchiver is a little different than you’re used to, but it basically works. I’ve written up a small code sample here that you can drop into a playground to test. While that will obviate this specific use case for codegen eventually, the technique is still useful in the abstract. The new Codable protocol works by synthesizing encoding and decoding implementations for your types, and until we get access to this machinery directly, Sourcery is the best way to steal this power for ourselves. (Update, August 2018: We have moved away from this specific approach and towards Codable. It’s pretty good.)
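
Roughly, and as a sketch rather than the linked sample, the dance looks like this (the Swift overlay adds Codable-aware methods to the archiver classes):

import Foundation

struct Coordinate: Codable {
    let latitude: Double
    let longitude: Double
}

let coordinate = Coordinate(latitude: 37.33, longitude: -122.03)

// Archiving: encodeEncodable(_:forKey:) accepts any Encodable value.
let archiver = NSKeyedArchiver()
try! archiver.encodeEncodable(coordinate, forKey: NSKeyedArchiveRootObjectKey)
archiver.finishEncoding()
let data = archiver.encodedData

// Unarchiving: decodeDecodable(_:forKey:) is the Decodable counterpart.
let unarchiver = NSKeyedUnarchiver(forReadingWith: data)
let decoded = unarchiver.decodeDecodable(Coordinate.self, forKey: NSKeyedArchiveRootObjectKey)
unarchiver.finishDecoding()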

To persist structs, I’m using the technique that I lay out in this blog post. Essentially, each struct that needs to be encodable will get a corresponding NSObject wrapper that conforms it to NSCoding. If you haven’t read that post, now is a good time. The background in that post is necessary for the approach detailed here.

The technique in that blog post somewhat pedantically subscribes to the single responsibility principle. One type (the struct) stores the data in memory, and another type (the NSObject wrapper) adds the conformance to NSCoding. The downside to this separation is that you have to maintain a second type: if you add a new property to one of your structs, you need to add it to the init(coder:) and encode(with:) methods manually. The upside is that the separate type can be really easily generated.

This is where Sourcery comes in. With Sourcery, you use a templating language called Stencil to define templates. When the app is built, one of the build phases “renders” these templates into actual Swift code that is then compiled into your app. Other blog posts go into detail about how to set Sourcery up, so I won’t go into detail here, except to say that we check the Sourcery binary into git (so that everyone is on the same version) and the generated source files as well. Like CocoaPods, it’s just easier if those files are checked in.

Let’s discuss the actual technique. First, you need a protocol called AutoCodable.

protocol AutoCodable { }

This protocol doesn’t have anything in its definition — it purely serves as a signal to Sourcery that this file should have an encoder generated for it.

In a new file called AutoCodable.stencil, you can enumerate the objects that conform to this protocol.

{% for type in types.implementing.AutoCodable %}
	
// ...

{% endfor %}

Inside this for loop, we have access to a variable called type that has various properties describing the type we’re working on in this iteration of the loop.

Inside the for loop, we can begin generating our code:

class Encodable{{ type.name }}: NSObject, NSCoding {
    
    var {{ type.name | lowerFirstWord }}: {{ type.name }}?
    
    init({{ type.name | lowerFirstWord }}: {{ type.name }}?) {
        self.{{ type.name | lowerFirstWord }} = {{ type.name | lowerFirstWord }}
    }
	 
    required init?(coder decoder: NSCoder) {
        // ...
    }
    	    
    func encode(with encoder: NSCoder) {
        // ...
    }
}

Sourcery’s templates mix Swift code (regular code) with stencil code (meta code, code that writes code). Anything inside the double braces ({{ }}) will be printed out, so

class Encodable{{ type.name }}: NSObject, NSCoding

will output something like

class EncodableCoordinate: NSObject, NSCoding

Stencil and Sourcery also provide useful “filters”, like lowerFirstWord. That filter turns an upper-camel-case identifier into a lower-camel-case identifier. For example, it will convert DogHouse to dogHouse.

Thus, the line

var {{ type.name | lowerFirstWord }}: {{ type.name }}?

converts to

var coordinate: Coordinate?

which is exactly what we are after.

At this point, we can run our app, have the build phase generate our code, and take a look at the AutoCodable.generated.swift file to ensure everything is generating correctly.

Next, let’s take a look at the init(coder:) function that we will have to generate. This is tougher. Let’s lay out the groundwork:

required init?(coder decoder: NSCoder) {
    {% for variable in type.storedVariables %}
    
    // ...
    
    {% endfor %}
    
    {{ type.name | lowerFirstWord }} = {{ type.name }}(
        {% for variable in type.storedVariables %}
        {{ variable.name }}: {{ variable.name }}{% if not forloop.last %},{% endif %}
        {% endfor %}
     )

}

We will loop through all of the variables in order to pull something useful out of the decoder. The last few lines here use an initializer to actually initialize the type from all the variables we will create. It will generate code something like:

coordinate = Coordinate(
    latitude: latitude,
    longitude: longitude
)

This corresponds to the memberwise initializer that Swift provides for structs. While working on this feature, I became worried that the user of this template could become confused if the memberwise initializer disappeared or if they re-implemented it with the parameters in a different order. At the end of the post, we’ll take a look at a second Sourcery template for generating these initializers.

Because our model objects (“encodables”) can contain properties which are also encodables, we have to make sure to convert those to and from their encodable representations. For something like a coordinate, where the only values are two doubles (for latitude and longitude), we don’t need to do much. For more interesting objects, there are three cases of encodables (array, optional, and regular) to handle in a special way, so in total, we have 5 situations to handle: an encodable array, an encodable optional, a regular encodable, a regular optional, and a regular value.

required init?(coder decoder: NSCoder) {
    {% for variable in type.storedVariables %}
    {% if variable.typeName.name|hasPrefix:"[" %} // note, this doesn't support Array<T>, only `[T]`

    // handle arrays

    {% elif variable.isOptional and variable.type.implements.AutoCodable %}

    // handle encodable optionals

    {% elif variable.isOptional and not variable.type.implements.AutoCodable %}
    
    // handle regular optionals
    
    {% elif variable.type.implements.AutoCodable %}

    // handle regular encodables

    {% else %}
    
    // handle regular values

    {% endif %}
    
    {% endfor %}

}

This sets up our branching logic.

Next, let’s look at each of the 5 cases. First, arrays of encodables:

guard let encodable_{{ variable.name }} = decoder.decodeObject(forKey: "{{ variable.name }}") as? [Encodable{{ variable.typeName.name|replace:"[",""|replace:"]","" }}] else { return nil }
let {{ variable.name }} = encodable_{{ variable.name }}.flatMap({ $0.{{ variable.typeName.name|replace:"[",""|replace:"]",""| lowerFirstWord}} })

The first line decodes an array of encodables, and the second line converts the encodables (the NSCoding wrappers) into the actual objects. The code that’s generated looks something like:

guard let encodable_images = decoder.decodeObject(forKey: "images") as? [EncodableImage] else { return nil }
let images = encodable_images.flatMap({ $0.image })

Frankly, the stencil code is hideous. Stencil doesn’t support things like assignment of processed data to new variables, so things like {{ variable.typeName.name|replace:"[",""|replace:"]","" }} (which extracts the element type name from the array’s type name) can’t be factored out. Stencil is designed more for presentation and less for logic, so this omission is understandable; however, it does make the code uglier.

The astute reader will note that I used underscores in a variable name, which is not the typical Swift style. I did this purely out of laziness: I didn’t want to deal with correctly capitalizing the variable name. Ideally, no one will ever look at this code, and it will work transparently in the background.

Next up, optionals. Encodable optionals first:

let encodable_{{ variable.name }} = decoder.decodeObject(forKey: "{{ variable.name }}") as? Encodable{{ variable.unwrappedTypeName }}
let {{ variable.name }} = encodable_{{ variable.name }}?.{{ variable.unwrappedTypeName | lowerFirstWord }}

which generates something like

let encodable_image = decoder.decodeObject(forKey: "image") as? EncodableImage
let image = encodable_image?.image

And regular optionals, for things like numbers:

let {{ variable.name }} = decoder.decodeObject(forKey: "{{ variable.name }}") as? {{ variable.unwrappedTypeName }}

And its generated code:

let imageCount = decoder.decodeObject(forKey: "imageCount") as? Int

Next, regular encodables:

guard let encodable_{{ variable.name }} = decoder.decodeObject(forKey: "{{ variable.name }}") as? Encodable{{ variable.typeName }},
	let {{ variable.name }} = encodable_{{ variable.name }}.{{ variable.typeName | lowerFirstWord }} else { return nil }

which generates:

guard let encodable_image = decoder.decodeObject(forKey: "image") as? EncodableImage,
	let image = encodable_image.image else { return nil }

And finally regular values:

guard let {{ variable.name }} = decoder.decodeObject(forKey: "{{ variable.name }}") as? {{ variable.typeName }} else { return nil }

And its generated code:

guard let imageCount = decoder.decodeObject(forKey: "imageCount") as? Int else { return nil }

I won’t go into too much detail on these last few, since they work similarly to the ones above.

Next, let’s quickly look at the encode(coder:) method. Because NSCoding is designed for Objective-C and everything is “optional” in Objective-C, we don’t have to handle optionals any differently. This means the number of cases we have to deal with are minimized.

func encode(with encoder: NSCoder) {
    {% for variable in type.storedVariables %}
    {% if variable.typeName.name|hasPrefix:"[" %}
    
    // array

    {% elif variable.type.implements.AutoCodable %}
    
    // encodable
    
    {% else %}
    
    // normal value
    
    {% endif %}
    {% endfor %}
}

The array handling code looks like this:

let encoded_{{ variable.name }} = {{ type.name | lowerFirstWord }}?.{{ variable.name }}.map({ return Encodable{{ variable.typeName.name|replace:"[",""|replace:"]","" }}({{ variable.typeName.name|replace:"[",""|replace:"]",""| lowerFirstWord }}: $0) })
encoder.encode(encoded_{{ variable.name }}, forKey: "{{ variable.name }}")

The encodable handling code looks like this:

encoder.encode(Encodable{{ variable.unwrappedTypeName }}({{ variable.unwrappedTypeName | lowerFirstWord }}: {{ type.name | lowerFirstWord }}?.{{ variable.name }}), forKey: "{{ variable.name }}")

And the normal properties can be encoded like so:

encoder.encode({{ type.name | lowerFirstWord }}?.{{ variable.name }}, forKey: "{{ variable.name }}")

The stencil code is a bit complex, hard to read, and messy. However, the nice thing is that if you write this repetitive code once, it will generate a lot more repetitive code for you. For example, the whole template is 86 lines, and for all of our models, it generates about 500 lines of boilerplate.

One thing I was surprised to learn is just how robust the code is. We wrote this template in a day back in February. In the intervening 6 months, we haven’t edited this template at all. No modifications have been necessary to support any of the changes we made to our models since then.

The last thing we need is a pair of protocol conformances that show our system how to bridge between the encodables and the structs:

extension Encodable{{ type.name }}: Encoder {
    typealias Value = {{ type.name }}
    
    var value: {{ type.name }}? {
        return {{ type.name | lowerFirstWord }}
    }
}

extension {{ type.name }}: Archivable {
    typealias Encodable = Encodable{{ type.name }}
    
    var encoder: Encodable{{ type.name }} {
        return Encodable{{ type.name }}({{ type.name | lowerFirstWord }}: self)
    }
}

To learn more about the purpose of these extra conformances, you can read the original blog post.

Finally, let’s take a quick look at the Sourcery template for something we could call AutoInitializable. The approach is similiar to what we’ve looked at for AutoCodable, with one additional component. Because of Swift’s initialization rules, it only works with structs (classes in Swift can’t add new initializers in extensions).

{# this template currently only works with structs (no classes) #}
{% for type in types.implementing.AutoInitializable %}

extension {{ type.name }} {
{% if type.kind == "struct" and type.initializers.count == 0 %}
    // no initializer, since there is a free memberwise initializer that Swift gives us
{% else %}
    {{ type.accessLevel }} init({% for variable in type.storedVariables %}{{ variable.name }}: {{ variable.typeName }}{% if variable.annotations.initializerDefault %} = {{ variable.annotations.initializerDefault }}{% endif %}{% if not forloop.last %}, {% endif %}{% endfor %}) {
        {% for variable in type.storedVariables %}
        self.{{ variable.name }} = {{ variable.name }}
        {% endfor %}
    }
{% endif %}
}

{% endfor %}

I won’t belabor this with a line-by-line breakdown, but I will note that it takes advantage of a Sourcery feature called “annotations”. This lets you add additional information to a particular property. In this case, we had certain cases where some properties needed to have initializerDefault values (usually nil), so we were able to add support for that. A Sourcery annotation is declared like so:

// sourcery: initializerDefault = nil
var distance: Distance?

This post lays out one of the more involved uses for code generation. Sourcery gives you the building blocks you need to build complex templates like this one, and it removes the need to maintain onerous boilerplate manually. This particular template code was designed for our needs and may not suit every app. It will ultimately be rendered obsolete by Swift 4’s Codable. However, the example serves as a case study for more complex Sourcery templates. They are flexible without any loss in robustness. I was initially worried that this meta-code would be brittle, breaking frequently, but in practice, these templates haven’t required a single change since they were first written, and they ensure that our encodable representations always stay perfectly up-to-date with any model changes.