There’s quite a bit of conversation on Swift Evolution about the potential future of the Sequence and Collection hierarchy. There are a number of problems that the community is trying to solve, and a variety of approaches have been presented to solve them. Several of these approaches include the ability to create infinite collections, and every time anyone brings this up, someone invariably asks how they would work. Nothing in the type system prevents us from making an infinite collection — it’s purely a semantic constraint — meaning that while we can write infinite collections in Swift today, we’re not supposed to. It violates the assumptions that other people’s code will make about our object. In this post, I’ll take an infinite sequence and demonstrate how we could turn it into an infinite collection.

The Fibonacci numbers are a great example of an infinite sequence. For a quick refresher, each element in that sequence is the sum of the previous two numbers.

Swift has pretty good facilities for defining new sequences quickly. Two useful functions are sequence(state:next:) and sequence(first:next:). Here, we’ll use the state version:

let fib = sequence(state: (0, 1), next: { (state: inout (Int, Int)) -> Int? in
    let next = state.0 + state.1
    state = (state.1, next)
    return next
})

Calling fib.prefix(10) will give you the first 10 Fibonacci numbers.
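
To sanity-check it, we can materialize that prefix into an array and print it:

print(Array(fib.prefix(10)))
// prints [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]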

Note that this is an infinite sequence! If you try to map over this sequence (which the type system fully allows!), your program will just spin infinitely until it runs out of integers or the heat death of the universe, whichever comes first.

To convert this into an infinite collection, we need to think about a few things. First, how will we represent an index to a value? For an array, this is a number that probably represents a memory offset. For a string, this might represent the byte offset where that character starts. It needs to be some value that lets us retrieve the value at that index in constant time, or O(1).

One of the interesting things about iterators is that their current state represents the data that you need to get to the next value in constant time. So any deterministic sequence can be represented as a collection, by using its iterator’s state as an index. Our state before was (Int, Int), so we can start off by using that type for our Index. These are the four requirements we need to satisfy to define a Collection:

struct FibonacciCollection: Collection {

	typealias Index = (Int, Int)
	
	var startIndex: Index {
		return (0, 1)
	}
	
	func index(after previous: Index) -> Index {
		return (previous.1, previous.0 + previous.1)
	}
	
	var endIndex: Index {
		fatalError("This collection has no endIndex, because it is infinite")
	}
	
	subscript(index: Index) -> Int {
		return index.0 + index.1
	}
}

We can use (Int, Int) (our state from the Fibonacci numbers when they were a sequence) as the index, and use (0, 1) as our starting value (just like our sequence).

But there are a few problems with this approach. To iterate over an arbitrary collection, you can imagine constructing a big while loop:

var index = collection.startIndex
while index < collection.endIndex {
	let value = collection[index]
	// use `value` somehow
	index = collection.index(after: index)
}

(This isn’t how we’d use our collection, since we know it’s infinite, but this is how normal collections work.) Two things stand out: first, we need the Index type to be Comparable — so we can’t use a tuple — and second, we need endIndex to be defined so that we can check against it — so we can’t fatalError (ding!).

So we need a new index type: something that’s Comparable and something that’s unreachable. The unreachable value should compute to being “greater” than all the reachable values (since it goes at the “end” of the collection). Our first attack is to model this as a struct of two integers, to solve the comparability problem:

struct FibIndex {
	let first: Int
	let second: Int
}

extension FibIndex: Comparable {
	static func == (lhs: FibIndex, rhs: FibIndex) -> Bool {
		return lhs.first == rhs.first && lhs.second == rhs.second
	}
	
	static func < (lhs: FibIndex, rhs: FibIndex) -> Bool {
		// Compare both fields, so consecutive indices like (1, 1)
		// and (1, 2) are strictly ordered.
		return (lhs.first, lhs.second) < (rhs.first, rhs.second)
	}
}

This solves the comparability problem, but not the unreachability problem. We need a way to represent one more value, a value that’s separate from all the others. Let’s start over, but with an enum this time:

enum FibIndex {
	case reachable(Int, Int)
	case unreachable
}

This could solve both of our problems. Unfortunately, it makes writing == and < pretty messy:

extension FibIndex: Comparable {
	static func == (lhs: FibIndex, rhs: FibIndex) -> Bool {
		switch (lhs, rhs) {
		case (.unreachable, .unreachable):
			return true
		case let (.reachable(l1, l2), .reachable(r1, r2)):
			return l1 == r1 && l2 == r2
		case (.reachable, .unreachable):
			return false
		case (.unreachable, .reachable):
			return false
		}
	}
	
	static func < (lhs: FibIndex, rhs: FibIndex) -> Bool {
		switch (lhs, rhs) {
		case (.unreachable, .unreachable):
			return false
		case let (.reachable(l1, l2), .reachable(r1, r2)):
			return (l1, l2) < (r1, r2)
		case (.reachable, .unreachable):
			return true
		case (.unreachable, .reachable):
			return false
		}
	}
}

(There are some ways to clean this up with better pattern matching, but I’m going with the explicitness of the full patterns here.)

Let’s add some helpers:

extension FibIndex {
	var sum: Int {
		switch self {
		case .unreachable:
			fatalError("Can't get the sum at an unreachable index")
		case let .reachable(first, second): 
			return first + second
		}
	}
	
	var next: FibIndex {
		switch self {
		case .unreachable:
			fatalError("Can't get the next value of an unreachable index")
		case let .reachable(first, second): 
			return .reachable(second, first + second)
		}
	}
}

And now we can complete our FibonacciCollection:

struct FibonacciCollection: Collection {

	typealias Index = FibIndex
	
	var startIndex: Index {
		return .reachable(0, 1)
	}
	
	func index(after previous: Index) -> Index {
		return previous.next
	}
	
	var endIndex: Index {
		return .unreachable
	}
	
	subscript(index: Index) -> Int {
		return index.sum
	}
}

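With all four requirements in place, careful usage works as expected, so long as nothing tries to walk all the way to the end:

let fibCollection = FibonacciCollection()

print(Array(fibCollection.prefix(5))) // [1, 2, 3, 5, 8]
print(fibCollection[.reachable(3, 5)]) // 8
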
This is pretty good! We get all of the methods from the Sequence and Collection protocols for free. Unfortunately, this is still an infinite collection, and just like the infinite sequence, we can’t map over it, or we’ll risk looping forever. There are also things on the Collection protocol that we can’t use — like the count of this collection, which will also spin forever.

There is one more tweak I’d like to show here, which is extracting the Fibonacci-specific parts of this and making a generic InfiniteCollectionIndex. To do this, we’d keep our basic enum’s structure, but instead of the .reachable case having type (Int, Int), we’d need to put a generic placeholder there. How would that look?

Let’s call the inner index type WrappedIndex. We need to make sure that the wrapped Index is comparable:

enum InfiniteCollectionIndex<WrappedIndex: Comparable> {
	case reachable(WrappedIndex)
	case unreachable
}

The Equatable and Comparable implementations stay mostly the same, but instead of comparing integers, they just forward to WrappedIndex:

extension InfiniteCollectionIndex: Comparable {
	static func == (lhs: InfiniteCollectionIndex, rhs: InfiniteCollectionIndex) -> Bool {
		switch (lhs, rhs) {
		case (.unreachable, .unreachable):
			return true
		case let (.reachable(left), .reachable(right)):
			return left == right
		case (.reachable, .unreachable):
			return false
		case (.unreachable, .reachable):
			return false
		}
	}
	
	static func < (lhs: InfiniteCollectionIndex, rhs: InfiniteCollectionIndex) -> Bool {
		switch (lhs, rhs) {
		case (.unreachable, .unreachable):
			return false
		case let (.reachable(left), .reachable(right)):
			return left < right
		case (.reachable, .unreachable):
			return true
		case (.unreachable, .reachable):
			return false
		}
	}
}

And a couple more helpers, because there will be a lot of situations where we want to assume that the .unreachable value is, well, unreachable:

extension InfiniteCollectionIndex {
	func assumingReachable<T>(_ block: (WrappedIndex) -> T) -> T {
		switch self {
		case .unreachable:
			fatalError("You can't operate on an unreachable index")
		case let .reachable(wrapped):
			return block(wrapped)
		}
	}
	
	func makeNextReachableIndex(_ block: (WrappedIndex) -> WrappedIndex) -> InfiniteCollectionIndex {
		return .reachable(assumingReachable(block))
	}
}

assumingReachable takes a block that assumes the value is .reachable, and returns whatever that block returns; if the value is .unreachable, it crashes. makeNextReachableIndex builds on it, advancing to a new wrapped index and re-wrapping the result in .reachable.

We can now use the struct form of FibIndex from above:

struct FibIndex {
	let first: Int
	let second: Int
}

extension FibIndex: Comparable {
	static func == (lhs: FibIndex, rhs: FibIndex) -> Bool {
		return lhs.first == rhs.first && lhs.second == rhs.second
	}
	
	static func < (lhs: FibIndex, rhs: FibIndex) -> Bool {
		return (lhs.first, lhs.second) < (rhs.first, rhs.second)
	}
}

I won’t go into too much detail here, but this allows us to define simple Comparable index types and easily make them into infinite indexes:

struct FibonacciCollection: Collection {

	typealias Index = InfiniteCollectionIndex<FibIndex>
	
	var startIndex: Index {
		return .reachable(FibIndex(first: 0, second: 1))
	}
	
	func index(after previous: Index) -> Index {
		return previous.makeNextReachableIndex({
			FibIndex(first: $0.second, second: $0.first + $0.second)
		})
	}
	
	var endIndex: Index {
		return .unreachable
	}
	
	subscript(index: Index) -> Int {
		return index.assumingReachable({ $0.first + $0.second })
	}
}
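
To show that the Fibonacci-specific parts really are factored out, here’s a second, hypothetical infinite collection built on the same generic index: the natural numbers, with a plain Int as the WrappedIndex.

struct NaturalNumbers: Collection {

	typealias Index = InfiniteCollectionIndex<Int>

	var startIndex: Index {
		return .reachable(0)
	}

	func index(after previous: Index) -> Index {
		return previous.makeNextReachableIndex({ $0 + 1 })
	}

	var endIndex: Index {
		return .unreachable
	}

	subscript(index: Index) -> Int {
		return index.assumingReachable({ $0 })
	}
}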

It’s not clear which way the tides will ultimately shift with Swift’s Collection model, but if and when infinite collections become legal in Swift, this technique will help you define them easily.

Last year I wrote a post about how adding simple optional properties to your classes is the easy thing to do when you want to extend functionality, but can actually subtly harm your codebase in the long run. This post is the spiritual successor to that one.

Let’s say you’re designing the auth flow in your app. You know this flow isn’t going to be simple or linear, so you want to make some very testable code. Your approach to this is to first think through all the screens in that flow and put them in an enum:

enum AuthFlowStep {
    case collectUsernameAndPassword
    case findFriends
    case uploadAvatar
}

Then you put all your complicated logic into a single function that takes the current step and the current state, and spits out the next step of the flow:

func stepAfter(_ currentStep: AuthFlowStep, context: UserState) -> AuthFlowStep

This should be very easy to test. So far, so good.

Except — you’re working through the logic, and you realize that you won’t always be able to return an AuthFlowStep. Once the user has submitted all the data they need to fully auth, you need something that will signal that the flow is over. You’re working right there in the function, and you want to return a special case of the thing that you already have. What do you do? You change the return type to an optional:

func stepAfter(_ currentStep: AuthFlowStep, context: UserState) -> AuthFlowStep?

This solution works. You go to your coordinator and call this function, and start working with it:

func finished(flowStep: AuthFlowStep, state: UserState, from vc: SomeViewController) {
	let nextState = stepAfter(flowStep, context: state)

When we get nextState, it’s optional, so the default move here is to guard it to a non-optional value.

	guard let nextState = stepAfter(flowStep, context: state) else {
		self.parentCoordinator.authFlowFinished(on: self)
		return
	}
	switch nextState {
	case .collectUsernameAndPassword:
		//build and present next view controller

This feels a bit weird, but I remember Olivier’s guide on pattern matching in Swift, and I remember that I can switch on the optional and my enum at the same time:

func finished(flowStep: AuthFlowStep, state: UserState, from viewController: SomeViewController) {
	let nextState = stepAfter(flowStep, context: state) // Optional<AuthFlowStep>
	switch nextState {
	case nil:
		self.parentCoordinator.authFlowFinished(on: self)
	case .collectUsernameAndPassword?:
		//build and present next view controller

That little question mark lets me match an optional case of an enum. It makes for better code, but something still isn’t sitting right. If I’m doing one big switch, why am I trying to unwrap a nested thing? What does nil mean in this context again?

If you paid attention to the title of the post, perhaps you’ve already figured out where this is going. Let’s look at the definition of Optional. Under the hood, it’s just an enum, like AuthFlowStep:

enum Optional<Wrapped> {
	case some(Wrapped)
	case none
}

When you slot an enum into the Optional type, you’re essentially just adding one more case to the Wrapped enum. But we have total control over our AuthFlowStep enum! We can change it and add our new case directly to it.

enum AuthFlowStep {
    case collectUsernameAndPassword
    case findFriends
    case uploadAvatar
    case finished
}

We can now remove the ? from our function’s signature:

func stepAfter(_ currentStep: AuthFlowStep, context: UserState) -> AuthFlowStep

And our switch statement now switches directly over all the cases, with no special handling for a nil case.
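
Sketched out, with hypothetical present… helpers standing in for whatever view controller construction the coordinator does:

func finished(flowStep: AuthFlowStep, state: UserState, from viewController: SomeViewController) {
	switch stepAfter(flowStep, context: state) {
	case .collectUsernameAndPassword:
		presentUsernameAndPassword()
	case .findFriends:
		presentFindFriends()
	case .uploadAvatar:
		presentUploadAvatar()
	case .finished:
		parentCoordinator.authFlowFinished(on: self)
	}
}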

Why is this better? A few reasons:

First, what was once nil is now named. Before, users of this function might not know exactly what it means when the function returns nil. They either have to resort to reading documentation, or worse, reading the code of the function, to understand when nil might be returned.

Second, simplicity is king. No need to have a guard then a switch, or a switch that digs through two layers of enums. One layer of thing, always easy to handle.

Lastly, precision. Having return nil available as a bail-out at any point in a function can be a crutch. The next developer might find themselves in an exceptional situation where they’d like to jump out of the function, and so they drop in a return nil. Except now, nil has two meanings, and you’re not handling one of them correctly.

When you add your own special cases to enums, it’s also worth thinking about the name you’ll use. There are lots of names available, any of which might make sense in your context: .unknown, .none, .finished, .initial, .notFound, .default, .nothing, .unspecified, and so on. (One other thing to note is that if you have a .none case, and you do happen to make that value optional, then whether it will use Optional.none or YourEnum.none is ambiguous.)

I’m using flow states as a general example here, but I think the pattern can be expanded to fit other situations as well — anytime you have an enum and want to wrap it in an optional, it’s worth thinking if there’s another enum case hiding there, begging to be drawn out.

Thanks to Bryan Irace for feedback and the example for this post.

In Apple’s documentation, they suggest you use a pattern called MVC to structure your apps. However, the pattern they describe isn’t true MVC in the original sense of the term. I’ve touched on this point here before, but MVC was a design pattern created for Smalltalk. In Smalltalk’s formulation, each of the three components (model, view, and controller) talked directly to the others. This means that either the view knows how to apply a model to itself, or the model knows how to apply itself to a view.

When we write iOS apps, we consider models and views that talk directly to each other as an anti-pattern. What we call MVC is more accurately described as model-view-adapter. Our “view controllers” (the “adapters”) sit in between the model and view and mediate their interactions. In general, I think this is a good modification to MVC — the model and view not being directly coupled together and instead connected via an intermediary seems like a positive step. However, I will caveat this by saying that I haven’t worked with many systems that don’t maintain this separation.

So, that’s why we have view controllers in iOS. They serve to glue the model and view together. Now, there are downstream problems with this style of coding: code that doesn’t obviously belong in models or views ends up in the view controller, and you end up with gigantic view controllers. I’ve discussed that particular problem on this blog many times, but it’s not exactly what I want to talk about today.


I’ve heard whispers through the grapevine of what’s going on under the hood with UIViewController. I think the longer you’ve been working with UIKit, the more obvious this is, but the UIViewController base class is not pretty. I’ve heard that, in terms of lines of code, it’s on the higher end of 10,000 to 20,000 lines (and this was a few years ago, so they’ve maybe broken past the 20 kloc mark at this point).

When we want the benefits of an object that glues a UIView and a model object (or collection thereof) together, we typically use view controller containment to break the view controller up into smaller pieces and compose them back together.

However, containment can be finicky. It subtly breaks things if you don’t do it right, with no indication of how to fix any issues. Then, when you finally do see your bug, which was probably a misordering of calls to didMove or willMove or whatever, everything magically starts working. In fact, the very presence of willMove and didMove suggests that containment has some invisible internal state that needs to be cleaned up.

I’ve seen this firsthand in two particular situations. First, I’ve seen this issue pop up with putting view controllers in cells. When I first did this, I had a bug in the app where some content in the table view would randomly disappear. This bug went on for months, until I realized I misunderstood the lifecycle of table view cells, and I wasn’t correctly respecting containment. Once I added the correct -addChildViewController calls, everything started working great.

To me, this showed a big thing: a view controller’s view isn’t just a dumb view. It knows that it’s not just a regular view, but that it’s a view controller’s view, and its qualities change in order to accommodate that. In retrospect, it should have been obvious. How does UIViewController know when to call -viewDidLayoutSubviews? The view must be telling it, which means the view has some knowledge of the view controller.

The second case where I’ve more recently run into this is trying to use a view controller’s view as a text field’s inputAccessoryView. Getting this behavior to play nicely with the behavior from messaging apps (like iMessage) of having the textField stick to the bottom was very frustrating. I spent over a day trying to get this to work, with minimal success to show for it. I ended up reverting to a plain view.

I think at a point like that, when you’ve spent over a day wrestling with UIKit, it’s time to ask: is it really worth it to subclass from UIViewController here? Do you really need 20,000 lines of dead weight to make an object that binds a view and a model? Do you really need viewWillAppear and rotation callbacks that badly?


So, what does UIViewController do that we always want?

  • Hold a view.
  • Bind a model to a view.

What does it do that we usually don’t care about?

  • Provide storage for child view controllers.
  • Forward appearance (-viewWillAppear:, etc) and transition coordination to children.
  • Can be presented in container view controllers like UINavigationController.
  • Notify on low memory.
  • Handle status bars.
  • Preserve and restore state.

So, with this knowledge, we now know what to build in order to replace view controllers for the strange edge cases where we don’t necessarily want all their baggage. I like this pattern because it tickles my “just build it yourself” bone and solves real problems quickly, at the exact same time.

There is one open question, which is what to name it. I don’t think it should be named a view controller, because it might be easy to confuse with a UIViewController subclass. We could just call it a regular Controller? I don’t hate this solution (despite any writings in the past) because it serves the same purpose as a controller in iOS’s MVC (binding a view and model together), but there are other options as well: Binder, Binding, Pair, Mediator, Concierge.

The other nice thing about this pattern is how easy it is to build.

class DestinationTextFieldController {

    var destination: Destination?

    weak var delegate: DestinationTextFieldControllerDelegate?

    let textField = UITextField().configure({
        $0.autocorrectionType = .no
        $0.clearButtonMode = .always
    })
}
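
One note: configure isn’t a UIKit API. It’s a small helper that hands a freshly created object to a closure and returns it, enabling inline setup. A minimal sketch of one way to write it, assuming it’s only applied to classes:

protocol Configurable {}

extension Configurable {
	// Hands self to the closure for inline setup, then returns it.
	func configure(_ block: (Self) -> Void) -> Self {
		block(self)
		return self
	}
}

// Conforming NSObject makes configure available on all UIKit classes.
extension NSObject: Configurable {}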

It almost seems like heresy to create an object like this and not subclass UIViewController, but when UIViewController isn’t pulling its weight, it’s gotta go.

You already know how to add functionality to your new object. In this case, the controller ends up being the delegate of the textField, emitting events (and domain metadata) when the text changes, and providing hooks into updating its view (the textField in this case).

extension DestinationTextFieldController {
	var isActive: Bool {
		return self.textField.isFirstResponder
	}

	func update(with destination: Destination) {
		self.destination = destination
		configureView()
	}
	
	private func configureView() {
		textField.text = destination?.descriptionForDisplay
	}
}
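
The delegate protocol isn’t shown in these snippets; a plausible shape, with hypothetical method names, might be:

protocol DestinationTextFieldControllerDelegate: AnyObject {
	func destinationControllerDidBeginEditing(_ controller: DestinationTextFieldController)
	func destinationController(_ controller: DestinationTextFieldController, didUpdate destination: Destination)
}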

There are a few new things you’re responsible for with this new type of controller:

  • you have to make an instance variable to store it
  • you’re responsible for triggering events on it — because it’s not a real view controller, there’s no more -viewDidAppear:
  • you’re not in UIKit anymore, so you can’t directly rely on things like trait collections or safe area insets or the responder chain — you have to pass those things to your controller explicitly

Using this new object isn’t too hard, even though you do have to explicitly store it so it doesn’t deallocate:

class MyViewController: UIViewController, DestinationTextFieldControllerDelegate {

	let destinationViewController = DestinationTextFieldController()
	
	override func viewDidLoad() {
		super.viewDidLoad()
		destinationViewController.delegate = self
		view.addSubview(destinationViewController.textField)
	}
	
	//handle any delegate methods

}

Even if you use this pattern, most of your view controllers will still be view controllers and subclass from UIViewController. However, in those special cases where integrating a view controller causes you hours and hours of pain, this can be a perfect way to simply opt out of the torment that UIKit brings to your life daily.

If you want some extreme bikeshedding on a very simple topic, this is the post for you.

For the folks who have been at any of the recent conferences I’ve presented at, you’ve probably seen my talk, You Deserve Nice Things. The talk is about Apple’s reticence to provide conveniences for its various libraries. It draws a lot on an old post of mine, Categorical, as well as bringing in a lot of new Swift stuff.

One part of the standard library that’s eminently extendable is Sequence and friends. Because of the number of different operations we want to perform on sequences, even the large set of affordances that the standard library does provide (standard stuff like map and filter, as well as more abstruse stuff like partition(by:) and lexicographicallyPrecedes) still doesn’t cover the breadth of operations that are useful in our day-to-day programming lives.

In the talk, I propose a few extensions that I think are useful, including any, all, none, count(where:), eachPair, chunking, and a few others. None of these ideas are original to me. They’re ideas lifted wholesale from other languages, primarily Ruby and its Enumerable module. Enumerable is a useful case study because anything that’s even marginally useful gets added to this module, and Ruby developers reap the benefits.

Swift’s standard library is a bit more conservative, but its ability to extend types and specifically protocols makes the point mostly moot. You can bring your own methods as needed. (In Categorical, I make the case that this is the responsibility of the vendor, but until they’re willing to step up, we’re on our own.)


I have an app where I need to break up a gallery of images into groups based on date. The groups could be based on individual days, but we thought grouping into “sessions” would be better. Each session is defined by starting more than an hour after the last session ended. More simply, if there’s a gap of more than an hour between two photos, that should signal the start of a new session.

Since we’re splitting a Sequence, the first thought is to use something built in: there’s a function called split, which takes a bunch of parameters (maxSplits, omittingEmptySubsequences, and a block called isSeparator to determine if an element defines a split). However, we can already see from the type signature that this function isn’t quite going to do what we want. The isSeparator function yields only one element, which makes it hard to tell if there should be a split between two consecutive elements, which is what we’re actually trying to do. In point of fact, this function consumes the separators, because it’s a more generic version of the String function split, which is useful in its own right.

No, we need something different.


In her post, Erica Sadun has some answers for us: she’s got a function that works sort of like what we want. In this case, the block takes the current element and the current partition, and you can determine if the current element fits in the current partition.

extension Sequence {
    public func partitioned(at predicate: (_ element: Iterator.Element, _ currentPartition: [Iterator.Element]) -> Bool ) -> [[Iterator.Element]] {
        var current: [Iterator.Element] = []
        var results: [[Iterator.Element]] = []
        for element in self {
            guard !current.isEmpty else { current = [element]; continue }
            switch predicate(element, current) {
            case true: results.append(current); current = [element]
            case false: current.append(element)
            }
        }
        guard !current.isEmpty else { return results }
        results.append(current)
        return results
    }
}

I’ve got a few small gripes with this function as implemented:

The name. Partitioning in the standard library currently refers to changing the order of a MutableCollection and returning a pivot where the collection switches from one partition to another. I wouldn’t mind calling it something with split in the name, but as mentioned before, that usually consumes the elements. I think calling this slicing is the best thing to do. Slicing is also a concept that already exists in the standard library (taking a small part of an existing Collection) but in a lot of ways, that’s actually what we’re doing in this case. We’re just generating more than one slice.

The name again. Specifically, the at in partitioned(at:) doesn’t seem exactly right. Since we’re not consuming elements anymore, the new partition has to happen either before the given element, or after it. The function’s name doesn’t tell us which. If we change the function to yield a consecutive pair of elements to the block, we can change its name to slice(between:).

This has the added benefit of no longer requiring a force unwrap in usage. Erica’s whole problem with this code is that passing back a collection that is known to be non-empty (*COUGH*) means that she has to force unwrap access to the last element:

testArray.partitioned(at: { $0 != $1.last! })

If we change the function to slice(between:), we can simplify this way down, to:

testArray.slice(between: !=)

Way nicer.

Now that we’ve beaten the name into the ground, let’s move on to the implementation. Specifically, there is some duplication in the code that I find problematic. Two expressions in particular:

First,

current = [element]

Second,

results.append(current)

This code may not look like much — only a few characters, you protest! — but it represents a duplication in concept. If these expressions needed to expand to do more, they’d have to be expanded in two places. This was excruciatingly highlighted when I ported this code over to my gallery app (which is in Objective-C), where I wanted to wrap the image collections in a class called a PhotoSection. Creating the PhotoSection in two places made it painfully obvious that this algorithm duplicates code.

This code is ugly and frustrating: the important part of the code, the actual logic of what’s happening, is tucked away in the middle there.

[photosForCurrentSection.lastObject.date timeIntervalSinceDate:photo.date] > 60*60

The entire rest of the code could be abstracted away. Part of this is Objective-C’s fault, but this version of the code really does highlight the duplication that happens when modifying the code that creates a new section.


The last piece of the puzzle here is how Ruby’s Enumerable comes into play. When exploring that module, you can see three functions that look like they could be related: slice_when, slice_before, and slice_after. What is the relationship between these functions?

It seems obvious now that I know, but I did have to do some searching around for blog posts to explain it. Essentially, slice_when is like our slice(between:). It gives you two elements, and you tell it if there should be a division there. slice_before and slice_after each yield only one element to their block, and slice before or after that element. This clears up the confusion with Erica’s original function — we now know whether it’ll slice before or after our element, based on the name of the function.


As for implementation, you could use a for loop and the slight duplication of Erica’s code, but I’ve recently been trying to express more code in terms of chains of functions that express a whole operation that happens to the collection all at once. I find this kind of code more elegant, easier to read, and less prone to error, even though it does pass over the relevant collection multiple times.

To write our slice(between:) function in that style, we first need two helpers:

extension Collection {
	func eachPair() -> Zip2Sequence<Self, Self.SubSequence> {
		return zip(self, self.dropFirst())
	}

	func indexed() -> [(index: Index, element: Element)] {
		return zip(indices, self).map({ (index: $0, element: $1) })
	}
}

I’m a big fan of composing these operations together, from smaller, simpler concepts into bigger, more complex ones. There are slightly more performant ways to write both of those functions, but these simple implementations will do for now.

extension Collection {
	func slice(between predicate: (Element, Element) -> Bool) -> [SubSequence] {
		let innerSlicingPoints = self
			.indexed()
			.eachPair()
			.map({ (indexAfter: $0.1.index, shouldSlice: predicate($0.0.element, $0.1.element)) })
			.filter({ $0.shouldSlice })
			.map({ $0.indexAfter })

		let slicingPoints = [self.startIndex] + innerSlicingPoints + [self.endIndex]

		return slicingPoints
			.eachPair()
			.map({ self[$0.0..<$0.1] })
	}
}

This function is broken up into 3 main sections:

  1. Find the indexes after any point where the caller wants to divide the sequence.
  2. Add the startIndex and the endIndex.
  3. Create a slice for each consecutive pair of those indexes.

Again, this isn’t the most performant implementation. Namely, it does quite a few iterations, and it requires a Collection where Erica’s solution required just a Sequence. But I do think it’s easier to read and understand. As I discussed in the post about refactoring, distilling an algorithm into straightforward, simple forms, operating at the highest possible level of abstraction, enables you to see the structure of the algorithm and the potential transforms it can undergo to create other algorithms.

Erica’s original example is a lot simpler now:

let groups = [1, 1, 2, 3, 3, 2, 3, 3, 4].slice(between: !=)

To close this post out, the implementations for slice(before:) and slice(after:) practically fall out of the ether, now that we have slice(between:):

extension Collection {
	func slice(before predicate: (Element) -> Bool) -> [SubSequence] {
		return self.slice(between: { left, right in
			return predicate(right)
		})
	}

	func slice(after predicate: (Element) -> Bool) -> [SubSequence] {
		return self.slice(between: { left, right in
			return predicate(left)
		})
	}
}

This enables weird things to become easy, like splitting a CamelCased type name:

let components = "GalleryDataSource"
	.slice(before: { ("A"..."Z").contains($0) })
	.map({ String($0) })

(Don’t try it with a type name with an initialism in it, like URLSession.)

A few months ago, Alex Cox put out a call on Twitter asking for someone to help her find a backpack that would suit her needs. She’d tried a bunch, and found all of them lacking.

I have a small obsession with bags, suitcases, and backpacks. For probably 15 years now, I’ve been hunting for “the perfect bag”. Along this journey, I’ve attained bag enlightenment, and I’m here to impart that knowledge to you today.

Here’s the deal. You won’t find a perfect bag. This is for one simple reason: the perfect bag doesn’t exist. I know this because, at the time of writing, I own 4 rolling suitcases, 3 messenger bags, 3 daily carry backpacks, 2 travel backpacks, and 3 duffel bags/weekenders. They’re all stuffed inside each other, stacked in a small closet in my apartment, my Russian nesting travel bags.

And every last one is a compromise in some way or another.

These compromises are very apparent if you troll Kickstarter for travel backpacks. You’ll find tons: the Minaal, NOMATIC, PAKT One, Brevitē, Hexad, Allpa, RuitBag, Numi, and the aptly named DoucheBags. Their videos all start exactly the same: “We looked at every bag on the market, and found them lacking. We then spent 2 years designing our bag and sourcing the highest quality zippers, and we’re finally ready to start producing this bag.” They then go through the 5 features that they think makes their bag different from the rest, talk about a manufacturer they’ve found, and then they hit you with the “and that’s where you come in” and ask you for money.

Once you watch the first of these videos, the argument is somewhat compelling. Wow, they’ve really figured it out! But once you’ve seen five of them, you know there must be something else going on. How could all these different bag startups claim that each other’s products are clearly inferior and that only they have discovered the secret to bag excellence?

To understand what makes bags so unsatisfying, here’s a quote from Joel Spolsky:

When you design a trash can for the corner, you have to make choices between conflicting requirements. It needs to be heavy so it won’t blow away. It needs to be light so the trash collector can dump it out. It needs to be large so it can hold a lot of trash. It needs to be small so it doesn’t get in peoples’ way on the sidewalk.

It’s the same with bags. You need a big bag so that it can fit all your stuff, but you need a small bag to avoid hitting people on public transit. You need a bag with wheels so you can roll it through the airport, but you need a bag with backpack straps so that you can get over rougher terrain with it. You need a bag that’s soft so you can squish it into places, but you need a bag that’s hard so that it can protect what’s inside.

These competing requirements can’t be simultaneously satisfied. You have to choose one or the other. You can’t have it both ways.

In Buddhism, they say “there is no path, there are only paths.” It’s the same with bags. Once you accept that all bags are some form of compromise, you can get a few different bags and use the right one for the right trip.

Sometimes I want to look nice and need to carry a laptop: leather messenger bag. Sometimes I want to have lots of space and flexibility even if I look kind of dorky: my high school backpack. Sometimes I have a bunch of camera gear and I’m going to be gone for 3 weeks: checked rolling suitcase. Picking the right bag for the trip lets me graduate from “man, I wish I’d brought a bigger bag” to “bringing a bigger bag would have been great, but then I wouldn’t have been able to jump on this scooter to the airport”. Less regret, more acceptance.

It’s worth thinking about how you like to travel, what types of trips you take, what qualities you value having in your bags, and getting a few bags that meet those requirements. Of the 14 bags I mentioned earlier, I think I’ve used 12 of them this year for different trips/events.

Finding the right bags takes time. I’ve had the oldest of my 14 bags for some 15 years, and got the newest of them in the last year. Some bags of note:

  • Kathmandu Shuttle 40L — This is my current favorite travel backpack. It’s pretty much one giant pocket, so doesn’t prescribe much in terms of how you should pack, and that flexibility is really nice. Kathmandu is an Australian company, so getting their stuff can be kind of tough, but I’m pretty happy with my pack.
  • I also really like the Osprey Stratos 34. It’s a bit on the small side, but a good addition to a rolling suitcase for a longer trip. It’s great for day hikes and has excellent ventilation.
  • Packable day packs: This backpack and this duffel bag are great. They add very little extra weight to your stuff, and they allow you to expand a little bit when you get wherever you’re going.
  • Wirecutter’s recommendation for travel backpacks at the time of writing is the Osprey Farpoint 55. It used to be the Tortuga Outbreaker, which I think is absolutely hideous.

Accept that no one bag will work in every situation. The truth will set you free.

Over the course of this week, some bloggers have written about a problem — analytics events — and proposed multiple solutions to it: enums, structs, and protocols. I also chimed in with a cheeky post about using inheritance and subclassing to solve this problem. I happened to have a post in my drafts about a few problems very similar to the analytics events that John proposed, and I think there’s no better time to post it than now. Without further ado:

If a Swift programmer wants to bring two values together, like an Int and a String, they have two options. They can either use a “product type”, the construction of which requires you to have both values; or they can use a “sum type”, the construction of which requires you to have one value or the other.

Swift is bountiful, however, and has 3 ways to express a product type and 3 ways to express a sum type. While plenty of ink has been spilled about when to use a struct, a class, or a tuple, there isn’t as much guidance on how to choose between an enum, a protocol, or a subclass. I personally haven’t found subclassing all that useful in Swift, as my post from earlier today implies, since protocols and enums are so powerful. In my year and a half writing Swift, I haven’t written an intentionally subclassable thing. So, for the purposes of this discussion, I’ll broadly ignore subclassing.

Enums and protocols vary in a few key ways:

  • Completeness. Every case for an enum has to be declared when you declare the enum itself. You can’t add more cases in an extension or in a different module. On the other hand, protocols allow you to add new conformances anywhere in the codebase, even adding them across module boundaries. If that kind of flexibility is required, you have to use protocols.
  • Destructuring. To get data out of an enum, you have to pattern match. This requires either a switch or an if case statement. These are a bit unwieldy to use (who can even remember the case let syntax? There’s a refresher after this list.), but they’re better than adding a method to the protocol for each type of variant data, or casting to the concrete type and accessing it directly.
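
Here’s that refresher, using a throwaway example enum:

enum Distance {
	case metric(meters: Double)
	case imperial(feet: Double)
}

let distance = Distance.metric(meters: 1500)

// switch-based pattern matching:
switch distance {
case let .metric(meters):
	print("\(meters) meters")
case let .imperial(feet):
	print("\(feet) feet")
}

// or the single-case form:
if case let .metric(meters) = distance {
	print("\(meters) meters")
}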

Based on these differences, my hard-won advice is this: use enums when you care which case you have, and use protocols when you don’t — when all the cases can be treated the same.

I want to take a look at two examples, and decide whether enums or protocols are a better fit.

Errors

Errors are frankly a bit of a toss up. On the one hand, errors are absolutely a situation where we care which case we have; also, catch pattern matching is very powerful when it comes to enums. Let’s take a look at an example error.

enum NetworkError: Error {
   case noInternet
   case statusCode(Int)
   case apiError(message: String)
}

If something throws an error, we can switch in the catch statement:

do {
	// something that produces a network error
} catch NetworkError.noInternet {
	// handle no internet
} catch let NetworkError.statusCode(statusCode) {
	// use `statusCode` here to handle this error
} catch {
	// catch any other errors
}

While this matching syntax is really nice, using enums for your errors comes with a few downsides. First, it hamstrings you if you’re maintaining an external library. If you, the writer of a library, add a new case to an error enum, and a consumer of your library updates, they’ll have to change any code that exhaustively switches on your error enum. While this is desirable in some cases, it means that, per semantic versioning, you’ll have to bump the major version number of your library: adding a new enum case to an external library is currently a breaking change. Swift 5 should bring nonexhaustive enums, which will ameliorate this problem.

The second issue with enums is that this type gets muddy fast. Let’s say you want to provide conformance to the LocalizedError protocol to get good bridging to NSError. Because each case has its own scheme for how to convert its associated data into the userInfo dictionary, you’ll end up with a giant switch statement.
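
To make that concrete, here’s a sketch of just one such conformance, with the strings invented for illustration. Now imagine the same switch repeated for failureReason, recoverySuggestion, and the userInfo bridging:

extension NetworkError: LocalizedError {
	var errorDescription: String? {
		switch self {
		case .noInternet:
			return "You don't appear to be connected to the internet."
		case let .statusCode(code):
			return "The request failed with status code \(code)."
		case let .apiError(message):
			return message
		}
	}
}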

When examining this error, it becomes apparent that the cases of the enum don’t really have anything to do with each other. NetworkError is really only acting as a convenient namespace for these errors.

One approach here is to just use structs instead.

struct NoInternetError: Error { }

struct StatusCodeError: Error {
	let statusCode: Int
}

struct APIError: Error {
	let message: String
}

If each of these network error cases become their own type, we get a few cool things: a nice breakdown between types, custom initializers, easier conformance to things like LocalizedError, and it’s just as easy to pattern match:

do {
	// something that produces a network error
} catch let error as StatusCodeError {
	// use error.statusCode here
} catch let error as NoInternetError {
	// handle being offline
} catch {
	// catch any other errors
}

You could even make all of the different structs conform to a protocol, called NetworkError. However, there is one downside to making each error case into its own type. Swift’s generic system requires a concrete type for all generic parameters; you can’t use a protocol there. Put another way, if the type’s signature is Result<T, E: Error>, you have to use an error enum. If the type’s signature is Result<T>, then the error is untyped and you can use anything that conforms to Swift.Error.

API Endpoints

Because an API has a fixed number of endpoints, it can be tempting to model each of those endpoints as an enum case. However, all requests more or less have the same data: method, path, parameters, etc.

If you implement your network requests as an enum, you’ll have methods with giant switch statements in them — one each for the method, path, and so on. If you think about how this data should be broken up, it’s exactly flipped. When you look at a chunk of code, do you want to see all of the paths for all the endpoints in one place, or do you want to see the method, path, parameters, and so on for one request in one place? How do you want to colocate your data? For me, I definitely want to have all the data for each request in one place. This is the locality problem that Matt mentioned in his post.

Protocols shine when the interface to all of the different cases are similar, so they work well for describing network requests.

protocol Request {
	var method: Method { get }
	var path: String { get }
	// etc
}
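
As a quick sketch, a concrete request might look like this (UserTimelineRequest and the Method cases are invented for illustration):

struct UserTimelineRequest: Request {

	let userID: String

	var method: Method {
		return .get
	}

	var path: String {
		return "users/\(userID)/timeline"
	}
}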

Now, you can actually conform to this protocol with either a struct or enum (Inception gong), if it does happen to be the right time to use an enum for a subset of your requests. This is Dave’s point about flexibility.

More importantly, however, your network architecture won’t care. It’ll just get an object conforming to the protocol and request the right data from it.

Protocols confer a few other benefits here as well:

  1. You might find that it makes life easier to conform URL to your Request protocol, and you can easily do that.
  2. Associating a type with a protocol is possible; associating a type with an enum case is meaningfully impossible. This means you can get well-typed results back from your network architecture.
  3. Implementations of the protocol are so flexible that you can bring your own sub-abstractions as you need to. For example, in the Beacon API, we needed to be able to get Twitter followers and following. These requests are nearly identical in their parameters, results, and everything save for the path.

     struct FollowersRequest: TypedRequest {
    
         typealias OutputType = IDsResult
        
         enum Kind {
             case followers, following
            
             var pathComponent: String {
                 return self == .followers ? "followers" : "friends"
             }
         }
    
         let path: String
    
         init(kind: Kind) {
             self.path = kind.pathComponent + "/ids.json"
         }
     }
    

    Being able to bring your own abstractions to the protocol is just one more reason protocols are the right tool for the job here.

There are other cases that are useful for exploring when to use enums and protocols, but these two I think shine the most light on the problem. Use enums when you care which case you have, and use protocols when you don’t.

This is a response to Dave DeLong’s article, which is itself a response to Matt Diephouse’s article, which is itself a response to John Sundell’s article. You should go read these first.

Dave starts off by saying:

Matt starts off by saying:

Earlier this week, John Sundell wrote a nice article about building an enum-based analytics system in Swift. He included many fine suggestions, but I believe he’s wrong about one point: that enums are the right choice.

Therefore, I’ll start similarly:

Earlier this week, Matt Diephouse wrote a nice article about building a struct-based analytics system in Swift. He included many fine suggestions, but I believe he’s wrong about one point: that structs are the right choice.

Therefore, I’ll start similarly:

Earlier this week, Dave DeLong wrote a nice article about building a protocol-based analytics system in Swift. He included many fine suggestions, but I believe he’s wrong about one point: that protocols are the right choice.

As the examples in those articles show, analytics events can have different payloads of information that they’re going to capture. While you can use many different approaches to solve this problem, I believe creating a deep inheritance hierarchy is the best solution.

Using reference types (class in Swift) with an inheritance hierarchy yields all the upsides of the other solutions. Like John’s solution of enums, they can store data on a per-event basis. Like Matt’s solution, you can create new analytics events across module boundaries. And like Dave’s solution, you can build a hierarchy of categorization for your analytics events.

However, in addition to all these benefits, subclassing brings a few extra benefits the other solutions don’t have. Subclassing allows you to store new information with each layer of your hierarchy. Let’s take a look at the metadata specifically. If you have a base class called AnalyticsEvent, a subclass for NetworkEvent, a subclass for NetworkErrorEvent, and a subclass for NoInternetNetworkErrorEvent, each subclass can bring its own components to the metadata. For example:

open class AnalyticsEvent {

	var name: String {
		fatalError("name must be provided by a subclass")
	}

	var metadata: [String: Any] {
		// version is assumed to be the app's version string, defined elsewhere
		return ["AppVersion": version]
	}
}

open class NetworkEvent: AnalyticsEvent {

	var urlSessionConfiguration: URLSessionConfiguration = .default

	override var metadata: [String: Any] {
		// userAgent is assumed to be defined elsewhere
		return super
			.metadata
			.merging(["UserAgent": userAgent]) { (_, new) in new }
	}
}

open class NetworkErrorEvent: NetworkEvent {

	var error: Error

	init(error: Error) {
		self.error = error
	}

	override var metadata: [String: Any] {
		return super
			.metadata
			.merging(["ErrorCode": (error as NSError).code]) { (_, new) in new }
	}
}

open class NoInternetNetworkErrorEvent: NetworkErrorEvent {

	override var name: String {
		return "NoInternetNetworkErrorEvent"
	}

	override var metadata: [String: Any] {
		return super
			.metadata
			.merging(["Message": "No Internet"]) { (_, new) in new }
	}
}

As you can see, this reduces duplication between various types of analytics events. Each layer refers to the layers above it, in an almost recursive style.

I hope this article has convinced you to try subclassing to solve your next problem. While Swift gives us many different ways to solve a problem, I’m confident that a deep inheritance hierarchy is the solution for this one.

Swift structs can provide mutating and nonmutating versions of their functions. However, the Swift standard library is inconsistent about when it provides one form of a function or the other. Some APIs come in only mutating forms, some come only in nonmutating forms, and some come in both. Because both types of functions are useful in different situations, I argue that almost all such functions should have both versions provided. In this post, we’ll examine when these different forms are useful, look at examples of APIs that provide one form but not the other, and explore solutions to this problem.

(Quick sidebar here: I’ll be referring to functions that return a new struct with some field changed as nonmutating functions. Unfortunately, Swift also includes a keyword called nonmutating, which is designed for property setters that don’t require the struct to be copied when they’re set. Because I think that nonmutating is the best word to describe functions that return an altered copy of the original, I’ll keep using that language here. Apologies for the confusion.)

Mutating and Nonmutating Forms

These two forms, mutating and nonmutating, are useful in different cases. First, let’s look at when the nonmutating version is more useful.

This code, taken from Beacon’s OAuthSignatureGenerator, uses a long chain of immutable functions to make its code cleaner and more uniform:

public var authorizationHeader: String {
	let headerComponents = authorizationParameters
		.dictionaryByAdding("oauth_signature", value: self.oauthSignature)
		.urlEncodedQueryPairs(using: self.dataEncoding)
		.map({ pair in "\(pair.0)=\"\(pair.1)\"" })
		.sorted()
		.joined(separator: ", ")
	return "OAuth " + headerComponents
}

However, it wouldn’t be possible without an extension adding dictionaryByAdding(_:value:), the nonmutating version of the subscript setter.
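
That extension isn’t shown here, but a minimal sketch of it might look like this:

extension Dictionary {
	// Copy, set the new key-value pair, and return the copy,
	// leaving the original dictionary untouched.
	func dictionaryByAdding(_ key: Key, value: Value) -> [Key: Value] {
		var copy = self
		copy[key] = value
		return copy
	}
}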

Here’s another, slightly more academic example. Inverting a binary tree is more commonly a punchline for bad interviews than practically useful, but the algorithm illustrates this point nicely. To start, let’s define a binary tree as an enum like so:

indirect enum TreeNode<Element> {
	case children(left: TreeNode<Element>, right: TreeNode<Element>, value: Element)
	case leaf(value: Element)
}

To invert this binary tree, the tree on the left goes on the right, and the tree on the right goes on the left. The recursive form of the nonmutating version is simple and elegant:

extension TreeNode {
	func inverted() -> TreeNode<Element> {
		switch self {
		case let .children(left, right, value):
			return .children(left: right.inverted(), right: left.inverted(), value: value)
		case let .leaf(value: value):
			return .leaf(value: value)
		}
	}
}

Whereas the recursive mutating version contains lots of extra noise and nonsense:

extension TreeNode {
	mutating func invert() {
		switch self {
		case let .children(left, right, value):
			var rightCopy = right
			rightCopy.invert()
			var leftCopy = left
			leftCopy.invert()
			self = .children(left: rightCopy, right: leftCopy, value: value)
		case .leaf(value: _):
			break
		}
	}
}

Mutating functions are also useful, albeit in different contexts. We primarily work in apps that have a lot of state, and sometimes that state is represented by data in structs. When mutating that state, we often want to do it in place. Swift gives us lots of first class affordances for this: the mutating keyword was added especially for this behavior, and changing any property on a struct acts as a mutating function as well, reassigning the reference to a new copy of the struct with the value changed.

There are concrete examples as well. If you’re animating a CGAffineTransform on a view, that code currently has to look something like this.

UIView.animate(withDuration: 0.25, animations: {
	view.transform = view.transform.rotated(by: .pi)
})

Because the transformation APIs are all nonmutating, you have to manually assign the struct back to the original reference, causing duplicated, ugly code. If there were a mutating rotate(by:) function, then this code would be much cleaner:

UIView.animate(withDuration: 0.25, animations: {
	view.transform.rotate(by: .pi)
})

The APIs aren’t consistent

The primary problem here is that while both mutating functions and nonmutating functions are useful, not all APIs provide versions for both.

Some APIs include only mutating APIs. This is common with collections. Both the APIs for adding items to collections, like Array.append, Dictionary.subscript, Set.insert, and APIs for removing items from collections, like Array.remove(at:), Dictionary.removeValue(forKey:), and Set.remove, have this issue.

Some APIs include only nonmutating APIs. filter and map are defined on Sequence, and they return arrays, so they can’t be mutating (because Sequence objects can’t necessarily be mutated). However, we could have a mutating filterInPlace function on Array. The aforementioned CGAffineTransform functions also fall in this category. They only include nonmutating versions, which is great for representing a CGAffineTransform as a chain of transformations, but not so great for mutating an existing transform, say, on a view.

Some APIs provide both. I think sorting is a great example of getting this right. Sequence includes sorted(by:), which is a nonmutating function that returns a sorted array, whereas MutableCollection (the first protocol in the Sequence and Collection hierarchy that allows mutation) includes sort(by:). This way, users of the API can choose whether they want a mutating sort or a nonmutating sort, and they’re available in the API where they first become possible. Array, of course, conforms to MutableCollection and Sequence, so it gets both of them.

Another example of an API that gets this right: Set includes union (nonmutating) and formUnion (mutating). (I could quibble with these names, but I’m happy that both versions of the API exist.) Swift 4’s new Dictionary merging APIs also include both merge(_:uniquingKeysWith:) and merging(_:uniquingKeysWith:).

Bridging Between the Two

The interesting thing with this problem is that, because of the way Swift is designed, it’s really easy to bridge from one form to the other. If you have the mutating version of any function:

mutating func mutatingVersion() { ... }

You can synthesize a nonmutating version:

func nonMutatingVersion() -> Self {
	var copy = self
	copy.mutatingVersion()
	return copy
}

And vice versa, if you already have a nonmutating version:

func nonMutatingVersion() -> Self { ... }

mutating func mutatingVersion() {
	self = self.nonMutatingVersion()
}

As long as the type returned by the nonmutating version is the same as Self, this trick works for any API, which is awesome. The only thing you really need is the new name for the alternate version.
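
For example, applying that trick by hand to Array.append gives exactly the appending(_:) form referenced below:

extension Array {
	// The nonmutating counterpart to append(_:), written manually
	// with the copy-mutate-return trick.
	func appending(_ newElement: Element) -> [Element] {
		var copy = self
		copy.append(newElement)
		return copy
	}
}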

Leaning on the Compiler

With this, I think it should be possible to have the compiler trivially synthesize one version from the other for us. Imagine something like this:

extension Array {

	@synthesizeNonmutating(appending(_:))
	mutating func append(_ newElement: Element) {
		// ...
	}
	
}

The compiler would use the same trick as above — create a copy, mutate the copy, and return it — to synthesize a nonmutating version of this function. You could also have a @synthesizeMutating keyword. Types could of course choose not to use this shorthand, which they might do in instances where there are optimizations available for one form or the other. However, getting tooling like this means that API designers no longer have to consider whether their APIs are likely to be used in mutating or nonmutating ways, and they can easily add both forms. Because each form is useful in different contexts, providing both allows the consumer of an API to choose which form they want and when.

Very early this year, I posted about request behaviors. These simple objects can help you factor out small bits of reused UI, persistence, and validation code related to your network. If you haven’t read it, that post lays the groundwork for this one.

Request behaviors are objects that can execute arbitrary code at various points during some request’s execution: before sending, after success, and after failure. They also contain hooks to add any necessary headers and arbitrarily modify the URL request. If you come from the server world, you can think of request behaviors as sort of reverse middleware. It’s a simple pattern, but there are lots of very powerful behaviors that can be built on top of them.

In the original post, I proposed three behaviors that, because of their access to some piece of global state, were particularly hard to test: BackgroundTaskBehavior, NetworkActivityIndicatorBehavior, and AuthTokenHeaderBehavior. Those are useful behaviors, but in this post, I want to show a few more, maybe less obvious, behaviors that I’ve used across a few apps.

API Verification

One of the apps where I’ve employed this pattern needs a very special behavior. It relies on receiving 2XX status codes from sync API requests. When a request returns a 200, it assumes that sync request was successfully executed, and it can remove it from the queue.

The problem is that captive portals, like those used at hotels or coffee shops, will often redirect any request to their special login page, which also returns a 200. There are a few ways to handle this, but the solution we opted for was to send a special header that the server would turn around and return completely unmodified. No returned header? Probably a captive portal or some other tomfoolery. To implement this, the server used a very straightforward middleware, and the client needed some code to handle it as well. Perfect for a request behavior.

class APIVerificationBehavior: RequestBehavior {

    let nonce = UUID().uuidString

    var additionalHeaders: [String: String] {
        return ["X-API-Nonce": nonce]
    }

    func afterSuccess(response: AnyResponse) throws {
        guard let returnedNonce = response.httpResponse.allHeaderFields["X-API-Nonce"] as? String,
            returnedNonce == nonce else {
                throw APIError(message: "Sync request intercepted.")
        }
    }
}

It requires a small change: making the afterSuccess method a throwing method. This lets the request behavior check conditions and fail the request if they’re not met, and it’s a straightforward compiler-driven change. Also, because the request behavior architecture is so testable, changes like this to the network code can be reliably tested, which makes them much easier to land.
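For reference, here’s roughly what the full protocol might look like after this change. I’m assembling it from the methods used in this post; a real version would supply empty default implementations in a protocol extension so each behavior only overrides what it needs.

protocol RequestBehavior {

	var additionalHeaders: [String: String] { get }

	func modify(request: URLRequest) -> URLRequest

	func beforeSend()

	func afterSuccess(response: AnyResponse) throws

	func afterFailure(error: Error)

}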

OAuth Behavior

Beacon relies on Twitter, which uses OAuth for authentication and user identification. At the time I wrote the code, none of the Swift OAuth libraries worked correctly on Linux, so there was a process of extracting, refactoring, and, for some components, rewriting the libraries to make them work right. While I was working on this, I was hoping to test out one of the reasons I wanted to create request behaviors in the first place: to decouple the authentication protocol (OAuth, in this case) from the data that any given request requires. You should be able to transparently add OAuth to a request without having to modify the request struct or the network client at all.

Extracting the code to generate the OAuth signature was a decent amount of work. Debugging in particular is hard for OAuth, and I recommend this page on Twitter’s API docs, which walks you through the whole process and shows you what your data should look like at each step. (I hope this link doesn’t just break when Twitter inevitably changes its documentation format.) I also added tests for each step, so that if anything failed, it would be obvious which steps succeeded and which steps failed.

Once you have something to generate the OAuth signature (called OAuthSignatureGenerator here), the request behavior for adding OAuth to a request turns out to not be so bad.

struct Credentials {
    let key: String
    let secret: String
}

class OAuthRequestBehavior: RequestBehavior {

    var consumerCredentials: Credentials

    var credentials: Credentials
    
    init(consumerCredentials: Credentials, credentials: Credentials) {
        self.consumerCredentials = consumerCredentials
        self.credentials = credentials
    }
    
    func modify(request: URLRequest) -> URLRequest {
        let generator = OAuthSignatureGenerator(consumerCredentials: consumerCredentials, credentials: credentials, request: request)
        var mutableRequest = request
        mutableRequest.setValue(generator.authorizationHeader, forHTTPHeaderField: "Authorization")
        return mutableRequest
    }
    
}

Using modify(request:) to perform some mutation of the request, we can add the header for the OAuth signature from OAuthSignatureGenerator. Digging into the nitty-gritty of OAuth is out of the scope of this post, but you can find the code for the signature generator here. The only thing of note is that this code relies on Vapor’s SHA1 and base 64 encoding, which you’ll have to swap out for implementations more friendly to your particular environment.

When it’s time to use this behavior to create a client, you can create a client specific to the Twitter API, and then you’re good to go:

// `oAuthBehavior` is an instance of the behavior above; the credential values are placeholders.
let oAuthBehavior = OAuthRequestBehavior(
	consumerCredentials: Credentials(key: "consumer-key", secret: "consumer-secret"),
	credentials: Credentials(key: "access-token", secret: "access-token-secret")
)

let twitterClient = NetworkClient(
	configuration: RequestConfiguration(
		baseURLString: "https://api.twitter.com/1.1/",
		defaultRequestBehavior: oAuthBehavior
	)

Persistence

I also explored saving things to Core Data via request behaviors, without having to trouble the code that sends the request with that responsibility. This was another promise of the request behavior pattern: if you could write reusable and parameterizable behaviors for saving things to Core Data, you could cut down on a lot of boilerplate.

However, when implementing this, we ran into a wrinkle. Each request needs to finish saving to Core Data before the request’s promise is fulfilled, but the current afterSuccess(response:) and afterFailure(error:) methods are synchronous and called on the main thread, and saving lots of data to Core Data can take seconds, during which the UI can’t be allowed to lock up. We need to change these methods to allow asynchronous work. If we define a function that takes a Promise and returns a Promise, we can completely subsume all three methods: beforeSend, afterSuccess, and afterFailure.

func handleRequest(promise: Promise<AnyResponse>) -> Promise<AnyResponse> {
	// before request
	return promise
		.then({ response in
			// after success
		})
		.catch({ error in
			// after failure
		})
}

Now, we can do asynchronous work when the request succeeds or fails, and we can also cause a succeeding request to fail if some condition isn’t met (like in the first example in this post) by throwing from the then block.

Core Data architecture varies greatly from app to app, and I’m not here to prescribe any particular pattern. In this case, we have a foreground context (for reading) and a background context (for writing). We wanted to simplify the creation of a Core Data request behavior: all you should have to provide is a context and a method that will be performed by that context. Building a protocol around that, we ended up with something like this:

protocol CoreDataRequestBehavior: RequestBehavior {

	var context: NSManagedObjectContext { get }
	
	func performBeforeSave(in context: NSManagedObjectContext, withResponse response: AnyResponse) throws
	
}

And that protocol is extended to handle all of the boilerplate mapping to and from the Promise:

extension CoreDataRequestBehavior {

	func handleRequest(promise: Promise<AnyResponse>) -> Promise<AnyResponse> {
		return promise.then({ response in
			return Promise<AnyResponse>(work: { fulfill, reject in
				self.context.perform({
					do {
						try self.performBeforeSave(in: self.context, withResponse: response)
						try self.context.save()
						fulfill(response)
					} catch let error {
						reject(error)
					}
				})
			})
		})
	}

}

Creating a type that conforms to CoreDataRequestBehavior means that you provide a context and a function to modify that context before saving. That function will be called on the right thread and will delay the completion of the request until the work in the Core Data context is completed. As an added bonus, performBeforeSave is a throwing function, so it’ll handle errors for you by failing the request.

On top of CoreDataRequestBehavior you can build more complex behaviors, such as a behavior that is parameterized on a managed object type and can save an array of objects of that type to Core Data.
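For example, a minimal conforming type might look like this. (SavePostsBehavior is a hypothetical name, and the parsing step is elided, since it depends on what AnyResponse exposes.)

import CoreData

final class SavePostsBehavior: CoreDataRequestBehavior {

	let context: NSManagedObjectContext

	init(context: NSManagedObjectContext) {
		self.context = context
	}

	func performBeforeSave(in context: NSManagedObjectContext, withResponse response: AnyResponse) throws {
		// Decode posts from the response and insert or update the
		// corresponding managed objects here. Throwing fails the request.
	}

}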

Request behaviors provide the hooks to attach complex behavior to a request. Any side effect that needs to happen during a network request is a great candidate for a request behavior. (If these side effects occur for more than one request, all the better.) These three examples highlight more advanced usage of request behaviors.

The model layer of a client is a tough nut to crack. Because it’s not the canonical representation of the data (that representation lives on the server), the data must live in a fundamentally transient form. The version of the data that the app has is essentially a cache of what lives on the network (whether that cache is in memory or on disk), and if there’s anything that’s true about caches, it’s that they will always end up with stale data. That truth is multiplied by the number of caches you have in your app. Reducing the number of cached versions of any given object decreases the likelihood that it will be out of date.

Core Data has an internal feature that ensures that there is never more than one instance of an object for a given identifier (within the same managed object context). They call this feature “uniquing”, but it is known more broadly as the identity map pattern. I want to steal it without adopting the rest of Core Data’s baggage.

I think of this concept as a “flat cache”. A flat cache is basically just a big dictionary. The keys are a composite of an object’s type name and the object’s ID, and the value is the object. A flat cache normalizes the data in it, like a relational database, and all object-object relationships go through the flat cache. A flat cache confers several interesting benefits.

  • Normalizing the data means that you’ll use less bandwidth while the data is in-flight, and less memory while the data is at rest.
  • Because data is normalized, modifying or updating a resource in one place modifies it everywhere.
  • Because relationships to other entities go through the flat cache, back references with structs are now possible. Back references with classes don’t have to be weak.
  • With a flat cache of structs, any mutation deep in a nested struct only requires the object in question to change, instead of the object and all of its parents.

In this post, we’ll discuss how to make this pattern work in Swift. First, you’ll need the composite key.

struct FlatCacheKey: Equatable, Hashable {

    let typeName: String
    let id: String

    static func == (lhs: FlatCacheKey, rhs: FlatCacheKey) -> Bool {
        return lhs.typeName == rhs.typeName && lhs.id == rhs.id
    }

    var hashValue: Int {
        return typeName.hashValue ^ id.hashValue
    }
}

We can use a protocol to make the generation of flat cache keys easier:

protocol Identifiable {
    var id: String { get }	
}

protocol Cachable: Identifiable { }

extension Cachable {
    static var typeName: String {
        return String(describing: self)
    }
    
    var flatCacheKey: FlatCacheKey {
        return FlatCacheKey(typeName: Self.typeName, id: id)
    }
}

All of our model objects already have IDs so they can trivially conform to Cachable and Identifiable.

Next, let’s get to the basic flat cache.

class FlatCache {
    
    static let shared = FlatCache()
    
    private var storage: [FlatCacheKey: Any] = [:]
    
    func set<T: Cachable>(value: T) {
        storage[value.flatCacheKey] = value
    }
    
    func get<T: Cachable>(id: String) -> T? {
        let key = FlatCacheKey(typeName: T.typeName, id: id)
        return storage[key] as? T
    }
    	    
    func clearCache() {
        storage = [:]
    }
    
}

Not too much to say here, just a private dictionary and get and set methods. Notably, the set method only takes one parameter, since the key for the flat cache can be derived from the value.
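A quick usage sketch, with a hypothetical Tag model:

struct Tag: Cachable {
    let id: String
    let name: String
}

let tag = Tag(id: "t1", name: "swift")
FlatCache.shared.set(value: tag)
let fetched: Tag? = FlatCache.shared.get(id: "t1") // the annotation tells get(id:) which type's key to build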

Here’s where the interesting stuff happens. Let’s say you have a Post with an Author. Typically, the Author would be a child of the Post:

struct Author {
	let name: String
}

struct Post {
	let author: Author
	let content: String
}

However, back references (so that you could get all the posts by an author, let’s say) aren’t possible with value types. This kind of relationship would create a reference cycle, which can never happen with Swift structs. If you gave the Author a list of Post objects, each Post would be a full copy, including the Author, which would have to include the author’s posts, et cetera. You could switch to classes, but the back reference would cause a retain cycle, so the reference would need to be weak. You’d have to manage these weak relationships back to the parent manually.

Neither of these solutions is ideal. A flat cache treats relationships a little differently. With a flat cache, each relationship is fetched from the centralized identity map. In this case, the Post has an authorID and the Author would have a list of postIDs:

struct Author: Identifiable, Cachable {
	let id: String
	let name: String
	let postIDs: [String]
}

struct Post: Identifiable, Cachable {
	let id: String
	let authorID: String
	let content: String
}

Now, you still have to do some work to fetch the object itself. To get the author for a post, you would write something like:

let author: Author? = FlatCache.shared.get(id: post.authorID)

You could put this into an extension on the Post to make it a little cleaner:

extension Post {
	var author: Author? {
		return FlatCache.shared.get(id: authorID)
	}
}

But this is pretty painful to do for every single relationship in your model layer. Fortunately, it’s something that can be generated! By adding an annotation to the ID property, you can tell a tool like Sourcery to generate the computed accessors for you. I won’t belabor the explanation of the template, but you can find it here. If you have trouble reading it or understanding how it works, you can read the Sourcery in Practice post.

It will let you write Swift code like this:

struct Author: Identifiable, Cachable {
	let id: String
	let name: String
	// sourcery: relationshipType = Post
	let postIDs: [String]
}

struct Post: Identifiable, Cachable {
	let id: String
	// sourcery: relationshipType = Author
	let authorID: String
	let content: String
}

Which will generate a file that looks like this:

// Generated using Sourcery 0.8.0 — https://github.com/krzysztofzablocki/Sourcery
// DO NOT EDIT
	
extension Author {
	var posts: [Post] {
		return postIDs.flatMap({ id -> Post? in
			return FlatCache.shared.get(id: id)
		})
	}
}

extension Post {	
	var author: Author? {
		return FlatCache.shared.get(id: authorID)
	}	
}

This is the bulk of the pattern. However, there are a few considerations to examine.

JSON

Building this structure from a tree of nested JSON is messy and tough. The system works a lot better if the structure of the JSON looks like the structure of the flat cache: all the objects exist in a big dictionary at the top level (one key for each type of object), and the relationships are defined by IDs. When a new JSON payload comes in, you can iterate over this top level, create all your local objects, and store them in the flat cache. Then inform the requester of the JSON that the new objects have been downloaded, and it can fetch the relevant objects directly from the flat cache. The ideal structure of the JSON looks a lot like JSON API, although I’m not surpassingly familiar with JSON API.
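To sketch it out, a payload for the Author and Post types from above might look something like this (the key names are illustrative, not prescriptive):

{
    "authors": {
        "1": { "id": "1", "name": "Ada", "postIDs": ["10", "11"] }
    },
    "posts": {
        "10": { "id": "10", "authorID": "1", "content": "..." },
        "11": { "id": "11", "authorID": "1", "content": "..." }
    }
}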

Missing Values

One big difference between managing relationships directly and managing them through the flat cache is that with the flat cache, there is a (small) chance that the relationship won’t be there. This might happen because of a bug on the server side, or it might happen because of a consistency error when mutating the data in the flat cache (we’ll discuss mutation more in a moment). There are a few ways to handle this:

  • Return an Optional. What we chose to do for this app is return an optional. There are a lot of ways of handling missing values with optionals, including optional chaining, force-unwrapping, if let, and flatmapping, so it isn’t too painful to have to deal with an optional, and there aren’t any seriously deleterious effects to your app if a value is missing.
  • Force unwrap. You could choose to force-unwrap the relationship. That’s putting a lot of trust in the source of the data (JSON in our case). If a relationship is missing because of a bug on your server, your app will crash. This is really bad, but on the bright side, you’ll get a crash report for missing relationships, and you can fix it on the server side quickly.
  • Return a Promise. While a promise is the most complex of these three solutions to deal with at the call site, the benefit is that if the relationship doesn’t exist, you can fetch it fresh from the server and fulfill the promise a few seconds later.

Each choice has its downsides. However, one benefit to code generation is that you can support more than one option. You can synthesize both a promise and an optional getter for each relationship, and use whichever one you want at the call site.
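For example, alongside the optional getter, the template could emit a promise-based getter. Here’s a sketch, assuming a promise library with a work: initializer like the one used earlier in this document; the error type is my own placeholder, and a real version would fall back to a network fetch instead of rejecting immediately.

struct MissingRelationshipError: Error { }

extension Post {
	var authorPromise: Promise<Author> {
		return Promise<Author>(work: { fulfill, reject in
			if let author: Author = FlatCache.shared.get(id: self.authorID) {
				fulfill(author)
			} else {
				// A real implementation would fetch the author from the server here.
				reject(MissingRelationshipError())
			}
		})
	}
}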

Mutability

So far I’ve only really discussed immutable relationships and read-only data. The app where we’re using the pattern has entirely immutable data in its flat cache. All the data comes down in one giant blob of JSON, and then the flat cache is fully hydrated. We never write back up to the server, and all user-generated data is stored locally on the device.

If you want the same pattern to work with mutable data, a few things change, depending on if your model objects are classes or structs.

If they’re classes, they’ll need to be thread safe, since each instance will be shared across the whole app. And because they’re all the same instance, mutating any one reference to an entity will mutate them all.

If they’re structs, your flat cache will need to be thread safe. The Sourcery template will have to synthesize a setter as well as a getter. In addition, anything long-lived (like a VC) that relies on the data being updated regularly should draw its value directly from the flat cache.

final class PostVC {

    // Draw values straight from the shared flat cache so they're never stale.
    let cache = FlatCache.shared

    var postID: String

    init(postID: String) {
        self.postID = postID
    }

    var post: Post? {
        get {
            return cache.get(id: postID)
        }
        set {
            guard let newValue = newValue else { return }
            cache.set(value: newValue)
        }
    }
}

To make this even better, we can pull a page from Chris Eidhof’s most recent blog post about struct references, and roll this into a type.

class Cached<T: Cachable> {
	
	var cache = FlatCache.shared
	
	let id: String
	
	init(id: String) {
		self.id = id
	}
	
	var value: T? {
		get {
			return cache.get(id: id)
		}
		set {
			guard let newValue = newValue else { return }
			cache.set(value: newValue)
		}
	}
}

And in your VC, the post property would be replaced with something like:

lazy var cachedPost = Cached<Post>(id: postID)

Lastly, if your system has mutations, you need a way to inform objects if a mutation occurs. NSNotificationCenter could be a good system for this, or some kind of reactive implementation where you filter out irrelevant notifications and subscribe only to the ones you care about (a specific post with a given post ID, for example).
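One sketch of the NotificationCenter approach: post a notification whenever a value is set, with the cache key in the userInfo, and let observers filter on the keys they care about. (The notification name, the userInfo key, and setAndNotify are my own inventions, not part of any library.)

extension FlatCache {

    static let didUpdateNotification = Notification.Name("FlatCacheDidUpdate")

    func setAndNotify<T: Cachable>(value: T) {
        set(value: value)
        NotificationCenter.default.post(
            name: FlatCache.didUpdateNotification,
            object: self,
            userInfo: ["key": value.flatCacheKey])
    }
}

// An observer interested in a single post filters by its key:
let token = NotificationCenter.default.addObserver(
    forName: FlatCache.didUpdateNotification,
    object: nil,
    queue: .main) { note in
        guard let key = note.userInfo?["key"] as? FlatCacheKey,
            key == post.flatCacheKey else { return } // `post` is whatever model value this observer cares about
        // refresh the UI for this post
}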

Singletons and Testing

This pattern relies on a singleton to fetch the relationships, which has a chance of hampering testability. There are a few ways to handle this, including injecting the flat cache into the various objects that use it, or making the flat cache a weak var property on each model object instead of a static let. At that point, any flat cache would be responsible for ensuring the integrity of its child objects’ references back to it. This is definitely added complexity, and it’s a tradeoff that comes along with this pattern.

References and other systems

  • Redux recommends a pattern similar to this. They have a post describing how to structure your data called Normalizing State Shape.
  • Relay’s documentation discusses it a bit in this post as well:

    The solution to caching GraphQL is to normalize the hierarchical response into a flat collection of records. Relay implements this cache as a map from IDs to records. Each record is a map from field names to field values. Records may also link to other records (allowing it to describe a cyclic graph), and these links are stored as a special value type that references back into the top-level map. With this approach each server record is stored once regardless of how it is fetched.