After my latest blog post on Meridian, a few people took note of the library and asked some questions on Mastodon. Because of these questions, I realized that a lot of the more advanced tricks that I’ve been exploring with Meridian may not be obvious to those who are casually observing, and I thought I would write about some of those ideas.

Parameterizing Responders

An endpoint in Meridian consists of two components: a responder and a route. You implement the Responder protocol, and when you mount a responder at a specific path, that creates a Route. You’ll almost never deal with the Route type directly in your code, but it’s there.

At first, it can seem a little strange that almost everything about a Responder lives in one place, but the incoming paths it matches against are in a different place (usually a different file). Let’s explore why.

When I was still experimenting with Meridian, I tried putting everything into one type, which looked something like this:

struct InvoicesEndpoint: Responder {
     static let route: Route = .get("/api/invoices")
     
     @QueryParameter("sort") var sort: Sort
     
     // ...
}

Maybe with today’s macros, we could even write something like this:

@Route(.get("/api/invoices"))
struct InvoicesEndpoint: Responder {
     @QueryParameter("sort") var sort: Sort

     // ...
}

The macro might even be able to handle registering the endpoint with the server so that simply defining the type would be enough for it to start responding to requests.

I see two major benefits of putting everything into one type. First, it makes the whole thing very portable. If you wrote a login endpoint for one of your services, you could just move that file to a new project and it would immediately begin working.

Second, it’s really easy to read. You open one file and you know everything you need to know about sending a request to that endpoint, including what URL parameters you’ll need, what query parameters are available, and so on. One of the selling points of Swift on the Server is how easily client engineers can read and understand the server code, and in this regard, a single source of information about every request is a clear win.

However, there are bigger benefits to not colocating the path with everything else. Putting the route inside the responder ties the responder to one and only one route. You can’t easily respond to two different paths with the same content, and more importantly, you can’t treat the responder like a regular object anymore.

To see why this is important, let’s take a look at some SwiftUI code.

struct Counter: View {
    @State var counter = 0
    
    var body: some View {
        Button("\(counter)") { counter += 1 }
    }
}

struct TestingView: View {
    var body: some View {
        let counter = Counter()
 
        VStack {
            counter
       
            counter
        }
    }
}

My readers who are somewhat well-versed in SwiftUI will know that this snippet creates a VStack with two counters in it, each of which maintains its own state. Incrementing the number in the top counter won’t affect the bottom counter. This is nominally because Counter is a value type. (My readers who are even more well-versed with SwiftUI arcana know that the truth is weirder still, and that this snippet relies on deep magic in Swift’s result builders and SwiftUI’s attribute graph to create the illusion of the value semantics for us to rely on.)

Because Meridian takes a lot of its design cues from SwiftUI, responders are just value types. That means they can be moved, configured, and copied. For our invoice example, let’s say you want to make an endpoint available at a new path as well as an old path. You can mount the same responder at two different paths by simply creating two of them:

Server(errorRenderer: JSONErrorRenderer())
    .register({
        InvoicesEndpoint()
            .on(.get("/api/invoices"))

        InvoicesEndpoint()
            .on(.get("/api/v1/invoices"))
    })
    .listen()

In this snippet, we’ve linked one responder to two different routes. To be clear, this is also possible with the static route design mentioned above (see Meridian’s RouteMatcher, which is a very general way of determining whether an incoming request can be handled by a particular route), but making a different instance for each route fits cleanly into the mental model of instantiating objects and using them.

A slightly more interesting problem arises if we have two slightly different versions of the same endpoint. Let’s say we wanted to respond to /api/invoices and /api/invoices/open and reuse most of the code between the two.

enum InvoiceFilterKind: CaseIterable {
    case all, open
}

With a block-based framework (like Vapor), you can do this in a few different ways. Here’s one solution:

func makeInvoiceClosure(kind: InvoiceFilterKind) -> ((Request) async throws -> Response) {
    return { request in
        // extract parameters and execute request
    }
}

app.get("api", "invoices", use: makeInvoiceClosure(kind: .all))
app.get("api", "invoices", "open", use: makeInvoiceClosure(kind: .open))

With Meridian, there are a few different ways to do this, but the one I think is most interesting is to parameterize the Responder itself. At the call-site, it can look something like this:

Server(errorRenderer: JSONErrorRenderer())
     .register({
         InvoicesEndpoint(kind: .all)
             .on(.get("/api/invoices"))

         InvoicesEndpoint(kind: .open)
             .on(.get("/api/invoices/open"))
     })
     .listen()

Parameterizing the responder in this way is simple — it’s just a regular property!

struct InvoicesEndpoint: Responder {

    let kind: InvoiceFilterKind

    @QueryParameter("sort") var sort: Sort
     
    public init(kind: InvoiceFilterKind) {
        self.kind = kind
    }

    // ...
}

Because Meridian’s responders are simple values, they can be initialized with differing parameters. Furthermore, like SwiftUI, it’s very powerful to be able to treat these responders as simple objects whose semantics you already understand. One of the benefits of using Swift on the server is that you don’t have to context switch to go from writing your server code to writing your client code. Meridian seeks to bring the two even closer.

Smart Responses

Meridian includes a lot of Responses that are useful in day-to-day programming. The intention, as a framework, is to be “batteries included”, keeping tools that you need close at hand. The documentation goes over a lot of these available responses, but I want to talk about some more advanced things they can do.

Like middleware and responders, responses can hook into any property wrapper, including the environment.

Because most people will be building JSON APIs with tools like Meridian, the JSON response is one of the most commonly used responses. Its usage is very simple:

func execute() async throws -> Response {
    try JSON(database.listTodos())
}

There is an option to pass a custom encoder in the initializer for JSON, but by default it will use the \.jsonEncoder from the environment. The default encoder in the environment is one with no customization, but you can easily create a new one and customize it when setting up your server:

 Server(errorRenderer: JSONErrorRenderer())
     .register({
          ListTodos()
           .on(.get("/todos"))
     })
     .environment(\.jsonEncoder, JSONEncoder.myEncoder)
     .listen()

This will be used by all responders that use the JSON response.
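For completeness, a JSONEncoder.myEncoder like the one above can just be a static property on JSONEncoder. Here’s one plausible sketch; the specific strategies are illustrative, and you’d pick whatever your API needs:

```swift
import Foundation

extension JSONEncoder {
    // A shared, pre-configured encoder. These customizations are
    // illustrative; configure whatever your API needs.
    static let myEncoder: JSONEncoder = {
        let encoder = JSONEncoder()
        encoder.keyEncodingStrategy = .convertToSnakeCase
        encoder.dateEncodingStrategy = .iso8601
        encoder.outputFormatting = [.sortedKeys]
        return encoder
    }()
}
```

Because it’s just a value in the environment, every responder that returns a JSON response will pick it up automatically.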

Because Response objects can use any property wrapper, this opens up really advanced tricks. For example, if you wanted to include the path and method with every JSON encoded item, a custom Response can solve that very easily:

struct AnyEncodable: Encodable {
    let base: any Encodable

    func encode(to encoder: Encoder) throws {
        try base.encode(to: encoder)
    }
}

public struct JSONWithMeta: Response {

    struct Container: Encodable {
        let meta: Meta
        let content: AnyEncodable
    }
    
    struct Meta: Encodable {
        let path: String
        let method: String
    }

    @Path var path

    @RequestMethod var method

    let encodable: any Encodable

    public func body() throws -> Data {
        return try JSONEncoder().encode(Container(meta: Meta(path: path, method: method.name), content: AnyEncodable(base: encodable)))
    }
}

This outputs JSON that looks something like this:

{
  "meta": {
    "method": "GET",
    "path": "/todos"
  },
  "content": [
    // ...
  ]
}

Even though you will almost never need to implement your own responses, they are a first-class component in the library, and having access to the full request context inside a response unlocks a lot of power that would otherwise require messy middleware or a lot of repetition.

Expressiveness

Sharing many of SwiftUI’s design goals, Meridian aspires to give users the power to implement their backends with small, reusable components that compose together in useful ways.

These are some techniques that I’ve found to be helpful over the last few years of using the library. They’re available when you need them, and even if you don’t use them directly, they’re working behind the scenes to make Meridian a joy to work with.

A few years ago, I open sourced my Swift on the Server framework, Meridian. There have been a few big updates in the intervening time which I wanted to talk about here.

I’ve now deployed Meridian for a number of sites and projects, and there have been a lot of changes and fixes to make it more reliable and allow it to run for long periods of time without restarts. However, Meridian’s big pitch has always been that it’s a joy to write web applications in, so I want to focus on some of the changes to the developer experience.

The Pitch

If you haven’t seen Meridian yet, the short pitch is that it’s a web framework which draws a lot of its design inspiration from SwiftUI. It seeks to make all inputs to your responder the same. Here’s a sample responder that you’ll be familiar with if you’ve ever written an API for an iPhone app:

public struct AddDeviceRoute: Responder {

    public init() {}

    @EnvironmentObject var database: Database

    @Auth var auth

    @JSONValue("token") var token: String

    public func execute() async throws -> any Response {
        try await database.addDevice(token: token, forAccount: auth)
        return JSON(Success())
    }
}

Everything is handled with a property wrapper — query parameters, URL parameters, JSON, headers, internal dependencies like a database. You can even create your own property wrappers that extract data in any form you like.

With Meridian, you declare your dependencies (each represented by a property wrapper), and it ensures that all those dependencies are fulfilled before running your execute() function, which can stay focused on business logic.

async/await

First and foremost, Meridian now supports async/await. Because synchronous functions can fulfill asynchronous protocol requirements, this change is totally backwards compatible and opt-in, which makes it really easy to migrate to gradually.
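That compatibility falls out of the language itself: a synchronous, non-throwing method can satisfy an async throws protocol requirement. A minimal sketch (hypothetical Worker protocol, not Meridian API):

```swift
protocol Worker {
    func run() async throws -> Int
}

// A plain synchronous method fulfills the async throws requirement,
// so existing code keeps compiling when the protocol goes async.
struct SyncWorker: Worker {
    func run() -> Int { 42 }
}
```

Types that were written before the async change keep working untouched, and you can adopt async bodies one responder at a time.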

Being able to use async/await also lets Meridian play in the wider sphere of Swift on the Server packages. AsyncHTTPClient, PostgresNIO, and APNSwift now all fit nicely into Meridian and its environment. When custom executor support is ready for NIO, that also should provide a performance bump to users of Meridian with very few changes (or more likely, none at all) required to the user’s code.

Websockets

One of the apps I built needed to make very heavy use of websockets, so I worked on support in Meridian. The code for this went in about a year ago and fortunately wasn’t too gnarly. It relies heavily on NIO’s built-in helpers for upgrading a regular HTTP request to a bidirectional websocket connection.

Keeping the interfaces feeling like Meridian was one of the most important parts of designing this feature; I wanted to stay true to the design ethos of the rest of the library.

The heavy lifting from NIO plus a little of Meridian’s syntactic sugar magic allows the websocket responder to look very similar to any other responder in Meridian, with access to all the same property wrappers that a regular request can use:

struct WebSocketTester: WebSocketResponder {

    @Path var path

    func connected(to webSocket: WebSocket) async throws {

        print("Connected to websocket")

        for try await message in webSocket.textMessages {
            print("Received \(message) at \(path)")
            webSocket.send(text: "String: \(message) is \(message.count) characters long")
        }

        print("Websocket closed!")
    }
}

It even uses AsyncSequences so that you can use Swift’s for try await syntax to iterate over incoming messages, and mix and match that code with other await-able code.

Middleware

Like websockets, middleware also needs to fit with the rest of the library. Take, for example, block-based HTTP frameworks like Express.js. They have the benefit that everything looks almost exactly the same. Here’s a middleware and a route handler in Express:

router.use((req, res, next) => {
  console.log(`Request: ${req.method} ${req.path}`)
  next()
})

router.get('/user/:id', (req, res) => {
  res.send('hello, user!')
})

These two chunks of code are very similar, so you can use the same techniques you learn for writing responders to write middleware. (Due to a quirk in JavaScript, a function that declares fewer parameters can be passed where a callback with more parameters is expected, and the extra arguments are simply ignored, meaning that (req, res) => and (req, res, next) => are actually hooking into the same thing. Anathema to a type-minded Swift developer, but it works in JavaScript.)

In Meridian, these look like this:

public struct LoggingMiddleware: Middleware {

    @Path var path
    
    @RequestMethod var method

    public init() { }

    public func execute(next: Responder) async throws -> Response {
        print("Request: \(method) \(path)")
        return try await next.execute()
    }
}

struct HelloUser: Responder {

    func execute() throws -> Response {
        "Hello, user!"
    }

}

Similar to Express, there are very few differences between a middleware and a bog-standard route: 1) it conforms to a different protocol, and 2) its execute takes a next: Responder argument so that the chain can be continued. Middleware, like websockets, has full access to all the property wrappers you’d care to use, as well as being an async- and error-friendly environment.

The Future

I’ve been using Meridian heavily in my work, so I’m really invested in making it better. There are two big areas of improvement that I’m going to be focused on this year.

First, macros. Adding things to Meridian’s environment is similar to adding things to SwiftUI’s environment, and the new Entry() macro introduced this year at WWDC 2024 will fit very nicely with Meridian. I also think there’s some room for a macro that runs a computed variable only once, so that you can use a variable multiple times without, e.g., loading things from the database more than once.

Second, my white whale: OpenAPI. I’ve actually started on this work, and while it looks like it will be conceptually possible, it’s definitely going to be an uphill battle with some of the more complex representations of data. The goal here is to write your endpoints in Meridian, and then have them magically show up in your client-side Swift code, ready for autocomplete.

I’ve been doing a lot more server-side programming in the last few years. Being able to write Swift on both sides is a real joy. I have a client with whom I’ve built a reasonably full-featured social app in Vapor, and all my personal stuff has been using Meridian, which has been going great. (I will have some contracting availability for server-side Swift coming up soon, so definitely get in touch if you have a project!)

However, one part of the process that I don’t enjoy much is using an ORM. There’s too much magic when working with them. I feel disconnected from the queries that are being run, and it’s too easy to accidentally add an n+1 query. I also don’t like how it turns a relational system into an object graph — I’d much prefer to work in terms of records with related IDs, rather than objects with children.

ORMs run some of the biggest sites and systems in the world — if you like them, keep using them. If they make you feel weird too, the rest of the post might be for you.

ORMs do give you one thing that is great: a single source of truth, which is the model definition in the application code. However, this single source of truth is not always trustworthy — there’s nothing to keep it in-line with what’s actually in the database.

For example, if a field in my model object goes from being optional to non-optional, things could work smoothly for almost every row, but if some old row has a stray null in the database, decoding the model object will fail and cause my application to do something unexpected.
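The same failure mode is easy to reproduce with Codable alone; a small illustration with a hypothetical User type:

```swift
import Foundation

// Suppose `name` was recently made non-optional in the model...
struct User: Codable {
    let name: String
}

// ...but one old row still has a stray null in that column.
let oldRow = #"{"name": null}"#.data(using: .utf8)!

do {
    _ = try JSONDecoder().decode(User.self, from: oldRow)
} catch {
    // valueNotFound: a single old row breaks the whole decode.
    print("decoding failed: \(error)")
}
```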

The real problem here is that my application code is always changing, and that makes it a home for bugs.

As much as I distrust my own application code, I’ve come to trust Postgres. Postgres is a reliable and sensible choice for a backend database. Postgres is laden with great features — nullability, strong foreign keys, data blobs, performant unbounded text, JSON, extensions for UUIDs, GIS, and on and on. I just love it. I use it for everything, and far prefer it to other options.

Postgres’s constraint system feels like a warm blanket in the same way that Swift’s type system does.

  • If a field is never supposed to be null, you make it NOT NULL and you’re set.
  • If a field is a foreign key into another table, mark it so, and you won’t be able to delete the referenced row while other rows still point to it (unless you choose some other behavior).
  • If a field has arbitrary computable constraints (like a score that must be between 0 and 100), you can add those, too.

What I want is a way to marry Postgres’s constraints to Swift’s type system: some way to propagate Postgres’s guarantees into my own types.

After watching a Gary Bernhardt screencast (video, 18m, corresponding blog post), I saw the path forward. (If you’ve got 18 minutes, watch this talk. It’s a wonder.)

Gary shows how he uses types (in TypeScript) to make a change flow from the database all the way through to the UI, using a TypeScript tool called schemats, which creates simple interfaces that represent each table. These simple structures can be decoded easily and always represent the state of the database.

I ported schemats to Swift, to bring this same strategy to our favorite language. It’s called SchemaSwift. Working with it is pretty straightforward:

SchemaSwift \
    --url="<POSTGRES_URL>" \
    --override blog_posts.category=Category \
    --output ~/Desktop/DatabaseModels.generated.swift
When you run it, out pops a file with all your tables, represented as Swift structs:

// Generated code:

struct BlogPost: Codable {
    static let tableName = "blog_posts"

    let id: UUID
    let content: String
    let authorID: UUID
    let category: Category?

    enum CodingKeys: String, CodingKey {
        case id = "id"
        case content = "content"
    case authorID = "author_id"
        case category = "category"
    }
}

Types that map to native Swift values are handled automatically, and you can override other fields with your own custom types. Nullability/optionals are brought over, so you’ll never decode an honest value when the database can potentially have a null in it. Postgres enums are turned into Swift enums as well.
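As an example of the enum bridging, a Postgres enum column might come out the other side looking something like this (illustrative, not actual SchemaSwift output):

```swift
// For a Postgres definition like:
//   CREATE TYPE category AS ENUM ('tech', 'personal');
// the matching Swift type is a raw-value Codable enum:
enum Category: String, Codable {
    case tech
    case personal
}
```

Because the raw values line up with the Postgres labels, decoding a row with an unknown label fails loudly instead of silently producing garbage.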

It slots really nicely into Vapor’s SQLKit, using Codable:

let users = try db.select()
    ...
    .all(decoding: BlogPost.self)
    .wait()

(I also helped add support for this kind of decoding to another Swift Postgres library. We have a great community.)

I’ve been using the tool for over 3 years now (I know, I’ve been very remiss in my blogging) and it’s been going great. Because you can see the output of the tool before you commit it, there’s very little risk in using it.

One final component that would normally be handled by an ORM is migrations. For that, I want to explore mig, which would fill this gap nicely.

There’s plenty of future work here — running this at build time on the server (so that your build literally won’t complete if it’s building against a database that it can’t talk to!), using SPM’s extensible build tools, storing settings in a JSON file in your git root, type name prefixes and suffixes, and a module to hold all the generated code — these are all appealing ideas. If you end up needing or implementing one of these features, definitely drop me a line! I would love to integrate it.

RGB kind of sucks.

RGB, not unlike ASCII, memory addresses, and having 86,400 seconds in a day, is one of those things that makes programming a little simpler for a bit, until it doesn’t anymore.

In theory, RGB is a group of color spaces that lets you tell the display how much voltage each subpixel needs. However, in practice, we now have phones with displays that can show more than 100% red, which is a new type of red called super red. We have other displays that have twice as much blue as red or green. Your RGB values don’t correspond to display voltages, and they probably haven’t for a while now.

RGB is also hard to think about. Red, green, and blue additive light don’t behave like much that we’re used to — you can see the individual colors up close but as you get further away, they blend together and you start to see only one color. From far enough away, you can’t convince your mind that there are three lights. You’re currently looking at millions of tiny little 3 light arrays, and yet the effect is so totalizing that you almost never think about it.

Finally, RGB is hard to manipulate. If you start from black, you can increase the amount of “red” in an RGB color picker, which will make things more red. So far so good. Then you start increasing the “green”, and you get…yellow? This is not a very intuitive color space to navigate around. There are other representations of colors that lend themselves to being changed more easily.

Colors for Years

I have a personal app where I need to show a graph of some years. Each year needs a different color on the graph, and so every new year I go into the code, find a nice new color for the new year, and deploy the app. How many years am I going to do this for until I find an algorithm with which to automate it?

I need some colors that are a) arbitrary feeling, b) nice looking, and c) determined purely by an integer for the year. We need to implement a function like this:

func color(for year: Int) -> Color

RGB can really only satisfy the first of my criteria — it can make random colors with random numbers:

Color(red: .random(in: 0..<1), green: .random(in: 0..<1), blue: .random(in: 0..<1))

Unfortunately, colors generated like this look really bad. They often come out muddy and ruddy, and generating more than one color doesn’t come with any pattern or structure. The colors are all over the place.

This is a structural problem with RGB. RGB is focused on how color is produced, rather than how it’s perceived.

Fortunately, the solution to this problem is well documented. There are a few blog posts out there (warning: JavaScript) that lay out an approach. The idea is this: by using a hue-based color space, like HSL, you can hold two parameters constant (saturation and lightness) and modify only the hue, giving you multiple colors that live in the same “family”.

(There are subtle differences between HSL, HSV (also called HSB), and HWB, but hue rotation works basically the same way in all of these color models, and any of them will work well with this technique.)

For example, using 0.8 for both saturation and lightness gives you nice pastels:

Color(hue: .random(in: 0..<360), saturation: 0.8, lightness: 0.8)

You can play with this color picker; drag the “hue” slider to see lots of colors in this family.

On the other hand, 0.6 for the saturation and 0.5 for the lightness gives you more robust colors:

Color(hue: .random(in: 0..<360), saturation: 0.6, lightness: 0.5)

This color picker shows examples of these colors.

Astute readers will note that, while Apple’s own APIs take a number from 0 to 1, this fake initializer I made expects a hue from 0 to 360. I find this to be more illustrative, because this value represents some number of degrees. There’s a physical analogy here to a hue circle. Circles loop back on themselves, and therefore 359º is basically the same color as 1º. This lets you overshoot the end of the hue circle and mod by 360º to get back to a reasonable color.
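The wraparound is plain modular arithmetic; as a quick sketch:

```swift
// Overshooting the end of the hue circle and modding by 360
// lands you back on an equivalent hue.
let overshot = 350.0 + 40.0  // 390 degrees, past the end of the circle
let hue = overshot.truncatingRemainder(dividingBy: 360)  // back to 30 degrees
```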

This lets us implement most of our color(for year: Int) function.

func color(for year: Int) -> Color {
	let spacing = ...
	return Color(hue: (Double(year) * spacing).truncatingRemainder(dividingBy: 360), saturation: 0.8, lightness: 0.5)
}

The spacing represents the number of degrees to go around the hue wheel each time we need to pick the next color.

What is the optimal number to choose here?

Rotating in Hue Space

If we make this angle too close to zero, the colors will be too close together on the hue wheel, making them too similar. However, if we make it too close to 360º (a full revolution), once the degrees are modded by 360, they’ll still be too similar, except they’ll go backwards around the hue wheel. Maybe we want to try 180º? That makes every other color the exact same, so that’s not quite right either.

In fact, any rotation that divides evenly into 360º will result in repeats after a while. And 360 has a lot of factors!
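You can verify the repetition with plain arithmetic. With a 120º spacing (a factor of 360), only three distinct hues ever appear:

```swift
// A spacing that divides 360 evenly cycles after just a few colors.
let spacing = 120.0
let hues = (0..<6).map { year in
    (Double(year) * spacing).truncatingRemainder(dividingBy: 360)
}
print(hues) // [0.0, 120.0, 240.0, 0.0, 120.0, 240.0]
```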

One solution is to space things out by 360 divided by the number of years we have, but then the colors would change every time there’s a new year. It makes a rainbow, which, while it does look nice, doesn’t quite have the random look I’m going for.

However, there’s a better way to do this, and the answer is in a YouTube video I watched over 10 years ago. The remarkable Vi Hart published a series of videos (one, two, three) about how plants need to grow their new leaves in such a way that they won’t be blocked by the leaves above, which lets them receive maximum sunlight. The second video in the series is where the relevant bit is.

The number of degrees around the stalk that a plant decides to grow its next leaf from is the exact number we are looking for: some fraction of a turn to rotate by which will give us non-overlapping leaves — I mean, colors.

Because any rational number will result in repeat colors — or overlapping leaves — she seeks an irrational number; ideally the “most” irrational number. She finds it in ϕ, or roughly 1.618. We want to go 1/1.618th of the hue circle each time we need a new color, and this will give us the colors we want.

func color(for year: Int) -> Color {
	let spacing = 360 / 1.618
	return Color(hue: (Double(year) * spacing).truncatingRemainder(dividingBy: 360), saturation: 0.8, lightness: 0.5)
}
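For intuition, here are the first few hues that a 360/1.618 spacing produces; a quick check in plain Swift:

```swift
let spacing = 360 / 1.618  // about 222.5 degrees per step
let hues = (0..<5).map { year in
    (Double(year) * spacing).truncatingRemainder(dividingBy: 360)
}
// Roughly [0, 222.5, 85.0, 307.5, 170.0]: no two hues land close together
print(hues.map { ($0 * 10).rounded() / 10 })
```

Each new hue lands far from all of its predecessors, which is exactly the leaf-spacing property we borrowed from the plants.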

If the colors are not to your liking, you can add a little extra rotation by adding a phase shift to the equation:

func color(for year: Int) -> Color {
	let spacing = 360 / 1.618
	return Color(hue: (300 + Double(year) * spacing).truncatingRemainder(dividingBy: 360), saturation: 0.8, lightness: 0.5)
}

This function meets our criteria: colors that come out of it a) are arbitrary, b) look pretty good, and c) are purely determined by the year.

A Step Further

If your only goal is some simple colors for a prototype or for a side project, what I’ve covered so far will suffice. But if you want to use this in more serious and wide-ranging applications, you can take one more step.

HSL has some serious drawbacks. It, like RGB, was designed for ease of computation rather than precision in the underlying colors. Specifically, when rotating the hue value (which is what we’re doing with this technique), you’ll find that some hues are tinted much lighter than others, even holding saturation and lightness constant. These colors look lighter, even though they’re technically the same “lightness”.

The LCh color space (luminance, chroma, hue) solves this problem. As far as I can tell, it’s the gold standard for colors on a display. It gives you perceptual uniformity, which lets you rotate the hue and get colors that are even more similar to each other than you’d get with HSL; it also confers some benefits when it comes to contrast for reading text.

In fact, if you look closely at the colors above (which represent the colors for the years 2015–2023 using our algorithm), that lime green is looking a little muted relative to its purple neighbor.

You can play with an LCh color picker here. To make LCh work with UIColor, you can use these four useful gists.

Using LCh to generate my colors with the hue rotation technique above yielded beautiful colors.

func color(for year: Int) -> Color {
	let spacing = 360 / 1.618
	return Color(luminance: 0.7, chroma: 120, hue: (300 + Double(year) * spacing).truncatingRemainder(dividingBy: 360))
}

These colors all have similar lightness to me, and they look great for something totally procedurally generated. They’re vibrant, uniform, and wonderful.

The model you choose to inhabit creates constraints that you may not have intended to be constrained by. Any color from any of these color spaces can be (more or less) translated to any other color space with a little bit of math, so the colors we ended up with could be written in terms of red, green, and blue values (again, hand-waving a little here). But while RGB can represent these colors, that doesn’t mean you can easily move through the space in a way that yields colors that look good together. Picking the right color space to start out makes the problem at least tractable.

Tractable, but still not solved. These arbitrary, beautiful colors can be generated using a process stochastically discovered by evolution, described by scientists in 1830, and brought to practice using a robust set of web standards that let me show them to you in a browser.

At the end of it all, a plant’s desire for sunlight held the key to making nice colors for my little chart.

I recently had occasion to give my old Sudoku talk again. For those who haven’t seen the talk, I live-code my way through a Sudoku problem and together we write a solver that can solve any valid grid. It’s a very fun talk to give and I hope it’s enjoyable to watch as well.

While preparing for the talk, I took the chance to update and modernize some of the code.

I was able to use multi-line strings to represent the grid in a graphical way and get rid of an old helper that is now part of the standard library as allSatisfy(_:). More important than those changes, though, I was able to incorporate the new “primary associated types” for protocols, which had a way bigger impact on the code than I expected.

Let’s dig in. Here’s how the grid is structured:

public struct Grid {
     private var rows: [[Cell]]
}

There are a few ways to represent the data, but I chose an array of arrays, which represent the rows of the grid.

However, the underlying data structure is not as important, because the primary mode of interaction with this object is through the “units” of the grid, like rows, columns, and boxes. Here are some helpers to extract those:

extension Grid { 
     public var cells: [Cell] {
         return Array(rows.joined())
     }

     public func row(forIndex index: Int) -> [Cell] {
         let rowIndex = index / 9
         return rows[rowIndex]
     }

     public func column(forIndex index: Int) -> [Cell] {
         let columnIndex = index % 9
         return self.rows.map({ row in
             return row[columnIndex]
         })
     }

     public func box(forIndex index: Int) -> [Cell] {
         let rowIndex = index / 9
         let columnIndex = index % 9
         let boxColumnIndex = columnIndex / 3
         let boxRowIndex = rowIndex / 3
         return (0..<3).flatMap({ rowOffset in
             return self.rows[boxRowIndex*3+rowOffset][boxColumnIndex*3..<boxColumnIndex*3+3]
         })
     }
}

The box function is definitely the most daunting, but what I want to look at here is the return types of all these functions. They all return arrays. Every time you request one of these units, the relevant data is copied into a new array. This happens so many times during the solve that it starts to add up. For example, isSolved looks like this:

 public var isSolved: Bool {
     return cells.allSatisfy({ $0.isSettled })
 }

It asks for all the cells in the grid, which creates an 81-cell array, copies all the data to it, and then iterates that array exactly one time, looking for a cell that isn’t settled.

Before Swift 5.7, for the cells property, you could return some Collection. This lets you get rid of the conversion to an Array, which saves you the copying.

 public var cells: some Collection {
     return rows.joined()
 }

Sadly, this pre-Swift-5.7 code is nearly useless. Even though Swift can infer the full return type of this function, the contract with external callers doesn’t include the type of the element. They get to know it’s a collection and that it has some element type, but not what the element is. They have to use it as an opaque value. It was a pretty big shortcoming when working with the some keyword.

Swift 5.7 changes all that. Protocols can now provide some “primary” associated types, which callers can rely on using the angle-bracket generics syntax that we all know and love:

 public var cells: some Collection<Cell> {
     return rows.joined()
 }

This is huge. Now, since .joined() produces a “lazy” collection, the collection stays lazy, and doesn’t have to copy its items to a new array. Less work, big upside, and the contract with callers includes the element type.
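For example, the isSolved property from earlier compiles unchanged against the new return type, because the contract now tells the compiler what the elements are:

```swift
public var isSolved: Bool {
    // `cells` is some Collection<Cell>, so the compiler knows `$0` is a Cell —
    // and no intermediate 81-element array is ever allocated.
    return cells.allSatisfy({ $0.isSettled })
}
```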

Now, you could say this isn’t anything new. After all, I could have written the concrete type of rows.joined() for the return type of the cells property, and seen the same benefit:

 public var cells: JoinedSequence<[Cell]> {
     return rows.joined()
 }

Not too bad, right? However, here’s what the return type for box(forIndex:) looks like when you add a .lazy to get the intended performance:

public func box(forIndex index: Int) -> LazySequence<
    FlattenSequence<
        LazyMapSequence<
            LazySequence<(Range<Int>)>.Elements,
            ArraySlice<Cell>
        >
    >
> {
    let rowIndex = index / 9
    let columnIndex = index % 9
    let boxColumnIndex = columnIndex / 3
    let boxRowIndex = rowIndex / 3
    return (0..<3).lazy.flatMap({ rowOffset in
        return self.rows[boxRowIndex*3+rowOffset][boxColumnIndex*3..<boxColumnIndex*3+3]
    })
}

This isn’t usable. Not only is it hideous, these are implementation details, fully on display for the world to see. If you ever change the implementation of the function or the internal structure of the Grid type, you’ll have to change the return type as well. If this was a library, callers could even rely on details of the return type that you didn’t intend to expose.

The typical pre-5.7 way around the problem of messy return types was to use AnyCollection. Using AnyCollection has performance trade-offs, though, since it boxes its value, slowing down accesses and other operations.
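That workaround, applied to the box(forIndex:) function from above, would look something like this:

```swift
public func box(forIndex index: Int) -> AnyCollection<Cell> {
    let rowIndex = index / 9
    let columnIndex = index % 9
    let boxColumnIndex = columnIndex / 3
    let boxRowIndex = rowIndex / 3
    // AnyCollection hides the messy lazy type from callers,
    // but every access now goes through a box.
    return AnyCollection((0..<3).lazy.flatMap({ rowOffset in
        return self.rows[boxRowIndex*3+rowOffset][boxColumnIndex*3..<boxColumnIndex*3+3]
    }))
}
```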

How bad is that trade off? My expectation was that not returning an Array and returning an AnyCollection instead (which helps you avoid the expensive copy) would get you almost all of the way there. However, it turns out that you get about a 15% improvement in the solver’s runtime with AnyCollection. Switching to some Collection<Cell> gets you an additional 15% improvement. This means that boxing up the collection is about half as bad as fully copying every item to a new buffer. These are ultimately pretty short arrays (under 100 items each) so algorithms that operate on bigger data will benefit even more from these tweaks.

Another lesson (that I always feel like I have to learn afresh) is that you shouldn’t make assumptions about performance. Always profile! Your intuitions are fallible.
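For quick-and-dirty comparisons, a minimal wall-clock timer goes a long way (Instruments is the right tool for anything serious). A sketch — the names and usage here are hypothetical:

```swift
import Foundation

// A rough wall-clock timer — fine for comparing two implementations,
// not a substitute for a real profiler.
func measure<T>(_ label: String, _ work: () -> T) -> T {
    let start = Date()
    let result = work()
    print("\(label): \(Date().timeIntervalSince(start))s")
    return result
}

// Hypothetical usage: time the solver once per collection strategy.
// let solved = measure("solve") { solver.solve(grid) }
```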

So here’s all of the helpers, returning some Collection<Cell> with the new syntax:

public var cells: some Collection<Cell> {
    return rows.joined()
}

public func row(forIndex index: Int) -> some Collection<Cell> {
    let rowIndex = index / 9
    return rows[rowIndex]
}

public func column(forIndex index: Int) -> some Collection<Cell> {
    let columnIndex = index % 9
    return self.rows.lazy.map({ row in
        return row[columnIndex]
    })
}

public func box(forIndex index: Int) -> some Collection<Cell> {
    let rowIndex = index / 9
    let columnIndex = index % 9
    let boxColumnIndex = columnIndex / 3
    let boxRowIndex = rowIndex / 3
    return (0..<3).lazy.flatMap({ rowOffset in
        return self.rows[boxRowIndex*3+rowOffset][boxColumnIndex*3..<boxColumnIndex*3+3]
    })
}

The code is nicer, better reflects its intent, and hides implementation details from the caller. These are all worthwhile ends in themselves, but we also see substantial performance improvements. This feature is a huge boon for collection-heavy code. By making the simple thing the fast thing and the safe thing for libraries, you end up with a win on all fronts.

Here’s a fun experiment: if your app has a designer, ask them how many colors they think are in your app. Then, count the number of colors that you actually use in your app. The bigger the app, the more comical the difference will be.

I’ve got a solution for this which is pretty fun to boot. You should name your colors.

I find that a lot of people do a good job picking colors, but when it comes time to put those colors into practice, the names that get picked are somewhat boring. UIColor.gray40? .appRed? Where is the joy in that? Instead of really generic names, try to find short, unique, fun names for each color.

Finding names can be a challenge. Fortunately, there are tools that can help. For example, if you have a hex code, you can plug it into a website like Chirag Mehta’s Name that Color, Robert Cooper’s Color Name or colornames, the last of which seeks to name every single color (to sometimes hilarious effect). There’s also a GitHub repo with 30,000 colors in it that even has a public API for retrieving color names.

From those sites, you'll get a name, which you may or may not like. If you don't like the name, you can use the color picker to explore nearby names in the colorspace and try to find a name you do like.

For example, given #3FAC38, I see “Green Seduction”, “Apple”, and “Grass Stain Green”, on each of the three websites linked above. Exploring a little bit, I can find names like Clover, Lima, or Hedge, any of which I think make reasonable names.

Another random color: #174FC5 gives “Denim”, “Mourning Blue”, and “Indigo Fork”. Denim is quite good, but we can also find Pacific, Sapphire, and Mariner if we want more options.

A great name is usually one word (which makes it short to type and easy to say), very different from the names of any other colors in your app, and is evocative. You want to feel the color as much as possible when you read its name.
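In code, this just becomes a small palette extension. A sketch — the hex helper and the third color's value are made up for illustration; the first two hex codes come from the examples above:

```swift
import UIKit

extension UIColor {
    // Hypothetical convenience initializer for hex literals.
    convenience init(hex: UInt32) {
        self.init(
            red: CGFloat((hex >> 16) & 0xFF) / 255,
            green: CGFloat((hex >> 8) & 0xFF) / 255,
            blue: CGFloat(hex & 0xFF) / 255,
            alpha: 1
        )
    }

    // One word, evocative, nothing generic.
    static let clover = UIColor(hex: 0x3FAC38)
    static let denim = UIColor(hex: 0x174FC5)
    static let slate = UIColor(hex: 0x53585F)
}
```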

The name doesn’t even really have to mean the color, as long as it feels good. For example, there was a blueish color in an app that we called “bunting” which I think we got from the Indigo bunting. We kept the “bunting” and tossed the indigo, even though you would think the other way makes more sense. Do what feels good.

Why name your colors at all? If I’m being completely honest, I mostly do it because it makes my work slightly more fun. You can find practical benefits, however: a small one is that they’re a lot easier to remember when writing code and easier to visualize when reading code.

A bigger benefit is that it forces you to stick to a color palette with a fixed number of colors. If your designer uses a new one in a design (an accident, I’m sure), you can innocuously ask them to name it, which either forces them to think hard about whether they want to add a new color to the roster, or makes them update the mock to reuse an existing color.

To close, a note on grays: I find that in practice, an app has more grays than any other color, and so this is the part that you’ll spend the most time in. Great gray color names come from metals and stones (“silver”, “boulder”, “slate”, “obsidian”, “granite”), atmospheric conditions (“mist”, “fog”, “smoke”, “cloud”), and animals (“panther”, “dove”, “wolf”), but feel it out. You’ll find some good names.

Async/await is here!

Five (5!!) years ago, I wrote about what async/await might look like in Swift:

async func getCurrentUsersFollowers() throws -> [User] {
    let user = try await APIClient.getCurrentUser()
    let followers = try await APIClient.getFollowers(for: user)
    return followers
}

I put the async keyword in the wrong place (it actually goes next to the throws), but otherwise, pretty close to the final feature!
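For comparison, here's that same function with the syntax that actually shipped — async after the parameter list, just before throws:

```swift
func getCurrentUsersFollowers() async throws -> [User] {
    let user = try await APIClient.getCurrentUser()
    let followers = try await APIClient.getFollowers(for: user)
    return followers
}
```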

Today, I want to look at adopting some of the new async/await features. I have an app that’s already on iOS 15, so it’s a great testbed for these goodies. One of the parts of this app requires a download progress bar.

Normally, the easiest way to build a progress bar is to observe the progress property on the URLSessionDataTask object that you get back when setting up a request:

let task = self.dataTask(with: urlRequest) { (data, response, error) in
    ...
}
self.observation = task.progress.observe(\.fractionCompleted) { progress, change in
    ...
}
task.resume()

Unfortunately, the await-able methods on URLSession don’t return a task anymore, since they just return the thing you want, in an async fashion:

func data(from url: URL, delegate: URLSessionTaskDelegate? = nil) async throws -> (Data, URLResponse)

One would think the URLSessionTaskDelegate would have some affordance that calls you back when new bytes come in, but if that exists, I couldn’t find it.

However, iOS 15 brings a new API that can be used for this purpose — a byte-by-byte asynchronous for loop that can do something every time a new byte comes in from the network — called AsyncBytes.

Using it is a little strange, so I wanted to detail my experience using it. The first thing I had to do was kick off the request.

let (asyncBytes, urlResponse) = try await URLSession.shared.bytes(for: URLRequest(url: url))

This returns two things in a tuple: the asynchronously iterable AsyncBytes and a URLResponse. The URLResponse is kind of like the header for the HTTP request. I can get things like the mimeType, a suggestedFilename, and, since my goal is to keep track of progress, the expectedContentLength.

let length = Int(urlResponse.expectedContentLength)

Next, before I could await any bytes, I needed to set up a place to put them. Because I know how many bytes I’m expecting, I can even reserve some capacity in my new buffer so that it doesn’t have to resize too many times.

let length = Int(urlResponse.expectedContentLength)
var data = Data()
data.reserveCapacity(length)

Now, with all that set up, I can await bytes, and store them as they come in:

for try await byte in asyncBytes {
    data.append(byte)
}

This for loop is a little different from any for loop you’ve ever written before. asyncBytes produces bytes, and calls the for loop’s scope every time it has something to give it. When it’s out of bytes, the for loop is over and execution continues past the for loop.

One question that this API raises: why does it call your block with every byte? The (very) old NSURLConnectionDelegate would give you updates with chunks of data, so why the change? I too was a little confused about this, but at the end of the day, any API you call with a chunk of data is just going to have a byte-by-byte for loop inside it, iterating over the bytes and copying them somewhere or manipulating them somehow.

Now that I have the basic structure of my data downloader, I can add support for progress. It’s just a for loop, so I can calculate the percent downloaded for each cycle of the for loop and assign it to a property.

for try await byte in asyncBytes {
    data.append(byte)
    self.downloadProgress = Double(data.count) / Double(length)
}

In this case, self.downloadProgress is some SwiftUI @State, and it turns out assigning a new value to that property slows the download by 500%. Not great.

I think this example highlights something important about this new API. The file I was trying to download was about 20MB. That means my for loop is going to spin 20 million times. Because of that, it’s extremely sensitive to any slow tasks that take place in the loop. If you do something that is normally pretty fast — let’s say 1 microsecond — 20 million of those will take 20 seconds. This loop is tight.

My next instinct was to read the progress state every tick, but only write to it when the displayed percentage actually changed.

for try await byte in asyncBytes {
    data.append(byte)
    let currentProgress = Double(data.count) / Double(length)

    if Int(self.downloadProgress * 100) != Int(currentProgress * 100) {
        self.downloadProgress = currentProgress
    }
}

This, sadly, also moved at a crawl. Even just reading some @State from SwiftUI is too slow for this loop.

The last thing I did, which did work, is to keep a local variable for the progress, and then only update the progress in SwiftUI when the fastRunningProgress had advanced by a percent.

var fastRunningProgress: Double = 0
for try await byte in asyncBytes {
    data.append(byte)
    let currentProgress = Double(data.count) / Double(length)

    if Int(fastRunningProgress * 100) != Int(currentProgress * 100) {
        self.downloadProgress = currentProgress
        fastRunningProgress = currentProgress
    }
}

Not the most beautiful code I’ve ever written, but gets the job done!

AsyncBytes is probably the premier new AsyncSequence in iOS 15. While it does feel like a very low-level API that you generally wouldn’t choose, it yields such fine-grained control that it’s flexible enough to handle whatever trickles in over the network. It’s also great to get a little experience with AsyncSequences early. Because of its byte-by-byte interface, it’s very straightforward to write things like progress indicators. Great stuff!

As a professional programmer, there are two main types of tasks you work on. I’ve started thinking about them as the context and the logic.

The logic is what you think this job is going to be about when you first start. How do I slice this collection up? How do I find all the paid invoices for this client and sum up their amounts? How does this date get turned into a string to be displayed on the screen? What floor should this elevator go to next? The logic is what they grill you on in interviews. The logic is algorithms. The logic is sometimes specific to your business. The logic is sometimes reusable. The logic has inputs and outputs that are testable.

The context is…everything else. How do I get this data from that service into this client? How do I make this code from this library talk to that code in that library? How do I make my build compile faster? What UI testing framework are we going to use? How do I fill this view controller up with the dependencies it needs? How do I talk to this nifty new e-ink display I bought? Which compile flags will give me useful stack traces when the app crashes? How do I perform this database migration? The context is everything that’s necessary to get your logic to run successfully, consistently, efficiently.

In Structure and Interpretation of Computer Programs, Abelson and the Sussmans write that an algorithm, the logic, “is not composed of matter at all. However, it is very real. It can perform intellectual work. It can answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm in a factory.” It can’t do any of that stuff without a context in which to run. The context lets it communicate to hardware over protocols, send information to a distant database, and even defines how the code is converted into an intermediate representation, CPU instructions, and finally voltage that plays across the silicon. Without context, the logic is purely abstract.

If I wanted to write a dynamic controller for my HVAC — a thermostat! — in Swift, I probably could do it. I could get some hardware like this to talk to the HVAC over 24V, a Raspberry Pi to run it on, maybe a few Raspberry Pis with thermometer sensors around the house to figure out when to turn the HVAC on and off, probably connect everything over Wi-Fi. But while this is possible, think about how much of your energy would be spent soldering hardware, connecting components, testing, writing servers, defining protocols and wire formats, and then compare that to how much time and energy you’d spend actually writing the dynamic control software. It wouldn’t be easy to write the logic, but it would take a fraction of the time that setting up the context would. (Hmm, now I just have to convince my partner that building our own custom thermostat will somehow be better than our Ecobee. It’ll at least be more fun, that’s for sure.)


How much of your time at your job is actually spent on writing the logic, and how much of it is spent preparing an environment in order for that logic to run? I wouldn’t be surprised at all if I found out that 98% of my time was spent on context.

I think a slightly different (and more familiar) way to think about this is in terms of essential versus accidental complexity, a division first suggested by Fred Brooks in 1986. Essential complexity is the logic, accidental is the context. Dan Luu writes about Brooks’s essay: “while this will vary by domain, I’ve personally never worked on a non-trivial problem that isn’t completely dominated by accidental complexity, making the concept of essential complexity meaningless on any problem I’ve worked on that’s worth discussing.”

Nonetheless, logic is not quite the same thing as essential complexity, and context is not the same as accidental complexity. One example of something that is logic but still potentially accidental complexity is writing an algorithm like Ruby’s #squish in Swift. It’s still logic, it behaves like something you might ask in an interview question, you have to manipulate abstract symbols to get the right output, but it’s a total accident of history that Ruby has made it so you can use it without thinking about it logically, but Swift hasn’t. Another way to look at it: all context is accidental, but not all logic is essential.

Dan estimates 1% as an upper bound of his time spent on essential complexity.

Another question: how much of your code is logic, and how much of it is an environment in which that code can run? To take a quick example, let’s look at a table view in UIKit and then in SwiftUI:

class CountriesTableViewController: UITableViewController {

    let countries: [Country]
    
    override func viewDidLoad() {
        super.viewDidLoad()
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "cellIdentifier")
    }
    
    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return countries.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "cellIdentifier", for: indexPath)
        
        cell.textLabel?.text = countries[indexPath.row].name

        return cell
    }    
}

I count 3 lines of real, core logic. Defining the countries array, telling it the count of countries, and assigning the label’s text property.

And the same thing in SwiftUI:

struct CountriesList: View {

    let countries: [Country]

    var body: some View {
        List(countries) { country in
            Text(country.name)
        }
    }
}

Where did all of that other stuff go? The 3 lines of core logic are still there, but everything else seems to have disappeared. It was all context, non-essential for this purpose. A different tool made it vanish. This problem only gets worse as your codebase gets larger; your time becomes dominated by the context. From stem to stern, a typical feature might need a new database table on the server, some queries for that table, some endpoints that call those queries, some networking code on the client to hit those endpoints, a ton of routing code to get the user to a new view controller, and finally dozens of lines of table view controller code, all so you can put a label on the screen with the contents of a database field.

The context even has social and political elements. Who is writing the endpoint? What are their priorities? How do they earn promotions and how will that affect their writing the endpoint you need? Every tweet you read about how you “can learn the coding part while on the job, but the empathy and human components you need to have before you get there” is exactly about this.

This framing, context vs logic, illustrates two things for me:

First, that we all tell ourselves a lie: this job is primarily about the logic, interview candidates should mainly be tested on their ability to think about the logic, a “good” programmer is someone who can write the logic really well. In fact, an overwhelming amount of the job is making the context work. That’s not to say that the logic isn’t important; without the logic, the context doesn’t do anything and you won’t be able to do the job! But without the context, you still can’t do the job, and sadly there’s a lot more context than logic. I’m primarily a context programmer. I wish I weren’t — I enjoy writing the logic a lot more — but it is the reality. I should embrace that and treat the context as my job, rather than as an impediment to “my real job”.

Second, if you can make your context simpler and smaller, you can spend less time on it. Simplifying and unifying your context (where possible) is valuable, since you can recoup value by spending less time working in the context. Don’t use two technologies where one will do.

Some examples of this:

  1. Avoid using multiple ORMs/data access patterns. The lava layer anti-pattern hurts you specifically because it adds so much extra context to work with.
  2. I moved my server set-up from dynamic-sites-on-Heroku/static-sites-on-Linode to everything on Linode (using Dokku). One tool, one server, everything gets treated the same.
  3. Use clever tools, languages, and libraries to make the context become less and less impactful. You can see this in Dan’s essay and with the SwiftUI example. A tool like Fastlane brings code-signing, testing, deploying, and integrations all under one roof and lets you manipulate any of them with short Ruby commands. (In addition to unifying disparate things, this also lets you logic-ify your context, which is neat, too.)

You’ll always have a lot of context to wade around in. This is, sadly, your job. Try to minimize this context as much as possible and you can spend a little less time on it and more time on the good stuff.

“Telling a programmer there’s already a library to do X is like telling a songwriter there’s already a song about love.” - Pete Cordell

Sometimes, it feels like questions that arise during my time programming take years to answer. In particular, my journey to a web framework that I vibe with has been a long one.

A few years ago, I wrote about trying to squeeze the behavior I want out of Vapor. I (rather poorly) named the concept Commands, and I wrote a blog post about the pattern.

Generally speaking, this style of routing, where you use a closure for each route, has been used in frameworks like Flask, Sinatra, and Express. It makes for a pretty great demo, but a project in practice often has complicated endpoints, and putting everything in one big function doesn’t scale.

Going even further, the Rails style of having a giant controller which serves as a namespace for vaguely related methods for each endpoint is borderline offensive. I think we can do better than both of these. (If you want to dig into Ruby server architecture, I’ve taken a few ideas from the Trailblazer project.)

Here’s what the Command protocol basically looked like:

protocol Command {

    init(request: Request, droplet: Droplet) throws

    var status: Status { get }

    func execute() throws -> JSON
}

(This is Vapor 2 code, so I think the above code won’t compile with modern versions of Vapor. RIP to “droplets”.)

The Command protocol represented an instance of something that responds to a request. There was a lot I liked about the Command pattern. It was one object per request, meaning each object had a strong sense of self, and the code was always nicely structured and easy for me to read.

However, it had some downsides. First, it was tacked on to Vapor. There was a fair bit of code to ensure that things stayed compatible with Vapor, and when they released an update, I was obliged to migrate all my stuff over as well.

In addition, the initializers for Commands always had a ton of work in them:

public init(request: Request, droplet: Droplet) throws {
    self.apnsToken = try request.niceJSON.fetch("apnsToken")
    self.user = try request.session.ensureUser()
}

Anything you needed to extract out of the Request had to be done here, so there was always a lot of configuration and setup code. For complex requests, this gets huge. Another subtle thing: because it relies on Swift’s error handling, it can only ever report one error.

This initialization code looks to me like it could get simpler still, and with a little configuration, you could get the exact results you wanted, good errors if something went wrong, and a hefty dose of optimization that I could sprinkle in by controlling the whole stack.

Meridian

Enter Meridian.

struct TodoDraft: Codable {
    let name: String
    let priority: Priority
}

struct CreateTodo: Responder {

    @JSONBody var draft: TodoDraft

    @EnvironmentObject var database: Database
    
    @Auth var auth

    func execute() throws -> Response {

        try database.addTodo(draft)

        return try JSON(database.getTodos())
            .statusCode(.created)
    }
}

Server(errorRenderer: JSONErrorRenderer())
    .register {
        CreateTodo()
            .on(.post("/todos"))
    }
    .environmentObject(try Database())
    .listen()

Property Wrappers

Meridian uses property wrappers to grab useful components from the request so that you can work them into your code without having to specify how to get them. You declare that you want a @JSONBody and give it a type, it handles the rest:

@JSONBody var draft: TodoDraft

Because the value is a non-optional, when your execute() function is called, you’re guaranteed to have your draft. If something is wrong with the JSON payload, your execute() function won’t be called (because it can’t be called). You deal in the exact values you want, and nothing else.

Want a single value from the JSON instead, because making a whole new Codable type to get a single value is annoying? Bam:

@JSONValue("title") var title: String

You can make the type of title be optional, and then the request won’t fail if the value is missing (but will fail if the value is the wrong type).
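That looks like this — the optional type is the only change:

```swift
// With an optional, a missing "title" no longer fails the request;
// a "title" of the wrong type still does.
@JSONValue("title") var title: String?
```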

URL parameters: very important. Sprinkle in some type safety, and why don’t we make them named (instead of positional) for good measure?

@URLParameter(\.id) var id

Query parameters, including codable support for non-string types:

@QueryParameter("client-hours") var clientHours: Double?

A rich environment, just like SwiftUI, so you can create one database and share it among many responders.

@EnvironmentObject var database: Database

You can use EnvironmentValues for value types or for storing multiple objects of the same type.

@Environment(\.shortDateFormatter) var shortDateFormatter

The cherry on top? You can define your own custom property wrappers that can be used just like the first-class ones. I’ve primarily been using this for authentication:

@Auth var auth

You can define them in your app’s module and use them anywhere in your app, just like a first-class property wrapper.

All of these property wrappers seek to have great errors, so users of your API or app will always know what to do to make things work when they fail.

It’s a real joy to simply declare something at the top of your responder, and then use it as though it were a regular value. Even though I wrote the code, and I know exactly how much complexity goes into extracting one of these values, there’s still a magical feeling when I write @QueryParameter("sortOrder") var sortOrder: SortOrder and that value is available for me to use with no extra work from me.

Outgoing Responses

Property wrappers represent information coming in to the request. However, the other side of Meridian is what happens to data on the way out.

For this, Meridian has Responses:

public protocol Response {
    func body() throws -> Data
}

Responses know how to encode themselves, so JSON takes a codable object and returns JSON data. All the user does is return JSON(todos) in their execute() function, and the data is encoded and relevant headers are attached.

EmptyResponse, Redirect, and StringResponse are all pretty straightforward. It’s also not too hard to add your own Response. In one project, I needed to serve static files, so I added a File response type.

struct File: Response {
    let url: URL
    
    func body() throws -> Data {
        try Data(contentsOf: url)
    }
}

This might get more advanced in the future (maybe we could stream the file’s data, let’s say), but this gets the job done.

Responses are a type-aware way of packaging up useful common behavior for data coming out of a request.

Where things are headed

Meridian is nowhere close to done. I’ve written three different JSON APIs with it and a rich web app (using Rob Böhnke’s Swim package to render HTML). The basic concepts are working really well for me, but there’s quite a few things it doesn’t do yet.

  • Parsing multipart input
  • Serving static files
  • Side effects (like the aforelinked Commands)
  • Localization
  • HTML nodes that can access the environment
  • Middleware?

Meridian also knows so much about the way your requests are defined that it should be able to generate an OpenAPI/Swagger spec just from your code, which would be an amazing feature to add.

Meridian is currently synchronous only. The options for async Swift on the server are not great at the moment (because you either end up exposing NIO details or rewriting everything yourself). I’d rather have the migration to async code be as simple as putting await in front of everything, instead of changing all of your code from a block-based callback model to a flat async/await model. I’m focused more on developer experience than sheer request throughput.

I also have a lot of docs to write, though I’ve already written some.

While it’s a bit passé to not blog for over a year, and then dive right back in by talking about your blog setup, I’m going to do it anyway. I recently moved all of my static sites and Swift services over to Dokku, and I am really enjoying it.

Dokku is a collection of shell scripts that act as a self-hosted platform-as-a-service. Basically, it’s your own personal Heroku. Chris and I actually discussed it a few years ago on a (Patreon-only) episode of Fatal Error. I’ve wanted to move my stuff to it for a while, and finally spent some time over the break getting things moved over.

First, I want to run down how things were running before, and why I wanted something like Dokku. First, I manage 4 static sites, including this one. I, through painful trial/error/googling, carefully set up nginx virtual hosting to host all 4 sites on a single Linode instance. I also run 4 git remotes on that server, with clever post-receive hooks that accept pushes, do bundle install and jekyll build (when appropriate) and copy the HTML files to their final destination. This works pretty well and feels Heroku-esque (or Github pages-esque, if you like). I git push, and the rest happens for me automatically. I particularly like this model because it hits the trifecta — I get to host the site myself, I get to use Jekyll (including plugins!), and its interface is just a simple git push. I can even push changes from my phone using Working Copy.

Recently, I also started running a Swift API for a chore app for me and my girlfriend. I hope to blog more about this app soon for a few reasons, not least of which is that I wrote my own property wrapper-based web framework on top of Swift NIO, which has been working great.

This API has been running on Heroku. Because only two people use it, it doesn't really make sense to pay $7/mo for a Hobby dyno, especially given that I run 4 static sites for $5/mo on Linode. Heroku does have a free tier, but using it incurs a performance penalty: your app spins down if it gets no traffic for a little while (about 30 minutes or so?), and spinning back up takes 5-10 seconds. This makes for a pretty intolerable experience in the app. This too was a good candidate for moving to my own infrastructure.

I wrote one more small Swift-based web service for myself, which also shouldn’t have a ton of usage but does need its own Postgres database, and that cinched it. I wanted to move off of this patchwork system onto something more consistent.

I spun up a new Linode instance, installed Dokku, and got to work. For me, setting up each Dokku site involved three major components:

Buildpacks

Dokku uses buildpacks and Procfiles, just like Heroku, to describe how to install and run any app. Buildpacks can be set as an environment variable for the app, or included in the repo in a .buildpacks file. For Jekyll sites, you actually need two buildpacks:

https://github.com/heroku/heroku-buildpack-nginx.git
https://github.com/inket/dokku-buildpack-jekyll3-nginx.git

On the other hand, for Swift sites, the default Vapor buildpack works great:

https://github.com/vapor-community/heroku-buildpack
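To tie this together, the in-repo route is a .buildpacks file in the repo root, one buildpack URL per line, applied in order. For a Jekyll site it would contain the two URLs above:

```shell
# In the site's repo root: list the buildpacks, one per line, in build order.
cat > .buildpacks <<'EOF'
https://github.com/heroku/heroku-buildpack-nginx.git
https://github.com/inket/dokku-buildpack-jekyll3-nginx.git
EOF
# Commit the file and git push to the Dokku remote as usual;
# Dokku picks it up on the next build.
```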

Encryption

Dokku has a concept of “plugins”, which add functionality to Dokku. The Let’s Encrypt plugin worked flawlessly for me. Installing a plugin is easy:

dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git

You install it, set your email as an environment variable, and ask the plugin to add encryption to your site. (DNS for the relevant domains needs to be pointing at the server already for this to work.) Love it.
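Concretely, the sequence looks something like this, per the plugin's README (the app name and email are placeholders):

```shell
# Set the contact email Let's Encrypt requires, without restarting the app.
dokku config:set --no-restart my-app DOKKU_LETSENCRYPT_EMAIL=you@example.com
# Request and install a certificate; DNS for the app's domain must already
# point at this server, since Let's Encrypt verifies it over HTTP.
dokku letsencrypt:enable my-app
```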

Databases

The two Swift services both need Postgres. No problem, just one more plugin:

dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres

From the Postgres plugin, you can create databases, link them to apps, expose them to the internet, and manage them as needed.
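The day-to-day commands look roughly like this (database and app names are illustrative):

```shell
# Create a new Postgres database managed by the plugin.
dokku postgres:create chores-db
# Link it to an app; this injects a DATABASE_URL environment variable
# that the app can read its connection string from.
dokku postgres:link chores-db chores-api
# Optionally expose the database on a host port, for access from
# outside the Docker network (e.g. a GUI client on your laptop).
dokku postgres:expose chores-db
```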

Debugging

I want to end on one final note about debugging. Dokku itself is not a particularly complex tool, since it delegates all the isolation and actual deployment of code to Docker. (Side question: is it “dock-oo”, because of Docker, or doe-koo, like Heroku? The world may never know.) The reliance on Docker means that it can be really helpful to know how to debug Docker containers.

Two specific tips I have here:

  1. You can dokku enter my-app-name web to get a console inside the app’s container, which lets you explore the structure of things.
  2. You can use docker ps to list all the currently running containers, then watch the logs of a build in progress with docker logs -f <container id>. This is super useful for debugging failing builds. (Big thanks to Sam Gross for giving me some insight here.)
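Put together, a debugging session for a misbehaving build might look like this (the app name is a placeholder, and the container ID is whatever docker ps reports):

```shell
# List running containers; during a deploy, the build container shows up here.
docker ps
# Tail that container's logs to watch the build in progress.
docker logs -f <container id>
# Once the app is deployed and running, open a shell in its web process.
dokku enter my-app-name web
```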

Conclusion

Dokku basically checks all my boxes. I get to run things on my own servers, it’s cheap, and deploys are as easy as a git push. I’m paying the same amount as before ($5/mo! I can hardly believe it!). My previously hand-managed static sites all have the exact same interface as they used to, with a lot less work from me to set up and manage, and now I’m hosting Swift-backed sites as well. Put one in the W column.