In one of my favorite posts of all time, Mike Hoye looks up exactly why we use zero-indexed arrays. To spoil it, he finds that the choice was made to optimize for compilation speed (rather than execution speed) on hardware where you only had a limited time to run your job. He writes:

The social reason is that we had to save every cycle we could, because if the job didn’t finish fast it might not finish at all and you never know when you’re getting bumped off the hardware because the President of IBM just called and fuck your thesis, it’s yacht-racing time.

So now, because Mike did the research, we can dispense with all of the arguments about fenceposts and pointer arithmetic. Off-by-one mistakes caused by zero-indexing are common, especially among newcomers who haven’t yet acclimated to the twisted way in which we bend our minds to make things easier for computers. Null dereferences have been called a billion-dollar mistake, and zero-indexing seems decidedly worse and more common.
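To make the fencepost hazard concrete, here’s a minimal sketch in Ruby (the array contents and variable names are purely illustrative):

```ruby
# A classic off-by-one: an index that assumes one-based counting,
# applied to a zero-indexed array.
items = ["a", "b", "c"]

# Buggy: 1..items.length visits indices 1, 2, 3 -- it skips the first
# element and reads one slot past the end (items[3] is nil).
buggy = (1..items.length).map { |i| items[i] }

# Fixed: zero-based indices run from 0 to length - 1, which the
# exclusive range 0...items.length expresses directly.
fixed = (0...items.length).map { |i| items[i] }
```

The bug is silent here because Ruby returns `nil` for out-of-bounds reads; in C, the same mistake walks off the end of the buffer.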

In the comments of that post, though, tons of programmers chime in with appeals to “logical minds” and the “nature” of numbers. It’s as though they didn’t digest the post. They want to justify the status quo as intentional, even when faced with cited evidence of the history: zero-indexing was just one more optimization from the early days of computing that found its way into the heart of programming.

We come up with post-hoc reasons to defend things that are bad. Look at this pull request. A programmer suggests changing the dash between years to a more typographically precise one. It’s a bit of light-hearted nerdery, which I would have expected everyone to get behind. But a few comments in, one contributor objects: “Personally, I’d prefer to keep the file as plain ASCII text.” Is there any computer on which Bootstrap is usable but which can only display ASCII text? We’re not talking about changing functionality, or even names, just a single punctuation mark. And still someone has to find a way to disagree and defend the status quo.

Reactions like these are constricting. Constraints can be great when chosen deliberately, but unintentional constraints cut us off from the full breadth of our thinking. They keep us from critically examining our world. The obsession with the status quo causes things to solidify over time: certain elements of technology become canonized and mythologized.

The finest example of this is Model View Controller. I’ve written before about how each framework treats the “controller” portion of MVC differently, but I want to stress that even the other two parts of MVC, which we would normally think of as straightforward, are inconsistent to the point of meaninglessness. For example, in Rails, a “view” is a template that is filled in with values from the model to generate HTML. In Cocoa Touch, UIView is a layer-backed object that can render itself into a pixel buffer. If you squint, they’re both representations of what the user will see, albeit in such an abstract way that it’s surprising that they have the same name.

And we can’t even rely on the model to be consistent, or even present! Imagine a sign-up view controller. Are any of us actually moving the values from the form into a model object before sending them to the server? Why has MVC attained this state of reverence when it’s trivial to find cases where it doesn’t apply? And it’s not as though MVC is a mere suggestion: the Cocoa documentation actually states that every object has to be either a Model, a View, or a Controller, even though there are objects within Cocoa that don’t fit any of those three categories! “MVC” isn’t a rigorously-defined term anymore, but rather a necessary, low-information signal to other developers that our new framework fits into their worldview.

Some parts of the status quo have solidified so much that they can’t be changed. In a post called “The Next Big Language”, Steve Yegge argues:

C(++)-like syntax is the standard. Your language’s popularity will fall off as a direct function of how far you deviate from it.

If you want a language to be successful, you have to invoke methods with a dot, use curly braces for blocks of code, and so on. He also argues that C-style syntax is so hard to parse that it has made it hard to build tools that manipulate code:

Most programmers don’t realize how often they need to deal with code that processes code. It’s an important case that comes up in many, many everyday situations, but we don’t have particularly good tools for dealing with it, because the syntax itself prevents it from being easy.

That’s why it’s bad.

C-style syntax probably will never go away as long as we have text-based programming languages. Even Swift, which tries to fix tons of flaws in programming, doesn’t even begin to move away from C-style syntax. It’s too ingrained in the way we view code.

This stuff isn’t a meritocracy. It just isn’t. Worse wins out over better, repeatedly. PHP is the dominant language of the web. A world where PHP is more successful than literally any other language isn’t a fair world. And I’m not upset because I expected this world to be fair; I’m upset that anyone dares to claim it is.

This blog has become an exercise in questioning the way we make apps. I don’t want to be bound by mythology or ideology. Some ambitious ideas start out as laughable. And other ideas really are impractical. But the way we determine which is which is by trying them and sharing our results.

If someone tells you not to link your table cells and model objects directly, you have two good options: either find the blog post that convinces you that it’s a bad idea, or try it and learn for yourself. (And then blog about the results so that we can learn from your mistake or success.)

It’s too easy to keep doing what we’ve always done. I want to question all the constants in my programming career. The things that are already the status quo don’t need cheerleading: they’re already winning. But the weird ideas, the undersung ones: those are the ones we should be championing.