Tests versus specs

It’s been popular for some years now to say that tests are “executable specifications.” I think this is the wrong way to think about programming, and it leads to buggier programs than the traditional view that tests are tests.

Saying that tests are specs implies that you don’t need separate specifications. If this isn’t what you mean when you call a “test” a “spec,” then my arguments mainly won’t apply, but remember: regardless of what you mean, people will hear, “replace your specifications.”

Programs are three-legged stools that stand on the trio of specs, tests and program code. Stable programs require that each leg receives equal care. When done well, a useful tension between specs, tests and program code improves quality.

The form and labels can vary: specifications can be formal requirements or notes in a bug tracker; tests can be automated or manual. Whatever the form, every program has these three parts.

“Tests are specs” refers to a specific form: automated tests written in a style called “behavior driven.” Behavior-driven means that tests look like this:

describe Frobber... it 'frobs'...

Instead of

TestFrobber... test_frobs()...
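
Fleshed out, the contrast looks something like this (a sketch; Frobber and its frob method are stand-ins):

// Behavior-driven (Jasmine/Jest-style):
describe('Frobber', () => {
  it('frobs', () => {
    expect(new Frobber().frob()).toBe('frobbed')
  })
})

// Traditional:
function test_frobs() {
  assert(new Frobber().frob() === 'frobbed')
}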

From this style, we can infer the reasoning for replacing specs with tests:

Axiom: specs describe what a program should do

Axiom: tests verify that the program does what the specs say

Assume that you can write tests in a style where they describe what programs should do. Or, equivalently, assume you can write specifications as executable code that can verify compliance.

When so written, specs and tests serve the same purpose. Thus, by the principle that you should eliminate redundancy, they should be the same thing.

The first problem with this argument is that there’s no real basis for believing you can write specifications in an executable language as clearly as you could in natural language. Things like describe... it... and expect(x).toBe(y) (instead of assert x == y) superficially make program code look English-like (if you squint), but it’s not at all obvious that they make anything clearer. And if this style really is clearer, why are tests special? Why not write all code in quasi-English?

By a dubious appeal to Whorfianism – the idea that the words we use affect what we do – behavior-driven style supposedly encourages people to write tests more declaratively, because you “describe” what the program should do. It is true that programs are nearly always more understandable written declaratively than imperatively, but again, there is no reason this should be specific to tests; all program code should be written as clearly and declaratively as possible.

Even if we assume that somehow we can write executable language as clearly as natural language, it doesn’t follow that we need only one or the other. We justify combining test and specs because “redundancy is bad.” By that logic, however, we don’t need test-specs to be separate from program code either. If the specification is executable, it doesn’t need to be the test because it could just as easily be the program. And then there was one.

In a sense, you truly can combine tests, specs and program. A program with unwritten program code is nothing but an idea. A program with unwritten tests and specs is still a program. It’s just not a very good program, and we know why.

[When programming] one must perform perfectly. The computer resembles the magic of legend in this respect, too. If one character, one pause, of the incantation is not strictly in proper form, the magic doesn’t work.

– The Mythical Man Month

Automated tests are programs too and just as capricious. A thorough test suite must often be as large as the program it tests and, therefore, will have as many errors.

Moreover, programs are notoriously hard to change without breaking, which brings us to the second major flaw in the logic behind combining tests with specs. It’s true that tests exist to verify conformance with the specification, but that is not their only purpose; tests also verify that changes don’t break things.

Remembering that tests are programs too, and just as hard to change (safely) as any program, it should be clear that the only way to avoid breaking tests is to avoid changing them. Furthermore, the only way to test the tests themselves is to write them in lock-step with the program under test. This is called the “red-green” cycle and it goes like this:

  1. Write a failing test
  2. Write the code necessary to make the test pass
  3. Repeat

In step one, if you check that the test fails in the way you expect, you know that it tests the code you write in step two. Then back away. So long as you don’t change the test, you can be reasonably sure it’s testing for the intended problem.
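
Concretely, one turn of the cycle might look like this (a sketch with hypothetical names):

// Red: run this first and watch it fail, since frob doesn't exist yet
it('frobs', () => {
  expect(frob()).toBe('frobbed')
})

// Green: now write just enough code to make it pass
function frob() {
  return 'frobbed'
}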

The benefit of a specification, by contrast, is that you write it before you write the program. It clarifies the problem and the goal and forestalls misguided coding.

Specs will also be incomplete and sometimes plain wrong, because writing a flawless description of a program is as hard as writing a flawless program. So, to remain relevant, specs must change as programs change. If your specs are also your tests, not only will they be incomplete and wrong at the beginning, errors introduced by changes will make them wrong in new ways at the end.

I still hope for executable specifications, though I think “property-based” testing is a more promising avenue than “behavior-driven.” However they come, executable specifications will arrive – at the latest, when computers learn to understand English better than humans do – and even then, specifications cannot replace tests.
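
For a flavor of the property-based style, here is a sketch using the fast-check library; mySort stands in for whatever function is under test:

const fc = require('fast-check')

// Property: sorting preserves length and produces an ordered array.
// fast-check generates many inputs instead of relying on hand-picked cases.
fc.assert(fc.property(fc.array(fc.integer()), (xs) => {
  const sorted = mySort(xs)
  return sorted.length === xs.length &&
    sorted.every((x, i) => i === 0 || sorted[i - 1] <= x)
}))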

Do you need a Model?

It’s common for graphical programs to work something like this:

  1. Pull data from persistent storage: database, web server and so on
  2. Tuck that data away in a “Model” object
  3. Display the data
  4. Receive updates from either the user interface or the persistent storage
  5. Hope that the screen or the storage also updates, respectively
  6. Goto 4

The role of “hope” in this scenario is played by something often called “Data Bindings.” The problem with this design is that it’s an obviously bad idea: by keeping a Model like this, you’ve created a cache and cache invalidation is known to be a hard problem.

I’ll assume you’re writing a webapp, because, who isn’t? I claim you don’t need any Model client-side at all. If you have a long memory, you might say that I’m just being ornery and yearning for a time when we didn’t need a megabyte of JavaScript to display a form with one field:

[Image: Google’s homepage – a form with one field]

You’ll say you need these Models and Bindings because modern.

You’d be partly right. This does make me ornery, but let’s step back and look at what might be the number one killer of program maintainability: mutable state. A graphical interface is nothing if not a big pile of mutable state and the job of its programmers is to wrangle the ways it mutates.

In a sense, every useful program needs to grapple with this problem. Programs that lack persistent data or ways to display it are pretty useless, in the way that a tree might fall but nobody knows whether it makes noise and most people don’t care.

I advocate that you scrap the client-side Model, but I don’t claim this will make your webapp easy to write. No matter what, you’re dealing with a pile of mutable state and that’s bound to be tricky.

Why make it trickier than necessary? You really do need widgets on the screen: checkboxes that can be checked, or not, spans that could say one thing, or another. Do you need a shadow copy of those things as well?

Now, not all state duplication is called a Model and not everything called a Model is state duplication. Elm, for example, takes the view that Model is the application state but avoids state duplication by virtue of being purely functional.

Most programs aren’t written in purely functional languages, so the Model is state duplication, but users don’t see Models. They see widgets and to them, the state of the widgets is the state of the data. In a programmer’s ivory tower, you might argue that the Model is the source of truth, but your arguments matter not at all to actual people.

So don’t fight it. The path to enlightenment lies in realizing that once you’ve rendered the data, it doesn’t matter anymore. Throw it away.

Wait, that might work for your Web 2.0 site and its DHTML, but I have a Web APP. It’s modern. It has a span over here and a checkbox over there and the span says ‘frobbing’ or ‘unfrobbing’ depending on whether the checkbox is checked.

Ok, but I’m not seeing how Models and magic Bindings are simpler than this:

checkbox.onchange = () => {
  span.textContent = checkbox.checked ? 'frobbing' : 'unfrobbing'
}

Well, it’s not just this one span, this blink tag over here needs to appear if you’re not frobbing. With your plan, I need to add another line of code to handle that.

True, but you’d have to add something when backing it with Models and Bindings as well. It wouldn’t be less code, just different code and more indirection.

There are more ways frobbingness can change though. I don’t want to copy this logic in every place.

And well you should not, but you already have the tool to solve that problem: a function. Move the interface update logic to a function and call it anytime frobbingness changes.
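
A sketch, with hypothetical element names:

function renderFrobbing(isFrobbing) {
  span.textContent = isFrobbing ? 'frobbing' : 'unfrobbing'
  blink.style.display = isFrobbing ? 'none' : 'inline'
}

checkbox.onchange = () => renderFrobbing(checkbox.checked)
// ...and call renderFrobbing anywhere else frobbingness changes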

I see what you’re saying, but I don’t like it. The span and the blink are in logically different components. I’ll end up with a bunch of functions that update both, but don’t sensibly belong to either.

That’s a very good point.

Partly, the appeal of the client-side Model is that it seems we ought to be able to bind all the widgets to the same object, giving us a kind of fan-in-fan-out approach to updating them. In principle, you’d have just one Frobber and you’d change its isFrobbing property, then all the dependent widgets would get notified. People have been trying to implement this concept since the invention of the computer and it’s gone by many names. Recently, the buzzword is “reactive” programming.
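
Stripped of framework trappings, the core of the idea is just an object that notifies listeners when it changes (a bare-bones sketch, not any particular library):

class Observable {
  constructor(value) {
    this.value = value
    this.listeners = []
  }
  subscribe(listener) {
    this.listeners.push(listener)
  }
  set(value) {
    this.value = value                        // fan-in: one place to change
    this.listeners.forEach((fn) => fn(value)) // fan-out: every widget notified
  }
}

const isFrobbing = new Observable(false)
isFrobbing.subscribe((v) => { span.textContent = v ? 'frobbing' : 'unfrobbing' })
isFrobbing.subscribe((v) => { blink.style.display = v ? 'none' : 'inline' })
checkbox.onchange = () => isFrobbing.set(checkbox.checked)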

This idea can work well in some systems. Spreadsheets and style sheets, for example, are very successful reactive programming models. It works well in limited domains with declarative or purely functional languages.

In non-functional practice, though, the same data structures aren’t convenient for all widgets. Imagine that Frobbers are parts of Whatzits, but in one place you display all your Frobbers and in another you display the contents of a selected Whatzit. Your Models probably look something like this:

frobbers:
  - id: 1
    isFrobbing: true
  - id: 2
    isFrobbing: false

selectedWhatzit:
  frobbers:
    - id: 1
      isFrobbing: true

Now which copy of Frobber number 1 is the true Frobber? Ideally, you’ll make them point to the same object so it doesn’t matter. Your Binding magic may or may not be able to make sense of that arrangement and the temptation to duplicate data is strong. Models and Bindings just moved the concrete problem of what appears in widgets to the abstract problem of what goes where in the invisible Model.

With or without magic Models, as the application grows, you’ll need to put things in sensible places and call them in sensible ways. That is the whole job of user interface programming.

If you keep doing this for all the data and all the widgets in your app, you’ll notice you keep doing the same kind of tedious things over and over. This thing over here updates some data. Need to show it over there. Where’s the right place to put that code? Here, there, somewhere else? Decide, repeat.

Naturally, as programmers, we think we can automate away the repetition, but some complexity is fundamental.

Evolution

In the beginning, there was html


<form method=post action=signup>
  <label>Username       <input name=user></label>
  <label>Password       <input name=pw type=password></label>
  <label>Password again <input name=pwv type=password></label>
  <button>Submit</button>
</form>

Then came JavaScript

<form id=signup method=post action=signup>
  <label>Username       <input name=user></label><br>
  <label>Password       <input name=pw type=password></label><br>
  <label>Password again <input name=pwv type=password></label><br>
  <button>Submit</button>
</form>

<script>
<!--
document.forms.signup.onsubmit = function () {
  if (this.pw.value != this.pwv.value) {
    alert("Passwords don't match")
    return false
  }
}
// -->
</script>

And jQuery

<script src="https://code.jquery.com/jquery-1.1.4.js"></script>

<form id="signup" method="post" action="signup">
    <label>Username       <input name="user"></label><br/>
    <label>Password       <input name="pw" type="password"></label><br/>
    <label>Password again <input name="pwv" type="password"></label><br/>
    <button>Submit</button>
</form>

<script>
// <![CDATA[
$('#signup').submit(function () {
  if (this.pw.value != this.pwv.value) {
    alert("Passwords don't match")
    return false
  }
})
// ]]>
</script>

And json

<script src="https://code.jquery.com/jquery-1.5.js"></script>

<form id="signup">
    <label>Username       <input name="user"></label><br/>
    <label>Password       <input name="pw" type="password"></label><br/>
    <label>Password again <input name="pwv" type="password"></label><br/>
    <button>Submit</button>
</form>

<script>
/*jslint browser: true */
/*global $, alert, window */

$("#signup").submit(function (event) {
    "use strict";

    var form = $(event.target);
    var formData = {};

    form.find("input").each(function (ignore, el) {
        var input = $(el);
        var name = input.attr("name");
        formData[name] = input.val();
    });

    if (formData.pw !== formData.pwv) {
        alert("Passwords don't match");
    } else {
        $.ajax("signup", {
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify(formData)
        }).done(function () {
            window.location = "signup";
        }).fail(function () {
            alert("Error");
        });
    }

    return false;
});
</script>

And Ecma 6

<form id="signup">
    <label>Username       <input name="user"></label><br>
    <label>Password       <input name="pw" type="password"></label><br>
    <label>Password again <input name="pwv" type="password"></label><br>
    <button>Submit</button>
</form>

<script>
/*jshint esversion: 6 */
document.querySelector("#signup").addEventListener("submit", submitEvent => {
    "use strict";
    
    const form = submitEvent.target;
    const formData = {};

    submitEvent.preventDefault();

    for (const input of form.querySelectorAll("input")) {
        const name = input.getAttribute("name");
        formData[name] = input.value;
    }

    if (formData.pw !== formData.pwv) {
        alert("Passwords don't match");
    } else {
        const xhr = new XMLHttpRequest();
        xhr.addEventListener("load", () => {
            if (xhr.status === 200) {
                window.location = "signup";
            } else {
                alert("Error");
            }
        });
        xhr.addEventListener('error', _ => {
            alert('Error');
        });
        xhr.open("POST", "signup");
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.send(JSON.stringify(formData));
    }
});
</script>

Progress?

Linting JavaScript considered harmful

I am in a minority of programmers so small that I might be the only member. I can’t find anyone on the Internet advocating my position: you should not lint your JavaScript.

I don’t mean you shouldn’t use this or that flavor of lint. I mean that you shouldn’t use any JavaScript linter. At all.

First, the arguments for linting…

Lint catches bugs?

How many bugs does it really catch? Only a few. Rules against unused variables can be useful when you’ve renamed something in one place and forgotten another place, for example. Trouble is, I’ve only noticed lint catching bugs in code that wasn’t complete or tested anyway, and therefore already broken by definition.

The major category of bugs caught by lint can be caught instead by a simple statement:

"use strict";

JavaScript strict mode really does find bugs, almost always results of accidentally overwriting globals. It’s tragic that we need lint to tell us about missing strict mode declarations when browsers could warn us.
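
For example, with the directive in place, a typo that would otherwise silently create a global becomes a loud error (totl is the deliberate typo):

'use strict'

let total = 0
function addFrob() {
  totl = total + 1  // ReferenceError in strict mode; a new global without it
}
addFrob()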

But is it worth bringing in lint for the few bugs it can find?

I think not; the better approach is simple: learn to use strict reflexively. Then spend the effort you were going to use typing semicolons on testing instead.

Lint enforces consistency?

So what? Consistency for its own sake is a pursuit of feeble minds.

In writing, particularly writing computer programs, consistency is a proxy for something much more important: readability. And readability is not something that computers yet understand well.

It is perfectly possible, common, in fact, to write incomprehensible code that passes a linter.

But, you say, “lint rules can help a little, so we should use them. We just need to pick a ruleset.”

Lint reduces bikeshedding?

In the beginning, lint was an inflexible representation of one man’s preferences. Next, everyone bought into the idea that they should lint their JavaScript, then, rather than adopting Crockford Style and moving on, argued endlessly about which rules were important.

We went from the ultra-rigid linter, to the ultra-configurable, to the ultra-pluggable. At each step, we introduced more and more time-consuming opportunities to argue about picayune issues.

To paraphrase Tim Harford:

Lint is such a tempting distraction because it feels like work, but it isn’t. When you’re arguing about lint rules or fixing lint errors, you’re editing code, but you’re not getting things done.

So should I never lint?

All that said, I recommend lint for one purpose: it can be a useful way to learn about the idioms and pitfalls of a language. Run your code through a linter and learn why it complains.

This use of lint as a teaching tool is actively discouraged by the way most people advocate using it. Imagine you set up a hook that says “all code must pass lint before commit.” Those who actually could benefit from the lint suggestions are blocked by them, thus encouraged to “fix” the “problem” as quickly as possible: obey the tool and never learn why.

Painless Android releases revisited

Previously, I described a Gradle script that handily generates release version codes for Android apps. The generated version codes take the form [date][number].

I finished that article with a litany of Gradle bugs. Today: fresh Google bugs!

In May, Google added automatic crash reporting to the Google Play developer console. Before auto-reporting, users had to explicitly send reports when apps crashed. So far so good, but if you’re testing on a physical device, you might notice something alarming: reports of bugs you already fixed, or crashes you only saw in development.

Apparently, Google forgot to filter out reports from debug-mode applications. Perhaps Google would claim this is a feature, but it means that you can’t tell which crashes are actually happening in the wild.

Google says crash reporting is “opt-in.” This is meant ironically, since the option to turn it off doesn’t actually exist on, for example, the Samsung S8. (There is a different option, “report diagnostic information.” As far as I can tell, it’s a placebo.)

To work around this, we need to make crash reports from the debug version look somehow different from the production version. Crash reports include the version code, so remember that suffix? We can use that. Instead of using one number per release, use two: one for the release, one for the next development version:

// On release, the version code updates to a date-based format. Even-numbered version codes
// are release builds, odd-numbered version codes are debug builds. Max five releases per day.
def releaseVersionCode = null
def writeVersionCode(versionCode) {
    def releaser = project.plugins[net.researchgate.release.ReleasePlugin]
    def propsFile = releaser.findPropertiesFile()
    def props = new Properties()
    propsFile.withInputStream { props.load(it) }
    props.versionCode = versionCode.toString()
    propsFile.withOutputStream { props.store(it, null) }
}

task nextDebugVersionCode { doLast {
    // Even though this runs after the release build, project.versionCode is still the version
    // code *before* release. The Release plugin runs the release build in a separate Gradle
    // invocation, so the release package picks up version changes in gradle.properties. When
    // control returns here though, it's the original Gradle invocation, and has *not* reloaded
    // gradle.properties.
    writeVersionCode(releaseVersionCode + 1)
}}
updateVersion.dependsOn nextDebugVersionCode

task setReleaseVersionCode { doLast {
    def current = project.versionCode.toInteger()
    // Note: lowercase 'yy' is the calendar year; uppercase 'YY' means week-based year
    releaseVersionCode = new Date().format('yyMMdd0', TimeZone.getTimeZone('UTC')).toInteger()
    if (releaseVersionCode <= current) {
        // Should only happen when there is more than one release in a day
        releaseVersionCode = current + 1
    }
    writeVersionCode(releaseVersionCode)
}}
unSnapshotVersion.dependsOn setReleaseVersionCode

So, now the first release of the day gets suffix zero, the debug version that follows gets suffix one, and so on. I’m writing this on July 26, so if I cut two releases today, my version codes will be:

  • 1707260, production
  • 1707261, debug
  • 1707262, production
  • 1707263, debug

It’s subtle, but at least now we can tell which crashes actually happened to people using the app: they are the even numbers.

Or are they?

It appears that Google stores the crash data on the phone and reports it only once per day. The version code it reports is the version running on the phone when it sends the report, not the version that was running when the crash actually happened.

If the app updates in the interim, we can still get crash reports for bugs already fixed and they will seem to come from a version that includes the fix.

I don’t know of any workaround.

Painless Android releases

Android apps require not one, but two version numbers:

  • Version code: an integer that Android uses to check whether one version is more recent than another
  • Version name: a friendly version to display to the user, conventionally something like 1.2.3

This means that when you want to build a new release of your app, you have two things to manually update, and that is two things too many. You will make mistakes.

Luckily, it’s not too hard to automate this away in your Gradle build script.

Gradle inherited much of its design from Apache Maven. Maven defined a standard release feature that automatically handles typical pitfalls and mindless details of making a release: tagging in source control and incrementing your version number. For Gradle, there is a nice third-party implementation, the gradle-release plugin. So long as you don’t fight Maven-style version conventions, it can make cutting releases almost entirely automatic, modulo prompting you to confirm that it guessed correct version numbers.

If your project only has one version number, you just apply the release plugin and you’re done, but Android’s two-version-number system takes some customization.
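
Applying it is nearly a one-liner (the version number here is a placeholder; check the plugin’s documentation for the current one):

plugins {
    id 'net.researchgate.release' version 'x.y.z'
}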

I only discuss version numbers here, but the release plugin also does several other useful sanity checks.

First, move the versions out of your app/build.gradle into app/gradle.properties. They should look like so:

app/gradle.properties

version=1.0-SNAPSHOT
versionCode=1

app/build.gradle

android {
    // ...
    defaultConfig {
        versionCode project.versionCode.toInteger()
        versionName project.version
        // ...

“SNAPSHOT” is Maven’s convention for “between releases”. Version 1.0-SNAPSHOT means the code leading up to version 1.0. This convention is how the release plugin guesses what version number you are releasing: it just lops off the suffix.

When you run ./gradlew release, the release plugin updates the version thus:

  1. Edits gradle.properties, removing the “snapshot” part
    1.0-SNAPSHOT becomes 1.0
  2. Commits the change and tags this as version 1.0 in source control
  3. Builds the release
  4. Edits gradle.properties again, to next dev version
    1.0 becomes 1.1-SNAPSHOT
  5. Commits so you can immediately start working on version 1.1

Thus, out of the box, this handles the user-friendly version number, but not the “version code.”

Updating the version code

When Android installs an update to an app, it knows by version code whether the update is newer than what it currently has installed. 3 is newer than 2 and so on.

Thus, the obvious strategy for updating your version code is to add one on every release. If using the release plugin, you might do this as a manual step after it finishes a release. If you forget, you’ll accidentally build your next release with the same version code as you just used. If you have other branches, you need to remember to update them as well. Ouch.

There is a better way. Version codes need not be sequential, so instead of incrementing 1,2,3…, we can derive it from the date. A format like [2-digit year][month][day][0-9] works nicely. A release today gets version code 1704080, tomorrow, 1704090.

This format will cover you for 82 years at up to ten releases a day. If that’s not enough for you, use a four-digit year and a two-digit suffix, but watch out for integer overflow in 130 years or so.

The date-based strategy, however, means that you have to set your “version code” immediately before you release, instead of after. To do this, add a Gradle task that runs right before the release plugin updates the version name.

app/build.gradle

task setVersionCode { doLast {
    // Derive the date-based version code immediately before the release
    def current = project.versionCode.toInteger()
    def releaseAs = new Date().format('yyMMdd0', TimeZone.getTimeZone('UTC'))
    if (releaseAs.toInteger() <= current) {
        // More than one release today
        releaseAs = current + 1
    }
    def releaser = project.plugins[net.researchgate.release.ReleasePlugin]
    def propsFile = releaser.findPropertiesFile()
    def props = new Properties()
    propsFile.withInputStream { props.load(it) }
    props.versionCode = releaseAs.toString()
    propsFile.withOutputStream { props.store(it, null) }
}}
// Execute our task before unSnapshotVersion, provided by the release plugin:
unSnapshotVersion.dependsOn setVersionCode

With this simple build script change (plus applying the release plugin), a single command updates both version numbers:

./gradlew release

The release plugin also runs the “build” task at the point of release, so this single command leaves you with both a release .apk and your working directory updated to the tip (snapshot) code ready to start work on the next release. There’s still a problem though: if you haven’t configured your build script to sign the build, you won’t be able to publish the release .apk.

Signing the build

To make Gradle sign a build, you need to add a “signingConfig”:

android {
    // ...
    signingConfigs {
        release {
            storeFile file('/home/myname/.javakeys/mykeys.jks')
            keyAlias 'myappsigningkey'
            // These two lines make gradle believe that the signingConfigs
            // section is complete. Without them, tasks like installRelease
            // will not be available! (see http://stackoverflow.com/a/19350401)
            storePassword "notYourRealPassword"
            keyPassword "notYourRealPassword"
        }
    }
    buildTypes {
        release {
            signingConfig signingConfigs.release
            // ...

Building with those placeholder passwords fails at signing time, of course, so you put your real password in the “password” config place and get pwned. Your wife leaves you, and your dog dies. You didn’t do that, right?

So where should you put your password? The top-voted answer on Stack Overflow says ~/.gradle/gradle.properties, presumably protected by 600 permissions. I don’t see the point. If you’re relying on file system permissions to keep the password secure, why have the password at all? You could just protect the keystore with file system permissions.

What you need is a prompt for the password.

Thanks to bug 1251, Gradle running in daemon mode (the default) doesn’t let you use System.console().readPassword("Password:"). You can disable daemon mode, but then you run afoul of (orphaned?) bug 2357, because Android Studio generates a default gradle.properties that includes jvmargs. Once you remove that configuration, you find that prompts don’t display when building outside daemon mode (bug 869). That’s a pain, because then you can’t see the version number confirmation prompts.

As a result of this epic adventure, you’ll eventually find that the only reliable way to prompt for password is via Swing. No, I’m not joking. It’s not as gruesome as it sounds, thanks to Groovy’s Swing builder, so pop over to where Tim Roes documented how to do it.
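
From memory, the shape of his solution is roughly this (a sketch; the task and config names will vary by project):

import groovy.swing.SwingBuilder

gradle.taskGraph.whenReady { taskGraph ->
    if (taskGraph.hasTask(':app:assembleRelease')) {
        def password = ''
        new SwingBuilder().edt {
            dialog(modal: true, title: 'Keystore password', alwaysOnTop: true,
                    resizable: false, locationRelativeTo: null, pack: true, show: true) {
                vbox {
                    label(text: 'Enter keystore password:')
                    def input = passwordField()
                    button(defaultButton: true, text: 'OK', actionPerformed: {
                        password = new String(input.password)
                        dispose()
                    })
                }
            }
        }
        android.signingConfigs.release.storePassword = password
        android.signingConfigs.release.keyPassword = password
    }
}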

Update: there’s a new version of this build script

Inheritance: is-a has-a

Lots of things we learn in school turn out to be naive simplifications of how the real world works, and sometimes we later learn, to our chagrin, that the way we thought about the world really isn’t true at all. Take that familiar organization of life into a giant tree: kingdom, phylum, genus, species. It seems neat enough, but in the grown-up world, people can spend lifetimes arguing about where things fit in this classification.

A related simplification that I learned in school was the rule of when to use inheritance versus composition. It went like so: in this assignment, you simulate a world full of monsters. Zombie is a type of monster, so zombie should inherit from monster. On the other hand, vampires have a coffin, so vampire should have a field that refers to coffin. Now make a UML diagram.

This makes sense as far as it goes, but there’s a major problem: it’s not usually a useful way to think about inheritance when building real programs.

The is-a versus has-a perspective makes the most sense when thinking about type systems. If a function takes an argument of type monster, it can also take any subtype of monster, whether vampire or zombie. The trouble starts when you use the same reasoning to design a program, and that brings us back to our taxonomy problem.

You start designing a system by figuring out what your different things are: zombies, vampires, ghosts, coffins and so on. It’s easy enough: three types of monsters, each a class that inherits from monster, and coffin, its own thing. Naturally, you also need people; people need places to live and ghosts need places to haunt, so you have houses. But wait, people aren’t monsters, but they have a lot in common, so they need a base class, say living things. But that’s not quite right; the monsters aren’t technically alive, so maybe they are dead things. Also, houses and coffins seem to be of a non-living type, so that’s another base class. Should it be dead things? If the coffin is made of wood, it used to be alive, so maybe that makes sense.

Most real-world characteristics of things are completely irrelevant to most programs. In our simulation, perhaps the only thing ghosts do is haunt, whereas vampires and zombies bite people but don’t haunt. It’s confusing and wasteful to worry about how they are all types of monsters, who are types of dead things and so forth.
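
In code, that means declaring only the behaviors the simulation actually exercises, rather than reconstructing biology (a hypothetical sketch):

class House { int spookiness; }
class Person { boolean bitten; }
class Coffin { }

// Model what the things do in this program, not what they "are"
interface Haunter { void haunt(House house); }
interface Biter { void bite(Person person); }

class Ghost implements Haunter {
    public void haunt(House house) { house.spookiness++; }
}

class Zombie implements Biter {
    public void bite(Person person) { person.bitten = true; }
}

class Vampire implements Biter {
    private final Coffin coffin; // has-a, no taxonomy required
    Vampire(Coffin coffin) { this.coffin = coffin; }
    public void bite(Person person) { person.bitten = true; }
}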

Now, occasionally, it does make sense to think of inheritance as an is-a relationship. The cf0x10 parse tree, for example, is a pile of subclasses. When this type of design makes sense, however, it will be obvious; no need to shoehorn everything into it.

What about other metaphors? It’s common, for example, to say that instances of classes are receivers while method calls are messages to that receiver. That’s a useful perspective for language design and it’s useful to have a name for that bit before the dot – receiver.message() – but, again it’s not so helpful a metaphor when designing a program.

In real programs, metaphors like these just tend to cause trouble. Software isn’t made of physical things. A class, in reality, is just a way to group related bits of a program. I prefer not to start by creating any design for a class hierarchy; instead I write code that does the things I need it to do. A class hierarchy, if any, usually emerges from unifying the bits that make sense to put together.

Exception Rules IV: The Voyage Stack

Something I wrote seven years ago, published now to see whether I’ve learned anything since then.

This is part of a series where I review common wisdom about Java error handling. The series is in no particular order, but the first installment explains my categorization method.

Clean up with finally
Truth: high
Importance: high

Joshua Bloch explains the reasoning behind this in Effective Java as “strive for failure atomicity.” Whatever happens, clean up after yourself. Checked exceptions give us one of their rare benefits by just maybe reminding us to write finally blocks.

One way not to write a finally block, however, is like this:

Connection connection = null;
try {
	connection = DriverManager.getConnection("stooges");
	// Snip other database code
} catch (SQLException e) {
	throw new RuntimeException(e);
} finally {
	try {
		// Bad. Do not do this.
		connection.close();
	} catch (SQLException e) {
		throw new RuntimeException(e);
	}
}

If opening the connection throws SQLException, closing it throws NullPointerException and obscures the original cause. Usually, people suggest wrapping a conditional around the connection.close() call to check for null, but that gets ugly fast and is easy to forget. Instead, follow the next rule.

Make try blocks as small as possible
Truth: high
Importance: medium

Consider this code that obscures which file could not open:

FileReader curly = null;
FileReader shemp = null;
try {
	// Smelly. Don't do this.
	curly = new FileReader("Curly");
	shemp = new FileReader("Shemp");
} catch (FileNotFoundException e) {
	throw new RuntimeException("Whoopwoopwoopwoop", e);
} finally {
	// The close method translates IOException
	// to RuntimeException
	if (curly != null) {
		close(curly);
	}
	if (shemp != null) {
		close(shemp);
	}
}

We programmers wrap multiple lines in the same handler because we are lazy, but like the hare napping during the race, that laziness hurts us in the end; it forces us to reason much more carefully about the application state during cleanup.

Instead, handle the exception as close to its cause as possible:

try {
	curly = new FileReader("Curly");
} catch (FileNotFoundException e) {
	throw new RuntimeException("Missing: Curly", e);
}
try {
	shemp = new FileReader("Shemp");
} catch (FileNotFoundException e) {
	throw new RuntimeException("Missing: Shemp", e);
} finally {
	close(curly);
}
close(shemp);

The benefits of the shrunken try block may not be obvious from this tiny example, but consider how you can now factor out the file opening blocks to a method that simply throws an unchecked exception for missing files. Once done, this code simplifies down to:

curly = open("Curly");
try {
	shemp = open("Shemp");
} finally {
	close(curly);
}
close(shemp);

Note that all these examples lose the original exception when the close method throws an exception. Most applications can afford that minimal risk, but if you believe it likely that your cleanup code will throw further exceptions, a log and re-throw might be appropriate.

Do not rely on getCause()
Truth: high
Importance: high

Peeking at an exception’s cause is equivalent to using something’s privates or parsing the string representation to find a field value. It makes code brittle; moreover, you can only test it by forcing errors in your collaborators.

If you find that some library forces you to do this, consider avoiding that function completely; if you still have no way around it, adorn your code liberally with “XXX” comments, and test as best you can.

Do not catch top-level exceptions
Truth: low
Importance: low

Top-level exception classes like Exception live close to the root of the exception hierarchy. The argument against catching them says that the specific lower type, such as FileNotFound, traditionally conveys the information necessary to handle the exception; by catching the top-level exception, your handler deals with an unknown error it probably knows little about.

Actually, the advice should say “do not try to recover from top-level exceptions.” Catching top-level exceptions is not fundamentally wrong or even bad, but because you do not really know how severe the error was, you should usually do no more than report it and crash as gracefully as you can.

Exception Rules III: The Search for Cause

Something I wrote seven years ago. Did I learn anything in that time?

This is part of a series where I review common wisdom about Java error handling. The series is in no particular order, but the first installment explains my categorization method.

Do not use empty catch blocks
Truth: high
Importance: high

This is the most obvious of the category of exception handling rules that address how to avoid losing error information. The usual example for when you might justifiably ignore an exception goes like so:

static void closeQuietly(Closeable closeable) {
  // Smelly. Do not do this.
  try {
    closeable.close();
  } catch (IOException e) {
    // I've already done what I needed, so I don't care
  }
}

Yes, the comment makes this better than the completely empty catch block, but that is like saying that heroin is fine because you wear long sleeves.

Very rarely, suppressing an exception actually is the right thing to do, but never unless you absolutely know why it happened. Do you know when close() throws IOException? I thought not.
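
If you must centralize closing, a version that at least records the failure is barely more work (a sketch using java.util.logging):

static void close(Closeable closeable) {
  try {
    closeable.close();
  } catch (IOException e) {
    // Not swallowed: the failure is at least on the record
    Logger.getLogger("io").log(Level.WARNING, "Failed to close " + closeable, e);
  }
}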

Do not catch and return null
Truth: medium
Importance: medium

Catching and returning null is a minor variation on the exception-suppression theme. Consider the Stooges class, which contains this method:

public String poke(String stooge)
              throws StoogeNotFoundException {
  if (stooges.contains(stooge)) {
    return "Woopwoopwoopwoop";
  } else {
    throw new StoogeNotFoundException("Wise guy, eh");
  }
}

Suppose you want to write another method that checks a Stooge’s reaction to a poke, but Stooges gives you no isStooge method. Instead, it forces you to write this:

static String getStoogeReaction(Stooges stooges, String name) {
  try {
    return stooges.poke(name);
  } catch (StoogeNotFoundException e) {
    return null;
  }
}

If you have to use an API that uses exceptions for flow control, something like this might be your best option, but never write an API that makes your clients do it.
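
The kinder API adds the query method so clients can ask first (a hypothetical addition to Stooges):

public boolean isStooge(String stooge) {
  return stooges.contains(stooge);
}

With that, getStoogeReaction can check membership up front and reserve the exception for genuinely exceptional pokes.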

Log only once
Truth: medium
Importance: low

You can also state this rule as “Log or re-throw, not both.” Redundant logging is certainly impolite to those maintaining your application, but hardly the worst you could do. You might log and re-throw for legitimate reasons:

  • Your framework swallows exceptions you throw at it
  • Your application logs in a specific location, different from your container
  • This is part of a larger application, and you worry that your clients might ignore the exception

Do not log unless you need to, but if in doubt, log it.

Always keep the cause when chaining exceptions
Truth: high
Importance: high

Only the very naive intentionally drop the cause, but it is easy to do accidentally, and a very easy way to lose the information about what went wrong.
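
The accident usually looks like this (echoing the connection example above):

try {
  connection = DriverManager.getConnection("stooges");
} catch (SQLException e) {
  // Bad: the original stack trace is gone
  throw new RuntimeException("Could not connect");
  // Good: throw new RuntimeException("Could not connect", e);
}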

In 2016, I still think exception chaining is a very important feature and I’ve been surprised by how many mainstream languages lack exception chaining.

Exception Rules II: The Wrath of Checked

Something I wrote seven years ago; I’m publishing now to see if I’ve learned anything.

This is part of a series where I review common wisdom about Java error handling. The series is in no particular order, but the first installment explains my categorization method.

Use checked exceptions when the client might recover
Truth: low
Importance: medium

The checked exception experiment tested a compelling ideal. Stated in Sun’s tutorial:

Any Exception that can be thrown by a method is part of the method’s public programming interface. Those who call a method must know about the exceptions that a method can throw so that they can decide what to do about them. These exceptions are as much a part of that method’s programming interface as its parameters and return value.

The more Java I write, the more convincing I find Bruce Eckel’s argument that the experiment proved its hypothesis false.

The tutorial writers tell us to use checked exceptions whenever our clients can do something useful to recover. They fail to mention the abstraction-destroying effects of checked exceptions.

To be fair, checked exceptions only destroy abstractions the way alcohol destroys families; if daddy stopped using so much we would be fine. But programmers are human, and humans are lazy. Especially programmers.

Laziness makes programmers suppress errors, but exceptions also get hidden for a good reason: abstraction. In the typical example, abstracting away the database connection means not subjecting your client to SQLException. If SQLException were unchecked, abstractions that neglected to handle it would still leak on error, but their programmers would not have to bake the leakiness into their signatures.

Yes, checking the error code and determining whether to retry after waiting or email an administrator or call Ghostbusters is ideal, but only a small fraction of programs actually need that depth of error tolerance. For the majority, wrapping in RuntimeException is often best, but Sun’s tutorial will make you feel guilty about that:

Do not throw a RuntimeException or create a subclass of RuntimeException simply because you don’t want to be bothered with specifying the exceptions your methods can throw.

Ignore it. This is the sort of thinking that encourages silly specifications like FileNotFound, which indicates that the file does not exist. Or is read-only. Or locked. Or a directory. Or for some other reason inaccessible.

All languages I know other than Java work perfectly well without checked exceptions, implying that you can legitimately throw unchecked exceptions only and stop wasting brain cycles on whether you should make the exception checked or not. If, however, you still want to use checked exceptions, follow this simple guideline:

Use checked exceptions only when client code could not have anticipated the error.

FileInputStream, for example, makes itself more irritating by ignoring this advice. Its constructors should not throw FileNotFound because client code should have checked for the file’s existence before trying to open it. [Retraction: I don’t recommend check-then-act style so much any more. Better to ask forgiveness than get permission, thanks again, Python.]

I consider this guideline true even for multi-threaded use because errors of improper synchronization still land in the bucket of exceptions the client should have anticipated.