Exception Rules III: The Search for Cause

Something I wrote seven years ago. Did I learn anything in that time?

This is part of a series where I review common wisdom about Java error handling. The series is in no particular order, but the first installment explains my categorization method.

Do not use empty catch blocks
Truth: high
Importance: high

This is the most obvious of the category of exception handling rules that address how to avoid losing error information. The usual example for when you might justifiably ignore an exception goes like so:

static void closeQuietly(Closeable closeable) {
  // Smelly. Do not do this.
  try {
    closeable.close();
  } catch (IOException e) {
    // I've already done what I needed, so I don't care
  }
}

Yes, the comment makes this better than the completely empty catch block, but that is like saying that heroin is fine because you wear long sleeves.

Very rarely, suppressing an exception actually is the right thing to do, but never unless you absolutely know why it happened. Do you know when close() throws IOException? I thought not.
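
If you really must have a closeQuietly, at least leave evidence behind. A minimal sketch, assuming java.util.logging is acceptable; any logger will do:

static void closeQuietly(Closeable closeable) {
  try {
    closeable.close();
  } catch (IOException e) {
    // Still suppressed, but at least the evidence survives in the log
    Logger.getLogger("closeQuietly").log(Level.WARNING, "close failed", e);
  }
}

In Java 7 and later, try-with-resources usually removes the need for a helper like this entirely.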

Do not catch and return null
Truth: medium
Importance: medium

Catching and returning null is a minor variation on the exception-suppression theme. Consider the Stooges class, which contains this method:

public String poke(String stooge)
              throws StoogeNotFoundException {
  if (stooges.contains(stooge)) {
    return "Woopwoopwoopwoop";
  } else {
    throw new StoogeNotFoundException("Wise guy, eh");
  }
}

Suppose you want to write another method that checks a Stooge’s reaction to a poke, but Stooges gives you no isStooge method. Instead, it forces you to write this:

static String getStoogeReaction(Stooges stooges, String name) {
  try {
    return stooges.poke(name);
  } catch (StoogeNotFoundException e) {
    return null;
  }
}

If you have to use an API that uses exceptions for flow control, something like this might be your best option, but never write an API that makes your clients do it.
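
If you own the Stooges class, give clients a way to ask without being poked in the eye. A sketch assuming Java 8's Optional; tryPoke is a name I invented:

// Inside Stooges: report absence in the return type instead of by throwing
public Optional<String> tryPoke(String stooge) {
  return stooges.contains(stooge)
      ? Optional.of("Woopwoopwoopwoop")
      : Optional.empty();
}

Clients who care about the difference between absent and present can now see it in the type, and nobody is forced to catch and discard.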

Log only once
Truth: medium
Importance: low

You can also state this rule as “Log or re-throw, not both.” Redundant logging is certainly impolite to those maintaining your application, but hardly the worst you could do. You might log and re-throw for legitimate reasons:

  • Your framework swallows exceptions you throw at it
  • Your application logs in a specific location, different from your container
  • This is part of a larger application, and you worry that your clients might ignore the exception

Do not log unless you need to, but if in doubt, log it.
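
When none of those excuses apply, pick one or the other. A sketch with made-up names (Order, a persist method that throws SQLException, OrderSaveException) and java.util.logging standing in for whatever you use:

static void save(Order order) {
  try {
    persist(order);
  } catch (SQLException e) {
    // Pick one:
    // 1. Handle it here: log, carry on, and the exception stops with you.
    Logger.getLogger("orders").log(Level.WARNING, "could not persist order", e);
    // 2. Or wrap and re-throw so the caller decides; then do not also log.
    // throw new OrderSaveException("could not persist order", e);
  }
}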

Always keep the cause when chaining exceptions
Truth: high
Importance: high

Only the very naive drop the cause intentionally, but it is easy to do accidentally, and either way it is a very easy way to lose the information about what went wrong.
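
The accident usually amounts to forgetting the second constructor argument. A sketch; ConfigException and the surrounding names are made up:

try {
  props.load(new FileInputStream(configPath));
} catch (IOException e) {
  // Bad: the cause is gone and the stack trace now starts here
  // throw new ConfigException("could not load config");

  // Good: the original IOException rides along as the cause
  throw new ConfigException("could not load config", e);
}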

In 2016, I still think exception chaining is a very important feature, and I’ve been surprised by how many mainstream languages lack it.

Exception Rules II: The Wrath of Checked

Something I wrote seven years ago; I’m publishing now to see if I’ve learned anything.

This is part of a series where I review common wisdom about Java error handling. The series is in no particular order, but the first installment explains my categorization method.

Use checked exceptions when the client might recover
Truth: low
Importance: medium

The checked exception experiment tested a compelling ideal. Stated in Sun’s tutorial:

Any Exception that can be thrown by a method is part of the method’s public programming interface. Those who call a method must know about the exceptions that a method can throw so that they can decide what to do about them. These exceptions are as much a part of that method’s programming interface as its parameters and return value.

The more Java I write, the more convincing I find Bruce Eckel’s argument that the experiment proved its hypothesis false.

The tutorial writers tell us to use checked exceptions whenever our clients can do something useful to recover. They fail to mention the abstraction-destroying effects of checked exceptions.

To be fair, checked exceptions only destroy abstractions the way alcohol destroys families; if daddy stopped using so much we would be fine. But programmers are human, and humans are lazy. Especially programmers.

Laziness makes programmers suppress errors, but they hide exceptions for good reason. In the typical example, an abstraction over a database connection should not subject its clients to SQLException. If SQLException were unchecked, abstractions that neglected to handle it would still leak on error, but their programmers would not have to bake that leakiness into their signatures.

Yes, checking the error code and determining whether to retry after waiting or email an administrator or call Ghostbusters is ideal, but only a small fraction of programs actually need that depth of error tolerance. For the majority, wrapping in RuntimeException is often best, but Sun’s tutorial will make you feel guilty about that:

Do not throw a RuntimeException or create a subclass of RuntimeException simply because you don’t want to be bothered with specifying the exceptions your methods can throw.

Ignore it. This is the sort of thinking that encourages silly specifications like FileNotFound, which indicates that the file does not exist. Or is read-only. Or locked. Or a directory. Or for some other reason inaccessible.
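
For the record, wrapping looks like this; a sketch assuming plain JDBC, a connection field you already hold, and names of my own invention:

public List<String> findNames() {
  try (Statement stmt = connection.createStatement();
       ResultSet rs = stmt.executeQuery("select name from people")) {
    List<String> names = new ArrayList<>();
    while (rs.next()) {
      names.add(rs.getString("name"));
    }
    return names;
  } catch (SQLException e) {
    // Clients see a clean signature; the cause stays attached for debugging
    throw new RuntimeException("query failed", e);
  }
}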

All languages I know other than Java work perfectly well without checked exceptions, which suggests that you can legitimately throw only unchecked exceptions and stop wasting brain cycles on whether each exception should be checked. If, however, you still want to use checked exceptions, follow this simple guideline:

Use checked exceptions only when client code could not have anticipated the error.

FileInputStream, for example, makes itself more irritating by ignoring this advice. Its constructors should not throw FileNotFound because client code should have checked for the file’s existence before trying to open it. [Retraction: I don’t recommend check-then-act style so much any more. Better to ask forgiveness than get permission, thanks again, Python.]

I consider this guideline true even for multi-threaded use because errors of improper synchronization still land in the bucket of exceptions the client should have anticipated.

Exception Rules

I drafted this long ago, then quit my job where I was writing Java and never looked back… till now, since I’m writing for Android. I thought it would be fun to see if I’ve learned anything in the seven years since I wrote this.

The not-very-secret secret to simplifying code is really very simple: just remove error handling. One hobbled dialect of Java burdened with bulky XML syntax built its success on removing the constraints of compile-time type checking and exceptions [I meant Spring].

Fortunately, those who handle errors form an elite group of code writers. They stand between us mere mortals and the chasm of infinite code failure. I know this because Bjarne Stroustrup appeared to me in a dream and directed me to where I found the Silicon tablets that contained this group’s Java wisdom.

That is, at least, what I wish happened. Actually, no one really knows the best way to handle exceptions. I suspect this somehow relates to them being exceptional; guidelines scattered around the internet are usually incomplete and often contradictory. To make things worse, my own ideas about error-handling best practices vary with the situation.

Nevertheless, I add my noise about how you should use Java’s exceptions to the rest. In this series, I summarize and categorize many of the Java error handling best practices I have heard.

I categorize each rule based on arbitrary axes of “Truth” and “Importance,” which roughly match how religiously you should follow the guideline and how severe the consequences are if you do not.

Truth indicates how often you should follow the rule.

  • Low truth: Ignore the rule
  • Medium truth: Follow the rule sometimes
  • High truth: Always follow the rule

Importance indicates what happens when you get a rule wrong: the damage your code suffers by either following a low-truth rule or breaking a high-truth rule.

  • Low importance: No serious risks
  • Medium importance: Sometimes very dangerous
  • High importance: Always risks horrible consequences

I begin with a simple one:

Do not specify “throws Exception”
Truth: high
Importance: low

Throwing “Exception” pesters client coders without providing any useful information. It ranks high on truth because only sloppy laziness and stupidity cause people to violate it. Still, I rank it as low importance because annoyance is the worst consequence of violation, unless it is coupled with breaking another rule, like using an empty catch block to suppress the Exception.
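
The pestering is easy to demonstrate. A sketch; reloadConfig and its caller are made-up names:

// Declares nothing useful, so callers learn nothing useful
static void reloadConfig() throws Exception {
  throw new Exception("something, somewhere, went wrong");
}

static void onStartup() {
  try {
    reloadConfig();
  } catch (Exception e) {
    // The compiler forces a catch exactly as vague as the throws clause
    throw new RuntimeException(e);
  }
}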

Do not use exceptions for flow control
Truth: high
Importance: medium

Although this is the rare rule where everyone agrees, some people still break it.

Author’s note, seven years later: the linked api throws an exception to indicate login failure. I still consider that a poor design, but at the time I didn’t know about Python’s StopIteration, which actually makes sense.

Despite the universal agreement on this principle, most writers fail to give any better reason than blustering about the expense of generating stack traces. While not really premature optimization, that argument smells like it because the two are barely related. Improving performance by avoiding exception-based flow control is like improving your sex life by brushing your teeth.

The real danger of exception-based flow control lies in exception suppression. For example, the poke method lets you jab a stooge in the eye.

public String poke(String stooge)
              throws StoogeNotFoundException {
  if (stooges.contains(stooge)) {
    return "Woopwoopwoopwoop";
  } else {
    throw new StoogeNotFoundException("Wise guy, eh");
  }
}

You can write a hideous, dangerous, unforgivable isStooge method like so.

public boolean isStooge(String name) {
  // Evil. Never do this.
  try {
    poke(name);
    return true;
  } catch (Exception e) {
    return false;
  }
}

This isStooge hides and forgets any exception poke throws, not just StoogeNotFound. On average, however, exception-based flow control is not this dangerous, though still unbearably ugly.

If the snippet specified “catch (StoogeNotFoundException),” the try-catch would just be a structured replacement for goto. When done correctly, using exceptions for flow control is merely poor style used by those who long for the good old days of goto. Stay away from it for the same reason you stay away from top hats and morning coats.
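
For comparison, the narrowed version described above; it is still flow control by exception, but no longer a trap for unrelated failures:

public boolean isStooge(String name) {
  // Still poor style, but only StoogeNotFound is swallowed
  try {
    poke(name);
    return true;
  } catch (StoogeNotFoundException e) {
    return false;
  }
}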

Programming Android: first impressions

I suspended work on Comefrom0x10 for a little while to start my first attempt at a serious Android app. It is tentatively called “Text Collector” and essentially just makes a pdf of your text messages.

So, how is Android as a platform?

Well, first, it’s Java. This means that half my code is type declarations, the other half is keywords; we all saw that coming, move along…

The Android core api is unpleasant to use, but it could have been worse. Its main problem is severe under-documentation, apparently thanks to a bad case of “source code is the documentation” syndrome.

Though technically Java, for better or worse, it feels like an api designed by people who would rather write C. Integer constants and bitmasks are everywhere; there is even the occasional “out” parameter. On the bright side, there is a refreshing lack of abstract factory singletons. There is no xml standing in for “dependency injection” code.

There is plenty of xml for defining layouts, though. Layout xml is attribute-heavy, which means less verbose than it could have been, but also that you can’t put comments in many places where they ought to go:


<Frobnicator
  android:foo="bar" <!-- could use a comment here, but that's illegal -->
...

Thankfully, layouts and resource definitions appear to be the only places you have to use xml. In principle, you could define layouts entirely in Java, but frying pan, meet fire.

As far as I can tell, the entire Java standard library is available, but I’ve used only a few small parts of it. There are bizarro-world Android replacements of some parts. Methods that expect uris take android.net.Uri instead of java.net.URI. Bundle of Parcelable looks like it probably could just have been Map<String,Serializable>. I haven’t spent enough time with Android code to judge whether there are good reasons for this seeming duplication.

Like many apis, the core library is a mix of surprisingly easy juxtaposed with surprisingly difficult. There are some nice included layouts and widgets, like a date picker, but try hooking up a date picker to a TextView with inputType=date, and you are in for nasty surprises. Writing and displaying pdf is almost trivial, but if you want zoom and two-dimensional scrolling while you display it, expect pain.

Use the string literals

I think that some of my college classes took points off your grade for using literal strings instead of #define. Likewise, linters and coding standards typically want to prevent programmers from using literals.

Many indoctrinated programmers, therefore, insist that the correct way of writing a select statement is something like this:

"select " + COLUMN_NAME + ", " + COLUMN_EMAIL
+ " from " + TABLE_PEOPLE
+ " where " + COLUMN_ID + " = ? "

This jihad gains its supposed righteousness from the idea that literals make your code less maintainable. But, aside from thorough tests, what does make code maintainable? In order of importance:

  1. Readability
  2. Fewer dependencies
  3. All else equal, shorter is better

So, assume for a moment that the most maintainable code is the most readable code, and compare:

"select name, email from people where id = ?"

I suggest that the second statement is far more readable, and therefore more maintainable. It is also shorter, even without including the constant declarations, and it removes a dependency on constants defined somewhere else.

Ha! You’re ignoring the dependency on the database structure. If I want to change a column or table name, the first code is dryer.

Perhaps, but structural database changes are complex and should not be undertaken with the flippant attitude that you can do them simply by changing the value of a constant somewhere. Consider, just for starters, that if you change column name “email” to, say, “primary_email”, you will probably want to change the name of the constant to match and you will still have to search for any places the string literal might have been used, just in case.

Ok, but at least I will spend less time tracking down bugs related to spelling errors.

Sure, but seriously, how much time do you spend on spelling errors? They are typically among the easiest bugs to fix. The cumulative effort of fixing typos is less than the effort involved in managing your constants.

Life without literals is burdensome. Where should the constant declaration go? Does a definition for the same value already exist in scope? If it does, and has the same name as what you want to use, does it refer to the same thing? You add a “department” table that also has a “name” column; should you (a) use the existing constant, (b) make a new constant with a non-conflicting name, (c) rename the existing constant or, (d) both b and c?

But we have a coding standard that says what to do, so I can just follow that.

That’s just one more thing you need to remember. Do you remember precisely what your coding standard says to do? Does it even cover every case?

Does your coding standard say that uppercase identifiers must never be built from external data? Even if it does, unless you can see the constant definition in the same screenful of text as where it’s used, your code is not obviously safe from injection.
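
The trap looks innocent. A sketch; request is assumed to be a servlet-style HttpServletRequest and the rest of the names are mine:

// Looks like a constant, reads like a constant, is not a constant
String SORT_COLUMN = request.getParameter("sort");
String sql = "select name, email from people order by " + SORT_COLUMN;
// ?sort=email behaves; ?sort=email;drop table people-- does not

Unless the definition sits on the same screen, that middle line is indistinguishable from the constant-built query at the top of this article.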

Instead of taking on this travail, just funnel your energy into picking good names for your tables, urls, or whatever in the first place. You’ll need fewer structural changes later.

Good points, but constants can add meaning, instead of being “magic numbers” sprinkled about.

True, adding meaning (see “readability”) is the one good reason for a constant instead of a literal, but usually this applies only to numbers, not strings.

What about localization?

Well… there are some good arguments for externalizing strings that appear to users, but that’s another article.

Beware string concatenation

The Owasp top 10 webapp vulnerability list is due to update soon; I wager its top three won’t change from the 2013 list:

  1. Injection [mainly sql injection]
  2. Broken Authentication and Session Management
  3. Cross-Site Scripting (XSS)

What do two of the top three vulnerabilities have in common? Cross-site scripting is a form of injection attack that is so popular it deserves its own category; both cross-site scripting and “injection” often result from unsafe string concatenation:

"select name from students where " + ...
html = "<div>" + ...

I bet one simple rule would stop most webapp compromises:

Avoid string concatenation.

Production code should very rarely concatenate strings; when it must be done, it should be written to make it obvious that the concatenation is safe.
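
For the sql case, the standard way to avoid the concatenation entirely is a placeholder. A sketch assuming plain JDBC; lookupName and studentId are my names:

static String lookupName(Connection connection, String studentId)
              throws SQLException {
  // The query is a fixed literal; the user-supplied value never touches the sql
  try (PreparedStatement stmt = connection.prepareStatement(
      "select name from students where id = ?")) {
    stmt.setString(1, studentId);
    try (ResultSet rs = stmt.executeQuery()) {
      return rs.next() ? rs.getString("name") : null;
    }
  }
}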

Assuming most programmers do not intend to write vulnerable code, it is safe to say that they consider the vulnerable code they are writing to be obviously safe. That is just a special case of “programmers don’t think about the bugs they are writing.” By extension, they don’t think about the vulnerabilities they are writing.

So, I dislike rigid coding standards, but if I were to implement one, it would be to say that any place string concatenation is used must be accompanied by comments to answer:

  • Where is the data coming from?
  • What could happen if a malicious actor provided the data?
  • What makes this safe?

If you have so much string concatenation in your webapp that this seems onerous, I guarantee your site is full of security holes.

Stop Twiddling My Bits

Googling for how to compute checksums with Java might return insanity. Quick. What does this function do?

// Java
static String twiddleDee(byte[] data) {
  StringBuffer buf = new StringBuffer();
  for (int i = 0; i < data.length; i++) {
    int halfbyte = (data[i] >>> 4) & 0x0F;
    int two_halfs = 0;
    do {
      if ((0 <= halfbyte) && (halfbyte <= 9))
        buf.append((char) ('0' + halfbyte));
      else
        buf.append((char) ('a' + (halfbyte - 10)));
      halfbyte = data[i] & 0x0F;
    } while (two_halfs++ < 1);
  }
  return buf.toString();
}

Here is how to compute a SHA-1 and output a hex string; consider it my first public service code donation:

// Java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Formatter;

public static String sha1(byte[] itsAllBitsAfterAll) {
  MessageDigest digester = newSha1Digester();
  digester.update(itsAllBitsAfterAll);
  return bytesAsHex(digester.digest());
}

// This might make a good future post about senseless
// factories
private static MessageDigest newSha1Digester() {
  try {
    return MessageDigest.getInstance("SHA-1");
  } catch (NoSuchAlgorithmException e) {
    throw new RuntimeException(
        "How many times must exceptions be thrown?", e);
  }
}

static String bytesAsHex(byte[] bytes) {
  Formatter result = new Formatter();
  for (byte nextByte : bytes) {
    result.format("%02x", nextByte);
  }
  return result.toString();
}

Like the countless optical illusions where the lines turn out to really be straight or the colors are actually the same, the first snippet does exactly what the “bytesAsHex” method does. The first is twenty times faster, but the second is twenty times clearer.

Always write the second. If you really need to squeeze those few extra milliseconds out of your code, use a library. If you think you can improve the library, use something open source, write it better, benchmark, and contribute.

Update, seven years later: In the intervening time, I’ve become somewhat more comfortable with bitwise operations and much more wary of dependencies. Today, I would not include a library only to optimize a little bit of string formatting.