Murdering collections

I sometimes get emails about Text Collector asking something like, “How long does it take? I’ve been waiting more than a day.” Or, “My collection keeps saying ‘Interrupted,’ what do I do?” These look like symptoms of the phone pausing or killing my app, and my gut says they’ve been coming more and more in the five years since I released it. I’m not alone in my suspicion:

We see a reverse evolution of Android. Every new version is less capable than its predecessors.

– Petr Nalevka at Droidcon 2022

Have Petr and I fallen into the trap of looking at the past through rosy glasses, or is this true? I am that guy who thinks newfangled JavaScript mostly just makes simple things complicated, but this time, I have data.

Fraction of collections interrupted increasing from 0.04 in sdk 23 to 0.14 in sdk 33, with an anomalous spike to 0.14 in sdk 28

In this chart, the x-axis represents increasing Android versions, from sdk version code 23 (Android 6) through sdk version code 33 (Android 13). Lower is better, so it’s trending worse.

When you start a collection, Text Collector copies your messages and arranges them into documents, all of which takes time. I have an idea how much time because you can choose to report anonymous telemetry to me. Successful collections usually finish in less than five minutes; they typically take one second per 500 plain text messages or 20 picture messages, though the timing varies widely.

Android isn’t like a movie villain who explains his scheme before killing you: if you’re an app Android wants to kill, you get no warning. Text Collector can, however, see remnants of unfinished collections when it starts and infer that it was killed mid-collection. I count each unfinished collection as an interruption, so the chart above shows the proportion[1] of interrupted collections increasing steadily from four percent in Android 6 to 14% by Android 13, with a notable spike in Android 9 that we’ll revisit.

This is a huge problem for the hapless folks using Text Collector on increasingly modern Android. Today, more than one in ten collections fail because something aborts them.

The something aborting collections is either a human or Android acting of its own accord. I can’t tell which is which directly, but it seems implausible that people using Android 8 are twice as patient as those using Android 12. The most plausible way to pin these on the user would be that if Android has gotten slower, collections could be taking longer, provoking people to stop them more often. My collection timing data, however, shows no clear trend toward slower collections. So the most plausible explanation for the increasing interruptions is that Android is aborting them without user consent.

In other words, Petr Nalevka was right: newer versions of Android mostly appear less capable of archiving your messages than their predecessors.

By brand

In the second half of his talk, Petr moves from blaming this trend on Android proper to the various manufacturers, and if you visit https://dontkillmyapp.com/, you’ll see Samsung ranked as worst, Google and Nokia best. Does my data agree? In a couple words, not really:

Fraction of collections interrupted bar chart by manufacturer. Largest to smallest bar: samsung, Google, LGE, OnePlus, motorola, ZTE, TCL, FIH

Looking at the same metric, fraction interrupted, by manufacturer, Samsung indeed ranks worst, but Google comes in second to worst. This view, however, is a little too simplistic. Breaking it down further, we can see that this ranking depends on Android version:

Fraction of collections interrupted by brand and by Android version. Worst offenders by sdk are 25: ZTE, 26: LGE, 27: LGE, 28: Samsung, 29: Google, 30: Samsung

I’ve restricted this to only those combinations of sdk version and brand that have at least 500 collections, so it covers a smaller span of sdk versions, but reveals more nuance.

First, we can blame the apparently disastrous performance of sdk 28 (Android 9) on Samsung: Samsung did something in that version that aborted collections much more often than in any other scenario, and because Samsung phones are so common, it hurts the average for all Android 9 phones.

Second, although Don’t Kill My App endorses the idea that we can blame this on the Chinese, there’s no evidence in this data that Chinese brands are worse than any other. Only two of the brands I show here are Chinese, ZTE and TCL, and if anything, they look better than most. If I reduce the threshold to only 200 collections, there is a scenario where Huawei does worse than Google, but that sample is so small I hesitate to infer anything from it.[2]

Which brings us to the final point: if anything, Google is among the worst offenders, not the gold standard.

Don’t Kill My App disagrees, ranking Samsung as most likely to kill your app and Google as least likely. The difference may partly be because their interest is different from mine: they focus on low-power background tasks like alarms and health monitors whereas Text Collector is doing a job that’s inevitably power-intensive. Another problem, however, is that Don’t Kill My App ranks manufacturers subjectively:

The info on the site is gathered from multiple sources. The big part is from the experience of the Urbandroid Team, but increasingly info is added from FAQs of other developers, and from personal experience shared on the GitHub repo.

– Ibid.

For a more objective measure, Don’t Kill My App also provides a benchmark app, but I’ve run it several times on a couple Samsung phones and it scored a perfect 100% on both. I find that hard to reconcile with ranking Samsung as the worst offender. I suspect Samsung ranks worst because Samsung makes most Android phones; if a problem is inherent to Android, therefore, it’s most likely to be seen on a Samsung phone.

Does it matter?

Choices are good. We might all benefit from variety and competition if prevailing information about the strengths of each brand were based more on fact than rumor. Instead of “reverse evolution,” survival of the fittest.

The freedom to change the program is Essential Freedom 1, but Open Source often doesn’t relish this freedom: the mainstream view in the Android community says that diversity – under the pejorative “fragmentation” – is a bad thing. Sycophantic headlines like “Google is finally helping developers fight back against smartphone manufacturers” play into Google’s narrative. Google wants us to see it as a white knight: a benevolent steward of a healthy ecosystem. Meanwhile, it makes Android measurably worse, year after year.

The truth is that the manufacturers, possibly excepting Samsung, are just as much Google’s victims as app developers. We’re all clinging to a raft called Android while Google shoots holes in it.

What can be done?

First, appealing to Google won’t help: that comes from the “fool me 13 times, shame on me” school of thought. Why should Google care? From their perspective, a phone doing something other than showing ads is wasting cycles. Crippling Android is, in some ways, useful to Google: it gives their own privileged applications an advantage. But in the spirit of never ascribing to malice what can be explained by incompetence, even if they did care, they demonstrate all the symptoms of having no coherent idea how Android ought to behave.

Which brings us to the second tactic that won’t help: the “Compatibility Test Suite.” These types of errors are devilish to reproduce in controlled environments: the error statistics I’ve presented above clearly show a problem in the wild, but one I have never seen on a phone that I’m using. Likewise, the Don’t Kill My App benchmark doesn’t repeatably support the rankings on its site, and the bug tracker for Sms Backup+, which does something more like what Text Collector does, is full of conjecture about the causes of a related problem: back and forth “works for me,” “doesn’t work for me…” This is typical of building on a system that is over its programmers’ complexity horizon: having never developed a clear plan, it’s impossible for the Android developers to implement a comprehensive test suite.

Both of the strategies above also reinforce Google’s monopolistic rhetoric. Google isn’t competing with Apple – that duopoly is too cozy to disrupt – Google is competing with the Android manufacturers, a space where its Play Store monopoly gives it enormous leverage. Indeed, but for the fear of making it too easy for antitrust regulators to gather evidence of its tactics, Google could probably have squeezed out the oems already. In the long term, the most obvious step is for regulators to break up Google so that Android has to compete on merit. That requires political will, and political will requires a public that understands Google is the root of Android’s problems.

For Text Collector’s near future, I have to change something to complete more collections for more people. Probably I’ll have to shove a notification into the top of the screen. Right now, Text Collector uses a Wake Lock in a Thread without a Service, and some will say that’s the problem: that I’m doing it the “wrong way,” so the fault is my own, not Google’s. That’s a facile objection based on a selective reading of the documentation, and I may dive deep into why another time. For now, though, it’s clear that if I’m guilty of something, it is that I’ve been the proverbial frog in the water, failing to deal with this growing problem.

Notes

  1. “Successful” but empty collections excluded as I assume they are mostly app crawlers.
  2. I’m not sure how to put confidence intervals on these numbers: the big problem is that these aren’t independent observations. I don’t collect anything that links a report to a particular person, so it’s likely that many of these reports are clusters of a single person trying multiple times and being interrupted multiple times. I do record retries, though, and that suggests that the fraction of interruptions retried doesn’t significantly vary by brand.

Archive links

Don’t Kill My App Droidcon
Don’t Kill My App
Don’t Kill My App’s mission statement
Anti-Chinese bug report
Essential freedoms
Google-loving article
Android privileged applications
Sms Backup+ bug report

Horizontal alignment

Or, how I learned to stop worrying and love the second dimension

We programmers are mostly trained to write code from top to bottom, hardly considering the horizontal dimension. Sure, we indent to delimit blocks, but that’s it. We habitually waste the opportunities this forgotten dimension offers to make our code easier to read.

Consider a standard solution to FizzBuzz that looks like this:

for (i = 1; i <= 100; i++) {
  if (i % (3 * 5) == 0) {
    print('FizzBuzz')
  } else if (i % 3 == 0) {
    print('Fizz')
  } else if (i % 5 == 0) {
    print('Buzz')
  } else {
    print(i)
  }
}

There’s nothing “wrong” with this code, in the sense that it gives the right answer and passes the rules of many style guides, but we can do better:

for (i = 1; i <= 100; i++) {
  if      (i % (3 * 5) == 0) { print('FizzBuzz') }
  else if (i % 3 == 0      ) { print('Fizz')     }
  else if (i % 5 == 0      ) { print('Buzz')     }
  else                       { print(i)          }
}

Nothing changed except that I rearranged the whitespace, yet the second version is much easier to read. Horizontal alignment draws our eyes to the patterns; it makes the if-else block look united, emphasizing which parts of the code are the same and which parts are different.

When you start looking for these kinds of opportunities, they appear everywhere. Don’t try this in php, whose ternary operator is infamously left-associative, but in any other language with a ternary operator, we can make our if-else stack look even more like a table:

for (i = 1; i <= 100; i++) print
  ( i % (3 * 5) == 0 ? 'FizzBuzz'
  : i % 3 == 0       ? 'Fizz'
  : i % 5 == 0       ? 'Buzz'
  : /*else          */ i
  )

Aside from the ugly C-style for-loop, this is even more clear than the English specification: Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz.” For numbers which are multiples of both three and five print “FizzBuzz.”

There are two reasons we rarely write programs this way.

First, it makes diffs harder to read: if you align your code like this, when you change part of the block, you need to change surrounding lines just to maintain alignment. The diff, therefore, includes distracting changes that are nothing but indentation. At first, the diff-readability argument looks strong because when we consider the readability of diffs versus the readability of source code, it’s not obvious which should win. Look a little closer though and the argument turns specious. We have the technology to solve that problem, you see. Any decent diff or blame tool can ignore whitespace changes, so if whitespace changes are a problem for you, you’re using the wrong tool.

The second argument, by contrast, looks easy to dismiss at first, but is more troublesome in practice: it takes time to write code that uses horizontal alignment effectively. Every competent programmer knows that readability trumps writability, so obviously you should take the time to align things neatly if it makes for easier reading, right? In the heat of coding, it’s not so easy. When you’re focused on solving a problem, “in the zone,” little distractions like realigning your ascii tables can be real impediments.

Luckily, there’s a strategy to work around this problem that we all should employ anyway: read our own code and edit for readability. It’s ok to leave things a little messy in the first draft; just be sure to revisit and clean up.

Still, there’s no reason our tools couldn’t shoulder some of the burden, particularly when we edit existing code that uses tabular structures. Text editors could recognize these types of structures and adjust column widths as we type…

Calculations for pinch to zoom

In which I discover how to correct for things moving around when you zoom, using only elementary algebra.

Text Collector uses a pinch-pan-zoom view to let people preview how their messages will look in pdf format. Inexplicably, Android provides no pinch-pan-zoom view built-in, so a quick look online reveals implementations littered everywhere to fill that gap. Those that aren’t broken, however, can only handle ImageView content.

If you need pinch-to-zoom for something other than pictures, you need to reinvent it.

I struggled with this implementation for an embarrassing amount of time, and judging by the number of wonky zooms I’ve seen in Android games, I’m not alone in finding it tricky.

Android does give us ScaleGestureDetector to detect pinches; it reports a “scale factor” that is a ratio representing how far our fingers move apart or together. The obvious thing to do is to scale your content, using View.setScale(), something like setScale(getScale() * scaleFactor). That’s the right idea, but insufficient.

Scaling a view transforms it around its “pivot,” an arbitrary point somewhere in the view. What we really want is to scale it around the “focus” of the zoom, that is, the bit of content between our fingers. Focus and pivot don’t line up, so, as we zoom, the content we want to see rushes away offscreen.

Model

We have two different coordinate systems because we need a fixed-size touchable area to detect fingers and a changing-size area to display content. I call these the “window” and the “content,” respectively. As reported by Android, focus is in the window grid and pivot is in the content grid.

Misaligned pivot and focus cause scaling to shift the view content away from wherever it’s supposed to be after the zoom. To correct, we need to translate back by an amount t.

  • t: translation needed to correct for scaling, window units

Android gives us these measurements:

  • f: focal point of the zoom, window units
  • m: margin outside the content, window units
  • s: starting scale, window units per content unit
  • z: scale factor, that is, change in scale, unitless

Two measurements change during scaling. I will denote them with a tick mark, read “prime”:

  • m': margin after scaling, window units
  • s': scale after scaling, window units per content unit

Scale factor is the ratio of the scale after to the scale before, so:

s' = zs

Actually, the scale factor and focus used here are approximations that work well, but could be refined in a more complete model.

We’ll use a couple measurements in the content grid as well:

  • P: pivot around which scaling happens, content units
  • D: content that aligns with the zoom focal point when zoom begins, content units

When scaling, measurements in the content grid do not change. Upon reflection, this should be obvious because the content can draw itself without knowing it’s been zoomed. So, even though it looks like P grows in this diagram, remember this diagram shows the window perspective. From the content perspective, P does not change.

Diagram of measurements

Android gives us P, but we need to calculate D for ourselves. Since f and m are in different coordinates than D, we cannot say that D = f - m.

This makes me wish for a language like Frink that attaches units to numbers. In Frink, you actually can add measurements of different units together, but only if there’s a defined conversion. So something like D = f - m could do something sane.

In Java and all mainstream languages, numbers are unitless, so it’s easy to add numbers nonsensically.

For both grids, the origin is at the left side. To convert between coordinates on the window grid (subscript r) and the content grid (subscript c):

x_r = sx_c + m
s' = zs
x'_r = s'x_c + m'

So:

f = sD + m
⇒ D = (f - m)/s

Given these things, we need to solve for t, the translation that will rescue the content we want to see from wherever it went during scaling.

It is important to note that even though we call a View function, setTranslation(), on the content to translate it, the number we pass to that function is in window coordinates, not content coordinates.

Derivation

So far, the things we know, given by the Android api, are f, m, s, z and P, from which we can calculate D and s'.

Next, we need m', the margin after scaling.

In software, you don’t actually have to calculate m' yourself. You can setScale() then getLocationOnScreen() to ask the view where it would place its corner, but that’s cheating.

To find m' in terms of things that we know, another variable helps to translate pivot from content to window:

  • w: position of the pivot, P, in the window grid, that is, w=Ps+m.

w = Ps + m
w' = Ps' + m'

Relationship of w to m and P

The t correction will move the pivot in window space, but only after scaling. By definition, the pivot does not move due to scaling alone, so w = w'. This implies:

Ps + m = Ps' + m'
⇒ m' = Ps - Ps' + m
⇒ m' = Ps(1 - z) + m

Next, we use m' to calculate the thing we really want, the translation t, to compensate for zoom. Define a variable translating the content position under the focus, D, to window coordinates:

  • h: position of the content at D after scaling, in the window grid, so h = Ds'+m'

Relationship of m, h, f and D to t

Because the translation t is in window coordinates, t = f - h. Recalling that s' = zs, we know:

t = f - h
t = f - Dzs - m'

Plugging in D:

t = f - ((f - m)/s)zs - m'
t = f - z(f - m) - m'

Since we previously derived m', we are now done:

t = f - z(f-m) - (Ps(1 - z) + m)
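
As a sanity check, try some made-up numbers: say s = 1, z = 2, f = 100, m = 10 and P = 50. Then D = (100 - 10)/1 = 90, m' = 50(1 - 2) + 10 = -40 and h = (90)(2) + (-40) = 140, so t = f - h = -40. The formula agrees: t = 100 - 2(100 - 10) - (-40) = -40.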

To make this Android-executable code, we just need to translate to Java and do the same for each axis. Source is on sourcehut.

But wait, there’s more

An eager kid in the front row is waving his hand to tell me about affine transformations. We can simplify further, you see:

t = (f - Ps)(1 - z) + m(z - 1)

I didn’t go this far because it’s no cleaner in Java, but it does look more symmetrical in math: tantalizingly like a dot product. Unfortunately, I forgot most of my linear algebra long ago, so I have no idea why. Best go watch 3Blue1Brown.

Tests versus specs

It’s been popular for some years now to say that tests are “executable specifications.” I think this is the wrong way to think about programming, and it leads to buggier programs than the traditional view that tests are tests.

Saying that tests are specs implies that you don’t need separate specifications. If this isn’t what you mean when you call a “test” a “spec,” then my arguments mainly won’t apply, but remember: regardless of what you mean, people will hear, “replace your specifications.”

Programs are three-legged stools that stand on the trio of specs, tests and program code. Stable programs require that each leg receives equal care. When done well, a useful tension between specs, tests and program code improves quality.

The form and labels can vary: specifications can be formal requirements or notes in a bug tracker; tests can be automated or manual. Whatever the form, every program has these three parts.

“Tests are specs” refers to a specific form: automated tests written in a style called “behavior driven.” Behavior-driven means that tests look like this:

describe Frobber... it 'frobs'...

Instead of

TestFrobber... test_frobs()...

From this style, we can infer the reasoning for replacing specs with tests:

Axiom: specs describe what a program should do

Axiom: tests verify that the program does what the specs say

Assume that you can write tests in a style where they describe what programs should do. Or, equivalently, assume you can write specifications as executable code that can verify compliance.

When so written, specs and tests serve the same purpose. Thus, by the principle that you should eliminate redundancy, they should be the same thing.

The first problem with this argument is that there’s no real basis for believing that you can write specifications in executable language as clearly as if you wrote them in natural language. Things like using describe... it... and expect(x).toBe(y) (instead of assert x == y) superficially make program code look English-like (if you squint) but it’s not at all obvious that they make things more clear. If this style really is more clear, why are tests special? Why not write all code in quasi-English?

By a dubious appeal to Whorfianism – the idea that the words we use affect what we do – behavior-driven style supposedly encourages people to write tests more declaratively because you “describe” what the program should do. It is true that programs are nearly always more understandable when written declaratively rather than imperatively, but again, there is no reason this should be specific to tests; all program code should be written as clearly and declaratively as possible.

Even if we assume that somehow we can write executable language as clearly as natural language, it doesn’t follow that we need only one or the other. We justify combining test and specs because “redundancy is bad.” By that logic, however, we don’t need test-specs to be separate from program code either. If the specification is executable, it doesn’t need to be the test because it could just as easily be the program. And then there was one.

In a sense, you truly can combine tests, specs and program. A program with unwritten program code is nothing but an idea. A program with unwritten tests and specs is still a program. It’s just not a very good program, and we know why.

[When programming] one must perform perfectly. The computer resembles the magic of legend in this respect, too. If one character, one pause, of the incantation is not strictly in proper form, the magic doesn’t work.

– The Mythical Man Month

Automated tests are programs too and just as capricious. A thorough test suite must often be as large as the program it tests and, therefore, will have as many errors.

Moreover, programs are notoriously hard to change without breaking, which brings us to the second major flaw in the logic behind combining tests with specs. It’s true that tests exist to verify conformance with specification, but that is not the only reason; tests also verify that changes don’t break things.

Remembering that tests are programs too, and just as hard to change (safely) as any program, it should be clear that the only way to avoid breaking tests is to avoid changing them. Furthermore, the only way to test the tests themselves is to write them in lock-step with the program under test. This is called the “red-green” cycle and it goes like this:

  1. Write a failing test
  2. Write the code necessary to make the test pass
  3. Repeat

In step one, if you check that the test fails in the way you expect, you know that it tests the code you wrote in step two. Then back away. So long as you don’t change the test, you can be reasonably sure it’s testing for the intended problem.

The benefit of a specification, by contrast, is that you write it before you write the program. It clarifies the problem and the goal and forestalls misguided coding.

Specs also will be incomplete and sometimes plain wrong, because writing a flawless description of a program is as hard as writing a flawless program. So, to remain relevant, specs must change as programs change. If your specs are also your tests, not only will they be incomplete and wrong at the beginning, errors introduced by the changes will make them wrong in different ways at the end.

I still hope for executable specifications, though I think “property-based” testing is a more promising avenue than “behavior-driven.” However they come, executable specifications will arrive – at latest, when computers learn to understand English better than humans – and even then, specifications cannot replace tests.
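
To give a flavor of the property-based style, here is a minimal sketch using the JavaScript fast-check library (the example and names are mine, not drawn from any real spec). Instead of asserting on hand-picked cases, you state a property that must hold for all generated inputs and let the library hunt for counterexamples:

const fc = require('fast-check')

// Property: sorting is idempotent; sorting a second time changes nothing
fc.assert(
  fc.property(fc.array(fc.integer()), xs => {
    const once = [...xs].sort((a, b) => a - b)
    const twice = [...once].sort((a, b) => a - b)
    return JSON.stringify(once) === JSON.stringify(twice)
  })
)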

Do you need a Model?

It’s common for graphical programs to work something like this:

  1. Pull data from persistent storage: database, web server and so on
  2. Tuck that data away in a “Model” object
  3. Display the data
  4. Receive updates from either the user interface or the persistent storage
  5. Hope that the screen or the storage also updates, respectively
  6. Goto 4

The role of “hope” in this scenario is played by something often called “Data Bindings.” The problem with this design is that it’s an obviously bad idea: by keeping a Model like this, you’ve created a cache and cache invalidation is known to be a hard problem.

I’ll assume you’re writing a webapp, because, who isn’t? I claim you don’t need any Model client-side at all. If you have a long memory, you might say that I’m just being ornery and yearning for a time when we didn’t need a megabyte of JavaScript to display a form with one field:

Google's homepage

You’ll say you need these Models and Bindings because modern.

You’d be partly right. This does make me ornery, but let’s step back and look at what might be the number one killer of program maintainability: mutable state. A graphical interface is nothing if not a big pile of mutable state and the job of its programmers is to wrangle the ways it mutates.

In a sense, every useful program needs to grapple with this problem. Programs that lack persistent data or ways to display it are pretty useless, in the way that a tree might fall but nobody knows whether it makes noise and most people don’t care.

I advocate that you scrap the client-side Model, but I don’t claim this will make your webapp easy to write. No matter what, you’re dealing with a pile of mutable state and that’s bound to be tricky.

Why make it trickier than necessary? You really do need widgets on the screen: checkboxes that can be checked, or not, spans that could say one thing, or another. Do you need a shadow copy of those things as well?

Now, not all state duplication is called a Model and not everything called a Model is state duplication. Elm, for example, takes the view that Model is the application state but avoids state duplication by virtue of being purely functional.

Most programs aren’t written in purely functional languages, so the Model is state duplication, but users don’t see Models. They see widgets and to them, the state of the widgets is the state of the data. In a programmer’s ivory tower, you might argue that the Model is the source of truth, but your arguments matter not at all to actual people.

So don’t fight it. The path to enlightenment lies in realizing that once you’ve rendered the data, it doesn’t matter anymore. Throw it away.

Wait, that might work for your Web 2.0 site and its DHTML, but I have a Web APP. It’s modern. It has a span over here and a checkbox over there and the span says ‘frobbing’ or ‘unfrobbing’ depending on whether the checkbox is checked.

Ok, but I’m not seeing how Models and magic Bindings are simpler than this:

checkbox.onchange = () =>
  span.textContent = checkbox.checked ? 'frobbing' : 'unfrobbing'

Well, it’s not just this one span, this blink tag over here needs to appear if you’re not frobbing. With your plan, I need to add another line of code to handle that.

True, but you’d have to add something when backing it with Models and Bindings as well. It wouldn’t be less code, just different code and more indirection.

There are more ways frobbingness can change though. I don’t want to copy this logic in every place.

And well you should not, but you already have the tool to solve that problem: a function. Move the interface update logic to a function and call it anytime frobbingness changes.
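
Concretely, that function is nothing fancy. A sketch, assuming the span, blink and checkbox from the conversation above:

function renderFrobbing(isFrobbing) {
  span.textContent = isFrobbing ? 'frobbing' : 'unfrobbing'
  blink.style.display = isFrobbing ? 'none' : 'inline'  // blink appears only when not frobbing
}

checkbox.onchange = () => renderFrobbing(checkbox.checked)
// ...and call renderFrobbing from anywhere else frobbingness changes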

I see what you’re saying, but I don’t like it. The span and the blink are in logically different components. I’ll end up with a bunch of functions that update both, but don’t sensibly belong to either.

That’s a very good point.

Partly, the appeal of the client-side Model is that it seems we ought to be able to bind all the widgets to the same object, giving us a kind of fan-in-fan-out approach to updating them. In principle, you’d have just one Frobber and you’d change its isFrobbing property, then all the dependent widgets would get notified. People have been trying to implement this concept since the invention of the computer and it’s gone by many names. Recently, the buzzword is “reactive” programming.

This idea can work well in some systems. Spreadsheets and style sheets, for example, are very successful reactive programming models. It works well in limited domains with declarative or purely functional languages.

In non-functional practice, the same data structures aren’t convenient for all widgets. Imagine that Frobbers are parts of Whatzits, but in one place you display all your Frobbers and in another you display the content of a selected Whatzit. Your Models probably look something like this:

frobbers:
  - id: 1
    isFrobbing: true
  - id: 2
    isFrobbing: false

selectedWhatzit:
  frobbers:
    - id: 1
      isFrobbing: true

Now which copy of Frobber number 1 is the true Frobber? Ideally, you’ll make them point to the same object so it doesn’t matter. Your Binding magic may or may not be able to make sense of that arrangement and the temptation to duplicate data is strong. Models and Bindings just moved the concrete problem of what appears in widgets to the abstract problem of what goes where in the invisible Model.

With or without magic Models, as the application grows, you’ll need to put things in sensible places and call them in sensible ways. That is the whole job of user interface programming.

If you keep doing this for all the data and all the widgets in your app, you’ll notice you keep doing the same kind of tedious things over and over. This thing over here updates some data. Need to show it over there. Where’s the right place to put that code? Here, there, somewhere else? Decide, repeat.

Naturally, as programmers, we think we can automate away the repetition, but some complexity is fundamental.

Evolution

In the beginning, there was html


<form method=post action=signup>
  <label>Username       <input name=user></label>
  <label>Password       <input name=pw type=password></label>
  <label>Password again <input name=pwv type=password></label>
  <button>Submit</button>
</form>

Then came JavaScript

<form id=signup method=post action=signup>
  <label>Username       <input name=user></label><br>
  <label>Password       <input name=pw type=password></label><br>
  <label>Password again <input name=pwv type=password></label><br>
  <button>Submit</button>
</form>

<script>
<!--
document.forms.signup.onsubmit = function () {
  if (this.pw.value != this.pwv.value) {
    alert("Passwords don't match")
    return false
  }
}
// -->
</script>

And jQuery

<script src="https://code.jquery.com/jquery-1.1.4.js"></script>

<form id="signup" method="post" action="signup">
    <label>Username       <input name="user"></label><br/>
    <label>Password       <input name="pw" type="password"></label><br/>
    <label>Password again <input name="pwv" type="password"></label><br/>
    <button>Submit</button>
</form>

<script>
// <![CDATA[
$('#signup').submit(function () {
  if (this.pw.value != this.pwv.value) {
    alert("Passwords don't match")
    return false
  }
})
// ]]>
</script>

And json

<script src="https://code.jquery.com/jquery-1.5.js"></script>

<form id="signup">
    <label>Username       <input name="user"></label><br/>
    <label>Password       <input name="pw" type="password"></label><br/>
    <label>Password again <input name="pwv" type="password"></label><br/>
    <button>Submit</button>
</form>

<script>
/*jslint browser: true */
/*global $, alert, window */

$("#signup").submit(function (event) {
    "use strict";

    var form = $(event.target);
    var formData = {};

    form.find("input").each(function (ignore, el) {
        var input = $(el);
        var name = input.attr("name");
        formData[name] = input.val();
    });

    if (formData.pw !== formData.pwv) {
        alert("Passwords don't match");
    } else {
        $.ajax("signup", {
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify(formData)
        }).done(function () {
            window.location = "signup";
        }).fail(function () {
            alert("Error");
        });
    }

    return false;
});
</script>

And Ecma 6

<form id="signup">
    <label>Username       <input name="user"></label><br>
    <label>Password       <input name="pw" type="password"></label><br>
    <label>Password again <input name="pwv" type="password"></label><br>
    <button>Submit</button>
</form>

<script>
/*jshint esversion: 6 */
document.querySelector("#signup").addEventListener("submit", submitEvent => {
    "use strict";
    
    const form = submitEvent.target;
    const formData = {};

    submitEvent.preventDefault();

    for (const input of form.querySelectorAll("input")) {
        const name = input.getAttribute("name");
        formData[name] = input.value;
    }

    if (formData.pw !== formData.pwv) {
        alert("Passwords don't match");
    } else {
        const xhr = new XMLHttpRequest();
        xhr.addEventListener("load", () => {
            if (xhr.status === 200) {
                window.location = "signup";
            } else {
                alert("Error");
            }
        });
        xhr.addEventListener("error", () => {
            alert('Error');
        });
        xhr.open("POST", "signup");
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.send(JSON.stringify(formData));
    }
});
</script>

Progress?

Linting JavaScript considered harmful

I am in a minority of programmers so small that I might be the only member. I can’t find anyone on the Internet advocating my position: you should not lint your JavaScript.

I don’t mean you shouldn’t use this or that flavor of lint. I mean that you shouldn’t use any JavaScript linter. At all.

First, the arguments for linting…

Lint catches bugs?

How many bugs does it really catch? Only a few. Rules against unused variables can be useful when you’ve renamed something in one place and forgotten another place, for example. Trouble is, I’ve only noticed lint catching bugs in code that wasn’t complete or tested anyway, and therefore already broken by definition.

The major category of bugs caught by lint can be caught instead by a simple statement:

'use strict'

JavaScript strict mode really does find bugs, almost always results of accidentally overwriting globals. It’s tragic that we need lint to tell us about missing strict mode declarations when browsers could warn us.
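
For example, here is a contrived sketch of the class of bug I mean:

'use strict'

let counter = 0
function increment() {
  countre = counter + 1  // typo: strict mode throws ReferenceError here;
                         // sloppy mode silently creates a global 'countre'
}
increment()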

But is it worth bringing in lint for the few bugs it can find?

I think not; the better approach is simple: learn to use strict reflexively. Then spend the effort you were going to use typing semicolons on testing instead.

Lint enforces consistency?

So what? Consistency for its own sake is a pursuit of feeble minds.

In writing, particularly writing computer programs, consistency is a proxy for something much more important: readability. And readability is not something that computers yet understand well.

It is perfectly possible, common, in fact, to write incomprehensible code that passes a linter.

But, you say, “lint rules can help a little, so we should use them. We just need to pick a ruleset.”

Lint reduces bikeshedding?

In the beginning, lint was an inflexible representation of one man’s own preferences. Next, everyone bought into the idea that they should lint their JavaScript, adopted Crockford Style, and, rather than moving on, argued endlessly about which rules were important.

We went from the ultra-rigid linter, to the ultra-configurable, to the ultra-pluggable. At each step, we introduced more and more time-consuming opportunities to argue about picayune issues.

To paraphrase Tim Harford:

Lint is such a tempting distraction because it feels like work, but it isn’t. When you’re arguing about lint rules or fixing lint errors, you’re editing code, but you’re not getting things done.

So should I never lint?

All that said, I recommend lint for one purpose: it can be a useful way to learn about the idioms and pitfalls of a language. Run your code through a linter and learn why it complains.

This use of lint as a teaching tool is actively discouraged by the way most people advocate using it. Imagine you set up a hook that says “all code must pass lint before commit.” Those who actually could benefit from the lint suggestions are blocked by them, thus encouraged to “fix” the “problem” as quickly as possible: obey the tool and never learn why.

Painless Android releases revisited

Previously, I described a Gradle script that handily generates release version codes for Android apps. The generated version codes take the form [date][number].

I finished that article with a litany of Gradle bugs. Today: fresh Google bugs!

In May, Google added automatic crash reporting to the Google Play developer console. Before auto-reporting, users had to explicitly send reports when apps crashed. So far so good, but if you’re testing on a physical device, you might notice something alarming: reports of bugs you already fixed, or crashes you only saw in development.

Apparently, Google forgot to filter out reports from debug-mode applications. Perhaps Google would claim this is a feature, but it means that you can’t tell which crashes are actually happening in the wild.

Google says crash reporting is “opt-in.” This is meant ironically, since the option to turn it off doesn’t actually exist on, for example, the Samsung S8. (There is a different option, “report diagnostic information.” As far I can tell, it’s a placebo.)

To work around this, we need to make crash reports from the debug version look somehow different from the production version. Crash reports include the version code, so remember that suffix? We can use that. Instead of using one number per release, use two: one for the release, one for the next development version:

// Version code updates when released to a date-based format. Even-numbered version codes are
// release builds, odd-numbered version codes are debug builds. MAX five releases per day.
def releaseVersionCode = null
def writeVersionCode(versionCode) {
    def releaser = project.plugins[net.researchgate.release.ReleasePlugin]
    def propsFile = releaser.findPropertiesFile()
    def props = new Properties()
    propsFile.withInputStream { props.load(it) }
    props.versionCode = versionCode.toString()
    propsFile.withOutputStream { props.store(it, null) }
}

task nextDebugVersionCode { doLast {
    // Even though this runs after the release build, project.versionCode is still the version
    // code *before* release. The Release plugin runs the release build in a separate Gradle
    // invocation, so the release package picks up version changes in gradle.properties. When
    // control returns here though, it's the original Gradle invocation, and has *not* reloaded
    // gradle.properties.
    writeVersionCode(releaseVersionCode + 1)
}}
updateVersion.dependsOn nextDebugVersionCode

task setReleaseVersionCode { doLast {
    def current = project.versionCode.toInteger()
    // 'yy' is the calendar year; 'YY' (week-based year) misnumbers releases near New Year
    releaseVersionCode = new Date().format('yyMMdd0', TimeZone.getTimeZone('UTC')).toInteger()
    if (releaseVersionCode <= current) {
        // Should only happen when there is more than one release in a day
        releaseVersionCode = current + 1
    }
    writeVersionCode(releaseVersionCode)
}}
unSnapshotVersion.dependsOn setReleaseVersionCode

So, now the first release of the day gets suffix zero, the debug version that follows gets suffix one, and so on. I’m writing this on July 26, so if I cut two releases today, my version codes will be:

  • 1707260, production
  • 1707261, debug
  • 1707262, production
  • 1707263, debug

It’s subtle, but at least now we can tell which crashes actually happened to people using the released app: they have even version codes.

Or are they?

It appears that Google stores the crash data on the phone and reports it only once per day. The version code it reports is the version running on the phone when it sends the report, not when the crash actually happened.

If the app updates in the interim, we can still get crash reports for bugs already fixed and they will seem to come from a version that includes the fix.

I don’t know of any workaround.

Painless Android releases

Android apps require not one, but two version numbers:

  • Version code: an integer that Android uses to check whether one version is more recent than another
  • Version name: a friendly version to display to the user, conventionally something like 1.2.3

This means that when you want to build a new release of your app, you have two things to manually update, and that is two things too many. You will make mistakes.

Luckily, it’s not too hard to automate this away in your Gradle build script.

Gradle inherited much of its design from Apache Maven. Maven defined a standard release feature that automatically handles typical pitfalls and mindless details of making a release: tagging in source control and incrementing your version number. For Gradle, there is a nice third-party implementation, the gradle-release plugin. So long as you don’t fight Maven-style version conventions, it can make cutting releases almost entirely automatic, modulo prompting you to confirm that it guessed correct version numbers.

If your project only has one version number, you just apply the release plugin and you’re done, but Android’s two-version-number system takes some customization.

I only discuss version numbers here, but the release plugin also does several other useful sanity checks.

First, move the versions out of your app/build.gradle into app/gradle.properties. They should look like so:

app/gradle.properties

version=1.0-SNAPSHOT
versionCode=1

app/build.gradle

android {
    // ...
    defaultConfig {
        versionCode project.versionCode.toInteger()
        versionName project.version
        // ...

“SNAPSHOT” is Maven’s convention for “between releases”. Version 1.0-SNAPSHOT means the code leading up to version 1.0. This convention is how the release plugin guesses what version number you are releasing: it just lops off the suffix.

When you run ./gradlew release, the release plugin updates the version thus:

  1. Edits gradle.properties, removing the “snapshot” part
    1.0-SNAPSHOT becomes 1.0
  2. Commits the change and tags this as version 1.0 in source control
  3. Builds the release
  4. Edits gradle.properties again, to next dev version
    1.0 becomes 1.1-SNAPSHOT
  5. Commits so you can immediately start working on version 1.1

Thus, out of the box, this handles the user-friendly version number, but not the “version code.”

Updating the version code

When Android installs an update to an app, it knows by version code whether the update is newer than what it currently has installed. 3 is newer than 2 and so on.

Thus, the obvious strategy for updating your version code is to add one on every release. If using the release plugin, you might do this as a manual step after it finishes a release. If you forget, you’ll accidentally build your next release with the same version code as you just used. If you have other branches, you need to remember to update them as well. Ouch.

There is a better way. Version codes need not be sequential, so instead of incrementing 1,2,3…, we can derive it from the date. A format like [2-digit year][month][day][0-9] works nicely. A release today gets version code 1704080, tomorrow, 1704090.

This format will cover you for 82 years at up to ten releases a day. If that’s not enough for you, use a four-digit year and a two-digit suffix, but watch out for integer overflow in 130 years or so: a signed 32-bit version code tops out at 2,147,483,647, which a yyyyMMddNN code first exceeds in 2148.

The date-based strategy, however, means that you have to set your “version code” immediately before you release, instead of after. To do this, add a Gradle task right before updating version name.

app/build.gradle

task setVersionCode { doLast {
    // Add a task that updates version code
    def current = project.versionCode.toInteger()
    // 'yy' is the calendar year; 'YY' (week-based year) misnumbers releases near New Year
    def releaseAs = new Date().format('yyMMdd0', TimeZone.getTimeZone('UTC'))
    if (releaseAs.toInteger() <= current) {
        // More than one release today
        releaseAs = current + 1
    }
    def releaser = project.plugins[net.researchgate.release.ReleasePlugin]
    def propsFile = releaser.findPropertiesFile()
    def props = new Properties()
    propsFile.withInputStream { props.load(it) }
    props.versionCode = releaseAs.toString()
    propsFile.withOutputStream { props.store(it, null) }
}}
// Execute our task before unSnapshotVersion, provided by the release plugin:
unSnapshotVersion.dependsOn setVersionCode

With this simple build script change (plus applying the release plugin), a single command updates both version numbers:

./gradlew release

The release plugin also runs the “build” task at the point of release, so this single command leaves you with both a release .apk and your working directory updated to the tip (snapshot) code ready to start work on the next release. There’s still a problem though: if you haven’t configured your build script to sign the build, you won’t be able to publish the release .apk.

Signing the build

To make Gradle sign a build, you need to add a “signingConfig”:

android {
    // ...
    signingConfigs {
        release {
            storeFile file('/home/myname/.javakeys/mykeys.jks')
            keyAlias 'myappsigningkey'
            // These two lines make gradle believe that the signingConfigs
            // section is complete. Without them, tasks like installRelease
            // will not be available! (see http://stackoverflow.com/a/19350401)
            storePassword "notYourRealPassword"
            keyPassword "notYourRealPassword"
        }
    }
    buildTypes {
        release {
            signingConfig signingConfigs.release
            // ...

This fails at signing time, of course, so you put your real password in the “password” config place and get pwned. Your wife leaves you, and your dog dies. You didn’t do that, right?

So where should you put your password? The top-voted answer on Stack Overflow says ~/.gradle/gradle.properties, presumably protected by 600 permissions. I don’t see the point. If you’re relying on file system permissions to keep the password secure, why have the password at all? You could just protect the keystore with file system permissions.

What you need is a prompt for the password.

Thanks to bug 1251, Gradle running in daemon mode (the default) doesn’t let you use System.console().readPassword("Password:"). You can disable daemon mode, but then you run afoul of (orphaned?) bug 2357, because Android Studio generates a default gradle.properties that includes jvmargs. Once you remove that configuration, you find that prompts don’t display when you build outside daemon mode (bug 869). That’s a pain because you can’t see the version number confirmation prompts.

As a result of this epic adventure, you’ll eventually find that the only reliable way to prompt for password is via Swing. No, I’m not joking. It’s not as gruesome as it sounds, thanks to Groovy’s Swing builder, so pop over to where Tim Roes documented how to do it.

Update: there’s a new version of this build script

Inheritance: is-a has-a

Lots of things we learn in school turn out to be naive simplifications of how the real world works, and sometimes we later learn, to our chagrin, that the way we thought about the world really isn’t true at all. Take that familiar organization of life into a giant tree: kingdom, phylum, genus, species. It seems neat enough, but in the grown-up world, people can spend lifetimes arguing about where things fit in this classification.

A related simplification that I learned in school was the rule of when to use inheritance versus composition. It went like so: in this assignment, you simulate a world full of monsters. Zombie is a type of monster, so zombie should inherit from monster. On the other hand, vampires have a coffin, so vampire should have a field that refers to coffin. Now make a UML diagram.

This makes sense as far as it goes, but there’s a major problem: it’s not usually a useful way to think about inheritance when building real programs.

The is-a versus has-a perspective makes most sense when thinking about type systems. If a function takes an argument of type monster, it can also take any type of monster, either vampire or zombie. The trouble starts when you use the same reasoning to design a program, and it comes back to our taxonomy problem.

You start designing a system by figuring out what your different things are: zombies, vampires, ghosts, coffins and so on. It’s easy enough: three types of monsters, each a class that inherits from monster, and coffin, its own thing. Naturally, you also need people; people need places to live and ghosts need places to haunt, so you have houses. But wait, people aren’t monsters, but they have a lot in common, so they need a base class, say living things. But that’s not quite right; the monsters aren’t technically alive, so maybe they are dead things. Also, houses and coffins seem to be of a non-living type, so that’s another base class. Should it be dead things? If the coffin is made of wood, it used to be alive, so maybe that makes sense.

Most real-world characteristics of things are completely irrelevant to most programs. In our simulation, perhaps the only thing ghosts do is haunt, whereas vampires and zombies bite people but don’t haunt. It’s confusing and wasteful to worry about how they are all types of monsters, who are types of dead things and so forth.

Now, occasionally, it does make sense to think of inheritance as an is-a relationship. The cf0x10 parse tree, for example, is a pile of subclasses. When this type of design makes sense, however, it will be obvious; no need to shoehorn everything into it.

What about other metaphors? It’s common, for example, to say that instances of classes are receivers while method calls are messages to that receiver. That’s a useful perspective for language design, and it’s useful to have a name for that bit before the dot – receiver.message() – but, again, it’s not so helpful a metaphor when designing a program.

In real programs, metaphors like these just tend to cause trouble. Software isn’t made of physical things. A class, in reality, is just a way to group related bits of a program. I prefer not to start by creating any design for a class hierarchy; instead I write code that does the things I need it to do. A class hierarchy, if any, usually emerges from unifying the bits that make sense to put together.