Murdering collections

I sometimes get emails about Text Collector asking something like, “How long does it take? I’ve been waiting more than a day.” Or, “My collection keeps saying ‘Interrupted,’ what do I do?” These look like symptoms of the phone pausing or killing my app, and my gut says they’ve been coming more and more in the five years since I released it. I’m not alone in my suspicion:

“We see a reverse evolution of Android. Every new version is less capable than its predecessors.” — Petr Nalevka at Droidcon 2022

Have Petr and I fallen into the trap of looking at the past through rosy glasses, or is this true? I am that guy who thinks newfangled JavaScript mostly just makes simple things complicated, but this time, I have data.

Fraction of collections interrupted increasing from 0.04 in sdk 23 to 0.14 in sdk 33, with anomalous spike to 0.14 in sdk 28

In this chart, the x-axis represents increasing Android versions, from sdk version code 23 (Android 6) through sdk version code 33 (Android 13). Lower is better, so it’s trending worse.

When you start a collection, Text Collector copies your messages and arranges them into documents, all of which takes time, and I have an idea how much time because you can choose to report anonymous telemetry to me. Successful collections usually finish in less than five minutes; they typically take one second per 500 plain text messages or 20 picture messages, though the timing varies widely.
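The rule of thumb above can be sketched as a back-of-envelope formula. This is my own illustration of the arithmetic, not Text Collector’s actual code; the class and method names are made up.

```java
// Back-of-envelope estimate from the rule of thumb above:
// roughly one second per 500 plain text messages, or per 20 picture messages.
// Illustrative only; Text Collector does not literally compute this.
class CollectionEstimate {
    static double estimateSeconds(int plainTexts, int pictureMessages) {
        return plainTexts / 500.0 + pictureMessages / 20.0;
    }

    public static void main(String[] args) {
        // A largish archive: 20,000 texts and 800 picture messages
        System.out.println(CollectionEstimate.estimateSeconds(20_000, 800)); // 40 + 40 = 80 seconds
    }
}
```

Even a big mailbox lands comfortably under the five-minute mark, which is why long waits usually signal an interruption rather than slow progress.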

Android isn’t like a movie villain who explains his scheme before killing you: if you’re an app Android wants to kill, you get no warning. Text Collector can, however, see remnants of unfinished collections when it starts and infer that it was killed mid-collection. I count each unfinished collection as an interruption, so the chart above shows the proportion[1] of interrupted collections increasing steadily from four percent in Android 6 to 14% by Android 13, with a notable spike in Android 9 that we’ll revisit.

This is a huge problem for the hapless folks using Text Collector on increasingly modern Android. Today, more than one in ten collections fail because something aborts them.

The something aborting collections can be either a human or Android acting of its own accord. I can’t directly tell which was which, but it seems implausible that people using Android 8 are twice as patient as those using Android 12. One way to pin these on the user would be if Android had gotten slower: collections would take longer, and that might provoke people to stop them more often. My collection timing data, however, shows no clear trend of collections getting slower over time. So the most plausible explanation for the increasing interruptions is that Android is doing it without user consent.

In other words, Petr Nalevka was right: newer versions of Android mostly appear less capable of archiving your messages than their predecessors.

By brand

In the second half of his talk, Petr moves from blaming this trend on Android proper to the various manufacturers, and if you visit https://dontkillmyapp.com/, you’ll see Samsung ranked as worst, Google and Nokia best. Does my data agree? In a couple words, not really:

Fraction of collections interrupted bar chart by manufacturer. Largest to smallest bar: samsung, Google, LGE, OnePlus, motorola, ZTE, TCL, FIH

Looking at the same metric, fraction interrupted, by manufacturer, Samsung indeed ranks worst, but Google comes in second to worst. This view, however, is a little too simplistic. Breaking it down further, we can see that this ranking depends on Android version:

Fraction of collections interrupted by brand and by Android version. Worst offenders by sdk are 25: ZTE, 26: LGE, 27: LGE, 28: Samsung, 29: Google, 30: Samsung

I’ve restricted this to only those combinations of sdk version and brand that have at least 500 collections, so it covers a smaller span of sdk versions, but reveals more nuance.

First, we can blame the apparently disastrous performance of sdk 28 (Android 9) on Samsung: Samsung did something in that version that aborted collections much more often than in any other scenario, and because Samsung phones are so common, it hurts the average for all Android 9 phones.

Second, although Don’t Kill My App endorses the idea that we can blame this on the Chinese, there’s no evidence in this data that Chinese brands are worse than any other. Only two of the brands shown here, ZTE and TCL, are Chinese, and if anything, they look better than most. If I reduce the threshold to only 200 collections, there is a scenario where Huawei does worse than Google, but that sample is so small I hesitate to infer anything from it.[2]

Which brings us to the final point: if anything, Google is among the worst offenders, not the gold standard.

Don’t Kill My App disagrees, ranking Samsung as most likely to kill your app and Google as least likely. The difference may partly be because their interest is different from mine: they focus on low-power background tasks like alarms and health monitors whereas Text Collector is doing a job that’s inevitably power-intensive. Another problem, however, is that Don’t Kill My App ranks manufacturers subjectively:

The info on the site is gathered from multiple sources. The big part is from the experience of the Urbandroid Team, but increasingly info is added from FAQs of other developers, and from personal experience shared on the GitHub repo. Ibid.

For a more objective measure, Don’t Kill My App also provides a benchmark app, but I’ve run it several times on a couple Samsung phones and it scored a perfect 100% on both. I find that hard to reconcile with ranking Samsung as the worst offender. I suspect Samsung ranks worst because Samsung makes most Android phones; if a problem is inherent to Android, therefore, it’s most likely to be seen on a Samsung phone.

Does it matter?

Choices are good. We might all benefit from variety and competition if prevailing information about strengths of each brand were based more on facts than rumor. Instead of “reverse evolution,” survival of the fittest.

The freedom to change the program is Essential Freedom 1, but Open Source often doesn’t relish this freedom: the mainstream view in the Android community says that diversity – under the pejorative “fragmentation” – is a bad thing. Sycophantic headlines like “Google is finally helping developers fight back against smartphone manufacturers” play into Google’s narrative. Google wants us to see Google as a white knight: a benevolent steward of a healthy ecosystem. Meanwhile, they make Android measurably worse, year after year.

The truth is that the manufacturers, possibly excepting Samsung, are just as much Google’s victims as app developers. We’re all clinging to a raft called Android while Google shoots holes in it.

What can be done?

First, appealing to Google won’t help: that comes from the “fool me 13 times, shame on me,” school of thought. Why should Google care? From their perspective, a phone doing something other than showing ads is wasting cycles. Crippling Android is, in some ways, useful to Google: it gives their own privileged applications an advantage. But in the spirit of never ascribing to malice what can be explained by incompetence, even if they did care, they demonstrate all the symptoms of having no coherent idea how Android ought to behave.

Which brings us to the second tactic that won’t help: the “Compatibility Test Suite.” These types of errors are devilish to reproduce in controlled environments: the error statistics I’ve presented above clearly show a problem in the wild, but one I have never seen on a phone that I’m using. Likewise, the Don’t Kill My App benchmark doesn’t repeatably support the rankings on its site, and Sms Backup+, which does something closer to what Text Collector does, has a bug tracker full of conjecture on the causes of a related problem, back and forth “works for me,” “doesn’t work for me…” This is typical of building on a system that is over its programmers’ complexity horizon: having never developed a clear plan, the Android developers cannot implement a comprehensive test suite.

Both of the strategies above also reinforce Google’s monopolistic rhetoric. Google isn’t competing with Apple – that duopoly is too cozy to disrupt – Google is competing with the Android manufacturers, a space where its Play Store monopoly gives it so much leverage that, were it not for fear of handing antitrust regulators easy evidence of its tactics, it could have squeezed out the oems already. In the long term, the most obvious step is for regulators to break up Google so that Android has to compete on merit, which requires political will, which in turn requires a public understanding that Google is the root of Android’s problems.

For Text Collector’s near future, I have to change something so that more collections complete for more people. Probably I’ll have to shove a notification into the top of the screen. Right now, Text Collector uses a Wake Lock in a Thread without a Service, and some will say that’s the problem: that I’m doing it the “wrong way,” so the fault is my own, not Google’s. That’s a facile objection based on a selective reading of the documentation, and I may dive deep into why another time. For now, though, it’s clear that if I’m guilty of something, it’s that I’ve been the proverbial frog in the water, failing to deal with this growing problem.

Notes

  1. “Successful” but empty collections excluded as I assume they are mostly app crawlers.
  2. I’m not sure how to put confidence intervals on these numbers: the big problem is that these aren’t independent observations. I don’t collect anything that links a report to a particular person, so it’s likely that many of these reports are clusters of a single person trying multiple times and being interrupted multiple times. I do record retries, though, and that suggests that the fraction of interruptions retried doesn’t significantly vary by brand.

Archive links

Don’t Kill My App Droidcon
Don’t Kill My App
Don’t Kill My App’s mission statement
Anti-Chinese bug report
Essential freedoms
Google-loving article
Android privileged applications
Sms Backup+ bug report

On hosting a video

When I published my one and only video on how to use Text Collector, YouTube was friendlier to small-time creators than it is today. In the end cards, for example, I could link to my Play Store listing, but that’s no longer allowed. My end cards are grandfathered, but I can’t change them unless I want to lose the link. Much worse, a couple years ago, Google started playing advertisements before my video, a significant change quietly tucked into a November 2020 terms-of-service update:

Ads can now appear on videos from channels not in the YouTube Partner Program (YPP), and we will begin gradually placing ads on brand safe videos… Because these channels are not in YPP, there is no creator revenue share, but creators can still apply to YPP once they hit the eligibility criteria, which remains the same.

For now, YouTube “Partners” can still demonetize videos if they like, but I don’t qualify for the YouTube Partner Program and probably never will. So viewers are stuck with seeing a pre-roll advertisement on my video and there’s nothing I can do about it short of taking down the video.

This isn’t really surprising. Google isn’t much more than an advertising broker, so they pepper their properties with ads. Hosting isn’t free and it doesn’t bother me that they make a little money on ads in the sidebar, but interrupting my content is another level of nuisance.

This is the same video that appears at the top of my listing on the Google Play Store for Legal Text Collector.

Play Store listing

Google’s strategy being a web of monopolies, they require that the video on my Google Play listing be hosted on YouTube, and their own guidelines concur that I shouldn’t have ads:

Disable ads for your video to be shown on Google Play. When users browse Google Play, we want them to see a video about your app, not someone else’s ad, as this can be confusing for users.

Wish I could disable ads, Google. I guess one hand can wash the other without knowing what the other’s doing.

So what are my alternatives?

YouTube’s stranglehold on Internet Video makes removing my video from YouTube unrealistic. In addition to YouTube, I’ve hosted the same video elsewhere now for more than a year, yet a Bing video search for “Legal Text Collector” only finds the YouTube version.

Bing shows 'There are no results for "legal text collector" -site:youtube.com'

I considered adding a block of offensive terms to the video description, hoping that the Algorithm would remove ads in the name of brand safety, but more likely I’ll just get unsavory advertisers.

No, despite the apparent futility, the best I can do is give people some alternative place to watch.

Vimeo

Vimeo is the best-known competitor, in the sense that a gnat is a competitor to an elephant, and they promised not to do this:

Vimeo never puts ads before, after, or on top of videos. However, we do have limited display advertising below the player on some vimeo.com pages.

They’re targeting those of us who want to host a video that isn’t a product unto itself, but an extended advertisement for a product, so I put my video on Vimeo.

How did it do? In its three years on YouTube, my original video accumulated just shy of fifty thousand views, or roughly thirty per day. In nearly two years on Vimeo, it’s received twenty-eight, total.

WordPress

I can drop the Vimeo link into a WordPress article and WordPress shows the video, easy:

But wait. In this blog, I don’t host pictures on a third-party site: I upload pictures directly to WordPress, so I should be able to do the same with videos. And indeed I can, but it’s awkward. On WordPress.com, the Classic editor lets me inline a video with the wpvideo shortcode:

But there are a couple problems. First, it doesn’t let me upload my .webm version, for “security reasons.” For those readers not steeped in computer jargon, “security reasons” is an idiom meaning, “can’t be bothered.”

WordPress.com’s rejection of .webm is merely a nuisance; the important problem is that the shortcode provides no way at all to add captions.[1] So I have to abandon the shortcode, and though I prefer to pretend the new “Gutenberg” editor doesn’t exist, Gutenberg does at least let me insert a video with captions:

At least, it lets me add captions to a video today. When I first drafted this post in 2021, it didn’t.

For a long time, the only way to add captions on a generic WordPress installation was via a tortuous workaround. With that in hand, I followed the steps its author generously called “needlessly opaque,” only to find myself blocked: WordPress.com banned subtitle track upload, again for “security reasons.”

I wonder what they have to say for themselves.

Diversity typically includes, but is not limited to, differences in … physical disabilities and abilities… we welcome these differences and strive to increase the visibility of traditionally underrepresented groups.

we’ve established a Diversity and Inclusion committee

Hrm.

Captions on a video aren’t an esoteric issue, people. Sadly, the eight years between WordPress gaining the ability to host video and WordPress.com supporting captions is all too typical. Consider that to this day, Medium doesn’t allow tables, which is why so many Medium articles use images where they ought to use tables; for that matter, Medium didn’t even allow alt text on images for years. Why not punch a blind guy while you’re at it.

Self-hosting

But wait. I have a website for Legal Text Collector already. It serves files, and videos are nothing but files. Browsers display them with a simple tag:

<video controls width="600" poster="_static/video/youtube_banner.png">
  <source src="_static/video/howto.webm" type="video/webm">
  <source src="_static/video/howto.mp4" type="video/mp4">
  <track label="English" kind="captions" srclang="en" src="_static/video/howto-en.vtt">
</video>

Html doesn’t yet directly support “adaptive bitrate” streaming: the way a video starts fuzzy and then sharpens as the buffer catches up. I don’t think I need it, though, and if I really wanted it, there are JavaScript libraries out there that can do it.

A small price to pay for avoiding the web of arbitrary policies you face when hosting elsewhere.

Notes

  1. In the time since I drafted this article, they seem to have retrofitted the shortcode to use the captioned video. I maintain this was embarrassingly late, but I’m happy to see a positive move.

Archive links

Horizontal alignment

Or, how I learned to stop worrying and love the second dimension

We programmers are mostly trained to write code from top to bottom, hardly considering the horizontal dimension. Sure, we indent to delimit blocks, but that’s it. We habitually waste opportunities for the forgotten dimension to make our code easier to read.

Consider a standard solution to FizzBuzz that looks like this:

for (i = 1; i <= 100; i++) {
  if (i % (3 * 5) == 0) {
    print('FizzBuzz')
  } else if (i % 3 == 0) {
    print('Fizz')
  } else if (i % 5 == 0) {
    print('Buzz')
  } else {
    print(i)
  }
}

There’s nothing “wrong” with this code, in the sense that it gives the right answer and passes the rules of many style guides, but we can do better:

for (i = 1; i <= 100; i++) {
  if      (i % (3 * 5) == 0) { print('FizzBuzz') }
  else if (i % 3 == 0      ) { print('Fizz')     }
  else if (i % 5 == 0      ) { print('Buzz')     }
  else                       { print(i)          }
}

Nothing changed except that I rearranged the whitespace, yet the second version is much easier to read. Horizontal alignment draws our eyes to the patterns; it makes the if-else block look united, emphasizing which parts of the code are the same and which parts are different.

When you start looking for these kinds of opportunities, they appear everywhere. Don’t try this in php, but in any other language with a ternary operator, we can make our if-else stack look even more like a table:

for (i = 1; i <= 100; i++) print
  ( i % (3 * 5) == 0 ? 'FizzBuzz'
  : i % 3 == 0       ? 'Fizz'
  : i % 5 == 0       ? 'Buzz'
  : /*else          */ i
  )

Aside from the ugly C-style for-loop, this is even more clear than the English specification: Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz.” For numbers which are multiples of both three and five print “FizzBuzz.”

There are two reasons we rarely write programs this way.

First, it makes diffs harder to read: if you align your code like this, when you change part of the block, you need to change surrounding lines just to maintain alignment. The diff, therefore, fills with distracting changes that are nothing but indentation. At first, the diff-readability argument looks strong because when we weigh the readability of diffs against the readability of source code, it’s not obvious which should win. Look a little closer, though, and the argument turns specious. We have the technology to solve that problem, you see. Any decent diff or blame tool can ignore whitespace changes (git diff -w, git blame -w, for example), so if whitespace changes are a problem for you, you’re using the wrong tool.

The second argument, by contrast, looks easy to dismiss at first, but is more troublesome in practice: it takes time to write code that uses horizontal alignment effectively. Every competent programmer knows that readability trumps writability, so obviously you should take the time to align things neatly if it makes for easier reading, right? In the heat of coding, it’s not so easy. When you’re focused on solving a problem, “in the zone,” little distractions like realigning your ascii tables can be real impediments.

Luckily, there’s a strategy to work around this problem that we all should employ anyway: read our own code and edit for readability. It’s ok to leave it a little messy on the first draft, just be sure to revisit and clean up.

Still, there’s no reason our tools couldn’t shoulder some of the burden, particularly when we edit existing code that uses tabular structures. Text editors could recognize these types of structures and adjust column widths as we type…

Salting the earth

Humans have fought bugs since the dawn of civilization, and while we’ve dramatically improved our ability to wage chemical warfare, we still do close-quarters battle with melee weapons. The Bug-A-Salt changes all that.

Millennia of human innovation culminated in this, the pneumatic salt-firing shotgun. It alters the world’s balance of power, and with great power comes a great question: “Is it safe to shoot near potted plants?”

Be aware of what’s beyond your target, said Beth. If it’s a tiny herb garden, recall the phrase, “salting the earth.” Legend has it that ancient conquerors practiced scorched-earth tactics by sprinkling salt on enemy fields.

“Surely,” said I, “that’s just symbolic.” Salt is a rock. To become an herbicide, wouldn’t it take a huge quantity? In the age when fly swatters were still new technology, salt was relatively precious. Wouldn’t that be wasteful?

This begged for an experiment. First, we’d need a plant to test, something that grows reasonably fast and indoors so we could control the weather. Wheatgrass fit the bill, so we ordered a growing kit advertised as “ready to eat in 7 to 10 days.” Apparently wheatgrass is sold as food (for humans).

The kit came with four separate tubs, each with a pre-measured seed and soil quantity. Intending to control for variations in sunlight and other conditions, we distributed the soil and seeds from one of the four into an egg carton. Of the carton’s twelve cups, four egg cups received no salt, four received a sprinkling, and four received a heavy salt coat.

Likewise, the remaining three manufacturer-supplied containers received no salt, a sprinkling, and heavy salt. By “lightly salted,” I mean about the heaviest amount you might reasonably call a “sprinkling.” The heavy amount was outrageous, enough that we would later see salt deposits climb the walls of its container, presumably thanks to evaporation and capillary action.

Three containers showing amount of salt
Before planting

We watered every three days and recorded growth. I expected we would need to do something tedious like count the blades of grass in each container, but the difference couldn’t have been more stark.

Control grew about an inch. Of the salted, only postsalted sprouted at all.
Day 4

In the egg carton, we salted the second column lightly and the third column heavily, repeating the pattern for the remaining six cups not shown. We salted the front row (presalted) before planting, and the back row (postsalted) when the seeds had sprouted, three days after planting.

The presalted egg cups never sprouted, and the postsalted cups withered dramatically. Even grass in unsalted cups adjacent to salted cups struggled, particularly one that I accidentally splashed while watering its salted neighbor.

Nearest cups, particularly on the left are unsalted, but grass is much shorter than control in background
Cups nearest and furthest from the camera are unsalted

None of the salted egg cups ever grew, but the lightly salted tub did manage to sprout on day 9. At first only a handful of sprouts appeared, but more followed for the next several days. By day 15, some of the hardier blades were catching up to their unsalted brethren.

Lots of grass in the unsalted tub, sparse grass in the lightly salted tub
Day 15

Despite its valiant effort, however, the salted grass remained sparse. We stopped the experiment on day 26.

While not that fabled “nothing will ever grow again” effect, it looks like a sprinkling of salt could cause devastating famine. In this experiment, we used about a teaspoon of salt on the lightly salted 3 3/4 inch square container. A 3 pound box of Morton’s kosher salt comes in a 90 cubic-inch box, so this works out to a little more than an ounce and a half of salt per square foot. The egg cup spillover suggests that a smaller amount could still be effective, so we’ll round down to 1 ounce per square foot.
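The ounce-and-a-half figure above can be sanity-checked with a little arithmetic. The teaspoon volume of roughly 0.3 cubic inches is my assumption; the box volume, container size, and salt quantity come from the text.

```java
// Sanity check of the salt-coverage estimate above.
// Assumption: one teaspoon is about 0.3 cubic inches (roughly 4.9 mL).
class SaltCoverage {
    static double ouncesPerSquareFoot() {
        double poundsPerCubicInch = 3.0 / 90.0;     // 3 lb box of salt, 90 cubic inches
        double teaspoonCubicInches = 0.3;           // assumed teaspoon volume
        double saltOunces = teaspoonCubicInches * poundsPerCubicInch * 16.0;
        double containerSquareInches = 3.75 * 3.75; // 3 3/4 inch square container
        return saltOunces * (144.0 / containerSquareInches); // scale to one square foot
    }

    public static void main(String[] args) {
        System.out.println(SaltCoverage.ouncesPerSquareFoot()); // about 1.6 oz per square foot
    }
}
```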

Knowing this, how much salt would we need to starve our enemies if we aspired to warlording? We might decide to make a name for ourselves by sacking Carthage, with its population of 700 thousand, according to Strabo XVII Ch.3§15. According to Wikipedia, The Economics of Agriculture on Roman Imperial Estates in North Africa estimates that a family of six needed 7 to 12 acres of farmland to feed itself. Thus, at 2 acres of Carthaginian farmland per person, we need to cover 1.4 million acres. If planted nearby, this would surround the city for some 30 miles in all directions and we would need about two million tons of salt to cover it.

We can use Emperor Diocletian’s Edict on Maximum Prices to put this in perspective. The edict listed salt at 100 Denarii per modius, a volume of around 530 cubic inches. Using our previous density estimate of 30 cubic inches per pound, this works out to 6 Denarii per pound or 12 thousand per ton. Since the edict was an attempt at price control, it’s safe to say that commodities it listed actually cost more, so a ton of salt might have cost about the same as a soldier’s annual wage. Cost-conscious generals would suggest using our two million ton salt fund to hire a gargantuan army instead, a million strong. With various sites estimating the Roman force that actually destroyed Carthage at 80 thousand infantry, our salt budget could scale it up by an order of magnitude.
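The warlording arithmetic above, spelled out. The one-ounce-per-square-foot rate, the modius volume, and the salt density come from the text; everything else is unit conversion.

```java
// The Carthage salt budget from the text, assuming 1 oz of salt per square foot.
class CarthageSalt {
    static double tonsOfSalt(double acres) {
        double squareFeet = acres * 43_560.0;    // square feet per acre
        double pounds = squareFeet * 1.0 / 16.0; // 1 oz per square foot, 16 oz per lb
        return pounds / 2_000.0;                 // short tons
    }

    static double denariiPerTon() {
        double poundsPerModius = 530.0 / 30.0;   // modius volume over salt density
        double denariiPerPound = 100.0 / poundsPerModius;
        return denariiPerPound * 2_000.0;
    }

    public static void main(String[] args) {
        System.out.println(CarthageSalt.tonsOfSalt(1_400_000)); // roughly 1.9 million tons
        System.out.println(CarthageSalt.denariiPerTon());       // roughly 11,000 denarii per ton
    }
}
```

The per-ton figure comes out nearer 11 thousand denarii; the text’s 12 thousand reflects rounding up to 6 denarii a pound.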

Even using enslaved Carthaginians to harvest and sprinkle, salting doesn’t look practical, but today’s budding emperor can take heart that Alibaba has road salt priced at a mere $100 per ton. On the other hand, today we have much more effective herbicides, so we don’t need to salt enemy fields. The important question, however, remains: will our potted plants become casualties of Bug-A-Salt conflict?

Suppose we have a planter that covers one square foot. The Bug-A-Salt’s magazine holds about a tablespoon of salt, good for 50 or so shots. Our ratio of 1 teaspoon per 14 square inches works out to 10 teaspoons per square foot, or three full Bug-A-Salt magazines per plant. Thus, if we kill 150 flies on our plant we would do collateral damage equivalent to this high, if not total, wheatgrass kill rate:

Bushy control beside sparse lightly-salted tub
Day 26

Calculations for pinch to zoom

In which I discover how to correct for things moving around when you zoom, using only elementary algebra.

Text Collector uses a pinch-pan-zoom view to let people preview how their messages will look in pdf format. Inexplicably, Android provides no built-in pinch-pan-zoom view, so the web is littered with implementations to fill that gap. Those that aren’t broken, however, can only handle ImageView content.

If you need pinch-to-zoom for something other than pictures, you need to reinvent it.

I struggled with this implementation for an embarrassing amount of time, and judging by the number of wonky zooms I’ve seen in Android games, I’m not alone in finding it tricky.

Android does give us ScaleGestureDetector to detect pinches; it reports a “scale factor” that is a ratio representing how far our fingers move apart or together. The obvious thing to do is to scale your content, using View.setScale(), something like setScale(getScale() * scaleFactor). That’s the right idea, but insufficient.

Scaling a view transforms it around its “pivot,” an arbitrary point somewhere in the view. What we really want is to scale it around the “focus” of the zoom, that is, the bit of content between our fingers. Focus and pivot don’t line up, so, as we zoom, the content we want to see rushes away offscreen.

Model

We have two different coordinate systems because we need a fixed-size touchable area to detect fingers and a changing-size area to display content. I call these the “window” and the “content,” respectively. As reported by Android, focus is in the window grid and pivot is in the content grid.

Misaligned pivot and focus cause scaling to shift the view content away from wherever it’s supposed to be after the zoom. To correct, we need to translate back by an amount t.

  • t: translation needed to correct for scaling, window units

Android gives us these measurements:

  • f: focal point of the zoom, window units
  • m: margin outside the content, window units
  • s: starting scale, window units per content unit
  • z: scale factor, that is, change in scale, unitless

Two measurements change during scaling. I will denote them with a tick mark, read “prime”:

  • m': margin after scaling, window units
  • s': scale after scaling, window units per content unit

Scale factor is the ratio of the scale after to the scale before, so:

s' = zs

Actually, the scale factor and focus used here are approximations that work well, but could be refined in a more complete model.

We’ll use a couple measurements in the content grid as well:

  • P: pivot around which scaling happens, content units
  • D: content that aligns with the zoom focal point when zoom begins, content units

When scaling, measurements in the content grid do not change. Upon reflection, this should be obvious because the content can draw itself without knowing it’s been zoomed. So, even though it looks like P grows in this diagram, remember this diagram shows the window perspective. From the content perspective, P does not change.

Diagram of measurements

Android gives us P but we need to calculate D for ourselves. Since f and m are in different coordinates than D we cannot say that D = f - m.

This makes me wish for a language like Frink that attaches units to numbers. In Frink, you actually can add measurements of different units together, but only if there’s a defined conversion. So something like D = f - m could do something sane.

In Java and all mainstream languages, numbers are unitless, so it’s easy to add numbers nonsensically.

For both grids, the origin is at the left side. To convert between coordinates on the window grid (subscript r) and the content grid (subscript c):

x_r = s x_c + m
s' = zs
x'_r = s' x_c + m'

So:

f = sD + m
⇒ D = (f - m) / s

Given these things, we need to solve for t, the translation that will rescue the content we want to see from wherever it went during scaling.

Note that even though we call a View function, setTranslation(), on the content to translate it, the number we pass that function is in window coordinates, not content coordinates.

Derivation

So far, the things we know, given by the Android api are f, m, s, z and P, from which we know how to calculate D and s'.

Next, we need m', the margin after scaling.

In software, you don’t actually have to calculate m' yourself. You can setScale() then getLocationOnScreen() to ask the view where it would place its corner, but that’s cheating.

To find m' in terms of things that we know, another variable helps to translate pivot from content to window:

  • w: position of the pivot, P, in the window grid, that is, w=Ps+m.

w = Ps + m
w' = Ps' + m'

Relationship of w to m and P

The t correction will move the pivot in window space, but only after scaling. By definition, the pivot does not move due to scaling alone, so w = w'. This implies:

Ps + m = Ps' + m'
⇒ m' = Ps - Ps' + m
⇒ m' = Ps(1 - z) + m
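As a sanity check on this step, a few arbitrary numbers confirm that a margin of Ps(1 - z) + m leaves the pivot’s window position unchanged by scaling:

```java
// Check that m' = Ps(1 - z) + m keeps the pivot stationary:
// the pivot's window position before scaling (w = Ps + m) must equal
// its position after (w' = Ps' + m', with s' = zs). Values are arbitrary.
class PivotCheck {
    static double windowPivotBefore(double P, double s, double m) {
        return P * s + m;
    }

    static double marginAfter(double P, double s, double z, double m) {
        return P * s * (1 - z) + m;
    }

    static double windowPivotAfter(double P, double s, double z, double m) {
        return P * (z * s) + marginAfter(P, s, z, m);
    }

    public static void main(String[] args) {
        double P = 40.0, s = 1.5, z = 2.0, m = 12.0;
        System.out.println(windowPivotBefore(P, s, m));   // 72.0
        System.out.println(windowPivotAfter(P, s, z, m)); // 72.0
    }
}
```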

Next, we use m' to calculate the thing we really want, the translation t, to compensate for zoom. Define a variable translating the content position under the focus, D, to window coordinates:

  • h: position of the content at D after scaling, in the window grid, so h = Ds'+m'

Relationship of m, h, f and D to t

Because the translation t is in window coordinates, t = f - h. Recalling that s' = zs, we know:

t = f - h \newline t = f - Dzs - m'

Plugging in D:

t = f - \frac{f-m}{s}zs - m' \newline t = f - z(f-m) - m'

Since we previously derived m', we are now done:

t = f - z(f-m) - (Ps(1 - z) + m)

To make this Android-executable code, we just need to translate to Java and do the same for each axis. Source is on sourcehut.
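The derivation is easy to check numerically. A quick JavaScript sketch, using the article’s variable names and arbitrary sample values:

```javascript
// t = f - z(f - m) - (Ps(1 - z) + m), per the derivation above.
function compensatingTranslation(f, m, s, z, P) {
  return f - z * (f - m) - (P * s * (1 - z) + m);
}

// Arbitrary sample values: margin m, pre-zoom scale s, zoom z,
// pivot P (content grid), focus f (window grid).
const m = 10, s = 2, z = 1.5, P = 30, f = 100;
const t = compensatingTranslation(f, m, s, z, P);

// Check the invariant we solved for: after zooming and translating by t,
// the content point D that sat under the focus is back under the focus.
const D = (f - m) / s;              // content position under the focus
const sPrime = z * s;               // s' = zs
const mPrime = P * s * (1 - z) + m; // m' = Ps(1 - z) + m
console.log(D * sPrime + mPrime + t === f); // true for these values
```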

But wait, there’s more

An eager kid in the front row is waving his hand to tell me about affine transformations. We can simplify further, you see:

t = (f - Ps)(1 - z) + m(z - 1)
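A quick numerical check, with the same variable names and a couple of arbitrary inputs, that the factored form agrees with the earlier result:

```javascript
// Derived form: t = f - z(f - m) - (Ps(1 - z) + m)
const derived = (f, m, s, z, P) => f - z * (f - m) - (P * s * (1 - z) + m);
// Factored form: t = (f - Ps)(1 - z) + m(z - 1)
const factored = (f, m, s, z, P) => (f - P * s) * (1 - z) + m * (z - 1);

for (const [f, m, s, z, P] of [[100, 10, 2, 1.5, 30], [50, -5, 0.5, 3, 12]]) {
  // Compare up to floating-point error.
  console.log(Math.abs(derived(f, m, s, z, P) - factored(f, m, s, z, P)) < 1e-9);
}
```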

I didn’t go this far because it’s no cleaner in Java, but it does look more symmetrical in math: tantalizingly like a dot product. Unfortunately, I forgot most of my linear algebra long ago, so I have no idea why. Best go watch 3Blue1Brown.

Tests versus specs

It’s been popular for some years now to say that tests are “executable specifications.” I think this is a wrong way to think about programming and leads to buggier programs than the traditional view that tests are tests.

Saying that tests are specs implies that you don’t need separate specifications. If this isn’t what you mean when you call a “test” a “spec,” then my arguments mainly won’t apply, but remember: regardless of what you mean, people will hear, “replace your specifications.”

Programs are three-legged stools that stand on the trio of specs, tests and program code. Stable programs require that each leg receives equal care. When done well, a useful tension between specs, tests and program code improves quality.

The form and labels can vary: specifications can be formal requirements or notes in a bug tracker; tests can be automated or manual. Whatever the form, every program has these three parts.

“Tests are specs” refers to a specific form: automated tests written in a style called “behavior driven.” Behavior-driven means that tests look like this:

describe Frobber... it 'frobs'...

Instead of

TestFrobber... test_frobs()...

From this style, we can infer the reasoning for replacing specs with tests:

Axiom: specs describe what a program should do

Axiom: tests verify that the program does what the specs say

Assume that you can write tests in a style where they describe what programs should do. Or, equivalently, assume you can write specifications as executable code that can verify compliance.

When so written, specs and tests serve the same purpose. Thus, by the principle that you should eliminate redundancy, they should be the same thing.

The first problem with this argument is that there’s no real basis for believing that you can write specifications in executable language as clearly as if you wrote them in natural language. Things like describe... it... and expect(x).toBe(y) (instead of assert x == y) superficially make program code look English-like (if you squint), but it’s not at all obvious that they make things clearer. If this style really is clearer, why are tests special? Why not write all code in quasi-English?

By a dubious appeal to Whorfianism – the idea that words we use affect what we do – behavior-driven style supposedly encourages people to write tests more declaratively because you “describe” what the program should do. It is true that programs are nearly always more understandable when written declaratively rather than imperatively, but again, there is no reason this should be specific to tests; all program code should be written as clearly and declaratively as possible.

Even if we assume that somehow we can write executable language as clearly as natural language, it doesn’t follow that we need only one or the other. We justify combining test and specs because “redundancy is bad.” By that logic, however, we don’t need test-specs to be separate from program code either. If the specification is executable, it doesn’t need to be the test because it could just as easily be the program. And then there was one.

In a sense, you truly can combine tests, specs and program. A program with unwritten program code is nothing but an idea. A program with unwritten tests and specs is still a program. It’s just not a very good program, and we know why.

[When programming] one must perform perfectly. The computer resembles the magic of legend in this respect, too. If one character, one pause, of the incantation is not strictly in proper form, the magic doesn’t work.

– The Mythical Man Month

Automated tests are programs too and just as capricious. A thorough test suite must often be as large as the program it tests and, therefore, will have as many errors.

Moreover, programs are notoriously hard to change without breaking, which brings us to the second major flaw in the logic behind combining tests with specs. It’s true that tests exist to verify conformance with specification, but that is not the only reason; tests also verify that changes don’t break things.

Remembering that tests are programs too, and just as hard to change (safely) as any program, it should be clear that the only way to avoid breaking tests is to avoid changing them. Furthermore, the only way to test the tests themselves is to write them in lock-step with the program under test. This is called the “red-green” cycle and it goes like this:

  1. Write a failing test
  2. Write the code necessary to make the test pass
  3. Repeat

In step one, if you check that the test fails in the way you expect, you know that it tests the code you wrote in step two. Then back away. So long as you don’t change the test, you can be reasonably sure it’s testing for the intended problem.

The benefit of a specification, by contrast, is that you write it before you write the program. It clarifies the problem and the goal and forestalls misguided coding.

Specs also will be incomplete and sometimes plain wrong, because writing a flawless description of a program is as hard as writing a flawless program. So, to remain relevant, specs must change as programs change. If your specs are also your tests, not only will they be incomplete and wrong at the beginning, errors introduced by the changes will make them wrong in different ways at the end.

I still hope for executable specifications, though I think “property-based” testing is a more promising avenue than “behavior-driven.” However they come, executable specifications will arrive – at the latest, when computers learn to understand English better than humans do – and even then, specifications cannot replace tests.
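For flavor, property-based testing checks invariants over generated inputs instead of enumerating examples. A library-free sketch, with sorting standing in for a real specification:

```javascript
// The property under test: numeric sort is ordered and idempotent.
function sortNumbers(xs) {
  return [...xs].sort((a, b) => a - b);
}

// Generate random inputs and check the invariants on each, rather than
// hand-picking example cases.
for (let i = 0; i < 100; i++) {
  const xs = Array.from({ length: 20 }, () => Math.floor(Math.random() * 100));
  const once = sortNumbers(xs);
  const twice = sortNumbers(once);

  // Property 1: sorting twice changes nothing (idempotence).
  console.assert(JSON.stringify(once) === JSON.stringify(twice));

  // Property 2: the output is actually ordered.
  for (let j = 1; j < once.length; j++) {
    console.assert(once[j - 1] <= once[j]);
  }
}
```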

Do you need a Model?

It’s common for graphical programs to work something like this:

  1. Pull data from persistent storage: database, web server and so on
  2. Tuck that data away in a “Model” object
  3. Display the data
  4. Receive updates from either the user interface or the persistent storage
  5. Hope that the screen or the storage also updates, respectively
  6. Goto 4

The role of “hope” in this scenario is played by something often called “Data Bindings.” The problem with this design is that it’s an obviously bad idea: by keeping a Model like this, you’ve created a cache and cache invalidation is known to be a hard problem.

I’ll assume you’re writing a webapp, because, who isn’t? I claim you don’t need any Model client-side at all. If you have a long memory, you might say that I’m just being ornery and yearning for a time when we didn’t need a megabyte of JavaScript to display a form with one field:

Google's homepage

You’ll say you need these Models and Bindings because modern.

You’d be partly right. This does make me ornery, but let’s step back and look at what might be the number one killer of program maintainability: mutable state. A graphical interface is nothing if not a big pile of mutable state and the job of its programmers is to wrangle the ways it mutates.

In a sense, every useful program needs to grapple with this problem. Programs that lack persistent data or ways to display it are pretty useless, in the way that a tree might fall but nobody knows whether it makes noise and most people don’t care.

I advocate that you scrap the client-side Model, but I don’t claim this will make your webapp easy to write. No matter what, you’re dealing with a pile of mutable state and that’s bound to be tricky.

Why make it trickier than necessary? You really do need widgets on the screen: checkboxes that can be checked, or not, spans that could say one thing, or another. Do you need a shadow copy of those things as well?

Now, not all state duplication is called a Model and not everything called a Model is state duplication. Elm, for example, takes the view that Model is the application state but avoids state duplication by virtue of being purely functional.

Most programs aren’t written in purely functional languages, so the Model is state duplication, but users don’t see Models. They see widgets and to them, the state of the widgets is the state of the data. In a programmer’s ivory tower, you might argue that the Model is the source of truth, but your arguments matter not at all to actual people.

So don’t fight it. The path to enlightenment lies in realizing that once you’ve rendered the data, it doesn’t matter anymore. Throw it away.

Wait, that might work for your Web 2.0 site and its DHTML, but I have a Web APP. It’s modern. It has a span over here and a checkbox over there and the span says ‘frobbing’ or ‘unfrobbing’ depending on whether the checkbox is checked.

Ok, but I’m not seeing how Models and magic Bindings are simpler than this:

checkbox.onchange = () =>
  span.textContent = checkbox.checked ? 'frobbing' : 'unfrobbing'

Well, it’s not just this one span, this blink tag over here needs to appear if you’re not frobbing. With your plan, I need to add another line of code to handle that.

True, but you’d have to add something when backing it with Models and Bindings as well. It wouldn’t be less code, just different code and more indirection.

There are more ways frobbingness can change though. I don’t want to copy this logic in every place.

And well you should not, but you already have the tool to solve that problem: a function. Move the interface update logic to a function and call it anytime frobbingness changes.
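A sketch of what that can look like, with plain objects standing in for the real DOM elements so the snippet runs anywhere:

```javascript
// Stand-ins for the span and blink elements from the example.
const span = { textContent: '' };
const blink = { hidden: false };

// One function owns every widget that depends on frobbingness.
function renderFrobbing(isFrobbing) {
  span.textContent = isFrobbing ? 'frobbing' : 'unfrobbing';
  blink.hidden = isFrobbing; // the blink appears only when not frobbing
}

// Call it from the checkbox handler, the server push callback, or
// anywhere else frobbingness changes:
renderFrobbing(true);
console.log(span.textContent); // "frobbing"
```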

I see what you’re saying, but I don’t like it. The span and the blink are in logically different components. I’ll end up with a bunch of functions that update both, but don’t sensibly belong to either.

That’s a very good point.

Partly, the appeal of the client-side Model is that it seems we ought to be able to bind all the widgets to the same object, giving us a kind of fan-in-fan-out approach to updating them. In principle, you’d have just one Frobber and you’d change its isFrobbing property, then all the dependent widgets would get notified. People have been trying to implement this concept since the invention of the computer and it’s gone by many names. Recently, the buzzword is “reactive” programming.

This idea can work well in some systems: spreadsheets and style sheets, for example, are very successful reactive programming models. It works best in limited domains with declarative or purely functional languages.

In non-functional practice, the same data structures aren’t convenient for all widgets. Imagine that Frobbers are parts of Whatzits, but in one place you display all your Frobbers and in another you display the content of a selected Whatzit. Your Models probably look something like this:

frobbers:
  - id: 1
    isFrobbing: true
  - id: 2
    isFrobbing: false

selectedWhatzit:
  frobbers:
    - id: 1
      isFrobbing: true

Now which copy of Frobber number 1 is the true Frobber? Ideally, you’ll make them point to the same object so it doesn’t matter. Your Binding magic may or may not be able to make sense of that arrangement and the temptation to duplicate data is strong. Models and Bindings just moved the concrete problem of what appears in widgets to the abstract problem of what goes where in the invisible Model.
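The shared-object arrangement, sketched in JavaScript: selectedWhatzit holds a reference into the same frobbers list, so there is only one Frobber number 1 to keep true.

```javascript
const frobbers = [
  { id: 1, isFrobbing: true },
  { id: 2, isFrobbing: false },
];

const selectedWhatzit = {
  frobbers: [frobbers[0]], // a reference, not a copy
};

// A change through one view is visible through the other, because there
// is only one underlying object.
frobbers[0].isFrobbing = false;
console.log(selectedWhatzit.frobbers[0].isFrobbing); // false: both agree
```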

With or without magic Models, as the application grows, you’ll need to put things in sensible places and call them in sensible ways. That is the whole job of user interface programming.

If you keep doing this for all the data and all the widgets in your app, you’ll notice you keep doing the same kind of tedious things over and over. This thing over here updates some data. Need to show it over there. Where’s the right place to put that code? Here, there, somewhere else? Decide, repeat.

Naturally, as programmers, we think we can automate away the repetition, but some complexity is fundamental.

Schroot cheatsheet

I don’t always install software whose idea of installation instructions is curl ... | sudo, but when I do, I jail it. In this case, I’m setting up a chroot for Nodejs:

sudo apt install schroot debootstrap
sudo mkdir /srv/npm-chroot
sudo debootstrap stable /srv/npm-chroot
sudo mkdir -p /srv/npm-chroot/home/joe/projects
sudo chown -R joe:joe /srv/npm-chroot/home/joe/

In these examples, “joe” is my username.

The “s” in “schroot” stands for “securely,” but it might as well be “simple” because “schroot” handles fiddly bookkeeping tasks for setting up your environment, based on its config file.

Edit /etc/schroot/schroot.conf:

[npm]
description=npm projects
type=directory
directory=/srv/npm-chroot
root-users=joe
setup.fstab=joe-projects/fstab

Normally, schroot mounts /home from the host as /home in the chroot. I don’t want programs in jail to muck about with my home on the host, though, so I override the setup.fstab option. The default lives in /etc/schroot/default/fstab.

For my purposes, the schroot’s default configuration is a good start, so:

sudo mkdir /etc/schroot/joe-projects
sudo cp /etc/schroot/default/fstab /etc/schroot/joe-projects/

Edit /etc/schroot/joe-projects/fstab, removing the /home line and adding instead:

/home/joe/projects /home/joe/projects none rw,bind 0 0

Now enter the chroot as root:

schroot -c npm -u root

I like to install sudo so it feels like a normal Ubuntu:

# now in the schroot
apt update
apt install sudo
exit

Then log in as my normal user:

# in the host
schroot -c npm

From here I can install npm in relative isolation; this is not sufficient for isolating malicious software, but it’s a nice way to keep inconsiderate programs from pooping all over your system.

Evolution

In the beginning, there was html


<form method=post action=signup>
  <label>Username       <input name=user></label>
  <label>Password       <input name=pw type=password></label>
  <label>Password again <input name=pwv type=password></label>
  <button>Submit</button>
</form>

Then came JavaScript

<form id=signup method=post action=signup>
  <label>Username       <input name=user></label><br>
  <label>Password       <input name=pw type=password></label><br>
  <label>Password again <input name=pwv type=password></label><br>
  <button>Submit</button>
</form>

<script>
<!--
document.forms.signup.onsubmit = function () {
  if (this.pw.value != this.pwv.value) {
    alert("Passwords don't match")
    return false
  }
}
// -->
</script>

And jQuery

<script src="https://code.jquery.com/jquery-1.1.4.js"></script>

<form id="signup" method="post" action="signup">
    <label>Username       <input name="user"></label><br/>
    <label>Password       <input name="pw" type="password"></label><br/>
    <label>Password again <input name="pwv" type="password"></label><br/>
    <button>Submit</button>
</form>

<script>
// <![CDATA[
$('#signup').submit(function () {
  if (this.pw.value != this.pwv.value) {
    alert("Passwords don't match")
    return false
  }
})
// ]]>
</script>

And json

<script src="https://code.jquery.com/jquery-1.5.js"></script>

<form id="signup">
    <label>Username       <input name="user"></label><br/>
    <label>Password       <input name="pw" type="password"></label><br/>
    <label>Password again <input name="pwv" type="password"></label><br/>
    <button>Submit</button>
</form>

<script>
/*jslint browser: true */
/*global $, alert, window */

$("#signup").submit(function (event) {
    "use strict";

    var form = $(event.target);
    var formData = {};

    form.find("input").each(function (ignore, el) {
        var input = $(el);
        var name = input.attr("name");
        formData[name] = input.val();
    });

    if (formData.pw !== formData.pwv) {
        alert("Passwords don't match");
    } else {
        $.ajax("signup", {
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify(formData)
        }).done(function () {
            window.location = "signup";
        }).fail(function () {
            alert("Error");
        });
    }

    return false;
});
</script>

And Ecma 6

<form id="signup">
    <label>Username       <input name="user"></label><br>
    <label>Password       <input name="pw" type="password"></label><br>
    <label>Password again <input name="pwv" type="password"></label><br>
    <button>Submit</button>
</form>

<script>
/*jshint esversion: 6 */
document.querySelector("#signup").addEventListener("submit", submitEvent => {
    "use strict";
    
    const form = submitEvent.target;
    const formData = {};

    submitEvent.preventDefault();

    for (const input of form.querySelectorAll("input")) {
        const name = input.getAttribute("name");
        formData[name] = input.value;
    }

    if (formData.pw !== formData.pwv) {
        alert("Passwords don't match");
    } else {
        const xhr = new XMLHttpRequest();
        xhr.addEventListener("load", () => {
            if (xhr.status === 200) {
                window.location = "signup";
            } else {
                alert("Error");
            }
        });
        xhr.addEventListener('error', _ => {
            alert('Error');
        });
        xhr.open("POST", "signup");
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.send(JSON.stringify(formData));
    }
});
</script>

Progress?

Linting JavaScript considered harmful

I am in a minority of programmers so small that I might be the only member. I can’t find anyone on the Internet advocating my position: you should not lint your JavaScript.

I don’t mean you shouldn’t use this or that flavor of lint. I mean that you shouldn’t use any JavaScript linter. At all.

First, the arguments for linting…

Lint catches bugs?

How many bugs does it really catch? Only a few. Rules against unused variables can be useful when you’ve renamed something in one place and forgotten another place, for example. Trouble is, I’ve only noticed lint catching bugs in code that wasn’t complete or tested anyway, and therefore already broken by definition.

The major category of bugs caught by lint can be caught instead by a simple statement:

'use strict';

JavaScript strict mode really does find bugs, almost always results of accidentally overwriting globals. It’s tragic that we need lint to tell us about missing strict mode declarations when browsers could warn us.
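A minimal illustration of the class of bug strict mode catches; the function and its typo are contrived:

```javascript
// Assigning to a name that was never declared: in sloppy mode the typo
// silently creates a global, but in strict mode it throws instead.
function incrementStrict() {
  'use strict';
  countre = 1; // typo for "counter": ReferenceError in strict mode
}

let caught = null;
try {
  incrementStrict();
} catch (e) {
  caught = e;
}
console.log(caught instanceof ReferenceError); // true
```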

But is it worth bringing in lint for the few bugs it can find?

I think not; the better approach is simple: learn to use strict reflexively. Then spend the effort you were going to use typing semicolons on testing instead.

Lint enforces consistency?

So what? Consistency for its own sake is a pursuit of feeble minds.

In writing, particularly writing computer programs, consistency is a proxy for something much more important: readability. And readability is not something that computers yet understand well.

It is perfectly possible, common, in fact, to write incomprehensible code that passes a linter.

But, you say, “lint rules can help a little, so we should use them. We just need to pick a ruleset.”

Lint reduces bikeshedding?

In the beginning, lint was an inflexible representation of one man’s preferences. Next, everyone bought into the idea that they should lint their JavaScript, adopted Crockford Style, and then argued endlessly about which rules were important.

We went from the ultra-rigid linter, to the ultra-configurable, to the ultra-pluggable. At each step, we introduced more and more time-consuming opportunities to argue about picayune issues.

To paraphrase Tim Harford:

Lint is such a tempting distraction because it feels like work, but it isn’t. When you’re arguing about lint rules or fixing lint errors, you’re editing code, but you’re not getting things done.

So should I never lint?

All that said, I recommend lint for one purpose: it can be a useful way to learn about the idioms and pitfalls of a language. Run your code through a linter and learn why it complains.

This use of lint as a teaching tool is actively discouraged by the way most people advocate using it. Imagine you set up a hook that says “all code must pass lint before commit.” Those who actually could benefit from the lint suggestions are blocked by them, thus encouraged to “fix” the “problem” as quickly as possible: obey the tool and never learn why.