Does anyone know how to configure xkb?

I usually prefer to leave my keyboard layout and shortcuts in their default configuration. Partly this makes switching between machines easier, and partly it helps me learn what the defaults are so I can avoid breaking them in programs I write.

Some customizations, however, are just too valuable to forgo: super+arrow to move windows between monitors and shift+space to type an underscore. KDE has global shortcut configuration for the first, but what about the second?

I’ve used xmodmap before, but had problems where it wouldn’t stick throughout a session, apparently forgetting my configuration from time to time. So I end up using dumb tricks like a shell script with an infinite loop. Nowadays, though, the Internet says that xkb is the new and shinier replacement.
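
For the record, the dumb trick amounts to something like this (a sketch; I'm assuming the custom mappings live in ~/.Xmodmap):

while true; do
    xmodmap ~/.Xmodmap
    sleep 60
done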

So, xkb it is. The best documentation I can find is An Unreliable Guide to Xkb Configuration. I learn that configuration lives in /usr/share/X11/xkb. There’s also something about /etc/X11/xorg.conf.d. Bad start… where does user configuration go? I have no idea. I think the concept is probably that I should define a custom layout for me specifically, maybe naming it something like kingdom_of_joe, then pick that as my keyboard layout somewhere else in my window manager or login script. Screw it. I’ll just edit the files in /usr/share, and if some barbarian who maps shift+space to backspace shares my computer, we’ll have to go to war.

Now, to look at the configuration. There are 275 config files. It’s slightly less than obvious where I should start.

Lampson attributes the aphorism that started our exploration (all problems in computer science can be solved by another level of indirection) to David Wheeler, the inventor of the subroutine. Significantly, Wheeler completed his quote with another phrase: “But that usually will create another problem.”

From Beautiful Code

Back to xkb. What do these six directories represent?

The first layer of indirection translates a scancode (some bytes the keyboard firmware generates) to a mysterious all-caps alphanumeric identifier that looks like FROB or AE01. This symbol is supposedly a mnemonic for the key’s physical position, except when it isn’t. The mapping happens via files in the “keycodes” directory and I think I can ignore it.

I can also ignore the “geometry” directory; it apparently contains specs for how to draw keyboards.

Thus, I eliminate 50 config files from consideration. Only 225 to go.

The “rules” directory seems like a promising place to look. It is hopeless. The files are written in an almost-but-not-quite scripting language that refers to other parts of the configuration. Maybe the docs will enlighten me.

The main advantage of rules over formerly used keymaps is a possibility to simply parameterize (once) fixed patterns of configurations… A sample rules file looks like this:

! model = keycodes
 macintosh_old = macintosh
 ...
 * = xorg

! model = symbols
 hp = +inet(%m)
 microsoftpro = +inet(%m)
 geniuscomfy = +inet(%m)

! model layout[1] = symbols
 macintosh us = macintosh/us%(v[1])
 * * = pc/pc(%m)+pc/%l[1]%(v[1])

! model layout[2] = symbols
 macintosh us = +macintosh/us[2]%(v[2]):2
 * * = +pc/%l[2]%(v[2]):2

! option = types
 caps:internal = +caps(internal)
 caps:internal_nocancel = +caps(internal_nocancel)

I think the writer has a different idea of “simple” than I do. Having given up on rules files, I move on to “compat”, “symbols” and “types”.

The docs make it sound like these configurations all do just about the same thing:

  • Types “…describe how the produced key is changed by active modifiers…”
  • Compat “…defines internal behaviour of modifiers…”
  • Symbols “…defines what values (=symbols) are assigned to what keycodes [depending] on a key type and on modifiers state…”

I cross my fingers and hope I won’t need compat, so I look at types. The files are full of incantations like this:

type "TWO_LEVEL" {
    modifiers = Shift;
    map[Shift] = Level2;
    level_name[Level1] = "Base";
    level_name[Level2] = "Shift";
};

It appears that xkb abstracts the concept of a modifier key to something called a “level.” Level one means no modifiers, level two is with shift pressed, level three is alt or ctrl or super or something, and so on. I guess, maybe, if I wanted space to behave as shift, I might do that in the types (or compat?) folder, but since those files don’t appear to mention specific keys like space, they probably are not what I want today.
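
For what it’s worth, the higher levels show up in the types files too; the four-level type looks roughly like this (reconstructed from memory of the types directory; LevelThree is typically bound to AltGr):

type "FOUR_LEVEL" {
    modifiers = Shift+LevelThree;
    map[None] = Level1;
    map[Shift] = Level2;
    map[LevelThree] = Level3;
    map[Shift+LevelThree] = Level4;
    level_name[Level1] = "Base";
    level_name[Level2] = "Shift";
    level_name[Level3] = "Alt Base";
    level_name[Level4] = "Shift Alt";
};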

On to symbols…

There are a mere 183 config files in symbols. They have names that look mostly like country codes, but some are a little odd. I’ve never heard of a country called “capslock”, for example. How do I know which symbols file applies to me? I have no clue; guessing it is.

I discovered in keycodes that the four-letter word for space in xkb-language is SPCE, so I break out grep to find where it appears in symbols… it is all over, but I sense a pattern. Also, I notice an oddly-named country called “pc”.

$ grep SPCE symbols/pc
key <SPCE> { [ space ] };

In countries other than the republic of pc, it looks a bit different:

$ grep SPCE symbols/fr 
    key <SPCE> { [ space, nobreakspace, underscore, U202F ] };
    // ␣ (no-break space) _ (narrow no-break space)
    ...

The French seem to get four mappings for space. Combined with my knowledge of levels, I finally put this together. It seems that the symbols files are tables of what character to produce, given a key and a particular set of modifiers: if the modifiers put you at level two (shift), you use column two, and so forth. I think.

So, I edit the pc symbols:

$ sudo vim /usr/share/X11/xkb/symbols/pc 
...
    key <SPCE> { [ space, underscore ] };

I log off and back on again, and I can finally type shift+space=__wtf__ in comfort.

Inheritance: is-a has-a

Lots of things we learn in school turn out to be naive simplifications of how the real world works, and sometimes we later learn, to our chagrin, that the way we thought about the world really isn’t true at all. Take that familiar organization of life into a giant tree: kingdom, phylum, genus, species. It seems neat enough, but in the grown-up world, people can spend lifetimes arguing about where things fit in this classification.

A related simplification that I learned in school was the rule of when to use inheritance versus composition. It went like so: in this assignment, you simulate a world full of monsters. Zombie is a type of monster, so zombie should inherit from monster. On the other hand, vampires have a coffin, so vampire should have a field that refers to coffin. Now make a UML diagram.

This makes sense as far as it goes, but there’s a major problem: it’s not usually a useful way to think about inheritance when building real programs.

The is-a versus has-a perspective makes the most sense when thinking about type systems. If a function takes an argument of type monster, it can also take any subtype of monster, whether vampire or zombie. The trouble starts when you use the same reasoning to design a program, and that brings us back to our taxonomy problem.
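
A minimal Java sketch of that type-system view (all names invented for illustration):

abstract class Monster { abstract void bite(Person victim); }
class Vampire extends Monster { void bite(Person victim) { /* drain */ } }
class Zombie extends Monster { void bite(Person victim) { /* munch */ } }
class Person {}

// Accepts any subtype of Monster: vampire, zombie, whatever comes later.
static void unleash(Monster monster, Person victim) {
  monster.bite(victim);
}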

You start designing a system by figuring out what your different things are: zombies, vampires, ghosts, coffins and so on. It’s easy enough: three types of monsters, each a class that inherits from monster, and coffin, its own thing. Naturally, you also need people; people need places to live and ghosts need places to haunt, so you have houses. But wait, people aren’t monsters, but they have a lot in common, so they need a base class, say living things. But that’s not quite right; the monsters aren’t technically alive, so maybe they are dead things. Also, houses and coffins seem to be of a non-living type, so that’s another base class. Should it be dead things? If the coffin is made of wood, it used to be alive, so maybe that makes sense.

Most real-world characteristics of things are completely irrelevant to most programs. In our simulation, perhaps the only thing ghosts do is haunt, whereas vampires and zombies bite people but don’t haunt. It’s confusing and wasteful to worry about how they are all types of monsters, who are types of dead things and so forth.

Now, occasionally, it does make sense to think of inheritance as an is-a relationship. The cf0x10 parse tree, for example, is a pile of subclasses. When this type of design makes sense, however, it will be obvious; no need to shoehorn everything into it.

What about other metaphors? It’s common, for example, to say that instances of classes are receivers while method calls are messages to that receiver. That’s a useful perspective for language design, and it’s useful to have a name for that bit before the dot – receiver.message() – but, again, it’s not so helpful a metaphor when designing a program.

In real programs, metaphors like these just tend to cause trouble. Software isn’t made of physical things. A class, in reality, is just a way to group related bits of a program. I prefer not to start by creating any design for a class hierarchy; instead I write code that does the things I need it to do. A class hierarchy, if any, usually emerges from unifying the bits that make sense to put together.

Exception Rules IV: The Voyage Stack

Something I wrote seven years ago and am publishing now to see if I’ve learned anything since then.

This is part of a series where I review common wisdom about Java error handling. The series is in no particular order, but the first installment explains my categorization method.

Clean up with finally
Truth: high
Importance: high

Joshua Bloch explains the reasoning behind this in Effective Java as “strive for failure atomicity.” Whatever happens, clean up after yourself. Checked exceptions give us one of their rare benefits by just maybe reminding us to write finally blocks.

One way not to write a finally block, however, is like this:

Connection connection = null;
try {
	connection = DriverManager.getConnection("stooges");
	// Snip other database code
} catch (SQLException e) {
	throw new RuntimeException(e);
} finally {
	try {
		// Bad. Do not do this.
		connection.close();
	} catch (SQLException e) {
		throw new RuntimeException(e);
	}
}

If opening the connection throws SQLException, closing the still-null connection throws NullPointerException and obscures the original cause. Usually, people suggest wrapping a conditional around the connection.close() call to check for null, but that gets ugly fast and is easy to forget. Instead, follow the next rule.

Make try blocks as small as possible
Truth: high
Importance: medium

Consider this code that obscures which file could not be opened:

FileReader curly = null;
FileReader shemp = null;
try {
	// Smelly. Don't do this.
	curly = new FileReader("Curly");
	shemp = new FileReader("Shemp");
} catch (FileNotFoundException e) {
	throw new RuntimeException("Whoopwoopwoopwoop", e);
} finally {
	// The close method translates IOException
	// to RuntimeException
	if (curly != null) {
		close(curly);
	}
	if (shemp != null) {
		close(shemp);
	}
}

We programmers wrap multiple lines in the same handler because we are lazy, but like the hare napping during the race, that laziness hurts us in the end; it forces us to reason much more carefully about the application state during cleanup.

Instead, handle the exception as close to its cause as possible:

FileReader curly;
FileReader shemp;
try {
	curly = new FileReader("Curly");
} catch (FileNotFoundException e) {
	throw new RuntimeException("Missing: Curly", e);
}
try {
	shemp = new FileReader("Shemp");
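	// Snip code that uses both files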
} catch (FileNotFoundException e) {
	throw new RuntimeException("Missing: Shemp", e);
} finally {
	close(curly);
}
close(shemp);

The benefits of the shrunken try block may not be obvious from this tiny example, but consider how you can now factor out the file opening blocks to a method that simply throws an unchecked exception for missing files. Once done, this code simplifies down to:

curly = open("Curly");
try {
	shemp = open("Shemp");
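	// Snip code that uses both files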
} finally {
	close(curly);
}
close(shemp);
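
The open helper isn’t shown above; a minimal sketch of it, along with the close that translates IOException, might look like:

static FileReader open(String name) {
	try {
		return new FileReader(name);
	} catch (FileNotFoundException e) {
		throw new RuntimeException("Missing: " + name, e);
	}
}

static void close(Closeable closeable) {
	try {
		closeable.close();
	} catch (IOException e) {
		throw new RuntimeException(e);
	}
}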

Note that all these examples lose the original exception when the close method throws an exception. Most applications can afford that minimal risk, but if you believe it likely that your cleanup code will throw further exceptions, a log and re-throw might be appropriate.

Do not rely on getCause()
Truth: high
Importance: high

Peeking at an exception’s cause is equivalent to using something’s privates or parsing the string representation to find a field value. It makes code brittle; moreover, you can only test it by forcing errors in your collaborators.
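
A sketch of the kind of brittleness I mean (all the names here are hypothetical):

try {
	library.fetch();
} catch (SomeLibraryException e) {
	// Brittle: depends on what the library happens to wrap today
	if (e.getCause() instanceof IOException) {
		scheduleRetry();
	}
	throw e;
}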

If you find that some library forces you to do this, consider avoiding that function completely; if you still have no way around it, adorn your code liberally with “XXX” comments, and test as best you can.

Do not catch top-level exceptions
Truth: low
Importance: low

Top-level exception classes like Exception live close to the root of the exception hierarchy. The argument against catching them says that the specific lower type, such as FileNotFound, traditionally conveys the information necessary for handling the exception; by catching the top-level exception, your handler deals with an unknown error, about which it probably knows little.

Actually, the advice should say “do not try to recover from top-level exceptions.” Catching top-level exceptions is not fundamentally wrong or even bad, but because you do not really know how severe the exception was, you should usually do no more than report and organize a crash.
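
At the outermost layer of an application, something like this is fine (a sketch; runApplication and log stand in for whatever your program uses):

try {
	runApplication();
} catch (Exception e) {
	// Unknown severity: report it and get out
	log.error("Unexpected failure", e);
	System.exit(1);
}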

Exception Rules III: The Search for Cause

Something I wrote seven years ago. Did I learn anything in that time?

This is part of a series where I review common wisdom about Java error handling. The series is in no particular order, but the first installment explains my categorization method.

Do not use empty catch blocks
Truth: high
Importance: high

This is the most obvious of the exception handling rules that address how to avoid losing error information. The usual example of when you might justifiably ignore an exception goes like so:

static void closeQuietly(Closeable closeable) {
  // Smelly. Do not do this.
  try {
    closeable.close();
  } catch (IOException e) {
    // I've already done what I needed, so I don't care
  }
}

Yes, the comment makes this better than the completely empty catch block, but that is like saying that heroin is fine because you wear long sleeves.

Very rarely, suppressing an exception actually is the right thing to do, but never unless you absolutely know why it happened. Do you know when close() throws IOException? I thought not.

Do not catch and return null
Truth: medium
Importance: medium

Catching and returning null is a minor variation on the exception-suppression theme. Consider the Stooges class, which contains this method:

public String poke(String stooge)
              throws StoogeNotFoundException {
  if (stooges.contains(stooge)) {
    return "Woopwoopwoopwoop";
  } else {
    throw new StoogeNotFoundException("Wise guy, eh");
  }
}

Suppose you want to write another method that checks a Stooge’s reaction to a poke, but Stooges gives you no isStooge method. Instead, it forces you to write this:

static String getStoogeReaction(Stooges stooges, String name) {
  try {
    return stooges.poke(name);
  } catch (StoogeNotFoundException e) {
    return null;
  }
}

If you have to use an API that uses exceptions for flow control, something like this might be your best option, but never write an API that makes your clients do it.
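
The fix belongs in the API; Stooges should expose the query directly, something like:

public boolean isStooge(String name) {
  // Clients can now ask without resorting to exceptions
  return stooges.contains(name);
}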

Log only once
Truth: medium
Importance: low

You can also state this rule as “Log or re-throw, not both.” Redundant logging is certainly impolite to those maintaining your application, but hardly the worst you could do. You might log and re-throw for legitimate reasons:

  • Your framework swallows exceptions you throw at it
  • Your application logs in a specific location, different from your container
  • This is part of a larger application, and you worry that your clients might ignore the exception
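
In those cases, a log-and-rethrow looks something like this (a sketch; the names are hypothetical and the logger is whatever your application uses):

try {
  transferFunds(from, to, amount);
} catch (TransferException e) {
  // The framework swallows what we throw, so record it here too
  log.error("Transfer failed", e);
  throw e;
}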

Do not log unless you need to, but if in doubt, log it.

Always keep the cause when chaining exceptions
Truth: high
Importance: high

Only the very naive intentionally drop the cause, but it is easy to do accidentally, and a very easy way to lose the information about what went wrong.
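
The accidental version looks something like this (using the connection example from earlier):

try {
  connection = DriverManager.getConnection("stooges");
} catch (SQLException e) {
  // Wrong: the new exception carries no cause
  throw new RuntimeException("Could not connect");
  // Right: throw new RuntimeException("Could not connect", e);
}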

In 2016, I still think exception chaining is a very important feature, and I’ve been surprised by how many mainstream languages lack it.

Exception Rules II: The Wrath of Checked

Something I wrote seven years ago; I’m publishing now to see if I’ve learned anything.

This is part of a series where I review common wisdom about Java error handling. The series is in no particular order, but the first installment explains my categorization method.

Use checked exceptions when the client might recover
Truth: low
Importance: medium

The checked exception experiment tested a compelling ideal. Stated in Sun’s tutorial:

Any Exception that can be thrown by a method is part of the method’s public programming interface. Those who call a method must know about the exceptions that a method can throw so that they can decide what to do about them. These exceptions are as much a part of that method’s programming interface as its parameters and return value.

The more Java I write, the more convincing I find Bruce Eckel’s argument that the experiment proved its hypothesis false.

The tutorial writers tell us to use checked exceptions whenever our clients can do something useful to recover. They fail to mention the abstraction-destroying effects of checked exceptions.

To be fair, checked exceptions only destroy abstractions the way alcohol destroys families; if daddy stopped using so much we would be fine. But programmers are human, and humans are lazy. Especially programmers.

Laziness makes programmers suppress errors, but programmers also hide exceptions for a good reason: abstraction. In the typical example, when abstracting away the database connection, you want to avoid subjecting your client to SQLException. If SQLException were unchecked, abstractions that neglected to handle it would still leak on error, but their programmers would not have to add that leakiness to their signatures.
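
Translating at the boundary looks something like this (a sketch; queryStooges stands in for the real database code):

public List<String> findStooges() {
  try {
    return queryStooges();
  } catch (SQLException e) {
    // SQLException stays out of the public signature
    throw new RuntimeException(e);
  }
}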

Yes, checking the error code and determining whether to retry after waiting, email an administrator, or call Ghostbusters is ideal, but only a small fraction of programs actually need that depth of error tolerance. For the majority, wrapping in RuntimeException is often best, but Sun’s tutorial will make you feel guilty about that:

Do not throw a RuntimeException or create a subclass of RuntimeException simply because you don’t want to be bothered with specifying the exceptions your methods can throw.

Ignore it. This is the sort of thinking that encourages silly specifications like FileNotFound, which indicates that the file does not exist. Or is read-only. Or locked. Or a directory. Or for some other reason inaccessible.

All languages I know other than Java work perfectly well without checked exceptions, implying that you can legitimately throw only unchecked exceptions and stop wasting brain cycles on whether to make an exception checked. If, however, you still want to use checked exceptions, follow this simple guideline:

Use checked exceptions only when client code could not have anticipated the error.

FileInputStream, for example, makes itself more irritating by ignoring this advice. Its constructors should not throw FileNotFound because client code should have checked for the file’s existence before trying to open it. [Retraction: I don’t recommend check-then-act style so much any more. Better to ask forgiveness than get permission, thanks again, Python.]

I consider this guideline true even for multi-threaded use because errors of improper synchronization still land in the bucket of exceptions the client should have anticipated.

Exception Rules

I drafted this long ago, then quit my job where I was writing Java and never looked back… till now, since I’m writing for Android. I thought it would be fun to see if I’ve learned anything in the seven years since I wrote this.

The not-very-secret secret to simplifying code is really very simple: just remove error handling. One hobbled dialect of Java burdened with bulky XML syntax built its success on removing the constraints of compile-time type checking and exceptions [I meant Spring].

Fortunately, those who handle errors formed an elite group of code writers. They stand between us mere mortals and the chasm of infinite code failure. I know this because Bjarne Stroustrup appeared to me in a dream and directed me to where I found the Silicon tablets that contained this group’s Java wisdom.

That is, at least, what I wish happened. Actually, no one really knows the best way to handle exceptions. I suspect this somehow relates to them being exceptional; guidelines scattered around the internet are usually incomplete and often contradictory. To make things worse, my own ideas about error handling best practices vary with the situation.

Nevertheless, I add my own noise to the rest about how you should use Java’s exceptions. In this series, I summarize and categorize many of the Java error handling best practices I have heard.

I categorize each rule along arbitrary axes of “Truth” and “Importance,” which roughly match how religiously you should follow the guideline and how severe the consequences are if you do not.

Truth indicates how often you should follow the rule.

  • Low truth: Ignore the rule
  • Medium truth: Follow the rule sometimes
  • High truth: Always follow the rule

Importance indicates what happens when you go against the truth rating: that is, the damage code suffers by either following a low-truth rule or breaking a high-truth rule.

  • Low importance: No serious risks
  • Medium importance: Sometimes very dangerous
  • High importance: Always risks horrible consequences

I begin with a simple one:

Do not specify “throws Exception”
Truth: high
Importance: low

Throwing “Exception” pesters client coders without providing any useful information. It ranks high on truth because only sloppy laziness and stupidity cause people to violate it. Still, I rank it as low importance because annoyance is the worst consequence of violation, unless it is coupled with breaking another rule, like using an empty catch block to suppress the Exception.

Do not use exceptions for flow control
Truth: high
Importance: medium

Although this is the rare rule where everyone agrees, some people still break it.

Author’s note, seven years later: the linked api throws an exception to indicate login failure. I still consider that a poor design, but at the time I didn’t know about Python’s StopIteration, which actually makes sense.

Despite the universal agreement on this principle, most writers fail to give any better reason than blustering about the expense of generating stack traces. While not really premature optimization, that argument smells like it because the two are barely related. Improving performance by avoiding exception-based flow control is like improving your sex life by brushing your teeth.

The real danger of exception-based flow control lies in exception suppression. For example, the poke method lets you jab a stooge in the eye.

public String poke(String stooge)
              throws StoogeNotFoundException {
  if (stooges.contains(stooge)) {
    return "Woopwoopwoopwoop";
  } else {
    throw new StoogeNotFoundException("Wise guy, eh");
  }
}

You can write a hideous, dangerous, unforgivable isStooge method like so.

public boolean isStooge(String name) {
  // Evil. Never do this.
  try {
    poke(name);
    return true;
  } catch (Exception e) {
    return false;
  }
}

This isStooge hides and forgets any exception poke throws, not just StoogeNotFound. On average, however, exception-based flow control is not this dangerous, though still unbearably ugly.

If the snippet specified “catch (StoogeNotFoundException),” the try-catch would just be a structured replacement for goto. When done correctly, using exceptions for flow control is merely poor style used by those who long for the good old days of goto. Stay away from it for the same reason you stay away from top hats and morning coats.
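
For comparison, the merely-ugly version:

public boolean isStooge(String name) {
  // Poor style, but at least only StoogeNotFound gets converted
  try {
    poke(name);
    return true;
  } catch (StoogeNotFoundException e) {
    return false;
  }
}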

A non-tragedy

After recent record-breaking denial-of-service attacks, Bruce Schneier wants regulation, to “Save the Internet from the Internet of Things”:

The market can’t fix this because neither the buyer nor the seller cares… the original buyers only cared about price and features… insecurity is what economists call an externality: it’s an effect of the purchasing decision that affects other people.

Any casual student of economics will recognize “externality” in this context as an allusion to the more sensationally-named “tragedy of the commons”, first proposed by William Lloyd, in his paper, “Save the Street from the Horses [paraphrased].” Lloyd explained that if Alice, Bob and Charlie share a common resource, like a street, Charlie might buy more horses than he should. Charlie wants to show off by having the carriage with the most horsepower and, since he doesn’t have to clean up the poop, Charlie buys horses without consideration of the pollution they cause. Bob, meanwhile, bears the cost, as he soils many coats on horse manure, tossed chivalrously below Alice’s feet.

Bruce Schneier argues that the economics of devices like Internet-enabled Pooper Scoopers (iScoop app lets you play back in slow-motion!) inevitably must destroy the Internet the way Charlie’s horses wrecked the street.

Problem is, the assumptions are wrong. Bruce claims buyers don’t care whether their devices are secure, but most people I know do care. That is anecdotal, but consider also that antivirus companies make lots of money; clearly, people are willing to pay for computer security.

Second, the argument is imprecise. If we are to say that there is a negative externality, we must identify what harms whom. In a follow-up piece and his testimony (pdf) to Congress, Bruce reiterates but adds little detail:

The owners of those devices don’t care. They wanted a webcam —­ or thermostat, or refrigerator ­— with nice features at a good price. Even after they were recruited into this botnet, they still work fine ­— you can’t even tell they were used in the attack… the insecurity primarily affects other people.

What other people, and how much? This presumably implies that the targets of the attacks – Krebs and Dyn – suffer the externalities while the owners of the subverted devices don’t suffer at all. That assertion should be obviously false. If Aunt Millie’s cat cam participates in crashing Friendface for a day, Aunt Millie does suffer: she can’t post her funny cat videos.

Device owners, then, certainly do bear some cost of their device ownership. Now, there can still be negative externalities – Charlie, after all, bears some of the cost of owning his horses, as he is no more immune from stepping in dung than the next guy.

That’s just how real markets are: messy. Many – perhaps most – transactions cause externalities. Sometimes the externalities are significant enough to warrant correction, by measures like Pigovian taxes. Such corrections often cause other problems.

Imprecise analysis leads to solutions that do more harm than good. Schneier should know this; he frequently argues against over-broad legislation, such as the Digital Millennium Copyright Act. He should appreciate the need for care as described by Tim Harford, the Undercover Economist, on keyhole economics:

Keyhole surgery techniques allow surgeons to operate without making large incisions, minimizing the risk of complications and side effects. Economists often advocate a similar strategy when trying to fix a policy problem: target the problem as closely as possible…

Without an obvious way to measure security, how can we calibrate a tax on insecurity? The significant market failure, if any, is that consumers can’t measure how secure their devices are: imperfect information, not externalities.

Qwerty War released

It’s actually been up for a while now at http://www.qwertywar.com/.

As is typical of software, final cleanup took me longer than expected, a week perhaps. One problem still irks me: it’s too slow. While it runs reasonably well on my desktop machines, not so much on my high-resolution devices. And this after already spending a day or two profiling and optimizing.

Poor performance on the tablets and phones doesn’t bother me too much, since it’s a typing game, and who would play with a touchscreen anyway? My high-dpi Lenovo laptop, however, also does badly. I suspect the problem is partly thanks to poor Nvidia support on Linux.

Still, the performance problems surprised me because it just doesn’t seem like there really is all that much happening on the screen. The game draws a few shapes and renders a handful of different 32×32 sprites in maybe a few dozen different positions.

Sometimes, though, you just need to call something good enough and go play Qwerty War.

Solving interference for the Microsoft Sculpt

The Microsoft Sculpt is currently my favorite keyboard, but it suffers a nearly fatal flaw: sometimes keypresses fail to register. This happens apparently at random, and often enough to make the keyboard unusable. My best guess at the cause is wireless interference, but the keyboard does not come in a wired version.

It stands to reason that the receiver might just need to get closer to the transmitter, so I might be able to solve the problem by making the keyboard semi-wired. Effectively, I would extend the receiver and tape it to the back of the keyboard:

Sculpt keyboard with receiver taped to back

This works flawlessly.

Better yet, taping the receiver to the keyboard is unnecessary: simply plugging the receiver into a USB extension cable is enough. I surmise that the extension cable acts as an antenna; whatever the reason, keystrokes now register without fail and the receiver can sit just as far from the keyboard as before.

Programming Android: first impressions

I suspended work on Comefrom0x10 for a little while to start my first attempt at a serious Android app. It is tentatively called “Text Collector” and essentially just makes a pdf of your text messages.

So, how is Android as a platform?

Well, first, it’s Java. This means that half my code is type declarations, the other half is keywords; we all saw that coming, move along…

The Android core api is unpleasant to use, but it could have been worse. Its main problem is severe under-documentation, apparently thanks to a bad case of “source code is the documentation” syndrome.

Though technically Java, for better or worse, it feels like an api designed by people who would rather write C. Integer constants and bitmasks are everywhere; there is even the occasional “out” parameter. On the bright side, there is a refreshing lack of abstract factory singletons. There is no xml standing in for “dependency injection” code.

There is plenty of xml for defining layouts, though. Layout xml is attribute-heavy, which means less verbose than it could have been, but also that you can’t put comments in many places where they ought to go:

<Frobnicator
  android:foo="bar" <!-- could use a comment here, but that's illegal -->
...

Thankfully, layouts and resource definitions appear to be the only places you have to use xml. In principle, you could define layouts entirely in Java, but frying pan, meet fire.

As far as I can tell, the entire Java standard library is available, but I’ve used only a few small parts of it. There are bizarro-world Android replacements of some parts. Methods that expect uris take android.net.Uri instead of java.net.URI. Bundle of Parcelable looks like it probably could just have been Map<String,Serializable>. I haven’t spent enough time with Android code to judge whether there are good reasons for this seeming duplication.

Like many apis, the core library is a mix of the surprisingly easy juxtaposed with the surprisingly difficult. There are some nice included layouts and widgets, like a date picker, but try hooking up a date picker to a TextView with inputType=date, and you are in for nasty surprises. Writing and displaying pdf is almost trivial, but if you want zoom and two-dimensional scrolling while you display it, expect pain.