Secure email monitor

I segment my online accounts into two groups: valuable accounts and everything else. “Valuable” can vary a little by personal priorities, but for most of us, our most valuable accounts will be those with direct access to cash: banking and investments.

By transitivity, any account that can allow access to those accounts is also in the valuable and high-risk group. These include financial aggregators and any email address used for account recovery.

I would like to keep the only computer with access to the valuable accounts locked away in a dungeon guarded by trolls, but highly restricted access also makes it difficult to monitor activity. I want to be able to see notifications on all my devices while not actually allowing them access.

Enter Google Apps Script.

Set up the script

You’ll need an everyday – normal – email address, one you can access from anywhere. Then, you’ll need another that you access only from secured devices, the restricted – high-risk – account.

Make a restricted Gmail account for yourself.

  • Don’t use an existing email address for recovery
  • Do use a password manager and make sure you have a backup
  • Do set up two-factor auth

Go to script.google.com and create a new, untitled Google Apps Script project. Replace its contents with the following:


function digestAndArchive() {
  var monitor = "" // your normal address goes here

  // Docs say that if you have many threads, for some unspecified value of "many", you
  // should use the paginated version of getInboxThreads, as the simple version will fail.
  // It turns out that means "fail silently", returning at most some arbitrary number of
  // threads, and there is no obvious way to know there are more. I suspect the "correct"
  // way is to keep calling the paginated version with an increasing start index until
  // it returns nothing, but that seems ridiculous. For practical purposes, this function
  // returns more threads than you are likely to receive in a day.
  // So, upon first installing this script on a long-ignored inbox, it might need to run
  // several times before it clears out the inbox, but that shouldn't hurt anyone.
  var threads = GmailApp.getInboxThreads()
  var bySender = {}
  for (var i = 0; i < threads.length; i++) {
    // I'm assuming this is a receive-only email address, so all messages in a thread
    // presumably have the same sender (or similar). Organizing by sender isn't
    // strictly necessary, but I think the final digest is more understandable.
    // The docs don't say whether the first message is the most recent or not, but that
    // generally should not matter.
    var message = threads[i].getMessages()[0]
    var sender = message.getFrom()
    bySender[sender] = bySender[sender] || []
    bySender[sender].push(message.getSubject())
  }
  var body = ''
  var indent = '\n  - '
  for (var sender in bySender) {
    body += sender + indent + bySender[sender].join(indent) + '\n'
  }
  // Experimentally, it seems that GmailApp.sendEmail encodes the body as text/plain
  // so it should be safe to drop any old string in it. Would be nice to find
  // documentation to that effect. It munges astral plane characters, but for my
  // purposes here, I don't care.
  GmailApp.sendEmail(monitor, "Daily digest for " + new Date(), body)
  for (var i = 0; i < threads.length; i++) {
    // GmailApp.moveThreadsToArchive() can move multiple threads at once, but throws an
    // error and moves nothing for more than 100 threads. That's a pretty low limit when
    // you first run this on an inbox you haven't been regularly cleaning, so move one
    // by one.
    GmailApp.moveThreadToArchive(threads[i])
  }
}

Remember to change the monitor email address in the script above to your normal, everyday email. Save as, for example, “daily-digest”.

To run by hand, go to the Run menu or just click the play button in the toolbar. At your normal address, you should see an email like this:

Subject: Daily digest for Fri Dec 02 2016 08:11:39 GMT-0500 (EST)
From: [your restricted address]

Andy from Google <>
– Josiah, welcome to your new Google Account

Run on a timer

To schedule the script, open Resources (menu) -> Current project’s triggers, then click the link to add a trigger.

Set it up to run daily on a timer. The time option is in your local time, as detected by Google.

[Screenshot: Google Apps Script time-driven trigger example]

When you save the trigger, it will prompt you to authorize the trigger to run. Click “Review permissions”, which opens a new popup window, then allow.

Run the script by hand again after setting up the trigger. It should prompt for another required permission.
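If you prefer, the trigger can also be created from code rather than the menu. A minimal sketch using the documented ScriptApp trigger builder (run it once by hand; the 6am hour is an arbitrary choice, and ScriptApp only exists inside the Apps Script environment):

```javascript
// Sketch: create the daily trigger programmatically instead of via the menu.
function createDailyTrigger() {
  ScriptApp.newTrigger('digestAndArchive') // name of the function to run
    .timeBased()
    .everyDays(1)                          // once per day
    .atHour(6)                             // roughly 6am local time; pick any hour
    .create()
}
```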

Daily summaries of your restricted account should now start to appear in your normal email.

If you’re paranoid – and if you’ve gotten this far, you probably are – delete the digest emails from time to time. Deleting the digests removes a possible attack vector on your high-value account: Google’s account recovery process asks for the date you created an account as a form of identity verification, and your earliest digest reveals roughly when the restricted account was created.
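That cleanup could itself be automated with a companion script in the normal account. A sketch, assuming the digest subject line from the script above and an arbitrary 30-day retention window:

```javascript
// Sketch: runs as Apps Script in the *normal* account, not the restricted one.
// Finds old digest emails and moves them to the trash.
function deleteOldDigests() {
  var threads = GmailApp.search('subject:"Daily digest for" older_than:30d')
  for (var i = 0; i < threads.length; i++) {
    GmailApp.moveThreadToTrash(threads[i])
  }
}
```

Keep in mind that trashed mail lingers for another 30 days before Gmail deletes it permanently.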

A non-tragedy

After recent record-breaking denial-of-service attacks, Bruce Schneier wants regulation, to “Save the Internet from the Internet of Things”:

The market can’t fix this because neither the buyer nor the seller cares… the original buyers only cared about price and features… insecurity is what economists call an externality: it’s an effect of the purchasing decision that affects other people.

Any casual student of economics will recognize “externality” in this context as an allusion to the more sensationally-named “tragedy of the commons”, first proposed by William Lloyd, in his paper, “Save the Street from the Horses [paraphrased].” Lloyd explained that if Alice, Bob and Charlie share a common resource, like a street, Charlie might buy more horses than he should. Charlie wants to show off by having the carriage with the most horsepower and, since he doesn’t have to clean up the poop, Charlie buys horses without consideration of the pollution they cause. Bob, meanwhile, bears the cost, as he soils many coats on horse manure, tossed chivalrously below Alice’s feet.

Bruce Schneier argues that the economics of devices like Internet-enabled Pooper Scoopers (iScoop app lets you play back in slow-motion!) inevitably must destroy the Internet the way Charlie’s horses wrecked the street.

Problem is, the assumptions are wrong. Bruce claims buyers don’t care whether their devices are secure, but most people I know do care. That is anecdotal, but consider also that antivirus companies make lots of money: people are evidently willing to pay for computer security.

Second, the argument is imprecise. If we are to say that there is a negative externality, we must identify what harms whom. In a follow-up piece and his testimony (pdf) to Congress, Bruce reiterates but adds little detail:

The owners of those devices don’t care. They wanted a webcam — or thermostat, or refrigerator — with nice features at a good price. Even after they were recruited into this botnet, they still work fine — you can’t even tell they were used in the attack… the insecurity primarily affects other people.

What other people, and how much? This presumably implies that the targets of the attacks – Krebs and Dyn – suffer the externalities while the owners of the subverted devices don’t suffer at all. That assertion should be obviously false. If Aunt Millie’s cat cam participates in crashing Friendface for a day, Aunt Millie does suffer: she can’t post her funny cat videos.

Device owners, then, certainly do bear some cost of their devices’ insecurity. Now, there can still be negative externalities – Charlie, after all, bears some of the cost of owning his horses, as he is not immune from stepping in dung any more than the next guy.

That’s just how real markets are: messy. Many – perhaps most – transactions cause externalities. Sometimes the externalities are significant enough to warrant correction, by measures like Pigovian taxes. Such corrections often cause other problems.

Imprecise analysis leads to solutions that do more harm than good. Schneier should know this; he frequently argues against over-broad legislation, such as the Digital Millennium Copyright Act. He should appreciate the need for care as described by Tim Harford, the Undercover Economist, on keyhole economics:

Keyhole surgery techniques allow surgeons to operate without making large incisions, minimizing the risk of complications and side effects. Economists often advocate a similar strategy when trying to fix a policy problem: target the problem as closely as possible…

Without an obvious way to measure security, how can we calibrate a tax on insecurity? The significant market failure, if any, is that consumers can’t measure how secure their devices are: imperfect information, not externalities.

Beware string concatenation

The OWASP Top 10 webapp vulnerability list is due to update soon; I wager its top three won’t change from the 2013 list:

  1. Injection [mainly sql injection]
  2. Broken Authentication and Session Management
  3. Cross-Site Scripting (XSS)

What do two of the top three vulnerabilities have in common? Cross-site scripting is a form of injection attack that is so popular it deserves its own category; both cross-site scripting and “injection” often result from unsafe string concatenation:

"select name from students where " + ...
html = "<div>" + ...

I bet one simple rule would stop most webapp compromises:

Avoid string concatenation.

Production code should very rarely concatenate strings; when it must be done, it should be written to make it obvious that the concatenation is safe.
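For the XSS case, “obviously safe” usually means escaping at the point of concatenation. A minimal sketch – escapeHtml here is a hypothetical helper, not a standard function; real code should prefer a templating library that escapes by default:

```javascript
// Hypothetical helper: escape the HTML metacharacters before concatenating.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;')
}

var name = '<script>alert(1)</script>' // pretend this came from a request
// Safe: the escaped input can no longer break out of the div.
var html = '<div>' + escapeHtml(name) + '</div>'
```

The SQL case is analogous, with a stronger tool available: parameterized queries move the untrusted data out of the string entirely.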

Assuming most programmers do not intend to write vulnerable code, it is safe to say that they consider the vulnerable code they are writing to be obviously safe. That is just a special case of “programmers don’t think about the bugs they are writing.” By extension, they don’t think about the vulnerabilities they are writing.

So, I dislike rigid coding standards, but if I were to impose one, it would be this: any place string concatenation is used must be accompanied by comments answering:

  • Where is the data coming from?
  • What could happen if a malicious actor provided the data?
  • What makes this safe?
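Applied to the digest script above, the convention might look like this (a sketch; the data here happens to be trivially safe, which is exactly what writing it down makes obvious):

```javascript
var threadCount = 42 // stand-in for threads.length in the digest script

// Where is the data coming from? threadCount is an integer computed by our own code.
// What could happen if a malicious actor provided the data? Nothing; it never comes
// from user input.
// What makes this safe? Number-to-string coercion cannot inject markup or queries.
var subject = 'Daily digest (' + threadCount + ' threads)'
```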

If you have so much string concatenation in your webapp that this seems onerous, I guarantee your site is full of security holes.

A practical web of trust?

In order of difficulty, there are three basic parts to using gpg:

  1. Generate your keys
  2. Keep your keys secure
  3. Decide who to trust

Most people will never even get to step one, much less the far more difficult steps that follow. I’ve written about this problem before:

The web of trust is based on the idea that you can reduce a complex human dynamic, namely trust, into a mathematical system.

Even if you more or less understand the levels of trust gpg offers, it’s hard to be certain how much trust to assign any given key, since you can never know how the owner might use a key.

But what if the goal were simplified? Instead of something complex like “trust”, we might say you have only one decision to make about a key: is this the person I think it is?

Such a system might, I think, gain large-scale acceptance if correctly implemented. Your email address already is your identity, so use email for key signing. A large email provider, say Google, could generate a key for every address. So when Alice signs up for a Gmail account, Gmail transparently signs her outgoing mail with the key held in escrow.

This signing should be implemented such that mail agents can validate it if they know how, but those that don’t transparently ignore it. Perhaps it can be done with some form of multipart/alternative.
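A rough sketch of what such a message might look like on the wire – illustrative only; the application/x-escrow-signature part name is invented, not a registered MIME type:

```
Content-Type: multipart/alternative; boundary="b1"

--b1
Content-Type: text/plain

Hi Bob, ...

--b1
Content-Type: application/x-escrow-signature

<base64 signature over the text/plain part, made with Alice's escrowed key>
--b1--
```

A legacy client falls back to the text/plain part and ignores the rest; an aware client verifies the signature part. (In practice, PGP/MIME already defines multipart/signed for exactly this kind of layering.)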

When Bob uses a client that is aware of the signature, it prompts him, unobtrusively. So, when Bob gets a message from Alice, he sees something like this off to the side:

Are you sure this message is from Alice?

Bob can either click “yes” or ignore the prompt. If Bob clicks yes, he signs Alice’s key under the covers with his own key, also held in escrow.

Implemented in a large enough community, this system could rapidly build a network of signed keys. It’s not clear how we might use this, but if a large scale network of reliably signed keys existed, who knows?