August 10, 2012
And yet more boring advice…

If you have many email accounts (as so many of us do these days) but don’t much use some of them (as Mat Honan didn’t much use his address), you shouldn’t use one that you ignore as the place where any other provider sends password-recovery email.

And at a deeper level, it is careless to ignore any working email account. In the teachable moment of the week, the ignored account was an iCloud account, to which Apple sent a notification message when they reset the password. That may seem silly, but if MH had set up forwarding on that account, or had a connected IMAP IDLE session from whatever mail client he uses, or even just checked the account every 10 minutes with a smartphone, he would have known of the crack in progress sooner. With providers as careless as they have proven themselves to be, mail accounts get cracked. A user who doesn’t keep a trivial watch on an empty and unloved Inbox won’t see a crack when it happens. If you don’t exercise your ownership of an account, you won’t notice it being stolen.

August 9, 2012
Another Stab At The Apple & Amazon Pwning

Inspired by: Secret Security Questions Are a Joke - Slashdot

So-called “security questions” have spread as a mechanism for password recovery, but anyone who knows anything about computer security knows that they are not about securing anything; they are about loosening security.

That’s not altogether bad. The flipside of strong authentication is that it is easy for users to lose the ability to authenticate themselves. Passwords are forgotten, certificates are deleted, temporal PIN gadgets are lost or destroyed, etc. Having a way to reset the primary authentication mechanism helps mitigate that risk. However, the “security question” mechanisms in broad use are mostly far too loose because they draw on a common universe of research-vulnerable questions (e.g. “Mother’s maiden name”) and in many cases (as with Apple and Amazon) are mediated by humans whose jobs are mostly not focused on security, but rather on low-skill customer support for which their employers pay very little. It is not rational to expect that those workers will follow a rigorous security policy that requires them to take time and risk disappointing customers. No amount of security policy rigor can address the problem that security policy is routinely ignored.

It appears that the case of Mat Honan hinged on an absurdly weak security question policy at Amazon and a failure at Apple to follow policy in regard to security questions. The best fix isn’t to tighten and try to enforce policy; it is to change the nature of the process. Authentication recovery mechanisms need to meet two simple criteria:

  1. The secondary authentication information must be truly secret, known only to the user and the provider.
  2. There must be no way for a special pleading to override the formal mechanism short of persuading the people who defined the mechanism that it should be bypassed.

This means that sometimes users will lose access to their accounts because they can no longer provide either the primary or secondary authentication factors. It may mean that sometimes real security professionals will have to listen critically to the sob stories of careless users. 

For the real world where that sort of change isn’t going to happen in most cases at any point in the near future, smart users must adapt to the fact that most service providers have de facto lax security. I included some user-relevant lessons in my last post but here are a few more concrete ways to stay safe:

  • When offered a choice, pick security questions with non-researchable answers. If your spouse or sibling could answer the question, it’s a bad one. If a Facebook “friend” could answer it, it’s worthless. 
  • Answer bad security questions with memorable and unique lies. For example, you might tell Apple that your mother’s maiden name is Wozniak or that you graduated from Cupertino High School, while telling Amazon that she was born a Bezos and you went to Seattle Country Day School (dunno if that even exists…)
  • Use an email service that provides a way to invent working unique addresses on the fly so that you can give a unique email address to everyone who asks for one. This is easier than you may think, since GMail supports “+” tagging and arbitrary insertion of periods in addresses.
  • Don’t let anyone store a credit card number in their system that can be used by any other vendor. I said this in my prior post but it is worth repeating.
  • Shun providers who behave badly. For example, some time ago a provider who shall remain nameless (as they may have changed) tried to “canonicalize” addresses I gave them by doing transformations on parts that might have been tags and trying to send mail to the modified addresses. Because I use my own complex and obscure mechanism for unique addresses this only meant that they bounced a few messages off my mail server, but the result was that I deleted my account and blocked all of their mail on my mail server.
  • Avoid the temptation of making any online identity a “hub” for everything you do online. Especially avoid this with free accounts (e.g. Google, Facebook, Twitter, Yahoo, etc.) because ultimately those are provided and governed at the whim of the provider. Apple accounts are slightly better because their email accounts are associated with you being a paying customer, but they also can have such serious powers (e.g. remote wipe) that it is unwise to have them hooked to anything else (like a GMail account) that might turn out to be part of an attack surface.
  • Be as autonomous as you can be. Having your own domain name is a start, but it’s just the prerequisite for a stack of DIY online services that you may or may not be up to handling on your own. At a minimum, having your own domain can be the basis for varying degrees of control over your email addresses that you really cannot have if you stick to using addresses in domains that you do not own. 
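The unique-address item in the list above is easier than it sounds. As a sketch (the account name and vendors here are hypothetical), Gmail-style “+” tagging turns one mailbox into a per-vendor address for everyone who asks:

```shell
# All of these deliver to the same jdoe@gmail.com inbox, but the To:
# header on incoming mail tells you exactly who leaked or sold the address.
BASE="jdoe"          # hypothetical account name
DOMAIN="gmail.com"
for vendor in amazon apple randomshop; do
  printf '%s+%s@%s\n' "$BASE" "$vendor" "$DOMAIN"
done
```

If mail to one tagged address starts arriving from anyone but that vendor, you know who to shun, and you can filter or kill that tag without touching the rest of your mail.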

August 7, 2012
The Lessons of Mat Honan’s Very Bad Weekend Are Not Really New

The story is here: How Apple and Amazon Security Flaws Led to My Epic Hacking | Gadget Lab

This is only news because it happened to a writer for Wired. The “hack” didn’t expose any previously unknown vulnerabilities, and the children doing it didn’t demonstrate any significant technical skill or use any sophisticated tools. It was essentially just a case of random vandals digging around online where they could dig easily and telling a few lies to “customer support” staff whose work can never be worth much more than the sub-median third-world wages they are paid.

I’m NOT picking on Mat Honan here. It’s pretty clear that he’s a gadget guy, not a security expert, and as a journalist I’m sure he gets more and slicker pitches from hucksters who find security a nuisance than from security experts. Real computer security isn’t cool. It isn’t fun. It isn’t in any sense spiffy. If you think it is, you’re a geek. I do not say that as an insult, just to note that we are not normal. I have given up scolding normal people for not being security geeks. It’s pretty well proven that a lot of generally normal people love gadgetry but have no affinity for system security.

Mat Honan wasn’t particularly careless or clueless, he just had never absorbed some clues that those of us who work in security have sadly stopped talking about much. Clues that are among the least cool, fun, or in any way spiffy lessons of computer security:

  • Any secret which you share with someone else so that they can authenticate your identity later is a password. That includes things that are not very secret (e.g. “mother’s maiden name”) that can be used to recover or reset “the” password. This means that “security question” access recovery mechanisms are de facto security-weakening tools.
  • Don’t use the same password for different accounts. This is a hard one, since it really is not practical to use a completely different password for every account without using a keyring tool, which ultimately is one password for everything. However, a secure keyring is MUCH better than using just one password everywhere or keeping all of your passwords in a plaintext note in some “cloud” service.
  • Don’t give anyone an unrestricted credit card number or bank account number to store for easy reuse. Yes, I know Amazon, PayPal, Apple, and others all really want this. They are stupid and effectively evil. Really. It’s not in a bad way; they don’t intend to be stupid or evil. That doesn’t make it much better. If you can’t resist easy one-click purchasing, get a Discover or other card that provides single-vendor numbers, so that you can’t break the previous rule with a card number. After all, a credit card number is a password to your money and Mat Honan’s example shows that even a part of a number can become part of a de facto alternative password to your account. The same card number linked to many accounts becomes a common and very weak password to them all and to your money.
  • An authentication system that has a fallback system that lets you recover from a lost or forgotten password is less secure than one which does not.
  • If it can be, human judgment almost always will be the weakest link in any security system. It takes an unusually weak assembly of mechanical security mechanisms to out-fail a person who has the power to circumvent it. If an authentication system includes the ability to call a human and beg for access, that will be the easiest way to break it.
  • Security and convenience are directly and intrinsically opposed to each other. Secure systems are cumbersome, and easy-to-use systems are insecure, not as a result of poor design but by necessity.
  • Using email addresses as unique identifiers for people is irresistible, so they become (sigh) a sort of secondary password. If you use one email address for everything, see the second clue…
  • Incumbent technical constraints are often not seen as part of security, but they may in fact be critical tacit assumptions: systems that are perfectly functional with those constraints in place become insecure when the constraints are removed. Parables of this include WEP, the silly kerfuffle created by Steve Gibson over “raw socket” support in Windows, and a long parade of schemes to stop spam based on assumptions that spammers wouldn’t do things that they so far hadn’t done, when doing them demanded only audacity and motivation.
  • Email isn’t secure. It can be in specific cases and could be in general with existing tools, but in the real world as it is today the main protection most people have against undetected interception of their email in transit is the fact that there’s so much email in transit all the time and so much of it is pure worthless garbage.  The “needle in a haystack” analogy applies, but a better one would be “corn kernels in the sewer.”
  • Backup is a critical security component because information loss is much more common than and usually worse than information leakage.
  • There are many degrees of security and many degrees of attacker. If you allow yourself to be “low-hanging fruit” you will be vulnerable to low-effort attacks from a huge population of weakly motivated opportunists. The other side of this is that very small improvements in how you maintain your own security can lift you above the level where most random vandals will bother.
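On the keyring clue above: it doesn’t take fancy software. A gpg-encrypted text file is a serviceable minimum; this is a sketch with an assumed filename, not an endorsement of any particular workflow:

```shell
# Encrypt the password list with a symmetric cipher. gpg prompts for a
# passphrase; that one passphrase now guards everything, so make it strong.
gpg --symmetric --cipher-algo AES256 passwords.txt
# Remove the plaintext once passwords.txt.gpg exists:
rm passwords.txt
# Later, decrypt to stdout instead of leaving plaintext on disk:
gpg --decrypt passwords.txt.gpg
```

The point is not that this beats a purpose-built keyring tool, only that there is no excuse for one password everywhere or a plaintext note in some “cloud” service.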

These boring old truths have implications for “Cloud” services that sell themselves as hubs for a digital life enabled by frictionless sharing and synchronization and yadda yadda yadda. Mat Honan did things that those of us who are Security Geeks have given up warning against. Those warnings make people who wear ties and sign paychecks doze off and wake up grumpy. We’ve spent the past decade or so biting our tongues and taking paychecks and hoping that it would all work out, but it hasn’t. It never will, because it fundamentally can’t. Systems and applications that are most appealing when used in fundamentally insecure ways cannot be made secure. Systems and applications whose security is dependent on end users practicing good security hygiene will not be secure. Systems and applications whose provider-side security is dependent on adherence to policy rather than operation of tools will always be crackable by social engineering.

None of this is news. Back when the press made a big deal of Kevin Mitnick as a great “hacker” it was known by many people who wore that label proudly with no connotation of criminality that he was in fact just a very good con man with unremarkable technical tools and skills. We have had standards, tools, and tested best practices for online security since before most people had heard of the Internet, but still most service providers don’t bother with them. There is a geek subculture where good security hygiene is the norm, and then there’s the world at large where many people use one email address and one password and let all of their accounts everywhere interact freely with each other, to the extent that losing one to a random script kiddie essentially means losing them all. People who don’t understand that they have to deal with inconvenience as a price of security, and that they can’t rely on providers who promote convenience to maintain security, will always be the easiest prey for the largest field of predators.



April 24, 2012
Sophos fires up the FUD machine.

A slightly worse version is “awaiting moderation” as a comment on the Sophos blog: 1 in 5 Macs has malware on it. Does yours? | Naked Security

It is irresponsible fear-mongering to claim that the Windows malware widely present on Macs is in a state that “can still be spread to others” without backing that claim up in detail.

The top two families you cite are carried in email, and are readily identified as “spam” by eye or by the low-end spam filters used by most consumer mail providers. It certainly is possible to forward email, but forwarding infective spam is an unusual act. Some of the others are things I would expect to find in the browser caches of reckless wanderers, but they are hardly an infective threat to anyone from that position.

The comparison to Chlamydia is worse than tacky; it is outright deception. Chlamydia is frequently asymptomatic in the short term, but it is living and causes problems in the long run. Chlamydia is no less transmissible by people who have no acute symptoms. For malware that requires Windows to run and propagate, presence on a Mac is not a quiet infection; it is (at worst) non-destructive storage. In some cases storage itself renders the malware inert over time, because the attack vector depends on finding control systems online that don’t live in any one place forever.

One of the reasons Mac users have been reluctant to adopt AV software is that it is perceived as bloatware that does nothing of direct value for a Mac user. Is it worth the AV overhead for the average Mac user to know when he has surfed past a page that has IE-specific evil JavaScript in it or when the latest blatant phish in his Junk folder is recognized specifically as containing a Windows attack vector? Not really. Flashback and PubSab change that analysis significantly, but not enough for a lot of Mac users. Maybe if the major AV vendors could claim to have prevented infections before Apple’s sluggish fix for the Java hole they would be more convincing.

I am not saying that all Mac users who choose to run bareback are behaving wisely, whether or not they rationalize that decision based on the de facto Windows focus of all AV software. However, it would be a lot easier to persuade the Mac users who DO rationalize their recklessness if there were a lightweight Mac AV tool that didn’t spend most of its time worrying about Windows malware.

It would also help if AV vendors stopped spewing blatant bullshit in an effort to scare Mac users into buying their tools. The simple truth really ought to be adequate without dressing it up in nonsense. 

I should add that while I have copies of some free versions of commercial Mac AV software and have a clamav installation whose database is updated automatically, I run bareback. Clamav only scans things when I manually ask it to, I have not installed any of the commercial packages, and I have no intention of making anything act as automatic protection for me. That’s not because I think the cost/benefit analysis is generally correct, but because:

  1. I work in security and so occasionally have a need to work with blobs of data that are or may be malware.
  2. I have a huge junk email archive that I access via IMAP using multiple MUA’s and can’t be bothered to exempt all of their local stores just to please some aggressive AV. 
  3. Under normal conditions I practice meticulous computer and network hygiene. This may not provide perfect protection, but it has held for a long time. Someday a trojan may fool me, but none has yet… 

I do think that it is time for Mac users to accept the end of the era of general Mac safety. It was always a bit more myth than reality, but the myth has held up over the years in part because the FUD that has periodically spurted out of AV vendors has met rapid debunkery. Mac users have had an arguably rational skepticism protecting our myth. I wish that now that the AV vendors have a real wolf to tell us about, they’d stop turning it into a ravening horde of Wargs.


September 3, 2011
Secure file sharing 101, an allegory

One reward for calling Julian Assange an insecurity expert but a security incompetent is that some people have questioned my understanding of how one can encrypt files without having to share a password. Rather than ask people without shell skills to run ‘man gpg’ (much less to install GnuPG) and interpret those instructions I figured I’d make it simple. I’ll tell an alternate reality story. No one in this story is in any way related to anyone in reality.

So here’s the hypothetical situation: Julian has a file called z.7z that he keeps extremely secure, but he wants to give a copy to David. Julian also wants to hand out a copy to a million random fans so that if something bad happens to him, he can trigger the release of a key that anyone can use to decrypt the file. He knows that David probably means well and wants to maintain the security of the file, but that he would be likely to come in third in a clue contest against a box of rocks and a bag of hammers. Need I add that David is a British newspaper journalist? I didn’t think so… 

The first thing Julian should do is evangelize David and maybe David’s bosses on the use of secure email using public key cryptography, in the form of the OpenPGP standard and the GnuPG tools. Julian gets David to create a key pair:

gpg --gen-key

That command starts a dialog for parameters. The defaults are reasonable, except that a 4096-bit key length makes sense for this application.
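As an aside, the same generation can be scripted with GnuPG’s batch mode, which is handy when you have to walk more than one David through it. This is a sketch with placeholder names, and the batch parameter details vary a bit between GnuPG versions:

```shell
# Describe the key in a parameter file, then generate it unattended.
cat > keyparams <<'EOF'
Key-Type: RSA
Key-Length: 4096
Name-Real: David Example
Name-Email: david@example.org
Expire-Date: 0
%ask-passphrase
EOF
gpg --batch --gen-key keyparams
```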

Julian, as a security pro, surely already has his own key. Surely. (This is a hypothetical Julian, not a real one. KEEP UP!) He also surely knows how to use GnuPG with email, and he shows David. Maybe gets David to use it regularly. In any case, Julian and David at some point exchange public keys, generated with:

gpg --export --armor

The output of which they would exchange and import to each others’ keyrings with:

gpg --import
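In practice those two commands need files or pipes attached; a sketch with assumed filenames and user IDs:

```shell
# David writes his public key to an ASCII-armored file...
gpg --export --armor david@example.org > david.asc
# ...sends it to Julian over any channel (it is public, after all),
# and Julian imports it into his keyring:
gpg --import david.asc
# Julian then checks the fingerprint with David out-of-band before trusting it:
gpg --fingerprint david@example.org
```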

Now Julian wants to give David that file, but he doesn’t want to mail it. So he runs:

gpg -s -r david --encrypt-files z.7z

This will require Julian to enter the passphrase for his private key, because he has a passphrase on his key (because *he has a security mindset*) and he is signing the file. He will now have a file named ‘z.7z.gpg’. Julian can’t decrypt that file. It can only be decrypted by David, or by someone with David’s private key and its passphrase (which Julian made David use, because *Julian has a security mindset*.) Also, since David has the whole problem with being compared unfavorably to collections of rocks and hammers, he has talked to his corporate IT guy, who smirked and assured him that he really did need a passphrase for the private key, since he has also been using it to exchange voicemail hacking tips with the blokes at the Times. David gets a thumbdrive from Julian and is read the Riot Act: no networking, no sharing, delete decrypted files before disconnecting the drive, etc. David decrypts the file with:

gpg z.7z.gpg

He is prompted for his passphrase. He enters it and the file decrypts with a note that it was signed by Julian. 

Six months later David is grumpy because Julian doesn’t call him any more to sit in on the “strategy sessions” with the Ecstasy and Blow and cute Swedish girls. He hears from an IT intern that Julian has published a pile of files via BitTorrent without telling him, he gets the boy to set him up with uTorrent, and he sees z.gpg there, almost exactly the same size as his z.7z.gpg. In a flash of brilliance, he realizes it must be the same file. He writes a book. He uses his passphrase as a chapter title, assuming that someone will figure out the connection, decrypt the secret file, and put Julian in his place. Two days later he gets a call from his corporate IT manager, who is less amused than last time. It turns out that he has shared the passphrase for the one private key that everyone at the Grauniad uses for everything sensitive. That key does not decrypt the z.gpg that Julian published. He tries with: 

gpg z.gpg

He gets back a message about some key with Julian’s name on it not being available. David decides that he is too old for all this newfangled technology, and retires.

September 2, 2011

Adapted from comments I posted at The Atlantic:

Thursday night, WikiLeaks stopped trying to do a carefully redacted and curated gradual release of the US State Department diplomatic cables in their possession, and instead just dumped them all: 251,287 cables plus metadata in half-gigabyte 7z files, dropped out into the jungle of BitTorrent with checksums and little else. Whee!
It was time. I tweeted a vote for it. So did a lot of people who are fans of Assange. I’m not a fan of Assange, or of how he has run WikiLeaks, and I am also not positively impressed by his best-known defector, Daniel Domscheit-Berg, who manages to be an even less appealing character than Assange. I don’t have the attention span necessary for the full soap opera aspect of those two and their battle, so maybe I’m missing something, but they both come across pretty badly as data security professionals and as humans. There’s a lot of detail about the soap opera at Rixstep.

Alongside the DDB/JA soap opera of narcissist geeks, there was the actual work of WikiLeaks happening. You can love that or hate it, but it is important. A large part of the Arab world is in revolt, largely positive revolt, to a significant degree because of information disclosed as part of the “cablegate” releases. Less prominent upheaval is going on in other places as well as a result of the sunlight provided by WikiLeaks. Love them or hate them, think good or ill of Assange or DDB, they have lit a match for change that is largely positive so far.

Personally, I am uncertain about what they’ve been doing. I agree with the principle that secrecy is generally overused by government, and there’s a lot of evidence to support that. For example, the so-called “state secrets” privilege in federal cases was essentially invented in US v Reynolds not to protect significant secrets, but rather to avoid liability for putting civilians into a military plane known to be unsafe for test flights. Ever since, it has been used more to avoid embarrassment and liability than to actually protect secrets that actually need protecting. Many of the Cablegate releases back that up: much of what was released before tonight was more embarrassing than it was really security-sensitive. On the other hand, I can see how it is sometimes positive for diplomacy to include secret communications, and that also has examples in the Cablegate releases. There are frank analyses of events and individuals that are embarrassing today but which would have caused major disruptions and possibly even violence if they had been made public when written.

I don’t know what the answer is. I do know that we’ve been keeping many more secrets than necessary for a long time, and that it is good to get many of those out in the open. It will embarrass some people named Bush and Clinton (among many others), but there are many things they should be embarrassed for and ashamed of.

What led up to the point where WikiLeaks is releasing their whole stash, with my whole-hearted agreement, is an egregious comedy of errors in a field where I do know something: data security. Nigel Parry has a detailed account of what happened, but it boils down to a series of stupid acts that add up to a de facto release of the whole Cablegate stash months ago. Julian Assange passed the raw cables to a staff member of The Guardian in a shoddy fashion, and later broadcast the same file with the same encryption to the world at large as a mystery file amongst thousands of WikiLeaks texts, in an attempt to assure that the archive could not be destroyed. Then The Guardian published the password in a book, thinking (apparently) that the password they had was only for the file they received, as it should have been. As a result, there were many unidentifiable people around the world with the Cablegate archive in an encrypted file that they had downloaded to help WikiLeaks, a file that could be decrypted by a password published at the top of a chapter of a book about WikiLeaks by one of their main media partners. In essence, the unredacted Cablegate archive had been released, and it was just a matter of who could put those two facts about WikiLeaks together. If there is something objectively worse than both secrets kept well AND secrets exposed to all, it is secrets that have leaked a little to an unknown audience. The only way to improve the situation was for WikiLeaks to let go: release the whole archive.

The story of the leak is one of (at minimum) technical incompetence at WikiLeaks. The WikiLeaks account of how DDB sabotaged them is an equivalent tale of technical incompetence combined with an inability to make objective judgments about other people. It seems likely that Assange’s legal troubles will end up reflecting a similar lack of ability to evaluate people and behave towards them in a suitable manner based on caution and self-discipline. No matter what side you take on the aims of WikiLeaks, the people of WikiLeaks are clearly not capable of handling their chosen roles. It seems to me that this is another example of something that has bothered me all of my professional life: insecurity experts calling themselves security experts. A knack for cracking other people’s security is a different talent from being able to build and operate systems securely. From what I recall of “Hacking, Obsession, and Madness” (a tedious work of semi-fiction in which he plays a role and shares writing credit), Assange has demonstrated skill as a cracker. It isn’t clear that he has any talent as a builder. It is unfortunate that no one with a security mindset seems to have had any authority at WikiLeaks.

WikiLeaks’ objective errors:

  1.  Overtrusting their media “partners.” These are journalists, not data security experts. Even if they always mean well, the phrases “bag of hammers” and “box of rocks” should have stayed in the mind of Assange at all times when dealing with them. Never trust stupid. 
  2.  Apparently giving each of their media “partners” absolutely identical files with absolutely identical content and encryption. Had the leak been less farcically inadvertent, this could have been used to identify a stealthy leaker. 
  3.  Failing to communicate to the dullards at the Guardian how important it was to never release that password. This has devolved into a slapfight over what was said exactly, but it is clear that they just didn’t get that this wasn’t just their password for the file they had. If Assange had not pulled an awful ad hoc key exchange mechanism out of the back of his head, it would have been *their* key for *their* file, and nothing more. 
  4.  Releasing the same content in an identically-encrypted file via BitTorrent on 2010-12-08 (as described in Nigel Parry’s account) with 3 other GPG files containing as-yet-unknown data, in a package of torrent files among which those 4 files stood out as something special. 

When putting sensitive data into the hands of someone who is not expert at its handling, one must provide clear and complete instructions, make accidental leakage difficult and unlikely, and make leakage directly detrimental to the potential leaker as a means of focusing their attention. There is a well-known way to do that, first discussed by Phil Zimmermann 20 years ago for this sort of circumstance. WikiLeaks should have explained the risks to their partners and required them to publish public keys and solicit general communication using them. That would at least potentially make loss of the corresponding private keys a harmful event for the partners. WL could then give each partner their own slightly different copy of the cable collection, each encrypted using that partner’s individual public key. Then when WL wanted to use the anonymous BitTorrenting public for distributed backup, they could have used a symmetric cipher like AES256 as they did, or maybe they could have used a cipher not given its name by the US federal government, wrapped around AES256 or 3DES. Or a 4096-bit RSA key for which they didn’t release the public half. They could have made the special nature of those 4 files less obvious by stashing them without extensions in amongst the CRS and Scientology docs as binary blobs that most people wouldn’t even notice.
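The per-partner scheme described above is not much work either. Assuming each partner’s public key is already imported (the recipients and filenames here are hypothetical), a loop does it:

```shell
# Encrypt one archive separately to each partner's public key. No shared
# password exists, and every partner's copy is a distinct ciphertext.
for partner in guardian@example.org nyt@example.org spiegel@example.org; do
  gpg --recipient "$partner" \
      --output "cables-${partner%%@*}.7z.gpg" \
      --encrypt cables.7z
done
```

Each output can only be opened with that partner’s private key, so a leaked copy points straight back at the partner who leaked it.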

And in the final analysis, WikiLeaks could have avoided the whole mess by not trying to nuance the process. For years, they operated successfully, if quietly, by doing their releases without first engaging media partners and planning out carefully curated and redacted dribbles of data. The cables are arguably special, but maybe not so much. The process of playing with the establishment media has proven a perilous game, and the only difference between what we have now and what we might have had with a direct release a year ago is a slower release, Assange in legal trouble that seems trumped up, WikiLeaks personnel shredded and fractured, an embarrassing personality battle playing out in public without anything significant happening with the real work of WikiLeaks, and a general loss of interest in WikiLeaks. This is almost certainly the end of WikiLeaks, and that’s a mostly bad thing.
