
commoditizing the shaft

Lorenzo Franceschi-Bicchierai wrote about BitLocker and our conversations with government agencies. Excerpt below:

“Fuck, you guys are giving us the shaft,” the agent said, according to Biddle and the Microsoft engineer, who were both present at the meeting. (Though Biddle insisted he didn’t remember which agency he spoke with, he said he remembered this particular exchange.)

Biddle wasn’t intimidated. “No, we’re not giving you the shaft, we’re merely commoditizing the shaft,” he responded.

Biddle, a believer in what he refers to as “neutral technology,” never agreed to put a backdoor in BitLocker. And other Microsoft engineers, when rumors spread that there was one, later denied that was ever a possibility.

Full article:
Did the FBI Lean On Microsoft for Access to Its Encryption Software?


Code Identity and the Android Master Key Bug

Android invasion, Sydney, Australia (Photo credit: Pranav Bhatt)

I was part of the effort to drag MSFT security into the modern era. It was extremely painful. I assumed (perhaps stupidly) that our highly public lessons would mean other latecomers to the security party would look at our wrecked living room, burned furniture and bad tattoos and then not make the same mistakes we made in our irresponsible youth.

But perhaps no. This Android bug could prove to be extraordinarily bad.

Blowing hash and signing functions so that the underlying code can be changed without the hash and sigs changing is horrifyingly atrocious. This is the code equivalent of impersonating a person with a mask so good nobody, not even the real person themselves, can tell the difference.
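To make that concrete, here's a rough Python sketch of the general class of problem: two parsers disagreeing about the same archive. This is my own toy illustration, not Android's actual code, and the "classes.dex" name is just for flavor. If the component that checks signatures reads one copy of an entry and the component that actually installs reads another, the signature tells you nothing about the code that runs.

```python
# Hypothetical sketch (mine, not Android's code): two ZIP parsers that
# disagree about an archive containing duplicate entry names.
import io
import struct
import warnings
import zipfile

buf = io.BytesIO()
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # zipfile warns about the duplicate name
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as zf:
        zf.writestr("classes.dex", b"ORIGINAL SIGNED CODE")  # what a verifier might hash
        zf.writestr("classes.dex", b"INJECTED PAYLOAD")      # same name, different bytes

blob = buf.getvalue()

def first_local_entry(data, name):
    """Parser A: walk the local file headers in order and return the FIRST match."""
    off = 0
    while data[off:off + 4] == b"PK\x03\x04":
        # 30-byte local file header, then filename, extra field, file data
        (_sig, _ver, _flags, _method, _time, _date, _crc,
         csize, _usize, nlen, elen) = struct.unpack("<IHHHHHIIIHH", data[off:off + 30])
        fname = data[off + 30:off + 30 + nlen].decode()
        start = off + 30 + nlen + elen
        if fname == name:
            return data[start:start + csize]
        off = start + csize
    return None

# Parser B: ZipFile reads the central directory, where the LAST duplicate wins.
with zipfile.ZipFile(io.BytesIO(blob)) as zf:
    central_view = zf.read("classes.dex")

print(first_local_entry(blob, "classes.dex"))  # b'ORIGINAL SIGNED CODE'
print(central_view)                            # b'INJECTED PAYLOAD'
```

If the bug is as described, it is exactly this kind of divergence between the code that verifies an APK and the code that installs it.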

The entire value of a chain of trust is that you are limiting the surface area of vulnerability to the code-signing and hashing itself. This bug, if it’s as described, destroys the chain. All bets are off. You’d be better off without the assertions and chain at all: Treat everyone as adversarial and move all critical operations off-device and into something you know you can trust.
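And here, for contrast, is what the chain is supposed to buy you, again as a sketch in Python rather than anything Android actually does: verify a digest (ultimately rooted in a signature from a key the device already trusts), and only then hand those exact bytes to the runtime. The manifest scheme and names below are assumptions of mine, made up for illustration.

```python
# Minimal sketch of on-device code validation, assuming a manifest whose
# digests are themselves covered by a signature from an already-trusted key.
import hashlib

good_code = b"print('hello from the signed build')"
manifest = {"classes.dex": hashlib.sha256(good_code).hexdigest()}

def load_code(name, blob, manifest):
    """Hand bytes to the runtime only if they match the trusted manifest."""
    if hashlib.sha256(blob).hexdigest() != manifest.get(name):
        raise ValueError(f"{name}: digest mismatch, refusing to load")
    return blob

load_code("classes.dex", good_code, manifest)  # passes

try:
    load_code("classes.dex", b"tampered bytes", manifest)
except ValueError as err:
    print(err)  # classes.dex: digest mismatch, refusing to load
```

The whole scheme rests on one implicit assumption: the bytes you digested are the bytes you execute. Break that assumption and, as above, all bets are off.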

I am not saying this automagically makes Android phones infinitely vulnerable to horrible deeds. It doesn't. As of July 4th, 2013, there are no known exploits in the wild that make use of this attack. To really hit something out of the park based on this bug, the bad guys are going to need a way to get an offending app onto a phone. That means getting it through a heretofore unknown exploit in Google Play, or onto the phone via side-loading or some other distribution method.

So we’re all okay, right? Well, no. Not necessarily. Perimeter security – which is what Google uses to keep bad apps off of phones in the first place – is notoriously bad. It’s so bad that Google (and Apple, MSFT, and everybody else) uses techniques like sandboxing (perimeters within perimeters), privilege, code signing and code validation to make up for its deficiencies.

Malicious software has an annoying habit of finding its way onto devices with considerably stronger perimeters than Android’s, so validation of the code that is already on the system is critically important.

Unfortunately it’s not just the exploit that is distressing. One of the things we eventually got good at back at MSFT, when we routinely had our pants around our ankles on security, was our responses. There’s no way you can survive forever in an environment of constant adversarial attack if you don’t get much better at defending yourself technically AND much better at working with the public about what you’re doing.

In this blog post, Google advocates that companies “should fix critical vulnerabilities within 60 days” and that “after 7 days have elapsed without a patch or advisory, we will support researchers making details available so that users can take steps to protect themselves”.

Google espouses 60 days to fix exploitable bugs and going public one week after private notification. According to Bluebox, they told Google about this via bug 8219321 in February 2013. That’s a little bit more than 60 days ago. Seeing as it’s now July, I think (and I’m not very good at math, so bear with me here) that it’s at least twice that. It’s certainly more than 7 days. I’m not sure how Google is following its own disclosure policy.
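For the record, here’s the arithmetic, with one assumption spelled out: the only date given is “February 2013,” so I’ll take the last possible day of that month, which is the most generous reading for Google.

```python
from datetime import date

# The exact notification date isn't public here, only "February 2013", so
# assume the most generous case for Google: the last day of that month.
reported = date(2013, 2, 28)
now = date(2013, 7, 4)

elapsed = (now - reported).days
print(elapsed)              # 126 days
print(elapsed >= 2 * 60)    # True: more than double the 60-day window
print(elapsed >= 7)         # True, by a wide margin
```

Even on that friendliest reading, the clock is well past both numbers.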

Let me say from personal experience (again) that you need to be really good at dealing with the public on security stuff. If you are going to make clear and solid statements that have numbers in them (e.g. 60, 7), then you really REALLY need to make sure you are always on the right side of those numbers.

I am also not saying this automagically makes Google evil. As I said at the beginning of this post – I’ve been there when it was bad. Sometimes you are trying your hardest to be good but you’re tripping and falling down. People see you fucking up and assume it means you are evil when really you’re just trying to stay alive long enough to fix your broken shit and learn so you can be better.

I don’t wish anyone at Google any ill will over this. I’ve been there, it’s no fun.
