Microsoft, Palladium, Security, Trust

Perception (or, Linus gets away with being honest again)

The more I learn about Linus Torvalds, the more I like. I like that he’s “just” an engineer (and near as I can tell a very good one).

Because he is just an engineer, he is prone to clear, logical thinking, and thus also to clear, logical statements. Here is an oldie but a goodie, where Linus essentially tweaks the noses of an entire generation of wankers, erm, make that “opinionated people who have no place making real engineering decisions” by essentially declaring that DRM is a perfectly reasonable security model and as such by itself it can’t be evil. (Clearly my interpretation; you are welcome to interpret it yourself.)

People who aren’t engineers, or at least aren’t very good ones, often try to argue with these kinds of statements as if they are religious issues. This approach doesn’t work so well with engineers or logicians.  It’s kinda like trying to convince an engineer he should build a truck bridge out of wet sand instead of steel because “ironz is teh evel!”.

Yeah, not such a good argument. But sometimes these arguments actually work! And when they do, the world isn’t a better place. This brings us to my Third Law of Trust: The Perception of Trustworthiness Can Be as Important as The Reality of Trust Itself.

A great case study in the phenomenon of perception is a recent post from Linus, here. Imagine, for just a second, that this statement came not from Linus, but instead from either Steve Jobs or BillG.

If Steve Jobs had said this, people would say “well der, Jobs is all about the user experience”. It might not even make headlines.

If Bill said it, even though he’s now retired from his role at MSFT and so it shouldn’t matter, there would be massive coverage, the gist of which would be “see! MSFT doesn’t give a crap about security! I knew it! M$ is teh evel!”.

This is perception. The notion that this is true should come as no surprise to anyone. But if we dig a little deeper we find that this perception issue has significant implications.

Implication 1: Perception allows mediocre or even bad ideas to be treated as if they are good.  

Example: The public seems to believe that the security precautions which are currently in place in major airports in places like America and Europe are good and make sense. We can assume this because they continue to fly. Do I think for a second that if 50% of the planet stopped flying tomorrow to protest the stupid fluids ban that the ban would last even a week? Of course not.

But people think that the people in charge must know what they are doing. That’s their perception. And so they tolerate it when someone won’t let them fly with an extra ounce of toothpaste, or when they are told they must drink their own breast milk to prove it’s not pure hydrogen peroxide.

This is in spite of the fact that not a single competent security engineer has ever come forward and made the claim that the fluids ban actually works. (Not that I am aware of, at least.)

Perception, rather than reality, is ruling the day and letting a bad idea continue on.

Implication 2: Perfectly reasonable ideas which are offered up by people or groups who are perceived as being un-trustworthy may be lost in the ensuing maelstrom of idiotic public wankery and flagellation.

Example: Something called Palladium (even after it was renamed NGSCB, it was still “pronounced Palladium”). The general perception of Palladium was, well, bad. Very bad. It was bad for a variety of reasons, but the biggest was that many people thought Palladium was very, very evil because they thought MSFT was very, very evil.

Linus posted his bit about DRM in April of 2003. In September of 2002 I posted this, which you can see is part of a larger thread. Re-reading my posts, I can’t find any major faults anywhere.

But clearly that wasn’t enough. The perception of MSFT was that it was evil, and if MSFT was evil, that made Palladium the hellmouth from which pure, unadulterated evil would pour forth.

Here’s an interesting quote from this page: “XenSE is designed to allow desktop users to create securely separated compartments to run applications that contain highly confidential information. The system would prevent such data from overflowing from one compartment to another.”

Replace XenSE with “Palladium” and you have, well, Palladium. Note, however, the lack of public outcry about XenSE. Clearly NOT Palladium in that sense. Of all the things that “killed” Palladium, negative perception was the most important factor.

When I look around I find lots of examples of things we were doing in Palladium being done in the open source community. Linux has TPM drivers, people are looking at secure boot, there are complete Palladium near-clones in a number of universities.

This makes me happy, actually. I still believe in the principles of Palladium, and I think they are required to make the world a better and safer place. If it takes smart people in the OSS community to make it happen, well, you go.

If you are right and you have time on your side (like Linus does) then sooner or later people will come round to your way of thinking, and that will, over time, significantly improve perception.

It takes a community with both the best technical expertise AND good public perception to best make the world a significantly better place. If I have to choose between the two I know that I will always place my bets with the former, but I really appreciate just how important the latter is.

In the case of Trustworthy Computing at least this stuff is happening. Maybe that’s the most important thing.

Flying, Guns, Palladium, Security, Trust

Trust Isn’t Transitive (or, “Someone fired a gun in an airplane cockpit, and it was probably the pilot”)

I’ve been saying that trust isn’t transitive for years, using this example:

We all have a cousin Bubba we trust to change the transmission in our 1970 AMX, but we wouldn’t trust him to babysit the kids for the weekend. Both involve trusting him with our kids’ lives, but trust isn’t transitive, and we know from experience that Bubba is a hard-drinking and hard-living roustabout with greasy fingernails who can certainly keep track of little things like screws, but certainly can’t keep track of little things like children.

Bruce Schneier has pointed out many times that he thinks that arming pilots is stupid. I’d say that arming pilots is stupid only insofar as you don’t make sure they are at least as experienced with firearms as they are with airplanes.

Experience will make them predictable, and predictability is critical to trust.

This brings us to this: someone ND’ed in an airplane cockpit. For those of you who aren’t gun-nuts, an ND is a “Negligent Discharge”. It is the better term, far preferable to “AD – Accidental Discharge”, because modern guns don’t just accidentally go off. Modern guns built by reputable makers – and I guarantee that the gun this pilot had fits that category, just as the plane he flew would – are designed to go BANG when you pull the trigger, and to NEVER go bang when you don’t.

Just as modern cars don’t steer themselves into things they aren’t supposed to, guns don’t accidentally discharge. They go BANG when you pull the trigger. That’s it.

So someone was holding the gun, and it went BANG. There are a few ways this could happen. The pilot could have been checking the condition of the weapon. (Is it loaded? Ooops. Yes.) He (yes, there are certainly female pilots, and some of them may be armed, but I will give them the benefit of the doubt in this case and say that all armed female pilots are too smart to shoot a gun in their own cockpit) could have been transferring it from a case to a holster. He could have been loading it… He could also have been showing it off to a flight attendant, which happens to be my favorite potential example:

“Do you guys really carry GUNS?”

“Why yes little lady, some of us sure do. I carry a Sig .357, it’s the same gun those air-marshals use!”.

“Ooooh, can I hold it?”

“Of course, but you need to understand that I’m a trained professional, you can’t just <BANG> <SCREAM>”

“oh shit”

Now, how does this relate to trust not being transitive? Let’s look at this quote from the article in question, attributed to Mike Boyd: “if somebody who has the ability to fly a 747 across the Pacific wants a gun, you give it to them.” This is a horribly flawed assumption, because it assumes that trust is transitive, when clearly it isn’t.

The reason trust isn’t transitive is that trust is most often based on data about the past, which allows us to make assumptions about specific competence, quality of performance, and behavior in the future.

We can assume that a trained pilot, when facing piloty thingies, will act like a trained pilot. WE CANNOT ASSUME THAT A TRAINED PILOT WILL ACT LIKE A TRAINED LION-TAMER WHEN FACING A WILD LION.

Skills from one domain cannot simply be moved to another. Saliently, the pilot in question must have thousands of hours of flight time, has done the pre-flight check hundreds or even thousands of times, and has been steeped in pilot-ness and thus pilot-safety, probably since his late teens. He’s very likely an extraordinarily safe pilot. We can assume that every experienced 747 pilot has a keen awareness of the potential lethality of a fully loaded 747. In the past we can assume that they at least had a deep appreciation of the potential for harm to their own passengers, and post-9/11 we can assume that they appreciate the harm their plane could do to thousands of additional people.
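The point that trust is scoped to a domain, and therefore can’t be carried from flying to gun-handling, can be sketched as a tiny model where trust attaches to a (person, domain) pair rather than to a person alone. All names and numbers here are hypothetical, purely for illustration:

```python
# Minimal sketch of domain-scoped trust: trust attaches to a
# (person, domain) pair, not to the person alone, so it does not
# transfer from one domain to another. Names/numbers are hypothetical.
trust_evidence = {
    ("pilot", "flying"): 10_000,         # hours of demonstrated competence
    ("bubba", "transmissions"): 2_000,
}

def trusted(person: str, domain: str, min_hours: int = 100) -> bool:
    """Trust only where we have enough past evidence in *that* domain."""
    return trust_evidence.get((person, domain), 0) >= min_hours

print(trusted("pilot", "flying"))        # True: lots of evidence
print(trusted("pilot", "gun-handling"))  # False: no evidence, no trust
print(trusted("bubba", "babysitting"))   # False: wrong domain for Bubba
```

The design choice is the key in the lookup: a system keyed by person alone would silently assume transitivity, which is exactly the flawed assumption in the quote below.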

But this can’t just be automatically carried to guns – guns aren’t planes any more than they are motorcycles, and many pilots will tell you that jet pilots are much more likely to die on a motorcycle than on a plane, because they act stupid on motorcycles.

Good gun-nuts know that you learn specific skills for your weapons and then you do them over and over and over again. In my case, ensuring a gun is unloaded will consist of a series of discrete steps that I’ve repeated at least hundreds of times to ensure that only the things I want to happen will happen.

I always check the condition of a weapon which has been handed to me the exact same way, even if the woman who handed it to me is mrs super gun chick and I watched her remove the magazine, repeatedly work the slide back and forth and then lock it back, stick her finger in the chamber, and then visually inspect the chamber and mag-well. Guess what? I’ll do whichever of those things are possible myself, too. And I still won’t paint her or anything I don’t want to destroy.

If you want to trust someone, you need to know about their innate trustworthiness, and you need to know about their experience. Some people are simply more trustworthy than others because, well, they are, and you can trust them more in new situations than other people.

But these people aren’t necessarily the ones well trained in <foo>, so you can’t build security systems around them. If you want to build a system that scales across many users, you want a system that mandates that everyone be predictable enough for the system to work. Judging the innate trustworthiness of a person is very hard, so while you may do that, you also wind up forcing the people you must place a high degree of trust in to do things that make them appear more predictable in the ways you need them to be.

In other words, you train the living shit out of pilots before you let them fly a plane. The same should be said for guns, and I can pretty much guarantee that the armed pilots in the sky today probably have more than 100 times as much experience flying planes as handling guns. So – either stop the armed-pilot experiment, OR train the armed pilots well enough that they are as predictable as you need them to be, so that you can make some assumptions about their trustworthiness.

Will there be ND’s anyway? Of course. But there are also plane crashes, and that has to be okay. What is important is that the system be predictable, and of course that it have a real, tangible, and measurable result. Number of plane crashes vs. flight hours is a simple equation. Now that we’ve had an ND in a cockpit, let’s take a look at the number of ND’s vs. gun-handling hours…
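The “simple equation” in question is just incidents divided by hours of activity, which gives you one yardstick for comparing the two systems. The numbers below are entirely hypothetical, for illustration only:

```python
# Incidents per hour of activity -- the "simple equation" above.
# All numbers are hypothetical, purely to show the comparison.
def incident_rate(incidents: int, hours: float) -> float:
    return incidents / hours

crash_rate = incident_rate(1, 1_000_000)  # e.g. 1 crash per million flight hours
nd_rate = incident_rate(1, 10_000)        # e.g. 1 ND per 10,000 gun-handling hours

# Putting both on the same yardstick makes the systems comparable.
print(nd_rate > crash_rate)
```

The point isn’t the made-up numbers; it’s that a predictable system gives you a measurable rate at all, so you can decide whether it is acceptable.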

I have related thoughts about guns and training that apply to personal gun ownership, but that’s for another post…

BitLocker, DarkNet, Development, Enterprise, Microsoft, Palladium, Security, Windows

Threat Model Irony

I do want to say that it is a well-written and thoughtful paper. The practical application of reconstructing keys from memory is cool. (But the overall attack vector is still not news. : ) I feel vindicated in some ways, actually. EVEN IF IT ISN’T NEWS! : )

It’s worth noting that back when we were debating what HW should, and shouldn’t, go into Palladium (late ’90s into 2001-ish), we spent quite a chunk of time talking to Intel and AMD about encrypted memory. There were some simple and wicked fast solutions that would have made this attack WAY harder, as the keys to decrypt memory would live in the CPU or memory controller rather than in RAM, and they could be de-persisted much more efficiently than RAM could.
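The core idea can be sketched as a toy model: what lands in RAM is ciphertext, while the key lives somewhere a cold-boot memory dump can’t reach. This is my own illustration, not the actual design that was discussed; the XOR “cipher” and key derivation here are stand-ins chosen purely to keep the example self-contained:

```python
# Toy model of the encrypted-memory idea: what lands in "RAM" is
# ciphertext, while the key lives somewhere a cold-boot dump can't
# reach (here a local variable stands in for a CPU/memory-controller
# register). XOR is for illustration only -- not a real memory cipher.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

secret = b"AES session key"
# Fixed, nonzero key for determinism; a real design would generate a
# fresh per-boot key inside the CPU or memory controller.
cpu_key = bytes((i * 37 + 11) % 256 for i in range(len(secret)))

ram_image = xor_bytes(secret, cpu_key)          # all a memory dump would see
assert ram_image != secret                      # dump doesn't expose plaintext
assert xor_bytes(ram_image, cpu_key) == secret  # CPU can still recover it
```

De-persisting a register-sized key is cheap; scrubbing gigabytes of DRAM fast enough to beat a cold-boot attack is not, which is why keeping the key out of RAM was the attractive part of the scheme.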

However when we threat modeled it the only attack we came up with at the time was based on DRM.

To justify RAM encryption we needed to treat the “owner” of the machine as an absolute, persistent, and viable threat, and that bothered me for two reasons:

  1. It meant that the notion of people being able to hack their own machines could become extraordinarily more difficult. I am hugely in favor of people at least having the potential to hack their own machines, so this really bugged me. I count on plucky rebels to keep evil empires in check.
  2. The only serious reason we could come up with back then to go so far in protecting memory was to protect DRM keys from machine owners, and so long as the analog hole existed it seemed particularly crazy to go to such lengths to protect something that was used to protect data that was leaking like a sieve everywhere else. Darknet FTW, as it were.

So I decided no encrypted memory for Palladium.

I’m not sure, in hindsight, that it wasn’t a mistake. Oh, the irony!