I just learned that there's a coven of locksmiths STILL mad at me because of this paper I published twenty years ago. So let me repost it here.
mattblaze.org/papers/mk.pdf
Then I wrote this paper as a follow-up, mostly out of spite. mattblaze.org/papers/safeloc…
After I published my master keying paper, I took a job as a professor at Penn. The in-house locksmith there went absolutely apoplectic when he learned I had been hired. He wrote frequent letters to the dean and everyone else demanding that I be fired or brought under control.
For the more than 12 years I was at Penn, the dean, our department chair, the provost, and the head of the campus police would get letters from this guy about how dangerous I was. They'd usually forward them to me, and I'd print them and put them on my door.
So we got into this vicious (to him) or virtuous (to me) cycle, where every time I'd do any research or give any talks involving locks, he'd complain, and that would provoke me to do more of it.
Anyway, he finally won and I quit, and now I'm another school's problem.
So I found out this guy also left Penn, and is now teaching courses about master keying, where he apparently devotes time to telling the students what a rotten guy I am for publishing these dangerous papers.
Real life can be like Twitter sometimes.
Whether I'm a rotten guy or not, I think physical security has much to teach infosec and cryptography, and perhaps the other way around, too. It repays careful study, and I'm glad I contributed to it in a small way.
An occasional complaint I get about the master key stuff is that it’s “irresponsible” to teach it to undergraduates.
I teach it because I find it's a good way to make some fairly advanced cryptographic concepts accessible early on, concepts that would otherwise require a deep background.
But that raises another question. Are there other topics they think are too dangerous to disclose to our students? Or just stuff about locks?
Because I really don’t know how to keep my students from learning about stuff. It’s kind of the opposite of what we’re set up to do.
Pretty much all my courses, it occurs to me, cover ways to commit various kinds of crime or otherwise cause harm. Fortunately, at least so far, the students who use that for good seem to greatly outnumber those who use it for evil.
From a 2003 op-ed in "The National Locksmith" (a now-defunct trade magazine) about me (one in a series) written by Billy Edwards, who was the author of a book on master keying and who was NOT happy about my paper.
Researchers like me who do discover vulnerabilities and practical attacks grapple every day with difficult questions about how best to disclose our results.
Generally happy to discuss it, but don't presume we've never thought about any of this before.
People who start with a hostile or patronizing interrogation of whether I'm an ethical person or whether I'm thinking about these issues are often surprised at how little interest I have in pursuing a discussion with them.
Because, seriously, come on.
Invariably, people who don't want "dangerous" things taught to students believe that they themselves can be trusted with it. It's just those OTHER PEOPLE who can't handle it.
Anyway, helping people learn more about things is my job. Stopping them from learning about things isn't.
I should mention that I wrote the master key paper not so much to be about locks, but as a sneaky way to teach some fairly advanced cryptologic concepts that you normally only get to in advanced specialized courses.
With mechanical locks, you can fairly easily understand concepts like keyspaces, attack work factors, online vs. offline attacks, leaking secrets via queries to oracles, and more. All without electricity, let alone computers.
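To make the oracle idea concrete, here's a minimal toy simulation of the adaptive attack from the master keying paper. The lock is treated as a black-box oracle; the MasterKeyedLock class and all the numbers are illustrative stand-ins for the physical mechanism, not code from the paper.

```python
PINS = 5     # pin positions
DEPTHS = 8   # possible cut depths per position

class MasterKeyedLock:
    """Each pin position has two shear points: one at the change key
    depth and one at the master key depth. A key opens the lock if its
    cut at every position matches either of the two."""
    def __init__(self, change, master):
        self._change = change
        self._master = master

    def opens(self, key):
        return all(k in (c, m)
                   for k, c, m in zip(key, self._change, self._master))

def recover_master(lock, change):
    """Vary one position at a time, holding the rest at the known
    change key depths; the lock acts as an oracle that leaks the
    master depth at that position. At most PINS * (DEPTHS - 1)
    queries, versus DEPTHS ** PINS keys in the full keyspace."""
    master = list(change)
    for i in range(PINS):
        for depth in range(DEPTHS):
            if depth == change[i]:
                continue
            probe = list(change)
            probe[i] = depth
            if lock.opens(probe):  # oracle query
                master[i] = depth
                break
        # if no other depth opens, the master depth equals the
        # change depth at this position
    return master

change = [3, 1, 4, 1, 5]
master = [2, 1, 7, 6, 5]
lock = MasterKeyedLock(change, master)
assert recover_master(lock, change) == master
```

With these toy parameters that's at most 35 probes against a keyspace of 8^5 = 32,768 bittings: the same online-query vs. offline-search distinction we teach in crypto courses.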
I think it also has value in the lock world, mind you - it's important that current and potential users of these flawed systems understand the risks - but that was just a bonus side effect.
The safe paper I wrote more as a case study in attacks and countermeasures. Safe lock design reveals a rich conversation between attackers and defenders, and it is one of the best examples of effective use of security metrics that I'm aware of. We should strive for that in computing.
What do I mean? No safe is perfect, but there are known, realistic lower bounds on the time required to breach or manipulate various models. So you might have a safe good for 30 minutes. What good is that? You now know that you need a burglar alarm with a <29-minute response!
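The composition is just an inequality, sketched here with hypothetical numbers: the defenders have to finish before the attacker's guaranteed minimum working time runs out.

```python
safe_rating = 30   # rated minimum minutes to breach this safe model
detect = 1         # minutes for the alarm to detect and dispatch
respond = 28       # minutes until responders arrive

# The safe + alarm system holds iff detection and response complete
# inside the safe's rated resistance time.
assert detect + respond < safe_rating
```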
At the risk of enormous understatement, we rarely achieve practical security metrics that compose that way in infosec, but if we did, it would solve a lot of problems.
I fully understand, by the way, that there are people who find my work irresponsible or even immoral. I think they're misguided. The whole scholarly enterprise of which I'm a part is devoted to the inherent value of discovery. It's how we improve things. It's the best we have.
It can be uncomfortable to discover vulnerabilities like this. But the choice is ultimately to either shut up (and leave it to people who would use it in secret to do harm to discover) or warn everyone so we can begin the work of fixing it. I pick the latter.
I choose to disclose partly out of realism and humility. I may be arrogant, but not so much so that I accept the delusion, however flattering, that no one else is capable of finding what I did (or perhaps has already done so). Disclosure makes the attacker's job harder.