
Open Source: How Secure?




  
Date: 15 Nov 99 | Writer: Simson Garfinkel | Location: Martha's Vineyard

Rethinking Security Through Obscurity

Upshot:

Open source boasts distinct security advantages over proprietary software. But that doesn't mean it's bulletproof. Fourth of five parts.
 
In recent years, the term "security through obscurity" has become the ultimate put-down for poorly designed security software. A famous posting by Black Unicorn on the Cypherpunks mailing list compared security through obscurity to the security that comes from hiding a house key under a doormat. "Security through obscurity" is also one of the primary charges leveled against proprietary encryption algorithms: proponents of these systems say that their secrecy makes them secure, but cryptographers frequently show that the opposite is true.

But security through obscurity only breaks down when it is hard for the secret to remain secret. It's pretty easy for a crook to look under your doormat and find the key. The orientation of the pins in your front door lock, on the other hand, is simply another secret that provides security. This secret is better because it is harder to learn: an attacker must pick the lock, try every possible key, or guess the correct one.
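The difference between the two secrets can be made concrete with some back-of-the-envelope arithmetic. The numbers below (5 pins, 10 cut heights) are illustrative assumptions, not the specifications of any particular lock:

```python
import math

# A typical pin-tumbler front-door lock: assume 5 pins, each cut to one
# of 10 possible heights. (Illustrative figures, not real lock specs.)
pins = 5
heights = 10

keyspace = heights ** pins          # number of distinct keys
print(f"possible keys: {keyspace}")                    # 100000
print(f"secret size:   {math.log2(keyspace):.1f} bits")  # ~16.6 bits

# The key under the doormat, by contrast, has a "keyspace" of one:
# the only secret is the hiding place, and there are few places to look.
```

Even a modest lock forces an attacker through a six-figure search, while the doormat secret survives only until someone thinks to lift the mat.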

Security through obscurity can also work if the obscurity is simply one part of an overall security architecture. For years the RC2 and RC4 algorithms were kept secret by RSA Data Security. But the secret was designed to protect RSA's commercial position, not to improve security. After the algorithms were leaked over the Internet, cryptanalysis by some of the world's best minds showed that they were apparently secure.

The same is true of computer security. Computer users are implored to pick a password that is "hard to guess." In other words, pick a password that is obscure. Digital signatures and public key systems are more secure than usernames and passwords because 1024-bit private keys are even harder to guess -- that is, they are even more obscure.
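A rough sketch makes the comparison quantitative. The figures below (95 printable ASCII characters, an 8-character password) are assumptions for illustration; note also that attacks on RSA factor the modulus rather than guessing key bits, so a 1024-bit key offers far fewer than 1024 bits of effective security — though still vastly more than any memorable password:

```python
import math

# "Obscurity" measured as raw search space, in bits.
PRINTABLE = 95        # printable ASCII characters (assumption)
password_len = 8      # a typical user-chosen password length (assumption)

password_bits = password_len * math.log2(PRINTABLE)
key_bits = 1024       # raw keyspace of a 1024-bit private key

print(f"8-char password: ~{password_bits:.0f} bits")  # ~53 bits
print(f"1024-bit key:    {key_bits} bits")

# Even discounting RSA's structure, the key's search space dwarfs
# anything a human can memorize and type.
```

The gap is the whole point: both are "obscure" secrets, but one is obscure enough to resist exhaustive search.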

Obscurity can produce a considerable amount of computer security, especially when the obscurity is one of many elements of a well-thought-out security posture. Last year my Internet service provider, Vineyard.NET, suffered a successful penetration from an attacker in Texas. The attacker found a vulnerability in one of our CGI scripts and proceeded to bring up an xterm with a shell on his home computer. He then started systematically trying every known Linux exploit, in an attempt to get root. But he didn't get very far. That's because my web server wasn't running Linux: it was running BSD/OS, a commercial version of the BSD operating system.

Because you have to pay for it, BSD/OS isn't very popular in the open source world. This is a problem for us at Vineyard.NET: many programs that we download from the Internet won't compile or work properly on BSD/OS. But the operating system's obscurity prevented a minor penetration from becoming a serious security incident. Strong attention to host-based security did the rest.

The CGI script that the attacker exploited was an open source program, part of the HylaFAX system. The vulnerability had been present for years and had been discovered about three months before the attack. We at Vineyard.NET had been remiss in not reading the announcement of the flaw on Bugtraq, and we paid for our inattention. On the other hand, if the security vulnerability in the HylaFAX program had never been publicized, we would never have been affected by the hole.

This essay isn't meant to argue that "security through obscurity" is a good thing. Instead, I'm arguing that obscurity is an increasingly ignored factor in overall security posture. Open source systems forgo the security benefits that come with obscurity; as a result, the rest of the operating system needs to be held to a correspondingly higher standard.

The open source security standard is that attackers should not be able to break into a computer even when they have the full source code and detailed descriptions of how it works. The theory, which I believe, is that software that can withstand this higher level of threat can also do a better job of withstanding casual attacks from poorly funded attackers -- like the one we suffered at Vineyard.NET.

The danger, one that we have been slow to admit, is that many programmers contributing to the open source community do not have the technical sophistication to write software that can live up to this high standard.

Part V: What Does Security Mean Anyway?
 
© 1999 Wide Open / Red Hat, Inc. All rights reserved.