Open source boasts distinct advantages over
proprietary software. But that doesn't mean it's bulletproof. First
of five parts. The theory of open source security is
simple, and it is endemic throughout the open source
community. The theory is so pervasive, in fact, that it can be
reduced without much effort to a four-word mantra:
Source
code breeds security. Most open source proponents
instinctively believe this theory. I used to, but increasingly I've
come to regard the theory as
a kind of dogma that
substitutes for critical thinking. Open source software is
frequently more secure than proprietary software, but it doesn't
have to be. In this series of columns I intend to explore why, and
to make some suggestions for future development.
I have long
been a proponent of the open source security theory because it does
such a good job of explaining what we all sense must be
true. Without the source code, it's hard to know if an application
program or an operating system has a security flaw.
But with
the source code, you can inspect it, show it to experts, and
try to ferret out all of the potential problems. If
somebody else finds a problem with your system, possession of the
source code lets you correct the flaw yourself. If you don't have
the source code, you are utterly dependent upon the software vendor
for a patch.
The open source security theory also does a
good job explaining why open source Unix-based operating systems
seem to be more secure than Microsoft's Windows 98 and NT --
especially for production systems. When SYN flood attacks and the
Ping of Death were discovered back in 1996, patches for Linux and
other free Unix operating systems were available within a matter of
days. That's because thousands of kernel programmers had the source
code and an understanding of the attacks: they raced one another
to get credit for posting the fix. Microsoft invariably
took longer to respond, if for no other reason than that fewer
programmers had access to NT's source code, and those programmers
were in great demand.
The theory of open source security
actually got its start among cryptographers in the 1970s. Back then,
the National Bureau of Standards was tasked with creating a federal
Data Encryption Standard. The cryptographers of the day, most
notably Whitfield Diffie, argued that an encryption algorithm could
be secure only if its details and the theory behind its design were
published and subjected to peer review.
Secret, proprietary algorithms could never be
secure, these cryptographers argued: there is simply no way
to know whether the secrecy is hiding a fundamental flaw.
In the years that have followed, the cryptographers' claims
have been shown to be mostly true: time and again, secret
proprietary encryption algorithms have been analyzed and cracked by
cryptographers. There is even a company in Utah,
Access Data, that sells a
system for decrypting files encrypted with the built-in
algorithms of Microsoft Word and many other
application programs. These programs all fail to provide adequate
security because they're based on weak proprietary encryption
algorithms. Indeed, Access Data's program can't decrypt files
encrypted with standard algorithms such as
triple-DES or 128-bit RC4. Those industry-standard encryption
algorithms are unbreakable, given our current understanding of
mathematics and physics.
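To see concretely why weak proprietary schemes crumble, consider a
toy cipher of the kind early versions of Word reportedly used: the
file is simply XORed with a short repeating password. The Python
sketch below is mine, not Access Data's actual technique, and the
password and file contents are hypothetical; it shows how a few
predictable bytes of known plaintext (every file format has a
telltale header) hand an attacker the entire key.

    # A repeating-key XOR "cipher": the same routine encrypts and decrypts.
    def xor_cipher(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    # Recover the key by XORing known plaintext against the ciphertext.
    def recover_key(ciphertext: bytes, known: bytes, key_len: int) -> bytes:
        return bytes(c ^ p for c, p in zip(ciphertext[:key_len], known))

    password = b"secret"                                  # hypothetical key
    document = b"{\\rtf1 The merger closes on Friday."    # hypothetical file
    ciphertext = xor_cipher(document, password)

    # The attacker knows only that RTF files begin with the bytes "{\rtf1 "
    # (in practice, the key length is found by simply trying small values).
    key = recover_key(ciphertext, b"{\\rtf1 ", len(password))
    assert xor_cipher(ciphertext, key) == document        # file recovered

Against triple-DES or 128-bit RC4 there is no such shortcut: the
only known avenue of attack is brute force across a key space far
too large to search.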
But applications and operating
systems are not encryption algorithms. And as we'll see, while open
source may breed security, it can offer no concrete assurances.
Part II: Fiascos
Simson Garfinkel is a computer security consultant and
author/co-author of several books on the subject, including "Practical Unix and
Internet Security," published by O'Reilly & Associates.