Battle of the Sources
Open source, as used today, is not necessarily more or less secure
than proprietary closed-source solutions. However, with automated
program analysis tools, open source has the potential to be
dramatically more secure than its commercial alternatives.
By Simson Garfinkel
In November 2003, a malicious hacker tried to compromise the Linux
operating system—not a particular computer running Linux, mind you, but
the whole thing. Here's how: After taking over a series of computers at
an undisclosed university, the individual (or individuals) penetrated a
server used by the Linux development team. Once there, the person
inserted two lines of code—a so-called back door—into the very source
code that is used to compile the Linux operating system.
This back door was quite elegant. To exploit
it, all anyone would have to do is run a two-line program and—wham!—the
attacker would instantly have his privileges upgraded to "root," the
Linux equivalent of the Windows System Administrator: an instant,
trivial privilege escalation. Had the code been compiled and
distributed, the implications could have been far-reaching.
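According to published accounts of the incident, the change amounted to little more than a single character: an assignment where a comparison belonged, buried in what looked like a routine argument-validity check in the kernel's wait4() code. The C program below is an illustrative sketch of that pattern, with simplified names and invented flag values, not the verbatim kernel patch.
/* backdoor_sketch.c -- an illustrative sketch modeled on published accounts
 * of the 2003 incident; simplified names, NOT the actual kernel code. */
#include <stdio.h>

#define WCLONE 0x00000080u   /* flag values invented for this demo */
#define WALL   0x40000000u

struct task { unsigned int uid; };            /* stand-in for the kernel's task struct */
static struct task current_task = { 1000 };   /* an ordinary, unprivileged user */

/* Looks like a routine validity check.  But "current_task.uid = 0" is an
 * ASSIGNMENT, not a comparison: calling this with WCLONE|WALL quietly makes
 * the caller root (uid 0) while the function appears to do nothing unusual. */
static long wait4_sketch(unsigned int options)
{
    if ((options == (WCLONE | WALL)) && (current_task.uid = 0))
        return -1;           /* never reached: the assignment evaluates to 0 */
    return 0;
}

int main(void)
{
    printf("uid before: %u\n", current_task.uid);   /* 1000 */
    wait4_sketch(WCLONE | WALL);                    /* the attacker's "two-line program" */
    printf("uid after:  %u\n", current_task.uid);   /* 0, that is, root */
    return 0;
}
To a reviewer skimming a diff, the extra "=" reads like a harmless typo, which is exactly what makes this style of back door so dangerous.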
Of course, this hack was not to be. The attack was discovered less
than 12 hours later when automated tools used by the Linux developers
detected the unexplained discrepancy. The code was quickly removed, and
no computers were jeopardized.
What's so maddening, from the perspective of today's CSOs, is that
it is theoretically impossible to look at any piece of sufficiently
complicated code and tell for sure if it has a security vulnerability.
In fact, it's even impossible to determine if an intentional back door
has been added to a program. The problem isn't that terms such as
"vulnerability" and "back door" aren't well-defined. The problem is
that programming languages are too powerful: It is possible to so
completely hide functions and features inside a program that the only
way to find them is by running the program itself—and then it's too
late!
The good news for CSOs is that if you are willing to settle for
less-than-perfect security, then many common programming flaws and even
intentional back doors can be readily detected with a new generation of
automated program analysis tools. The tools will help you find
vulnerabilities, and some even perform a risk-benefit analysis to see
if the vulnerabilities are worth fixing.
Planted Attacks
The Linux attack demonstrates a very real risk in today's
open-source software: Because the software is by definition distributed
in source code form, it's quite easy for an attacker with even
relatively modest skills to plant a malicious attack.
Indeed, it's happened before.
Six years ago, a hacker broke into a computer in the Netherlands that was the distribution
site for a widely used network access control tool called "TCP Wrappers." Once again, a back door was added. But
this time, the vulnerability was put into a piece of code that was
being actively downloaded and deployed. Between 7:16 a.m. and 4:29 p.m.
on Jan. 22, 1999, a total of 52 sites downloaded the compromised
program. Some may have even installed it. The author of the program,
Wietse Venema, discovered the unauthorized alteration and stayed up
into the early morning notifying all of the affected sites. He then
wrote a disclosure for the Computer Emergency Response Team (CERT) at
Carnegie Mellon University, which published the alert the following
day.
Some proponents of open-source software argue that open source is
inherently more secure than closed-source proprietary software because
of the "many eyes" theory: With many eyes looking at the code,
vulnerabilities can be rapidly identified and fixed, and then the fixes
can be distributed. But while this theory sometimes works, mostly it's
wrong. Sometimes the eyes just aren't looking, even though the source
code is readily available; sometimes the eyes that are looking aren't
properly trained; and sometimes the eyes find what they are looking
for—except the eyes are working for the enemy.
The "Many Eyes" Theory One of the biggest failures of the
many eyes theory was uncovered in February 1996, when researchers at
Purdue University discovered a devastating bug in the Kerberos Version
4 random number generator. The Kerberos V4 system had been developed at
MIT and distributed in source code form to dozens of companies, all of
which had incorporated the code into their products without ever
looking at it. When it was finally discovered, almost 10 years after it
had been introduced, the vulnerability allowed the researchers to
forge Kerberos keys and break into Kerberos-protected systems within
seconds. A patch
was quickly created and distributed—one of the advantages of open
source is that security fixes are generally easier to distribute and
install than fixes for closed-source programs. However, for almost a
decade, anybody who knew about the security flaw could penetrate any
Kerberos-protected system on the Internet.
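The usual account of the flaw is that the weakness lay not in the cipher but in how the random number generator was seeded: the "random" session keys were derived from values an attacker could largely guess. The C program below is a generic illustration of that class of bug, not the actual Kerberos V4 code; the seed construction and key size are simplified for the demonstration.
/* weak_seed_sketch.c -- a generic illustration of the predictable-seed class
 * of bug; this is NOT the actual Kerberos V4 code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* Derive an 8-byte "session key" from a PRNG seed. */
static void make_key(unsigned int seed, unsigned char key[8])
{
    srand(seed);
    for (int i = 0; i < 8; i++)
        key[i] = (unsigned char)(rand() & 0xff);
}

int main(void)
{
    /* The "server" seeds its generator with the clock and its process ID,
     * values an attacker can guess to within a small range. */
    unsigned int secret_seed = (unsigned int)time(NULL) ^ (unsigned int)getpid();
    unsigned char secret_key[8];
    make_key(secret_seed, secret_key);

    /* The "attacker" recovers the key by enumerating plausible seeds. */
    unsigned int now = (unsigned int)time(NULL);
    for (unsigned int t = now - 5; t <= now; t++) {
        for (unsigned int pid = 0; pid < 32768; pid++) {
            unsigned char guess[8];
            make_key(t ^ pid, guess);
            if (memcmp(guess, secret_key, 8) == 0) {
                printf("key recovered: seed %u matched\n", t ^ pid);
                return 0;
            }
        }
    }
    printf("key not recovered\n");
    return 1;
}
A search space this small can be exhausted in well under a second on ordinary hardware, which is why a seeding mistake can undo an otherwise sound cryptographic design.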
The Kerberos vulnerability was an accident. But information warriors
worry that a bad guy might join the Linux or Apache development team
and then contribute software with subtle flaws—flaws that could be
exploited by someone with just the right know-how. It's unclear whether
the Linux attack would have been detected had the back door been
covertly checked into the source code by a trusted member of the Linux
development team rather than by an outside attacker.
But don't go throwing away your open-source software: Far worse
security problems have been discovered in commercial offerings, as
anybody who runs Microsoft Windows is aware. What's more, a surprising
number of so-called Easter eggs have slipped into commercial programs,
indicating that even the program managers at Microsoft aren't in
complete control of their programmers. For example, Microsoft Excel
2000 had an entire video game hidden, unauthorized, inside the program.
You can Google for the playing instructions.
Tools That Make Open Source More Secure
One tool that can make open-source software more secure is
CodeAssure by Secure Software in McLean, Va. Another is Prexis by Ounce
Labs in Waltham, Mass. Both systems can analyze a large application
written in C, C++ or Java and find a wide range of bugs that hackers
could in theory exploit. Usually, these are bugs that result from
simple coding mistakes. Sometimes the bugs are the result of design
decisions. And occasionally, the bugs were put there intentionally:
After all, if some bad guy is going to put a back door into a program,
it's much safer to make the code look like a mistake rather than an
intentional attack. This is especially true if the bad guy happens to
be one of your own programmers.
These programs are useful for analyzing both open-source programs
that you might download and code generated internally by your own
team. In either case, the programs produce a detailed report of
vulnerabilities; they even create step-by-step instructions on how to
exploit them!
These systems operate on two levels. First, they perform a syntactic
analysis of software, looking for common coding mistakes. Then they
perform a detailed analysis of the underlying algorithms, tracing the
flow of data through the system, monitoring application programming
interfaces (APIs) to make sure that they are used correctly, validating
that proper error handling is in place—and more.
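The vendors' exact rule sets are proprietary, but the flavor of a typical finding is easy to show. The C program below is a generic example of the kind of defect such tools report, untrusted data copied into a fixed-size buffer with no length check, along with the bounded version a tool would typically recommend. It is an illustration of the bug class, not output from CodeAssure or Prexis.
/* overflow_sketch.c -- the kind of flaw a source-code analyzer flags.
 * A syntactic pass spots the unbounded strcpy(); a data-flow pass notices
 * that the copied data arrives from an untrusted request ("tainted"). */
#include <stdio.h>
#include <string.h>

struct request { char body[4096]; };   /* attacker-controlled input */

/* FLAW: req->body may exceed 32 bytes, overflowing name[] on the stack. */
static void handle_username(const struct request *req)
{
    char name[32];
    strcpy(name, req->body);           /* unbounded copy -- what the tool reports */
    printf("hello, %s\n", name);
}

/* The remediation such a tool typically suggests: bound and terminate the copy. */
static void handle_username_fixed(const struct request *req)
{
    char name[32];
    snprintf(name, sizeof name, "%s", req->body);
    printf("hello, %s\n", name);
}

int main(void)
{
    struct request req;
    snprintf(req.body, sizeof req.body, "alice");
    handle_username_fixed(&req);   /* safe path, used for the demo */
    (void)handle_username;         /* flawed path left uncalled on purpose */
    return 0;
}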
About the only thing these systems don't do is test your
application. Testing simply takes too long to uncover the kind of deep
problems that these tools can bring to the surface. Instead, these
tools try to construct little logic proofs about the programs that you
throw at them. That is, rather than trying to prove that no
vulnerability exists (an impossible task), they take the opposite
approach and try to prove that an exploit does exist, and they prove it
by finding the specific exploit.
Mike Dwyer, director of quality assurance at Global Payments, a
leader in the delivery of payments and associated information in
Atlanta, has been evaluating code analysis programs during the past few
months. He says these kinds of tools provide a different approach that
is particularly well-suited to analyzing open-source software within
mission-critical applications. "If you look at the industry, there is
strong evidence that there have been a lot of wonderful [open-source]
success stories out there," he says. "But there is a security risk: The
problem with open source is, admittedly, if it wasn't developed
in-house, then you really don't know what's in there. I can't think of
a better way of trying to assess what's within an open-source project
than with a tool like this."
Insiders at Ounce Labs and Secure Software report that both
companies have found significant security vulnerabilities with
open-source systems that are now widely used throughout the industry.
If the many eyes theory of open-source software development is working
as advertised, these vulnerabilities have been reported back to the
developers and fixes are on the way.
But the beauty of open source is that you don't need to wait for the
fixes to be adopted: Any organization that wants to make open source a
critical part of its infrastructure can purchase these tools, use them
to find the vulnerabilities and then make its own risk-benefit
analysis. This approach makes good sense, and it's fundamentally not an
option when you buy your software from companies such as Apple or
Microsoft.
Simson Garfinkel, PhD, CISSP, is spending the year
at Harvard University researching computer forensics and human thought.
He can be reached via e-mail at machineshop@cxo.com.
Most Recent Responses:
The author has seemingly ignored the evidence of his own article that a
properly regimented change control process is capable of identifying
unauthorized or changed elements of program code - even simple version
to version code compare techniques will highlight the implementation of
back doors. Or is it that he believes that the owners of LINUX source
do not bother with the elementary precautions and that they were 'just
lucky' in finding the unauthorized code?
Phil Cogger
Info Sec Mgr