Ain’t No Flyswatter Big Enough
What do you do when somebody breaks into one of
your organization’s servers? When waving your hands wildly doesn’t
help, you’ll need an intrusion detection plan.
BY SIMSON GARFINKEL
I first started seeing this problem back in the late 1980s.
Hackers would break into a system, set up a few accounts of their own,
and then start installing Trojan horses and other kinds of back doors
so that they could always return. Trying to contain the damage was like
swatting at flies. System managers might remove the bad guys’ accounts, but the intruders would come back through the back doors and re-create them. Managers might reinstall the operating system, but the bad guys
would still come back—perhaps through some new service that had been
left behind. And since the bad guys invariably had root-level access,
there was always a chance that they would mount a revenge attack and
wipe out the system.
There was really only one way to
prevent systems like this from spiraling out of control: save whatever
you could, then log out and “nuke from orbit.” In other words, a
standard operating procedure was to copy critical files and data off
the system, then reformat its hard drive and do a clean installation.
The “nuke from orbit” approach works, but as with real-life atomic weapons, most incidents don’t call for such drastic measures. It’s incredibly wasteful to spend a week or more reinstalling a computer’s operating system and applications because of a few modified files. During that time, you are either running a compromised machine or your server is down.
What’s even worse, nuking
occasionally doesn’t work. Yes, a complete reinstall would get rid of
the compromised log-in program and mail server that the bad guys left
behind, but reinstalling the operating system doesn’t do much good if
the original penetration was due to a stolen password.
Into this mess came what should
properly be considered one of the computing industry’s first intrusion
detection systems. Called Tripwire, the program took a snapshot of the
computer’s operating system, recording the modification date and a
cryptographic checksum for every program on a given computer system.
This database would then be stored not on the computer itself, but on
removable media—usually a floppy disk. If you discovered that your
machine was compromised, you could run Tripwire a second time and
compare the two databases to find which files the bad guys had
modified. Even better, you could run Tripwire on a regular basis to
find out if your machine was compromised.
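The core idea is simple enough to sketch in a few lines of Python. This is not Tripwire’s actual code, just an illustration of the snapshot-and-compare approach; the paths are placeholders.

    import hashlib
    import json
    import os

    def snapshot(root):
        """Record mtime and SHA-256 digest for every file under root."""
        db = {}
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    h = hashlib.sha256()
                    with open(path, "rb") as f:
                        for chunk in iter(lambda: f.read(65536), b""):
                            h.update(chunk)
                    db[path] = {"mtime": os.stat(path).st_mtime,
                                "sha256": h.hexdigest()}
                except OSError:
                    pass  # unreadable file; a real tool would log this
        return db

    if __name__ == "__main__":
        # Write the baseline somewhere the monitored host can't alter it,
        # such as removable media.
        baseline = snapshot("/usr/bin")
        with open("/mnt/floppy/baseline.json", "w") as out:
            json.dump(baseline, out)

Run a second snapshot later, and any file whose digest differs from the baseline has been changed, whatever its timestamp claims.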
A graduate student named Gene Kim
and his adviser, Gene Spafford, developed Tripwire at Purdue
University. (Professor Spafford and I are coauthors on several computer
security books.) Realizing he had a winner, Kim took the research and
cofounded a company, also called Tripwire, based in Portland, Ore.
The Tripwire of today bears little
resemblance to the program developed at Purdue. For starters, the
program runs on both Windows and Unix. It has a management console that allows many systems running Tripwire to be centrally administered. It
has a policy language that allows you to specify which files can be
changed and which need to remain the same. There is even a reporting
engine and an automatic alert notification system. In many ways, these
changes mirror the changes that the whole computing industry has
undergone during the same time period.
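To give a flavor of what a change policy might express, here is a hypothetical rule table in Python. The real Tripwire policy language has its own syntax; these rules and paths are purely illustrative.

    # Hypothetical change rules, illustrative only; real Tripwire policy
    # syntax differs.
    RULES = [
        ("/var/log/", "allow"),   # logs change constantly; don't alert
        ("/etc/",     "alert"),   # configuration should be stable
        ("/usr/bin/", "alert"),   # system binaries should never change
    ]

    def classify(path):
        """Return the action of the longest matching rule; alert by default."""
        best_prefix, action = "", "alert"
        for prefix, rule_action in RULES:
            if path.startswith(prefix) and len(prefix) > len(best_prefix):
                best_prefix, action = prefix, rule_action
        return action

    print(classify("/etc/passwd"))        # alert
    print(classify("/var/log/messages"))  # allow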
But the basic premise of Tripwire is
still strong: It’s impossible to run a reliable computing system when
you don’t know which software (and configuration) files are being
changed.
Curbing Unauthorized Access
On the other hand, as Kim told me in a recent interview, he and
Spafford didn’t understand the extent of what’s now called the “change
management” problem when they invented Tripwire back in the early 1990s.
Although the primary motivation for creating Tripwire was to detect
intrusions, the real value in the business world, Kim says, has been in
detecting unauthorized changes from authorized personnel. While at times these might be from malicious employees bent on harming the
company, more often they are well-intentioned or accidental changes
from employees who were simply trying to do their jobs.
I have to agree with Kim. I’ve seen
mission-critical servers shut down because a file was accidentally
created in a critical directory or because an undocumented
configuration file got overwritten by what was supposed to be a “minor”
upgrade. And the complexity of change management is only compounded by
the inability of most operating systems to track precisely what happens
when a patch or upgrade is applied.
These are some of the reasons that
Tripwire’s new marketing focus isn’t security against hackers as much
as the general problem of operational continuity and change management.
Tripwire is increasingly being positioned as a tool both to detect
unauthorized changes to configuration files and programs, as well as to
verify that the proposed or required changes actually get made.
And for these reasons, most Tripwire users have stopped storing their databases on removable floppy disks or CD-ROMs; these days you can get nearly the same protection by storing the database over a LAN on another computer. The security isn’t
quite as good, because there is always a chance that the bad guys will
break into the computer that stores your Tripwire database.
Nevertheless, many system administrators are happy to make this
compromise in the interest of convenience.
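Here’s one rough way to approximate the floppy-disk discipline over a LAN, sketched in Python: push the baseline to a second machine and keep a detached checksum of the database itself, so tampering with the remote copy is at least detectable. The host name and paths are made up.

    import hashlib
    import subprocess

    DB = "baseline.json"

    # Record a detached checksum of the database itself; keep it offline.
    with open(DB, "rb") as f:
        print("baseline sha256:", hashlib.sha256(f.read()).hexdigest())

    # Copy the database to a second machine on the LAN (host is made up).
    subprocess.run(
        ["scp", DB, "admin@dbhost.example.com:/srv/tripwire/"],
        check=True)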
Randy Barr has been running Tripwire
on roughly 1,000 mission-critical servers at WebEx Communications for
two years. Barr tries to run a tight ship. Every change to the servers
has to be proposed, documented and finally approved by a change-control
committee. But occasionally things slip through the cracks.
“There was a user who went in,
outside of the change-control process, and made a change,” Barr says.
“He created a file in a directory to keep notes for himself.”
You might not think a file with a
few notes in it would be a problem—and in most cases it’s not. But
because the wrong file created in the wrong directory can potentially
shut down a server, WebEx’s policy is that unauthorized changes such as
these should not happen.
Further investigation revealed that
the employee wasn’t supposed to be logged in to the system at all. “The
policy states that this is a disciplinable action, so that’s how we
handled it,” says Barr.
Tripwire
isn’t the only way that WebEx could have caught the employee
infraction. A unified logging system with alert management would have
caught the unauthorized log-in to the server in question. The creation
of the file itself could have been caught with the use of C2-level
audit logs. (The phrase “C2” comes from the U.S. government’s “Orange
Book” that defines requirements for different kinds of secure operating
systems.) But each of these approaches has problems. A unified log
system tends to generate a lot of false positives, since many log-ins
are inconsequential. C2 logging, on the other hand, generates a
tremendous amount of information and can put a significant drain on
system resources.
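To see why, consider a toy unified-log check in Python. It assumes standard syslog-style sshd entries and flags any accepted log-in by a user outside an approved list; on a busy server, most of what it prints will be routine log-ins, which is exactly the false-positive problem.

    import re

    APPROVED_USERS = {"deploy", "backup"}  # accounts expected to log in
    ACCEPTED = re.compile(
        r"sshd\[\d+\]: Accepted \w+ for (\w+) from ([\d.]+)")

    def scan(lines):
        for line in lines:
            m = ACCEPTED.search(line)
            if m and m.group(1) not in APPROVED_USERS:
                print("unauthorized log-in?", m.group(1),
                      "from", m.group(2))

    with open("/var/log/auth.log") as log:
        scan(log)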
Barak Engel, CSO at InStorecard, says his startup uses Tripwire to comply with Visa’s Cardholder Information Security Program (CISP). The program, launched by Visa in June 2001, mandates security standards for merchants and service providers.
“We needed to have a data integrity
solution as part of our Visa CISP,” Engel says. He has Tripwire
configured to watch “critical operating system portions and our
product directories—things that could actually affect or be affected by
any kind of malicious adversary” of the company’s Microsoft-based
servers. Tripwire automatically e-mails regular reports and
notifications whether or not it has detected unauthorized changes.
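A minimal version of that kind of report mailer might look like the following Python sketch; the addresses and mail host are placeholders.

    import smtplib
    from email.message import EmailMessage

    def mail_report(body):
        msg = EmailMessage()
        msg["Subject"] = "Nightly integrity report"
        msg["From"] = "tripwire@example.com"
        msg["To"] = "security@example.com"
        msg.set_content(body)
        with smtplib.SMTP("mail.example.com") as s:
            s.send_message(msg)

    # Send the report even when nothing changed, as Engel describes.
    mail_report("No unauthorized changes detected.")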
Unfortunately, there are certain
attacks that can be very hard for Tripwire to detect. If an attacker
breaks into a computer and is able to modify the operating system’s
kernel, the system can be programmed to fool Tripwire into thinking
that files haven’t been modified when in fact they have. Attacks of this kind are commonly packaged into programs called “root kits.”
In one version of this attack, the
attacker makes a copy of each program that he or she wants to modify.
The operating system is then modified so that when the computer tries
to read the contents of a program, it reads from the original. But when
it tries to execute a program, it executes the version that’s been
modified. In another variant of this attack, the bad guy modifies the
Tripwire program itself so that the program stops looking at the files
that have been compromised.
Both of these attacks can be easily defeated by booting the affected computer from a CD-ROM. If you run
with an operating system that’s known to be good and a fresh copy of
Tripwire, you can rapidly determine where the modifications have been
made.
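Sketched in Python, the offline check might look like this. It assumes the suspect disk is mounted read-only at /mnt/suspect and reuses the snapshot() helper from the earlier sketch, imported here from a hypothetical snapshot_tool module.

    import json

    from snapshot_tool import snapshot  # hypothetical module holding the
                                        # snapshot() helper sketched earlier

    MOUNT = "/mnt/suspect"  # compromised disk, mounted read-only

    with open("/mnt/floppy/baseline.json") as f:
        baseline = json.load(f)

    current = snapshot(MOUNT)  # re-hash everything with trusted tools

    for path, old in baseline.items():
        new = current.get(MOUNT + path)
        if new is None:
            print("missing:", path)
        elif new["sha256"] != old["sha256"]:
            print("modified:", path)

Because the hashing runs under a known-good kernel and a fresh copy of the checker, a root kit on the suspect disk never gets a chance to lie.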
Of course, another alternative is to simply evacuate the data that you care about, blast off and nuke from orbit.
Simson Garfinkel, CISSP, a Boston-area technology writer, can be reached at machineshop@cxo.com.
ILLUSTRATION OF FLY BY ANASTASIA VASILAKIS
Most Recent Responses:
An efficient and very comprehensive way of managing security events, resource use, system changes and the like is to capture the raw packet data that creates the event logs, and to analyze and correlate that data behaviorally over long periods of time. From a behavioral point of view, any change to any resource on the network becomes apparent and can then be examined to determine malicious intent. When that analysis is correlated with IDS or IPS alerts from the perimeter and inside the network, and with continuous vulnerability scans, detection and forensic capability improve and false positives become a minor issue.
Scott Paly
CEO
Global DataGuard, Inc.