Unlocking Our Future
A look at the challenges ahead for computer security
BY SIMSON GARFINKEL
Forty-two years ago, John F. Kennedy's commitment to landing a
man on the moon and returning him safely to the Earth was the epitome
of a "Grand Challenge"—the attempt to tackle a problem in science or
engineering that is easy to describe but monumentally difficult to
solve. More recently, the field of supercomputing has used the Grand
Challenge concept as a tool for guiding research and funding priorities
for such activities as modeling the global climate or accurately
predicting weather many days in advance.
The notion of a Grand Challenge had left some—including me—wondering if computer security has an appropriate equivalent.
Well, it does. In November, I had
the honor of joining 50 of the world's leading computer security
researchers to do just that—helping to pinpoint the
"Grand Research Challenges" we are facing today in information security
and assurance. Conference organizers from the Computing Research
Association (CRA) and the Association for Computing Machinery solicited
short essays from around the world, then invited the authors of the 50
most promising proposals to a four-day intensive workshop aimed at
finding the commonalities in those proposals and articulating them.
After days of round-the-clock
meetings and late-night wordsmithing, this predictably cantankerous
crowd managed to come up with four challenges deemed worthy of
"sustained commitments." We identified the hard problems that we don't
know how to solve today but that might be solvable within a decade
(assuming enough research dollars are spent). Perhaps most important,
they are problems that need to be solved if we want to continue to
enjoy the fruits of the computer revolution.
First on the list of Grand
Challenges is the elimination of "epidemic-style attacks" within 10
years. Certainly it would be nice to return to an Internet that is
largely free of viruses, worms and spam. But it is interesting to note
that the conference attendees didn't think the solution to viruses and
worms was for people to install antivirus software
and keep their systems up-to-date—two of the primary solutions
recommended last year by the National Strategy to Secure Cyberspace.
Instead, we agreed that what's needed is a fundamentally new approach
to solving the problem, perhaps by moving more of the responsibility to
Internet service providers.
Large-Scale Systems
The second Grand Challenge: Develop tools and principles for
creating large-scale systems for applications that are really
important—so important, in fact, that today these systems are largely
still on paper (or at least on standalone computers not connected to
the Internet). Two examples from the CRA workshop are medical records
systems and electronic voting. In the case of medical records, we
agreed that doctors and patients should be able to benefit from
Internet technology without having patient records routinely stolen by
Russians and ransomed back to the hospital administration (right?). And
voting systems present all of the same security challenges with the
added twist of auditing. We asked ourselves, How do you build a system
that ensures the privacy of the ballot box while still preventing
somebody from electronically stealing an election?
Certainly, the second challenge
seems more doable than the first. Various pieces of the puzzle have
been discussed at length; perhaps all we need to do is assemble those
pieces into a coherent whole. Some researchers argue, for
example, that every electronic voting machine should have a small
internal printer and a roll of paper just to prevent the computer system
from accidentally zeroing out votes for one candidate and assigning
them to another.
But a competing proposal would have
a second computer recording the votes with a digital camera. Indeed,
this might not be a Grand Challenge at all if it weren't so terribly
important and if we hadn't, as a society, done such a bad job with our
voting system attempts to date.
Measuring Risk
The third Grand Challenge doesn't seem all that difficult—that is,
until you try to do it. It calls for developing quantitative
measurements of risk in information systems. But then, consider this
riddle: What's the percentage chance that a programming flaw will be
discovered in Windows within the next 30 days that will allow an
attacker to get administrative privileges on your system? And do the
Linux and OpenBSD operating systems have a higher or lower chance of a
similar flaw being discovered? Many CEOs would like answers to such
questions. But with computer systems today, there is no reliable way to
measure risk.
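To see why the riddle is so slippery, consider a back-of-the-envelope sketch (in Python) of the kind of answer a CEO might be handed today. It assumes that flaw discoveries follow a simple Poisson process and plugs in invented historical rates; both the model and the numbers are illustrative assumptions, and the lack of trustworthy inputs for them is precisely the point of the challenge.

import math

def p_at_least_one_flaw(flaws_per_year: float, window_days: int = 30) -> float:
    """Probability of at least one qualifying flaw in the window, under a Poisson model."""
    expected = flaws_per_year * window_days / 365.0   # expected discoveries in the window
    return 1.0 - math.exp(-expected)                  # P(N >= 1) = 1 - e^(-lambda)

# Made-up yearly rates of privilege-escalation flaws, purely for illustration:
for name, per_year in [("Windows", 12.0), ("Linux", 9.0), ("OpenBSD", 2.0)]:
    print(f"{name}: about {p_at_least_one_flaw(per_year):.0%} chance in the next 30 days")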
If we could reliably measure the
risk associated with a particular piece of software, we could then give
an estimate of how much it would cost to decrease the risk—or,
alternatively, how much we could save by accepting it. Banks have been
making these kinds of risk-benefit decisions for decades in the realm
of physical security. Infosecurity professionals, on the other hand,
have all but given up trying to rate the risk of different systems.
Instead, the practitioners have developed sets of best practices that
they hope will decrease the chances of a computer being compromised.
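That surrender is easy to understand. The bank-style arithmetic itself is trivial once credible numbers exist, as the minimal annualized-loss-expectancy (ALE) sketch below shows; the catch is that every figure in it is a placeholder assumption, and placeholders are all the software world can currently offer.

def ale(single_loss: float, annual_rate: float) -> float:
    """Annualized loss expectancy: single-loss expectancy times annual rate of occurrence."""
    return single_loss * annual_rate

# All figures below are assumptions, not measurements.
baseline     = ale(single_loss=250_000, annual_rate=0.20)  # assume a 20% yearly chance of a $250K incident
with_control = ale(single_loss=250_000, annual_rate=0.05)  # assume a control cuts that to 5% per year
control_cost = 15_000                                      # assumed yearly cost of running the control

print(f"ALE without the control: ${baseline:,.0f}")
print(f"ALE with the control:    ${with_control:,.0f} plus ${control_cost:,} to run it")
print(f"Net expected saving:     ${baseline - with_control - control_cost:,.0f} per year")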
Alas, there are many problems with
"best practices." The most obvious is that they really don't tell you
how secure you happen to be at the moment. Instead, they simply tell
you that you are as secure as everybody else who is following the same
practices. Likewise, best practices give no metric for making
purchasing decisions. That's why reviews comparing antivirus systems or
firewalls tend to stress other factors, such as how much the systems
cost, how fast they run and how easy they are to manage. Today, we just
don't have good tools for measuring and quantifying the actual
differences between various security applications and appliances.
Control Freaks
Our final challenge is to make security easier to use—specifically,
to give end users control over their own computers. That is especially
important as we move into a world in which each person will have many
different computers, all with different capabilities, architectures and
security models.
Infosec's Grand Challenges for the Future
1. Eliminate epidemic-style attacks (viruses, worms, spam) within 10 years.
2. Develop tools and principles that allow construction of large-scale
systems for important societal applications—such as medical records
systems—that are highly trustworthy despite being attractive targets.
3. Develop quantitative information-systems risk management to
be at least as good as quantitative financial risk management within
the next decade.
4. Give end users security controls they can understand and
privacy they can control for the dynamic, pervasive computing
environments of the future.
SOURCE: COMPUTING RESEARCH ASSOCIATION AND THE ASSOCIATION FOR COMPUTING MACHINERY
This "secure usability" chestnut is a hard one to
crack. After all, for years security experts have been telling
everybody else that security and usability are diametrically opposed:
If you make a system more secure, you make a system harder to use, and
vice versa. Making security something that users can understand might
mean that we need to fundamentally change the way that we think about
and work with information systems.
Consider the role of education. It's easy to blame many of the recent Internet worm epidemics on the failure of users to download and install software updates. At the height of the Blaster worm,
Microsoft was running full-page advertisements in many newspapers
giving people instructions on how to enable XP's built-in Internet
Connection Firewall. But this massive educational campaign wouldn't
have been needed if Microsoft had instead configured XP to
automatically download and install its patches. Are we better off
trying to educate users who do not wish to be educated, or should we be
automating as many processes as possible, knowing that those automated
systems will occasionally make a mistake—and that they, themselves,
can be subverted? That's part of the riddle of the fourth Grand
Challenge.
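To make the trade-off concrete, here is a minimal Python sketch of the automated alternative, with a hypothetical patch manifest and installer hook: a patch is applied only if its digest matches the manifest, with no user in the loop. It also shows exactly where such automation can be subverted, because whoever controls the manifest controls every machine that trusts it.

import hashlib

# Assume the manifest was fetched over an authenticated channel; the file name
# and digest are placeholders (the digest is simply SHA-256 of an empty payload,
# so the first demo call below succeeds).
TRUSTED_MANIFEST = {
    "hypothetical-patch.bin": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def apply_patch(name: str, payload: bytes) -> None:
    # Hypothetical installer hook; a real one would write files, restart services, etc.
    print(f"installing {name} ({len(payload)} bytes)")

def verify_and_apply(name: str, payload: bytes) -> bool:
    """Install the patch only if its SHA-256 digest matches the trusted manifest."""
    expected = TRUSTED_MANIFEST.get(name)
    actual = hashlib.sha256(payload).hexdigest()
    if expected is None or actual != expected:
        return False          # unknown or tampered patch: refuse, no user dialog required
    apply_patch(name, payload)
    return True

print(verify_and_apply("hypothetical-patch.bin", b""))          # True: digest matches
print(verify_and_apply("hypothetical-patch.bin", b"tampered"))  # False: refused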
Creating these challenges was a
useful exercise for the researchers, academics and government employees
who attended the workshop. But the real value of this work was putting
a signpost in the ground pointing in the direction in which we should
be marching. It's easy to get caught up in the tactical elements of
computer security, with all of its encryption
algorithms, public-key infrastructures, disk sanitization and other
nuts-and-bolts issues. Ultimately, though, we need to start thinking
more strategically about computer security, or else we are going to
lose this war.
Indeed, if we don't get a handle on the spam
and worm problems soon, Internet e-mail could become a lost
communication medium—metaphorically speaking, it could become the CB
radio of the 21st century. We might see individuals and businesses
disconnecting their computers from the Internet, deciding that the
added benefits of being able to transfer files and download software
are simply not worth the extra cost of eternal vigilance and the risk
that something might go wrong with their computer systems. That isn't a
far-fetched scenario. According to the Pew Internet & American Life
Project, millions of people have already given up on e-mail because
they don't want the spam associated with it.
Nevertheless, I hope these
challenges will be used as a starting point for research projects and
for businessfolk who are thinking of starting new companies. There's
clearly a lot of work to be done. Let's get started!
Simson Garfinkel, CISSP, is a technology writer based in the Boston
area. He is also CTO of Sandstorm Enterprises, an information warfare
software company. He can be reached at machineshop@cxo.com.
Most Recent Responses:
For years Microsoft has been saying that Open Source is less secure because the source is visible to the world. "Open
source faces a unique challenge because the availability of source code
makes it possible for developers who want to find and fix security
vulnerabilities to do so. At first blush, that may sound like an
advantage, but there are two accompanying problems: (1) a great deal of
expertise is required to identify security flaws and to fix them
without creating new vulnerabilities, and (2) readily available source
code also empowers malicious hackers who use the transparency of the
open-source model to exploit weaknesses in the product's code base."
-(http://www.microsoft.com/resources/sharedsource/Articles/SoftwareDevelopmentModelsOverview.mspx) With
the recent exposure of the Windows source code I predict we will all
soon see that the emperor has no clothes. As far as the "great deal of
expertise is required to identify security flaws" theory, if Microsoft
can't afford this expertise, who can? The
ability to judge the inherent risk in Open vs. Proprietary software has
suddenly presented itself. 2004 will provide some quantifiable data to
support better risk management decisions.
Eric Shoemaker
CTO
WiKID Systems
On Control Freaks:
Is the solution to turn the update feature on by default? That may be okay for home users and very small shops. One
size never fits all when you move beyond small networks. I can't
imagine the experience of a "police state" Internet with ISPs
scrutinizing our packets. No, Microsoft should focus on writing better
code instead of squashing all potential competitors.
Mike Satterfield
Network Specialist
MCCSC