Beyond Passport Vulnerabilities
Security flaws in high-profile products like Microsoft's Passport led experts and vendors to find new ways to disclose bugs
BY SIMSON GARFINKEL
Little more than a year ago, a company that I'm involved with found a serious flaw in Microsoft Passport.
Microsoft Passport, for anyone not in the know, is Microsoft's highly promoted identity management and single sign-on
system. Instead of making users keep one password for the Microsoft Developer Network, another for Hotmail and a third for Microsoft Messenger, all of these services are tied together with a single common database. Log in to one system, and you've logged in to
them all. In theory, this makes the overall process easier for users,
since there is only one ID and password to remember, and more secure,
since it is easier to debug and audit one system as opposed to many.
Microsoft has adopted Passport
internally for most of its products that need to identify users—things
such as Windows Media Player. Microsoft has also encouraged other
companies to adopt Passport as their back-end authentication
system. The biggest company that has jumped on board so far is eBay,
which allows you to sign in using either an eBay ID or a Passport ID.
The problem that the company
discovered had to do with the way the Windows XP Registration Wizard
used Microsoft Passport to register new copies of Windows when they
were first loaded. Instead of communicating with the Passport servers
over an encrypted SSL channel, as Microsoft claimed, much of the
information was being sent without encryption.
Because Passport is so widely used,
the bug was significant. By sniffing the packets on a local area
network or an ISP, an attacker could learn the ID and password of any
person registering a new copy of Windows XP. What's more, because the
registration was done in a Wizard program—rather than in a traditional
Web browser—there was no telltale "https," meaning there was no easy
way for people to know the information was being sent without
encryption.
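To make concrete what "no telltale https" means in practice: a client that handles credentials can refuse to send them over anything but an encrypted channel. The sketch below, in Python, is purely illustrative; the function name and parameters are my own, not anything in the actual Registration Wizard.

```python
from urllib.parse import urlparse

def submit_registration(url: str, passport_id: str, password: str) -> None:
    """Illustrative guard: refuse to send credentials over an
    unencrypted channel. (Hypothetical; not Microsoft's code.)"""
    if urlparse(url).scheme != "https":
        raise ValueError(
            f"refusing to send credentials to {url!r}: not an SSL/HTTPS endpoint"
        )
    # ...perform the actual POST over the encrypted connection here...
```

A check like this is the cue a browser's address bar gives users for free; an embedded wizard has to supply it itself.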
Passport vulnerabilities have been
big news in the past: People who have found them have made the front
page of The New York Times. Microsoft then scrambled to fix the
problem, while individuals and organizations using the system were left
in the lurch. The problem, of course, is that it's hard to stop using
Passport. But once the vulnerability is known, the black hats are free
to start exploiting it. And once they know where to look, more
vulnerabilities might be found.
Bug hunters weren't always so fast
to disclose vulnerabilities. When I started writing about computer
security 15 years ago, such disclosures were widely seen as
irresponsible and dangerous. Back then, newly discovered
vulnerabilities were shared with a few trusted security professionals
and communicated to the vendor or software developer. The idea was to
give those most affected the opportunity to immediately protect
themselves and give the company time to develop a fix before the
problem was widely known. Frequently there was no "patch" issued at
all; the fix for the security problem was simply folded into the next
software release.
The Problem with Selective Disclosure
There was just one problem with this careful approach to
vulnerability disclosure: Many security vulnerabilities never got fixed
at all. Uninformed that the new releases actually contained security
fixes, many users didn't bother upgrading—especially users running
mission-critical systems that couldn't afford any downtime. Even worse,
many software vendors simply didn't fix the security problems that were
brought to their attention. After all, why should they? The typical
application or operating system has many security vulnerabilities—some
of which are known publicly, some of which are known internally and
most of which are undiscovered. Why fix a vulnerability that's being
kept secret?
As the 1990s unfolded, we learned
another reason why selective disclosure didn't work: Increasingly, the
people who were discovering security vulnerabilities weren't part of
the privileged cabal of computer security researchers and
practitioners; they were students, "reformed hackers," independent
consultants and even journalists. Time and again, I would hear stories
of people who had sent e-mail to a company reporting a vulnerability they had discovered, and then gotten nothing back, not even a "thank you."
How frustrating. And, as far as the companies were concerned, how tremendously shortsighted.
Thus was born the idea of full
disclosure. Mailing lists such as Bugtraq, the sole purpose of which
was to allow this new breed of researchers to exchange red-hot
vulnerability information, sprang into existence. Computer vendors were
welcome to monitor Bugtraq to learn about vulnerabilities in their
products—or in the products of their competitors. Of course, the bad
guys subscribed to Bugtraq as well—so, too, did a number of highly
placed journalists. Thus began the era of disclosures being published
on the front page of newspapers, followed by hectic days of
patch-or-be-hacked. All too often, the important disclosures were followed by a new round of computer worms or viruses that took advantage of them.
Disclosures that showed up on
Bugtraq weren't just about new buffer overflows; sometimes the bugs
were with e-commerce shopping cart software—bugs that would allow a
knowledgeable attacker to get products for free, or even to execute
commands on the shopping cart's server and steal credit card numbers.
The most prestige went to people who posted notices with so-called
"exploit scripts," usually a small program that both demonstrated the
bug and allowed an attacker to break into the remote system.
In many cases, there was no obvious
public interest served in the public disclosure. Sure, the person who
found the bug got credit, but merchants relying on the products were
frequently hurt. This was especially evident when the exploits involved orphaned products made by companies that were having financial problems or had gone out of business. Yes, the merchants relying on these products needed to find solutions. But widely posting such
vulnerabilities probably did more harm than good.
From Full Disclosure to Responsible Disclosure
These days the pendulum is swinging toward a middle ground called
responsible disclosure. People and companies that find security
vulnerabilities are supposed to notify the company in question about
their discovery and start a clock. The company has 30 days to confirm
the vulnerability, come up with a patch
and distribute that patch to its users. If the company isn't
responsive, the theory goes, then the bug hunter has not just a right
but a duty to publicly disclose the vulnerability in an effort to both
light a fire under the vendor and warn users.
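As a way of making that timeline concrete, here is a minimal sketch, in Python, of how a bug hunter might track the clock. The class, the field names and the "ExampleCart" product are hypothetical; the 30-day window is the one described above, and the actual guidelines carry more nuance than any toy model.

```python
from dataclasses import dataclass
from datetime import date, timedelta

VENDOR_WINDOW = timedelta(days=30)  # the 30-day clock described above

@dataclass
class VulnerabilityReport:
    product: str
    reported_on: date
    vendor_confirmed: bool = False
    patch_shipped: bool = False

    def deadline(self) -> date:
        """Earliest date the finder would go public if the vendor stays silent."""
        return self.reported_on + VENDOR_WINDOW

    def may_disclose_publicly(self, today: date) -> bool:
        # Disclose once a patch has shipped, or once the clock has run
        # out on an unresponsive vendor.
        return self.patch_shipped or (
            not self.vendor_confirmed and today >= self.deadline()
        )

# Hypothetical report filed June 1 with no vendor response:
report = VulnerabilityReport("ExampleCart 2.1", date(2004, 6, 1))
print(report.may_disclose_publicly(date(2004, 7, 2)))  # True: the 30 days have run out
```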
These guidelines have been agreed upon by a consortium called the Organization for Internet Safety (OIS, www.oisafety.org).
The consortium includes software publishers such as Microsoft and The
SCO Group and bug-hunters such as @Stake, Foundstone, Internet Security
Systems and Symantec. The hope is that agreed-upon ground rules will bring stability to the hectic world of vulnerability disclosure.
The whole question of vulnerability
disclosure is one that most CSOs will have to wrestle with from time to
time. The most obvious reason is that a CSO needs to know when new
vulnerabilities are disclosed in products that his organization is
using. For this reason, it makes sense to have at least one person in
your shop monitoring mailing lists such as Bugtraq and Full-Disclosure.
The person should also do regular Web searches of product names and
release numbers, just to keep tabs on the "chatter" surrounding your
organization's infrastructure investment.
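Much of that monitoring chore can be automated. The fragment below is a minimal sketch, in Python, of the matching step; the inventory entries and advisory subject lines are made-up placeholders, and a real setup would pull subjects from the actual mailing list archives.

```python
inventory = ["ExampleCart 2.1", "ExampleOS 7.3"]  # hypothetical product inventory

advisory_subjects = [  # hypothetical subject lines from a list archive
    "[VULN] Remote command execution in ExampleCart 2.1",
    "Buffer overflow in SomeOtherServer 4.0",
]

def relevant_advisories(subjects, products):
    """Return (product, subject) pairs where an advisory mentions a product we run."""
    return [
        (product, subject)
        for subject in subjects
        for product in products
        if product.lower() in subject.lower()
    ]

for product, subject in relevant_advisories(advisory_subjects, inventory):
    print(f"ALERT for {product}: {subject}")
```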
But another reason that disclosure
protocols affect CSOs is that a CSO is likely to encounter security
vulnerabilities as well. In these cases, the CSO needs to know what to
do with this information—whom to tell, how to tell and how to manage
the flow of information.
Follow Disclosure Guidelines
It makes good sense for CSOs to be familiar with the OIS disclosure
guidelines. Although nothing makes these guidelines sacrosanct, they do
reflect a lot of hard work from respected people and organizations
familiar with disclosure problems. If I were CSO at a major
corporation, I would be hard-pressed to find a reason to implement a
policy that was fundamentally different from what the OIS is proposing.
That's what my company did:
Following the responsible disclosure guidelines, we contacted Microsoft; following the same guidelines, the company took us quite seriously. In fact, Microsoft said the problem was a minor configuration error on one of the Passport Web servers. A few days later, the
problem was fixed. We didn't get any glory, but we received a very nice
box of Microsoft warm-up jackets in the mail as a kind of tangible
"thank you."
It's important to remember that the
disclosed vulnerabilities represent only a tiny fraction of the
vulnerabilities that are in any given piece of software. Any program
that's sufficiently complex will have security problems. Ultimately,
what makes a security disclosure something that you need to act upon is
that other people know about it. You will always have vulnerabilities.
If nobody knows about them, you're relatively safe.
Isn't that a comfortable thought?
Simson Garfinkel, CISSP, is a technology writer in the Boston area. He can be reached at machineshop@cxo.com.
ILLUSTRATION BY ANASTASIA VASILAKIS