
October 2005 CSO Magazine



Cryptography

A Field Guide to Spotting Bad Cryptography

It takes an expert to determine whether a cryptographic system is truly secure, but CSOs can learn to spot red flags.

By Simson Garfinkel

Determining whether a cryptographic protocol or system is actually secure takes an expert. And even then, hidden flaws may be lurking.

Cryptography is the collection of techniques used to protect information from unauthorized disclosure or modification. These techniques are the basis of the secure sockets layer, or SSL, protocol used to secure e-commerce transactions over the Web, as well as digital signature schemes that make it possible for video game consoles to tell the difference between a game that’s authorized and one that’s not.

Although cryptography was originally the stuff of spooks and diplomats, it is becoming more important every year as other strategies for protecting information increasingly show their limits. For example, it was once possible to prevent electronic documents from getting into the wrong hands by keeping them on a computer that was not connected to a network. These days, it’s nearly impossible to keep a computer off the network, and even if you could, there is always a chance a document might leak out on somebody’s USB memory stick. Enter cryptography in the form of digital rights management systems, which keep documents in their encrypted form and only release the decryption key when a document is being accessed by an authorized individual.

The problem with cryptography is that it is downright difficult to tell the difference between a system that is actually secure and one that merely provides the appearance of security. Case in point: the bicycle locks with cylindrical keys that were used for more than 20 years before thieves realized the locks could be picked with a ballpoint pen. There are probably thousands of unknown security flaws lurking in popular PC software.

Fortunately, the converse is generally not true: It’s relatively easy to look at a crypto system and know if it is probably not secure. That’s because there are a few red flags that usually indicate something inside is not kosher. These warning signs won’t tell you for sure that a system is hopeless, but they will tell you further research is warranted.

Red Flag #1:
Keys That Are Too Small

The security of most cryptographic systems is based in part on the secrecy of the key. If an attacker can try every possible key and knows for sure when he has found the correct one, the attacker can compromise the system. This is known as a brute force attack.

Keys are binary strings of 1s and 0s with a length that’s almost always fixed. As with digits in a phone number, more bits mean more potential combinations for authorized users to choose from, and therefore more possible keys an attacker must try in order to find the correct one.
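To see how quickly the numbers grow, here is a quick back-of-the-envelope sketch in Python (illustrative only): every additional bit doubles the keyspace.

```python
# Illustrative only: every additional key bit doubles the number of
# keys a brute force attacker must try.
for bits in (40, 56, 128):
    print(f"{bits}-bit key: about {2 ** bits:.2e} possible keys")
```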

There are two kinds of encryption algorithms: symmetric algorithms, like the data encryption standard (DES) and the advanced encryption standard (AES), and public-key algorithms, like RSA and Diffie-Hellman. Generally speaking, symmetric keys that are shorter than 128 bits are not considered secure and should not be used. Likewise, you should not use RSA keys that are shorter than 1,024 bits.
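These rules of thumb are easy to encode. The helper below is a hypothetical illustration, not a library API; the thresholds are simply the minimums given above.

```python
# Hypothetical helper encoding the article's rules of thumb:
# symmetric keys under 128 bits and RSA keys under 1,024 bits
# are too small to trust.
MIN_KEY_BITS = {"symmetric": 128, "rsa": 1024}

def key_too_small(kind: str, bits: int) -> bool:
    """Return True if the key length falls below the recommended minimum."""
    return bits < MIN_KEY_BITS[kind]

print(key_too_small("symmetric", 56))   # a DES-sized key
print(key_too_small("rsa", 2048))
```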

When the 802.11 wired equivalent privacy (WEP) standard was released in the 1990s, it called for 40-bit encryption. Even before the first attacks against WEP were publicly disclosed, I was telling my clients not to trust it because the key was simply not long enough to ensure security. Since then, numerous other vulnerabilities have been discovered as well.
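A back-of-the-envelope calculation shows why. Assuming, purely for illustration, an attacker who can test one billion keys per second:

```python
# Why 40 bits was never enough: at an assumed rate of one billion
# key trials per second, the entire 40-bit keyspace is exhausted
# in under 20 minutes.
keyspace = 2 ** 40
rate = 1e9  # trials per second (assumed attacker hardware)
minutes = keyspace / rate / 60
print(f"about {minutes:.0f} minutes to try every key")
```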

Red Flag #2:
Keys That Are Too Long

The U.S. government’s advanced encryption standard (AES) supports keys that are 128, 192 and 256 bits long. If longer keys are more secure, then why stop at 256 bits? Wouldn’t a 512- or 1,024-bit symmetric key be more secure still?

Surprisingly, the answer to this question is usually no. Given the limits of computers as we understand them, there is no reason to think that a 192-bit or 256-bit symmetric key will be any stronger than a 128-bit key for the foreseeable future. That’s because even the fastest computers mankind is likely to build within the next two or three decades will be unable to try all possible 128-bit keys to crack an encrypted message with a brute force attack, let alone all 192-bit or 256-bit keys. Although the additional bits confer more theoretical security, that additional security is meaningless.
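The same arithmetic shows why 128 bits is already out of reach. Even granting an attacker an implausible 10^18 trials per second:

```python
# Even at a fantastical 10**18 key trials per second, exhausting a
# 128-bit keyspace takes on the order of 10**13 years.
seconds = 2 ** 128 / 1e18
years = seconds / (60 * 60 * 24 * 365)
print(f"roughly {years:.1e} years")
```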

Nevertheless, there has been steady pressure on technologists to adopt longer and longer keys. Part of this pressure comes from history: In the 1990s, there were many cases in which successively longer keys were cracked by computer scientists. What people forget is that the industry at the time was using unreasonably short keys as a result of federal regulations that have since been lifted. Unfortunately, the experience of the 1990s wrongly taught some technologists that key lengths need to be increased every few years. Another part of the push for longer keys is unbridled marketing: Longer keys just sound more secure than shorter ones, even if the extra security isn’t relevant for computers likely to be manufactured in the 21st century. I suspect that it’s harder to sell a 128-bit encryptor when your competition is selling a spiffy something with 256 bits.

So be suspicious if a vendor tells you that it is selling something with 256-bit encryption because 128 bits is not secure. Be especially suspicious if someone tells you that he is using 448-bit or 10,000-bit encryption. This usually means the vendor’s salesman doesn’t understand what he is talking about.

Red Flag #3:
Proprietary Algorithms

Related to the red flag of suspiciously long keys is the red flag of proprietary encryption algorithms. Cryptography researchers have spent decades developing encryption standards like AES, triple DES and RSA that are considered good enough for the most sensitive information. Generally, there is no reason to consider using anything other than a published standard encryption algorithm.

Experience has shown that secret, proprietary algorithms are rarely as strong as encryption algorithms that have been published and publicly analyzed. A basic tenet of modern cryptography is that the entire security of an encrypted message should rest with the encryption key, not with the encryption algorithm. That’s because it’s nearly impossible in today’s world to keep an algorithm secret: An attacker can always obtain a copy of your program, reverse-engineer it and learn the encryption algorithm that’s in use.

Usually algorithms that are purportedly “secret” can make that claim only because nobody has been suitably motivated to figure out how they work. One of the best examples was the closely guarded DVD encryption algorithm used for preventing consumers from making unauthorized copies of DVDs. This algorithm was widely adopted by the consumer entertainment industry, put into tens of millions of DVD players, and cracked by a high school student.
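A toy example makes the point. The scheme below is deliberately trivial, one secret byte XORed into every byte, nothing like the real DVD cipher, but it shows how a “secret” design with a tiny keyspace collapses to exhaustive search the moment its output is observed:

```python
# Toy illustration (not a real cipher): a "proprietary" scheme that
# XORs every byte with one secret byte has only 256 possible keys,
# so an attacker who sees the output can simply try them all.
secret = 0x5A
ciphertext = bytes(b ^ secret for b in b"attack at dawn")

for guess in range(256):  # the entire keyspace of the "secret" design
    if bytes(b ^ guess for b in ciphertext) == b"attack at dawn":
        print(f"recovered key: {guess:#04x}")
        break
```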

So why do vendors sometimes develop secret algorithms and try to get customers to buy them? Sometimes it is because the vendor didn’t have a handle on its software development process: Perhaps a programmer thought that it would be fun to write a new encryption algorithm rather than use one of the standards. Other times it is because the company is trying to cut costs. Most frequently, though, it’s because the people who were charged with developing the cryptographic system fundamentally didn’t understand cryptography in practice.

Red Flag #4:
Keys That Can’t Be Changed

Since the security of an encryption system depends on the key, there should be a way to change a key if it is compromised. Many commercial systems use a small number of fixed and unchangeable encryption keys to protect their data. Once again, the best known of these systems was the DVD encryption system: Although the industry imagined it would simply change the decryption keys for future DVDs if the system was compromised, after the break, the industry discovered that there was no way to upgrade all of those DVD players in the field. Whoops.
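One common remedy is to version keys so they can be rotated. The sketch below is a hypothetical message format, using an HMAC as a stand-in for whatever cryptographic operation a real system performs: a key-ID byte travels with each message, so fielded systems can retire a compromised key and verify old traffic at the same time.

```python
# Sketch of key versioning (hypothetical format): a key-ID byte is
# prefixed to each authentication tag, so a compromised key can be
# retired without breaking devices already in the field.
import hashlib
import hmac

KEYS = {1: b"original key material", 2: b"replacement key material"}
CURRENT_VERSION = 2  # bumped after key 1 was compromised

def protect(message: bytes) -> bytes:
    tag = hmac.new(KEYS[CURRENT_VERSION], message, hashlib.sha256).digest()
    return bytes([CURRENT_VERSION]) + tag  # version byte travels with the tag

def verify(message: bytes, blob: bytes) -> bool:
    version, tag = blob[0], blob[1:]
    expected = hmac.new(KEYS[version], message, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

print(verify(b"hello", protect(b"hello")))  # True
```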

Evaluate the Options

With this simple list of red flags you can start evaluating the various charlatans and hucksters who come into your office trying to sell you their cryptography paraphernalia. But be careful: A little knowledge can be dangerous if it is misapplied.

For example, an interesting area of research in secure computing today involves devices that use a physical unclonable function (PUF). These devices implement a fingerprint for computer systems—an identity that can’t be changed. Although this seems to violate Red Flag #4, the identity also can’t be copied, so PUFs are thought to be reasonably secure.

On the other hand, if you meet a new vendor who has a security gizmo that will encrypt laptop hard drives using a secret high-performance encryption algorithm with an 822-bit encryption key that’s stronger than anything allowed by the U.S. Government, now you’ll know enough to stay clear.

Simson Garfinkel, PhD, CISSP, is spending the year at Harvard University researching computer forensics and human thought. He can be reached via e-mail at machineshop@cxo.mail.



Most Recent Responses:

For more on this subject the Snake Oil FAQ is a good resource: http://www.interhack.net/people/cmcurtin/snake-oil-faq.html

For more on cryptography in general, the Cryptography FAQs are a good resource: http://www.faqs.org/faqs/cryptography-faq/

Joe Morris
Technical Risk/Security Director
PNC Financial Services
