PASSWORDS AND COMPUTER SECURITY
MySpace Passwords Aren't So Dumb
Bruce Schneier is the CTO of BT Counterpane and the author of
Beyond Fear.
How good are the passwords people are choosing to protect their
computers and online accounts?
It's a hard question to answer because data is scarce. But
recently, a colleague sent me some spoils from a MySpace phishing
attack: 34,000 actual user names and passwords.
The attack was pretty basic. The attackers created a fake MySpace
login page, and collected login information when users thought
they were accessing their own account on the site. The data was
forwarded to
various compromised web servers, where the attackers would harvest
it later.
MySpace estimates that more than 100,000 people fell for the
attack before it was shut down. The data I have is from two
different collection points, and was cleaned of the small
percentage of people who realized they were responding to a
phishing attack. I analyzed the data, and this is what I learned.
Password Length: While 65 percent of passwords contain eight
characters or less, 17 percent are made up of six characters or
less. The average password is eight characters long.
Specifically, the length distribution looks like this:
1-4    0.82 %
5      1.1 %
6      15 %
7      23 %
8      25 %
9      17 %
10     13 %
11     2.7 %
12     0.93 %
13-32  0.93 %
Yes, there's a 32-character password:
"1ancheste23nite41ancheste23nite4."
Other long passwords are "fool2thinkfool2thinkol2think" and "dokitty17darling7g7darling7."
Character Mix: While 81 % of passwords are alphanumeric, 28 % are just lowercase letters plus a single final digit -- and two-thirds of those have the single digit 1.
Only 3.8 % of passwords are a single dictionary word, and another
12 % are a single dictionary word plus a final digit -- once
again, two-thirds of the time that digit is 1.
numbers only      1.3 %
letters only      9.6 %
alphanumeric      81 %
non-alphanumeric  8.3 %
Only 0.34 % of users have the user name portion of their e-mail
address as their password.
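Breakdowns like the ones above are straightforward to reproduce. Here is a minimal sketch of the same bucketing in Python; the sample list is a stand-in for illustration, not the MySpace data:

```python
from collections import Counter

def classify(pw: str) -> str:
    """Bucket a password the way the analysis above does."""
    if pw.isdigit():
        return "numbers only"
    if pw.isalpha():
        return "letters only"
    if pw.isalnum():
        return "alphanumeric"
    return "non-alphanumeric"

def report(passwords):
    """Print average length and the character-mix percentages."""
    total = len(passwords)
    avg = sum(len(pw) for pw in passwords) / total
    mix = Counter(classify(pw) for pw in passwords)
    print(f"average length: {avg:.1f}")
    for bucket, n in mix.most_common():
        print(f"{bucket:18s} {100 * n / total:.1f} %")

# Stand-in sample, not the actual data set:
report(["password1", "abc123", "soccer", "monkey1", "blink182", "123456"])
```

Running the same functions over a real leaked list would reproduce the length and character-mix tables above.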
Common Passwords:
The top 20 passwords are (in order):
password1, abc123, myspace1, password, blink182, qwerty1, fuckyou,
123abc, baseball1, football1, 123456, soccer, monkey1, liverpool1,
princess1, jordan23, slipknot1, superman1, iloveyou1 and monkey.
The most common password, "password1," was used in 0.22 % of all
accounts. The frequency drops off pretty fast after that: "abc123"
and "myspace1" were only used in 0.11 % of all accounts, "soccer"
in 0.04 % and "monkey" in 0.02 %.
For those who don't know, Blink 182 is a band. Presumably lots of
people use the band's name because it has numbers in its name, and
therefore it seems like a good password. The band Slipknot doesn't
have any numbers in its name, which explains the 1. The password
"jordan23" refers to basketball player Michael Jordan and his
number. And, of course, "myspace" and "myspace1" are
easy-to-remember passwords for a MySpace account. I don't know
what the deal is with monkeys.
We used to quip that "password" is the most common password. Now
it's "password1." Who said users haven't learned anything about
security?
But seriously, passwords are getting better. I'm impressed that
less than 4 % were dictionary words and that the great majority
were at least alphanumeric. Writing in 1989, Daniel Klein was able
to crack 24 % of his sample passwords with a small
dictionary of just 63,000 words, and found that the average
password was 6.4 characters long.
And in 1992 Gene Spafford cracked 20 % of passwords with
his dictionary, and found an average password length of 6.8
characters. (Both studied Unix passwords, with a maximum length at
the time of 8 characters.) And they both reported a much greater
percentage of all lowercase, and only upper- and lowercase,
passwords than emerged in the MySpace data. The concept of
choosing good passwords is getting through, at least a little.
On the other hand, the MySpace demographic is pretty young.
Another password study in November looked at 200 corporate
employee passwords: 20 percent letters only, 78 percent
alphanumeric, 2.1 percent with non-alphanumeric characters, and a
7.8-character average length.
Better than 15 years ago, but not as good as MySpace users. Kids
really are the future.
None of this changes the reality that passwords have outlived
their usefulness as a serious security device. Over the years,
password crackers have been getting faster and faster. Current
commercial products can test tens -- even hundreds -- of millions
of passwords per second. At the same time, there's a maximum
complexity to the passwords average people are willing to memorize.
Those lines crossed years ago, and typical real-world
passwords are now software-guessable. AccessData's Password
Recovery Toolkit would have been able to crack 23 percent of the
MySpace passwords in 30 minutes, 55 percent in 8 hours.
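The "lines crossed" claim is simple arithmetic. Assuming an offline cracker testing 100 million guesses per second (an illustrative rate in line with the commercial tools mentioned above), the search space for a typical password is exhausted quickly:

```python
def seconds_to_search(alphabet_size: int, length: int,
                      guesses_per_sec: float) -> float:
    """Worst-case time to exhaust all passwords of exactly this length."""
    return alphabet_size ** length / guesses_per_sec

RATE = 100e6  # guesses per second -- an assumed figure, not a benchmark

# 8 lowercase letters: 26^8, about 2.1e11 candidates
print(seconds_to_search(26, 8, RATE) / 3600, "hours")   # under an hour
# 8 characters from all 95 printable ASCII: 95^8, about 6.6e15 candidates
print(seconds_to_search(95, 8, RATE) / 86400, "days")   # roughly 770 days
```

The gap between the two figures is why character mix matters as much as length, and why lengths of eight or fewer are so exposed.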
Of course, this analysis assumes that the attacker can get his
hands on the encrypted password file and work on it offline, at
his leisure; i.e., that the same password was used to encrypt an
e-mail, file or hard drive. Passwords can still work if you can
prevent offline
password-guessing attacks, and watch for online guessing. They're
also fine in low-value security situations, or if you choose
really complicated passwords and use something like Password Safe
to store them. But otherwise, security by password alone is pretty
risky.
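Watching for online guessing in practice means server-side throttling or lockout. A minimal per-account lockout sketch follows; the thresholds and window are illustrative assumptions, not a recommended policy:

```python
import time
from collections import defaultdict

MAX_FAILURES = 5      # assumed policy: failures allowed per window
WINDOW_SECONDS = 300  # assumed policy: sliding lockout window

_failures = defaultdict(list)  # account -> timestamps of recent failed logins

def allow_attempt(account, now=None):
    """Refuse further attempts once an account has too many recent failures."""
    now = time.time() if now is None else now
    recent = [t for t in _failures[account] if now - t < WINDOW_SECONDS]
    _failures[account] = recent
    return len(recent) < MAX_FAILURES

def record_failure(account, now=None):
    """Call this after every failed login for the account."""
    _failures[account].append(time.time() if now is None else now)
```

A throttle like this does nothing against offline cracking, but it caps an online guesser at a handful of tries per account per window.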
Computer Security: Will We Ever Learn?
by Bruce Schneier, Counterpane Internet Security, Inc.
Mon, 15 May 2000
[From CRYPTO-GRAM, May 15, 2000; reprinted in RISKS with permission.
See Bruce's free Internet security newsletter.]
If we've learned anything from the past couple of years, it's that
computer security flaws are inevitable. Systems break,
vulnerabilities are reported in the press, and still many people
put their faith in the next product, or the next upgrade, or the
next patch. "This time it's secure," they say. So far, it hasn't
been.
Security is a process, not a product. Products provide some
protection, but the only way to effectively do business in an
insecure world is to put processes in place that recognize the
inherent insecurity in the products. The trick is to reduce your
risk of exposure regardless of the products or patches.
Consider denial-of-service attacks. DoS attacks are some of the
oldest and easiest attacks in the book. Even so, in February 2000,
coordinated, distributed DoS attacks easily brought down several
high-traffic Web sites, including Yahoo, eBay, Amazon.com and CNN.
Consider buffer overflow attacks. They were first talked about as
early as the 1960s -- time-sharing systems suffered from the
problem -- and were known by the security literati even earlier
than that. In the 1970s, they were often used as a point of attack
against early networked computers. In 1988, the Morris Worm
exploited a buffer overflow in the Unix fingerd daemon: a very
public use of this type of attack.
Today, over a decade after Morris and about 35 years after these
attacks were first discovered, you'd think the security community
would have solved the problem of security vulnerabilities based on
buffer overflows. Think again. Over two-thirds of all CERT
advisories in 1998 were for vulnerabilities caused by buffer
overflows. During an average week in 1999, buffer overflow
vulnerabilities were found in the RSAREF cryptographic toolkit
(oops), HP's operating system, the Solaris operating system,
Microsoft IIS 4.0 and Site Server 3.0, Windows NT, and Internet
Explorer. A recent study named buffer overflows as the most common
security problem.
Consider encryption algorithms. Proprietary secret algorithms are
regularly published and broken. Again and again, the marketplace
learns that proprietary secret algorithms are a bad idea. But
companies and industries -- like Microsoft, the DVD consortium,
cellular phone providers, and so on -- continue to choose
proprietary algorithms over public, free alternatives.
Is Anyone Paying Attention?
Sadly, the answer to this question is: not really. Or at least,
there are far fewer people paying attention than there should be.
And the enormous demand for digital security products requires
people to design, develop and implement them. The resulting dearth
of experts means that the percentage of people paying attention
will only get smaller.
Most products that use security are not designed by anyone with
security expertise. Even security products are generally designed
and implemented by people who have only limited security
expertise. Security cannot be functionality tested -- no amount of
beta testing will uncover security flaws -- so the flaws end up in
fielded products.
I'm constantly amazed by the kinds of things that break security
products. I've seen a file encryption product with a user
interface that accidentally saves the key in the clear. I've seen
VPNs where the telephone configuration file accidentally allows a
random person to authenticate himself to the server, or that
allows one remote client to view the files of another remote
client. There are a zillion ways to make a product insecure, and
manufacturers manage to stumble on a lot of those ways again and
again.
No one is paying attention because no one has to.
Computer security products, like software in general, have a very
odd product quality model. It's unlike an automobile, a
skyscraper, or a box of fried chicken. If you buy a product, and
get harmed because of a manufacturer's defect, you can sue...and
you'll win. Car-makers can't get away with building cars that
explode on impact; chicken shops can't get away with selling
buckets of fried chicken with the odd rat mixed in. It just
wouldn't do for building contractors to say things like, "Whoops.
There goes another one. Sorry. But just wait for Skyscraper 1.1;
it'll be 100% collapse-free!"
Software is different. It is sold without any claims whatsoever.
Your accounts receivable database can crash, taking your company
down with it, and you have no claim against the software company.
Your word processor can accidentally corrupt your files and you
have no recourse. Your firewall can turn out to be completely
ineffectual -- hardly better than having nothing at all -- and yet
it's your fault. Microsoft fielded Hotmail with a bug that allowed
anyone to read the accounts of 40 or so million subscribers,
password or no password, and never bothered to apologize.
Software manufacturers don't have to produce a quality product
because there is no liability if they don't. And the effect of
this for security products is that manufacturers don't have to
produce products that are actually secure, because no one can sue
them if they make a bunch of false claims of security.
The upshot of this is that the marketplace does not reward real
security. Real security is harder, slower, and more expensive,
both to design and to implement. Since the buying public has no
way to differentiate real security from bad security, the way to
win in this marketplace is to design software that is as insecure
as you can possibly get away with.
Microsoft knows that reliable software is not cost-effective.
According to studies, 90% to 95% of all bugs are harmless. They're
never discovered by users, and they don't affect performance. It's
much cheaper to release buggy software and fix the 5% to 10% of
bugs people find and complain about.
Microsoft also knows that real security is not cost-effective.
They get whacked with a new security vulnerability several times a
week. They fix the ones they can, write misleading press releases
about the ones they can't, and wait for the press fervor to die
down (which it always does). And six months later they issue the
next software version with new features and all sorts of new
insecurities, because users prefer cool features to security.
The only solution is to look for security processes.
There's no such thing as perfect security. Interestingly enough,
that's not necessarily a problem. In the U.S. alone, the credit
card industry loses $10 billion to fraud per year; neither Visa
nor MasterCard is showing any sign of going out of business.
Shoplifting estimates in the U.S. are currently between $9.5
billion and $11 billion per year, but you never see "shrinkage"
(as it is called) cited as the cause when a store goes out of
business. Recently, I needed to notarize a document. That is about
the stupidest security protocol I've ever seen. Still, it works
fine for what it is.
Security does not have to be perfect, but the risks have to be
manageable. The credit card industry understands this. They know
how to estimate the losses due to fraud. Their problem is that
losses from phone credit card transactions are about five times
the losses from face-to-face transactions (when the card is
present). Losses from Internet transactions are many times those
of phone transactions, and are the driving force behind SET.
My primary fear about cyberspace is that people don't understand
the risks, and they are putting too much faith in technology's
ability to obviate them. Products alone cannot solve security
problems.
The digital security industry is in desperate need of a perceptual
shift. Countermeasures are sold as ways to counter threats. Good
encryption is sold as a way to prevent eavesdropping. A good
firewall is a way to prevent network attacks. PKI is sold as trust
management, so you can avoid mistakenly trusting people you really
don't. And so on.
This type of thinking is completely backward. Security is old,
older than computers. And the old-guard security industry thinks
of countermeasures not as ways to counter threats, but as ways to
avoid risk. This distinction is enormous. Avoiding threats is
black and white: either you avoid the threat, or you don't.
Avoiding risk is continuous: there is some amount of risk you can
accept, and some amount you can't.
Security processes are how you avoid risk. Just as businesses use
the processes of double-entry bookkeeping, internal audits, and
external audits to secure their financials, businesses need to use
a series of security processes to protect their networks.
Security processes are not a replacement for products; they're a
way of using security products effectively. They can help mitigate
the risks. Network security products will have flaws; processes
are necessary to catch attackers exploiting those flaws, and to
fix the flaws once they become public. Insider attacks will occur;
processes are necessary to detect the attacks, repair the damages,
and prosecute the attackers. Large systemwide flaws will
compromise entire products and services (think digital cell
phones, Microsoft Windows NT password protocols, or DVD);
processes are necessary to recover from the compromise and stay in
business.
Here are two examples of how to focus on process in enterprise network security:
1. Watch for known vulnerabilities. Most successful network-security attacks target known vulnerabilities for which patches already exist. Why? Because network administrators either didn't install the patches, or because users reinstalled the vulnerable systems. It's easy to be smart about the former, but just as important to be vigilant about the latter. There are many ways to check for known vulnerabilities. Network vulnerability scanners like Netect and SATAN test for them. Phone scanners like PhoneSweep check for rogue modems inside your corporation. Other scanners look for Web site vulnerabilities. Use these sorts of products regularly, and pay attention to the results.
2. Continuously monitor your network products. Almost everything on your network produces a continuous stream of audit information: firewalls, intrusion detection systems, routers, servers, printers, etc. Most of it is irrelevant, but some of it contains footprints from successful attacks. Watching it all is vital for security, because an attack that bypassed one product might be picked up by another. For example, an attacker might exploit a flaw in a firewall and bypass an IDS, but his attempts to get root access on an internal server will appear in that server's audit logs. If you have a process in place to watch those logs, you'll catch the intrusion in progress.
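The log-watching process in the server example can start as a simple filter over the audit stream. A minimal sketch, assuming sshd-style syslog lines; the log format and pattern are assumptions, not a universal standard:

```python
import re

# Failed root logins in an sshd-style log line -- an assumed format.
FAILED_ROOT = re.compile(r"Failed password for (?:invalid user )?root from (\S+)")

def suspicious_sources(log_lines, threshold=3):
    """Return source addresses with at least `threshold` failed root attempts."""
    counts = {}
    for line in log_lines:
        m = FAILED_ROOT.search(line)
        if m:
            ip = m.group(1)
            counts[ip] = counts.get(ip, 0) + 1
    return {ip for ip, n in counts.items() if n >= threshold}
```

A real process would feed alerts like this to a person who investigates them; the point is that the flaw the firewall missed still leaves footprints somewhere, if anyone is looking.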
In this newsletter and elsewhere I have written pessimistically
about the future of computer security. The future of computers is
complexity, and complexity is anathema to security. The only
reasonable thing to do is to reduce your risk as much as possible.
We can't avoid threats, but we can reduce risk.
Nowhere else in society do we put so much faith in technology. No
one has ever said, "This door lock is so effective that we don't
need police protection, or breaking-and-entering laws." Products
work to a certain extent, but you need processes in place to
leverage their effectiveness.
Copyright (c) 2000 by Counterpane Internet Security, Inc.
Bruce Schneier, CTO, Counterpane Internet Security, Inc.
3031 Tisch Way, 100 Plaza East, San Jose, CA 95128
Ph: 408-556-2401
Fax: 408-556-0889
A version of this essay originally appeared in the April issue of Information Security magazine.
Used with permission - thanks Bruce.