Argument Essay – Cryptography – Computer Science

Topic: Encryption – Writing Assignment

Read the article listed below and write an 800-word paper (Times New Roman, 12-point font) on the one argument (or viewpoint) made in the article that impresses you the most. Please be clear and concise. Your paper should address the following points:


·         A short overview of the article.

·         A description of the argument you have chosen.

·         Why do you agree or disagree with this argument?


·         Why does this argument impress you the most?

·         To what extent does this argument enhance or deepen your understanding of security and management?

R. Anderson. “Why Cryptosystems Fail”. Communications of the ACM, 37(11):32-40, November 1994.

Why Cryptosystems Fail

Cryptography is used by governments, banks, and other organizations to keep messages secret and to protect electronic transactions from modification. It is basically an engineering discipline, but differs in a rather striking way from, for example, aeronautical engineering: there is almost no public feedback about how cryptographic systems fail.

Commercial airline crashes are extremely public events. Investigators rush to the scene, and their inquiries involve experts from a wide range of interests, from the carrier, through the manufacturer, to the pilots' union. Their findings are examined by journalists and politicians, discussed on electronic bulletin boards and in pilots' messes, and passed on by flying instructors. This learning mechanism is the main reason why, despite the inherent hazards of flying, the risk of an individual being killed on a scheduled air journey is only about one in a million.

Cryptographers rarely get this kind of feedback, and indeed the history of the subject shows the same mistakes being made over and over again. For example, Norway's rapid fall in the Second World War was largely due to the Germans' success in solving British naval codes, using exactly the same techniques that the Royal Navy's own "Room 40" had used against Germany in the previous war [16].

Although we now have a reasonable history of cryptology up to the end of World War II, a curtain of silence has descended on government-sector use of this technology since then. Although this is not particularly surprising, it does leave a large experience deficit, especially as the introduction of computers since 1945 has changed the situation considerably. It is as if accident reports were only published for piston-engined aircraft, while the causes of all jet-aircraft crashes were kept a state secret.

This secrecy is no longer appropriate, as military messaging networks now make up only about 1% of the world's cryptography, whether we measure this by the number of users or by the number of terminals. There are some civilian applications for secure messaging, such as interbank money transfer and burglar alarm signaling; but the great majority of fielded cryptographic systems are in applications such as bank cards, pay-TV, road tolls, office building and computer access tokens, lottery terminals, and prepayment electricity meters. Their job is basically to make petty crime, such as card forgery, slightly more difficult.

Cryptography was introduced to the commercial world from the military by designers of automatic teller machine (ATM) systems in the early 1970s. Since then, ATM security techniques have inspired many of the other systems, especially those where customers initiate low-value transactions for which we want to account. One might therefore expect the ATM experience to give a good first-order threat model for cryptographic systems in general.

Automatic Teller Machine Disputes

In some countries, banks are responsible for the risks associated with new technology. In 1980, a New York court believed a bank customer's word that she had not made a withdrawal, rather than the word of the bank's expert that she must have done so [15]; the Federal Reserve then passed regulations that require U.S. banks to refund all disputed electronic transactions unless they can prove fraud by the customer. Since then, many U.S. ATM cash dispensers have had video cameras installed.

In Britain, the courts have not yet been so demanding; despite a parliamentary commission that found that the personal identification number (PIN) system was insecure [14], bankers simply deny that their systems can ever be at fault. Customers who complain about "phantom withdrawals" are told that they must be lying, or mistaken, or that they must have been defrauded by their friends or relatives. This has led to a string of court cases in the U.K.:
·  A teenage girl in Ashton was convicted in 1985 of stealing £40 from her father. She pleaded guilty on the advice of her lawyers that she had no defense, and then disappeared; it later turned out that there had never been a theft, but a clerical error by the bank, which tried to cover it up.

·  A Sheffield police sergeant was charged with theft in November 1988 after a phantom withdrawal took place on a card he had confiscated from a suspect. He was lucky: his colleagues located the person who made the transaction after the disputed one, and her testimony cleared him.

·  Charges of theft against an elderly woman in Plymouth were dropped after our inquiries showed the bank's computer security systems were a shambles. The same happened in a case against a taxi driver in Great Yarmouth.

·  After a police constable complained he had not made six ATM withdrawals that appeared on his bank statement, the bank had him prosecuted and convicted for attempting to obtain money by deception. Their technical evidence was highly suspect; there was an outcry in the press, and an appeal is under way.

·  Customers are suing banks in the civil courts in both England and Scotland, and a case may be launched shortly in Norway as well.

We have been involved in providing expert advice in many of these cases, which produced a vast quantity of evidence. In addition to this, and other information discovered through the legal process, we have interviewed former bank employees and criminals, searched the banking, legal, and technical literatures, and drawn on experience gained designing cryptographic equipment. One outcome of all this activity has been the first unclassified study of how and why cryptosystems fail.

The Three Common Problems with ATM Security

Automatic teller machine systems use encryption to protect customers' PINs. The details vary from one bank to another, but many use variants of a system originally developed by IBM [21], in which the PIN is derived from the account number by encryption. It is also encrypted while being sent from the ATM to the bank for verification.
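To make the shape of such a scheme concrete, here is a minimal Python sketch in the spirit of the IBM approach described above: the account number is encrypted under a bank-held key and the result is decimalized to give the customer's "natural" PIN. The key, decimalization table, and account number are made-up illustration values, the pycryptodome library is an assumed dependency, and real implementations differ from bank to bank.

    # Hedged sketch of a PIN derived from the account number by encryption.
    # Assumes the pycryptodome package; all values below are illustrative only.
    from Crypto.Cipher import DES

    PIN_KEY = bytes.fromhex("0123456789ABCDEF")   # hypothetical PIN generation key
    DECIMALIZATION = "0123456789012345"           # maps each hex digit 0-F to a decimal digit

    def derive_pin(account_number: str, pin_length: int = 4) -> str:
        # Format the account number into one 8-byte block of "validation data".
        block = bytes.fromhex(account_number.rjust(16, "0")[-16:])
        # Encrypt it under the bank's PIN key (single DES, as in the original era).
        ciphertext = DES.new(PIN_KEY, DES.MODE_ECB).encrypt(block)
        # Decimalize the result and keep the first few digits as the natural PIN.
        digits = "".join(DECIMALIZATION[int(h, 16)] for h in ciphertext.hex())
        return digits[:pin_length]

    print(derive_pin("4556737586899855"))   # prints a 4-digit PIN for this example account

As the disputes described below show, what matters in practice is less the strength of the cipher than how such derived values are stored, transmitted, and checked.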

When the crypto know-how was originally imported from the defense sector, a threat model came with it. This model presumed that attacks on the system would be technically sophisticated, and might involve cryptanalysis or the manipulation of transactions at some point in the path between the ATM and the bank that issued the card.

Indeed, there are many ways in which a skilled attacker could penetrate the world's ATM networks [3]. Some networks do not encrypt PINs properly, or at all; many banks, especially in the U.S., do encryption in software rather than hardware, with the result that the keys are known to programmers; some older ATMs and encryption devices have known weaknesses; and even those systems that use approved encryption hardware can be vulnerable, as the Data Encryption Standard algorithm is becoming increasingly open to attack [24]. All these facts are used by encryption equipment sales staff in their efforts to persuade bankers to buy the latest products.
Of the hundreds of documented failures of ATM security, however, only two involved such attacks: in one, a telephone engineer in Japan recorded customer card data and PINs from a phone line; in the other, technicians programmed a communications processor to send only positive authorizations to an ATM where accomplices were waiting. None of the other thefts and frauds were due to skilled attack, but were rather made possible by errors in the design or operation of the ATM system itself.

The three main causes of phantom withdrawals did not involve cryptology at all: they were program bugs, postal interception of cards, and thefts by bank staff.

First, there is a "background noise" of transactions that turn out to be wrongly processed (e.g., posted to the wrong account). It is well known that it is difficult to get an error rate below about 1 in 10,000 on large, heterogeneous transaction processing systems such as ATM networks [10]; yet, before the British litigation started, the government minister responsible for Britain's banking industry was claiming an error rate of 1 in 1.5 million! Under pressure from lawyers, this claim was trimmed to 1 in 250,000, then 1 in 100,000, and most recently to 1 in 34,000. Even this last figure would still mean that about 30,000 phantom withdrawals a year in Britain (and over 200,000 in the United States) are caused by processing errors.
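As a rough check of these figures (the annual transaction volumes are not given in the article, so this snippet simply backs them out of the numbers quoted):

    # Back-of-envelope arithmetic implied by the quoted figures.
    error_rate = 1 / 34_000     # the most recently conceded error rate
    uk_phantoms = 30_000        # phantom withdrawals per year in Britain
    us_phantoms = 200_000       # and in the United States
    print(f"Implied UK ATM transactions per year: {uk_phantoms / error_rate:,.0f}")  # ~1.0 billion
    print(f"Implied US ATM transactions per year: {us_phantoms / error_rate:,.0f}")  # ~6.8 billion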

Second, problems with the postal service are also well known and can be particularly acute in university towns. In Cambridge, for example, approximately 4,000 people open bank accounts every October; their ATM cards and PINs are delivered to college pigeonholes, which are wide open to thieves. Yet this author's bank was unable to arrange for a card to be sent by recorded delivery; its system designers had not anticipated the requirement.

The third big problem is theft by bank staff. British banks dismiss about 1% of their staff every year for disciplinary reasons, and many of these firings are for petty thefts in which ATMs can easily be involved. There is a moral hazard here: staff know that many ATM-related thefts go undetected because of the policy [...]

·  [...] the problem (and it did not occur to anyone to check).
·  One of the largest London banks had written the encrypted PIN on the card's magnetic strip. The criminal fraternity found by trial and error that you could change the account number on your own card's magnetic strip to that of your target, and then use it with your own PIN to loot the targeted account. A document about this technique circulated in the British prison system, and two men were recently charged at Bristol Crown Court with conspiring to steal money by altering cards in this way. They produced an eminent banking industry expert who testified that what they had planned was impossible; but after a leading newspaper demonstrated otherwise, they changed their plea to guilty [8].

·  Some banks have schemes that enable PINs to be checked by off-line ATMs without giving them the master encryption key needed to derive PINs from account numbers. For example, customers of one British bank got a credit-card PIN with digit one plus digit four equal to digit two plus digit three, and a debit-card PIN with digit one plus digit three equal to digit two plus digit four. Villains eventually discovered that they could use stolen cards in off-line devices by entering a PIN such as 4455 (a short sketch of why this works appears after this list).
·  Even without such weaknesses, the use of store-and-forward processing is problematic. Anyone can open an account, get a card and PIN, make several copies of the card, and get accomplices to draw cash from a number of different ATMs at the same time. This was a favorite modus operandi in Britain in the mid-1980s, and is still a problem in Italy, where ATMs are generally off-line over the weekend.
·  Any security technology can be defeated by gross negligence. In August 1993, my wife went into a branch of our bank and told them that she had forgotten her PIN; they helpfully printed a replacement PIN mailer from a PC behind the counter. This was not the branch at which her account is kept; no one knew her, and the only identification she produced was her bank card and checkbook.
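Picking up the forward reference in the off-line PIN item above: the article does not spell out exactly how the off-line check worked, but the quoted digit relations are enough to show why a single guess such as 4455 is accepted for either card type. A small sketch, in the same Python style as the earlier examples:

    # Why a PIN like 4455 defeats the digit-relation scheme described above.
    def credit_rule(pin: str) -> bool:
        d = [int(c) for c in pin]
        return d[0] + d[3] == d[1] + d[2]   # digit one + digit four == digit two + digit three

    def debit_rule(pin: str) -> bool:
        d = [int(c) for c in pin]
        return d[0] + d[2] == d[1] + d[3]   # digit one + digit three == digit two + digit four

    print(credit_rule("4455"), debit_rule("4455"))   # True True: one guess covers both card types

    # The relation is also very loose: roughly 670 of the 10,000 possible PINs satisfy it,
    # so a device that accepts any PIN meeting the rule is barely checking anything.
    print(sum(credit_rule(f"{p:04d}") for p in range(10_000)))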

By that time, banks in Britain had endured some 18 months of bad publicity about poor ATM security, and this particular bank had been a press target since April of that year.

This might lead us to ask what the future might hold. Will all magnetic cards be replaced with smartcards, as is already happening in countries from France to Guatemala and from Norway to South Africa [2]? One of the smartcard vendors' strongest arguments is that card forgery keeps on rising, and that the fastest-growing modus operandi is to use bogus terminals to collect customer card and PIN data.

Attacks of this kind were first reported from the United States in 1988; more recently, an enterprising gang bought ATMs and an ATM software development kit (on credit), programmed a machine to capture PINs, and rented space for it in a shopping mall in Connecticut. A Dutch gas station attendant used a tapped point-of-sale terminal to harvest card data in 1993; and in March 1994, villains constructed an entire bogus bank branch in the East End of London and made off with £250,000 ($375,000). There seems to be no defense against this kind of attack, short of moving from magnetic cards to payment tokens, which are more difficult to forge.

But trusting technology too much can be dangerous. Norwegian banks spent millions on smartcards, and are now as publicly certain about their computer security as their British colleagues. Yet despite the huge investment, there have been a number of cases in Trondheim, Norway, where stolen cards have been used without the PIN having been leaked by the user. The banks' refusal to pay up will probably lead to litigation, as in Britain, with the same risk to both balance sheets and reputations.

Where transaction processing systems are used directly by the public, there are really two separate issues. The first is the public-interest issue of whether the burden of proof (and thus the risk) falls on the customer or on the system operator. If the customer carries the risk, the operator will have little short-term incentive to improve security; but in the longer term, when innocent people are prosecuted because of disputed transactions, the public interest becomes acute.

If, on the other hand, the system operator carries the risk, as in the United States, then the public-interest issue disappears, and security becomes a straightforward engineering problem for the bank (and its insurers and equipment suppliers). We consider how this problem can be tackled in the following sections.

Organizational Aspects

First, a caveat: our research showed that the organizational problems of building and managing secure systems are so severe that they will frustrate any purely technical solution.

Many organizations have no computer security team at all, and the rest have tenuous arrangements. The internal audit department, for example, will resist being given any line management tasks, while the programming staff dislike anyone whose role seems to be making their job more difficult. Security teams thus tend to be "reorganized" regularly, leading to a loss of continuity; a recent study shows, for example, that the average job tenure of computer security managers in U.S. government departments is only seven months [13].

It should not be surprising that many firms get outside consultants to do their security policy-making and review tasks. However, this can be dangerous, especially if firms pick these suppliers for an "air of certainty and quality" rather than for their technical credentials. For example, there was a network of over 40 banks that encrypted their PINs in a completely insecure manner (using a Caesar cipher) for five years, yet in all this time not one of their auditors or consultants raised the alarm. It is interesting to note that, following a wave of litigation, accountancy firms are rewriting their audit contracts to shift all responsibility for fraud control to their clients; but it remains to be seen what effect this will have on their security consulting business.
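For readers unfamiliar with why a Caesar cipher is "completely insecure" here: shifting every digit by the same secret amount gives only ten possible keys, so anyone who learns a single PIN, or simply tries all ten shifts, breaks the whole network. A toy sketch (the banks' exact scheme is not described in the article; this is the generic cipher):

    # A Caesar cipher over PIN digits: every PIN is shifted by the same secret amount.
    def caesar_pin(pin: str, shift: int) -> str:
        return "".join(str((int(d) + shift) % 10) for d in pin)

    ciphertext = caesar_pin("2741", shift=3)                  # "5074"
    candidates = [caesar_pin(ciphertext, -k) for k in range(10)]
    print(candidates)   # the true PIN is guaranteed to be one of these ten strings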

Much of the management debate, however, is not about the consultancy [...]

The Problems with Security Products

[...] know that the most likely faults in [...] that its theoretical breaking strain with optimal materials is six times what is required, and to proof-test samples of the actual materials used to three times the needed strength.

Aircraft engineers, on the other hand, know that many accidents are caused by the failure of critical components, and make extensive use of redundancy; with very critical functions, this may extend to design diversity. When flying in clouds, pilots need to know which way is up, and so a modern airliner typically has two attitude indicators driven by electrically powered gyro platforms. If these both fail at once, there is a 1950s-era artificial horizon with pneumatically driven gyros, and a 1920s-vintage turn-and-slip indicator driven by a battery.

But neither overdesign nor redundancy is adequate for secure computational systems. Just doing more rounds of a bad encryption algorithm, or using a number of weak algorithms one after another, will not necessarily produce a strong one; and unthinking use of redundancy in computer systems can be dangerous, as resilience can mask faults that would otherwise be found and fixed.

Our work on ATM systems therefore inspired us to look for an organizing principle for robustness properties in computer security systems. The key insights came from the high-tech end of the business: from studying authentication protocols and the ways in which cryptographic algorithms interact (see the sidebar "No Silver Bullet").

These results suggest that explicitness should be the organizing principle for security robustness. Cryptographic algorithms interact in ways that break security when their designers do not specify the required properties explicitly; and protocol failures occur because naming, freshness, and chaining properties are assumed implicitly to hold between two parties.

The importance of explicitness is confirmed in the field of operating systems security by a recent report which shows that implicit information problems were one of the main causes of failure there, and that most of the others were due to obvious requirements not being explicitly checked [17].

However, just saying that every security property must be made explicit is not a solution to the practical problem of building robust systems. The more aspects of any system are made explicit, the more information its designer has to deal with; and this applies not only to designing systems, but to evaluating them as well. Can our explicitness principle ever amount to more than a warning that all a system's assumptions must be examined very carefully?

There are two possible ways forward. The first is to look for ways in which a system that has a certain set of relationships checked explicitly can be shown, using formal methods, to possess some desirable security property. This may be a good way to deal with compact subsystems such as authentication protocols; and lists of the relevant relationships have been proposed [1].

The other, and more general, approach is to try to integrate security with software engineering. Data-dependency analysis is already starting to be used in the security world:

·  A typical difficult problem is identifying which objects in a system have security significance. As we saw previously, frauds have taken place because banks failed to realize that an address change was a security event; and evaluating the significance of all the objects in a distributed operating system is a Herculean task, which involves tracing dependencies explicitly. Automated tools are now being constructed to do this [11] (a toy sketch of the dependency-tracing idea follows this list);

·  Another difficult problem is that of verifying whether an authentication protocol is correct. This problem can be tackled by formal methods; the best-known technique involves tracing an object's dependencies on crypto keys and freshness information [9], and has been used to verify transaction processing applications as well [2].
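As the toy illustration promised in the first item above (the objects and edges here are hypothetical, and real tools work over far richer system models): starting from objects already known to matter for security, walking the dependency graph pulls in everything they rely on, which is how an apparently mundane object such as a customer address file turns out to be security-significant.

    from collections import deque

    # "A depends on B" edges for a few hypothetical bank-system objects.
    depends_on = {
        "pin_verification": ["pin_key", "account_master_file"],
        "pin_reissue_by_post": ["customer_address_file"],
        "account_master_file": ["customer_address_file"],
    }

    def security_significant(roots):
        # Breadth-first walk: anything a significant object depends on is significant too.
        significant, queue = set(roots), deque(roots)
        while queue:
            for dep in depends_on.get(queue.popleft(), []):
                if dep not in significant:
                    significant.add(dep)
                    queue.append(dep)
        return significant

    print(security_significant(["pin_verification", "pin_reissue_by_post"]))
    # all five objects, including the address file, come out as security-significant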

However, we cannot expect to find a "silver bullet" here either. This is because many of the more subtle and difficult mistakes occur where assumptions about security properties fail at the interface between different levels (e.g., algorithm-protocol or protocol-operating system) [6]. Thus when we decompose our system into modules, we must be very careful to ensure that all our assumptions about possible interactions have been made explicit and considered carefully.

Explicitness and Software Engineering

Robustness as explicitness fits in well with the general principles of software engineering, but may require some changes in its practice. A recent study shows that for many years the techniques used by system builders to manage security requirements, assumptions, and dependencies have lagged a generation behind the state of the art [5].

An even more elementary problem concerns the mechanisms by which security goals are established. Many software engineering methodologies since the waterfall model have dispensed with the traditional requirement that a plain-language "concept of operations" should be agreed upon before any detailed specification work is undertaken. This is illustrated by our work on ATMs.

ATM security involves several conflicting goals, including controlling internal and external fraud, and arbitrating disputes fairly. This was not understood in the 1970s; people built systems with the security technology they had available, rather than from any clear idea of what they were trying to do. In some countries they ignored the need for arbitration altogether, with expensive consequences. This underlines the particular importance of making security goals explicit, where a concept of operations can be a great help; it might have focused ATM designers' attention on the practicalities of dispute resolution.

Finally, it may be helpful to compare secure systems with safety-critical systems. They are known to be related: while the former must do at most X, the latter must do at least X, and there is a growing realization that many of the techniques and even components from one discipline can be reused in the other. Let us extend this relationship to the methodological level and have a look at good design practice. A leading software safety expert has summed this up in four principles [Xl]:

·  The specification should list all possible failure modes of the system. This should include every substantially new accident or incident that has ever been reported and that is relevant to the equipment being specified.

·  It should explain what strategy has been adopted to prevent each of these failure modes, or at least make them acceptably unlikely.

·  It should then spell out how each of these strategies is implemented, including the consequences when each single component fails. This explanation must cover not only technical factors, but training and management issues too. If the procedure when an engine fails is to continue flying with the other engine, then what skills does a pilot need to do this, and what are the procedures whereby these skills are acquired, kept current, and tested?

·  The certification program must include a review by independent experts, and test whether the equipment can in fact be operated by people with the stated level of skill and experience. It must also include a monitoring program whereby all incidents are reported to both the equipment manufacturer and the certification body.

This structure ties in neatly with our findings, and gives us a practical paradigm for producing a robust, explicit security design in a real project. It also shows that the TCSEC program has a long way to go. As we mentioned earlier, so far no one seems to have attempted even the first stage of the safety engineering process for cryptographic systems. We hope that this article will contribute to closing the gap, and to bringing security engineering up to the standards already achieved by the safety-critical systems community.

Conclusions

Designers of cryptographic systems have suffered from a lack of feedback about how their products fail in practice, as opposed to how they might fail in theory. This has led to a false threat model being accepted; designers focused on what could possibly go wrong, rather than on what was likely to, and many of their products ended up so complex and tricky to use that they caused implementation blunders which led to security failures.

Almost all security failures are in fact due to implementation and management errors. One specific consequence has been a spate of ATM fraud, which has not only caused financial losses, but has also caused several wrongful prosecutions and at least one miscarriage of justice. There have also been military consequences, which have now been admitted (although the details remain classified).

Our work also shows that component-level certification, as embodied in the TCSEC program, is unlikely to achieve its stated goals. This, too, has been admitted indirectly by the military; and we would recommend that future security standards take much more account of the environments in which the components are to be used, and especially the system and human factors.

Most interesting of all, however, is the lesson that the bulk of the computer security research and development budget is expended on activities that are of marginal relevance to real needs. The real problem is how to build robust security systems, and a number of recent research ideas are providing insights into how this can be done.

No Silver Bullet

[...] Each of these proposals only addresses part of the problem, and none of them is adequate on its own: protocol failures are known which resulted from the lack of names, or of freshness, or of context information within the security envelope [1]. But putting them together and insisting on all these variables being made explicit in each message appears to solve the global robustness problem, at least for simple protocols.

This combined approach had actually been adopted in 1991 for a banking application [2], in which attacks on the payment protocols are prevented by making each message start with the sender's name, and then encrypting it under a key that contains a hash of the previous message. These techniques were not used as an experiment in robustness, but to facilitate formal verification.
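A minimal sketch of the chaining idea just described: the sender's name is carried explicitly in every message, and each message key depends on a hash of the previous message, so replayed or re-ordered messages no longer decrypt. The primitives used here (SHA-256 and AES-GCM via pycryptodome) are modern stand-ins chosen for the sketch, not the ones used in the 1991 banking application, and the key and message contents are invented.

    import hashlib
    from Crypto.Cipher import AES

    MASTER_KEY = b"0123456789abcdef0123456789abcdef"   # shared secret (example value only)

    def chained_key(previous_message: bytes) -> bytes:
        # The key for each message depends on everything exchanged so far.
        return hashlib.sha256(MASTER_KEY + hashlib.sha256(previous_message).digest()).digest()

    def send(sender: str, payload: bytes, previous_message: bytes) -> bytes:
        plaintext = sender.encode() + b"|" + payload   # the sender's name is explicit in every message
        cipher = AES.new(chained_key(previous_message), AES.MODE_GCM, nonce=b"demo-nonce-01")
        ciphertext, tag = cipher.encrypt_and_digest(plaintext)
        return ciphertext + tag

    # The first message chains off an agreed constant; each later one chains off its predecessor.
    msg1 = send("ATM-17", b"debit 50.00 from 12345678", previous_message=b"INIT")
    msg2 = send("BANK", b"authorised", previous_message=msg1)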

Another reason to believe that explicitness should be the organizing principle for robust security comes from studying how cryptographic algorithms interact. Researchers have asked, for example, what sort of properties we need from a hash function in order to use it with a given signature scheme, and a number of necessary conditions have been found. This led us to ask whether there is any single property that is sufficient to prevent all dangerous interactions. We recently showed that the answer is probably no [4].

What this means is that in cryptology, as in software engineering, we cannot expect to find a "silver bullet" [7]; there can be no general property that prevents two algorithms from interacting and that is likely to be of any practical use. In most real situations, however, we can explicitly specify the properties we need: typical properties might be that a function is correlation-free (we can't find x and y such that f(x) and f(y) agree in too many bits) or multiplication-free (we can't find x, y, and z such that f(x)f(y) = f(z)).

