An Annotated Bibliography (AB) about my policy topic, which is "Social Media Platform – Privacy Policy: Branch 1 User Location Privacy, Branch 4 Data Storage, Branch 5 Third-party application access"
Write an Annotated Bibliography on two articles.
The first article you will choose from the list below: (Choose only ONE article from this list)
1-
Information Security Policy Compliance_ An Empirical Study of Ethical ideology.pdf
2-
Stahl2014_Article_Critical Theory As An Approach To The Ethics Of information Security.pdf
3-
Organizations Information Security Policy Compliance Stick or Carrot Approach.pdf
4-
Security_as_Ethics.pdf
The second article you will have to find on your own; it should be related to your policy topic/setting or type (I recommend Google Scholar).
10
Security As Ethics
Anthony Burke
Every vision of security has an ethic, but not all are equally ethical.
In late 2008, as this chapter was being written, Presidential candidates John McCain and
Barack Obama toured the United States campaigning for election to an office that – more
than any other – has the potential to shape the global security environment and affect the
very possibility of security for millions of human beings. The statistics are familiar. The
USA has the world’s largest economy, and the world’s largest military. Its powerful position
in the governance of major global institutions shapes the social and financial landscape of
the world economy. Its web of alliances influences the security calculations and doctrines
of multiple states, and its use of force, or even the merest threat of it, affects crises from Iraq
to North Korea. It has helped to prevent genocides, and stood by as they were perpetrated,
and its powerful position in the United Nations has both driven, and frustrated, normative
and institutional innovation. As the just war theorist Jean Bethke Elshtain (2004: 6) is fond
of saying, with great power comes great responsibility.
Despite this, the vast importance of the electoral race seemed perversely mirrored in
the abjection of its foreign-policy discourse. Both candidates vied to boast of how they
would ‘protect America’ and best preserve its ‘national security’, rather than the security of
states and human beings outside America. Both vowed to protect Israel and prevent Iran
obtaining nuclear weapons, and, despite setting out an admirably cosmopolitan vision of
US leadership in a powerful speech in Berlin, Obama vowed to ‘take out’ Osama Bin Laden
in Pakistan if necessary. The war in Afghanistan was important not to prevent the return
of the Taliban – a vicious regime that had slaughtered thousands and abolished the rights
of women – but because al-Qaeda may be plotting new attacks against Americans (CNN
2008). These were, to be sure, legitimate foreign policy concerns, but their self-regarding
nature spoke volumes about the ethical problems posed by security policy and discourse.
The issues raised during the year reflected the gamut and changing nature of international
security concerns – climate change, nuclear proliferation and disarmament, poverty,
terrorism, genocide and military flashpoints – yet all of them are global in nature and raise
concerns for billions outside the United States. Inside the USA, it seemed impossible for
Obama to repeat his paean in Berlin that ‘there is no challenge too great for a world that
stands as one’ (Obama 2007).
Affirming, Questioning, Ethics
Linking security and ethics is not a simple matter. Debates about security have rarely been
framed in explicitly ethical or moral terms, and it is hard to point to any self-consciously
ethical theory of security as such, in contrast to the role that the just war tradition performs
for studies of war and peace, for example. This does not mean, however, that ethics are
foreign to questions of security; they are in fact central to them, however submerged or
disavowed. Every vision and practice of security has an ethic: an ideal of the good and the
right, even if it is buried in a story about the facts. Every vision of security poses an ideal
vision of human nature, societal priority and international order, even if they state them as
fixed, ahistorical givens. This is one vision of the ethical, one that is all too often collapsed
into ontology, into claims about what is true, what exists and what cannot exist.
Yet another vision of the ethical, which we could call ‘critical’, seeks to make ethical
judgements about ethical visions. It refuses to collapse ethics into ontology, to reduce ethics
to timeless visions of what is or to a set of fixed and rigid principles. It does not eschew
principles, but subjects them to a test of their effects and implementation, to the practices
they generate and the results – in terms of security or insecurity – that they produce.
However a serious obstacle is posed by the way in which the idea of security itself
shapes the terrain upon which we can think and achieve the ethical. The historical terms
upon which we have understood and conceptualized security – and with it, political
and moral community – constrain the very basis upon which we can achieve anything
‘ethical’ with security. In the wake of recent reformulations that have sought to ‘broaden’
or ‘deepen’ security to new threats or new objects of concern, powerful questions press in.
Who or what is the object of security concern? How is security defined? What practices,
methods and goods does security name, and how do we evaluate them? And could security,
by overriding other goods that societies deem to be of value, unnecessarily constrain the
worlds we can make under the name of the ethical? In short, does security have ethical
worth at all?
National Securities: Statism
Realist approaches are primarily statist, or state-centric, in nature. This is also true of
some liberal security policy outlooks, particularly those that frame global commitments
under the banner of ‘good international citizenship’, where national interests take priority
and shape international commitments, which are seen otherwise as acts of charity. Such
approaches view ‘national security’ as a fundamental objective and role of government, and
the nation state as its beneficiary. In the language of security studies, the state is the ‘referent
object’ of security.
In the realist ontology of international relations, insecurity is a permanent condition
because of either the innate human tendency towards violence, power-seeking and
aggression, or – as John Herz’s analysis of the ‘security dilemma’ contends – the structural
conditions of international anarchy (Donnelly 1992). Upon this consensus realists then
divide on their policy outlooks, especially with regard to the utility of force, the dilemmas
of deterrence and the role of morality in foreign policy. These differences tend to be most
stark between some strategists and classical realists, and matters have become more complex
with the rise of neo-conservatism (which combines a particularly hawkish view of force
with aggressively liberal visions of Western-led global transformation) and realist-liberal
hybrids of just war theory and international intervention.
Many Cold War strategists – perhaps best exemplified by Robert Osgood and Robert
Tucker (1967), Thomas Schelling (1966), and Henry Kissinger (1969), who were using a
conceptual template laid down by Carl von Clausewitz (1975) – saw force as a natural tool
of American statecraft and sought to develop hard-boiled doctrines for its use. Out of these
efforts came the doctrines of limited war (including, controversially, the limited use of
nuclear weapons), and conflicts in Vietnam, Iraq, Lebanon, Afghanistan, Panama, Nicaragua,
Cambodia and East Timor, among others. Often quite ruthlessly, national security was
premised on the insecurity and suffering of others – on what Schelling called ‘the power
to hurt’ (1966: 4). This consensus quickly evaporated, however, with early limited war
thinkers such as Morton Halperin (1987) and Robert McNamara (2003) becoming critics
of nuclear strategy and sceptics of hawkish positions.
Here a statist ethic divided: while the hawks viewed force as a politically and morally
unproblematic servant of the national interest, other realists saw US and European security
as closely intertwined with that of its adversaries and developed profound moral qualms
about the humanitarian costs of war in Vietnam or potential nuclear war. Such thinking
was also reflected in more liberal European ideas of ‘common security’ and ‘non-offensive
defence’ (Booth and Wheeler 2008: 137–45). More recently, John Mearsheimer and
Stephen Walt criticized the USA’s 2003 invasion of Iraq and uncritical support of Israel as
detrimental to US security and interests, and as lacking a compelling strategic basis (2003,
2007). Walt also elucidated a broader vision of US foreign policy that abandons the pursuit
of primacy in favour of an effort to act on the basis of ‘greater knowledge, wisdom and self-restraint’
(2005: 26). While they were not indifferent to violations of human rights principles and
universal norms, their central concern was about strategic damage to US interests. Here a
statist ethic raises profound questions about the utility and costs of force.
Given the liberal, Wilsonian tinge of much neo-conservative foreign policy after 2001,
such realists were implicitly reviving a classically realist concern with excessive moralism in
foreign policy because of its detrimental impact on mutual security. While such arguments
have been deployed against the view that foreign policy should not be distracted by morality
at all (Kennan 1954), writers like E.H. Carr (1969) and Hans Morgenthau (1954: 245–9)
have argued for a complex balance of realism and morality, with Morgenthau making
the prescient argument that Western policy-makers are too often tempted to assume that
all moral virtue lies with their own states and allies and all perdition with their enemies,
which can intensify conflict and dehumanize the other side (see also Schmitt 1996: 54).
These concerns with mutual security have informed liberal efforts to promote
cooperation, confidence-building measures, security dialogues and regional institutions that
can build upon the original normative framework of the United Nations more effectively.
Here statist ethics jostle for room with more deontological, cosmopolitan perspectives, a
set of tensions captured in liberal arguments about ‘good international citizenship’ and in
much constructivist scholarship on security. Here scholars have extended Karl Deutsch’s
idea of the ‘security community’ to examine the processes of ‘socialization’ that create
systems of states that eschew the use of force to resolve disputes, and create normative webs
that can prevent conflict and solve common security problems (Acharya 2001; Adler and
Barnett 1998). However concerns have been raised that constructivist scholarship, such
as on ASEAN, neglects the coexistence of good with bad norms that legitimize coercive
responses to intra-state threats, and that security communities have the potential to be
transformed into ‘regional fortresses’ that contain the potential for new conflicts with
outsiders (Bellamy 2004; Burke and McDonald 2007).
Profound ethical issues have also been raised by late twentieth-century realist scholarship
and policy that ‘broadens’ the national security agenda to take in new ‘transnational’ and
‘non-traditional’ threats such as those posed by disease, the environment, crime, terrorism,
illegal immigration, economic instability, piracy, cyber-attacks and more (Buzan et al. 1998;
Dupont 2001). This framework builds on older Asian models of ‘comprehensive’ security,
which are still state- and regime-centric but focus on a wider range of threats, including
those from within. This agenda certainly incorporates legitimate concerns, but has been
criticized for its focus on states as objects of security and for legitimating repressive or
inappropriate policy responses that undermine international law, or compromise human
rights (Burke 2007a). In this regard the work of the Copenhagen School has been
important, if ethically ambiguous. It rightly seeks to illuminate and question the process
of how new threats are ‘securitized’, and in some cases advocates their ‘de-securitization’.
However it tends to see security in very realist (and even Schmittian) terms as being about
existential threats to the ‘survival’ of the state, the ‘special nature’ of which ‘justif[y] the use
of extraordinary measures to handle them’. Furthermore, by reifying the state as the prime
referent of security, Copenhagen School analysis blocks any path to human security (Doty
1998–9: 80; Fierke 2007: 110). In his key 1995 article, Ole Wæver insists that ‘there is no
literature, no philosophy, no tradition of security in non-state terms … neither individual
nor international security exist’ (1995: 48–9).
Human and Environmental Securities: Cosmopolitanism,
Feminism and Critical Theory
The most profound, and ethically promising, move in the redefinition of security came in
the 1990s as feminist, post-Marxist and peace movement perspectives came to influence
security analysis. In the academic sphere, R. B. J. Walker (1988) argued for a security
inclusive of all human beings in his book, One World, Many Worlds; Ken Booth published his
article ‘Security and Emancipation’ in a 1991 issue of the Review of International Studies; and
J. Ann Tickner published her book Gender and International Relations: Feminist Perspectives
on Achieving Security, in 1992. In reinterpreting security as ‘emancipation’, Booth argued
for ‘a holistic and non-statist’ approach that would achieve ‘the freeing of people’ from
‘physical and human constraints … war and the threat of war … poverty, poor education,
political oppression and so on’ (1991a: 317–19). Tickner argued that security should be
based upon ‘the elimination of unjust social relations, including unequal gender relations’.
It must address the ‘multiple insecurities’ represented by ecological destruction, poverty
and (gendered) structural violence, rather than the abstract threats to the integrity of states,
their interests and ‘core values’ (1992: 127–44). Her perspective has been echoed by her
student, the revisionist just war scholar Laura Sjoberg, who argues for a replacement of ‘the
need for security with an affective attachment to others’ needs for security’ (2006: 212).
On the policy side, the United Nations Development Program (UNDP) placed what
it called ‘human security’ on the UN agenda in its 1994 Human Development Report. There
it defined human security as ‘safety from constant threats such as hunger, disease, crime
and repression … [and] from sudden and hurtful disruptions in the patterns of our daily
lives’ (1994: 3). The 2003 UN Commission on Human Security in turn defined human
security in terms of a twofold set of freedoms: ‘freedom from want and freedom from
fear … protecting people’s vital freedoms from critical and pervasive threats, in ways that
empower them’. To do so, it continued, ‘means creating political, social, environmental,
economic, military and cultural systems that together give people the building blocks
of survival, livelihood and dignity’. Its report, Human Security Now, also argued that ‘the
security of one person, one community, one nation rests on the decisions of many others –
sometimes fortuitously, sometimes precariously’; an insight which stresses the irreducibly
interconnected and interdependent nature of security, and strongly implies a cosmopolitan
ethic and architecture of response (Commission on Human Security 2003: 4, 2).
The ‘deepening’ move made by human security analysis – the shift in the referent of
security from the state to the human being – is of profound ethical importance. Not only is
it a sharp challenge to the ethics of statism and its restrictive moral community, it challenges
the emphasis on military and defence issues along with the ‘broadening’ move of more
recent realisms in ethical terms. It incorporates threats from states to their own populations,
and challenges states to find non-repressive and just solutions to internal challenges to their
security. Human security thus challenges states and international organizations to deal
with a broadened range of threats in less self-regarding and coercive ways. If the security
of human beings is everywhere intertwined, national security cannot be an exclusive
priority and cannot be achieved by depriving others of security. To the extent that national
security remains valid, it must be based on human security. Approaches that are genuinely
holistic also stress preventive, long-term and structural solutions to insecurity, which places
a premium on transnational cooperation and justice. It is possible that human security
can improve a statist ethic – by showing pragmatically how to achieve more sustainable
national security outcomes – but its central ethic embraces human beings everywhere, in
both their diversity and common humanity.
This is where arguments for the narrowing of human security to questions of conflict
and human rights – to the ‘freedom from fear’ agenda – raise potential ethical problems.
This is the approach taken by the Human Security Report Project at Canada’s Simon
Fraser University, and by some realists who seek to limit human security to questions of
political violence and intervention (Thomas and Tow 2002). While one can appreciate the
pragmatic rationale for this – which seems to be based around a concern with how new
issues can be effectively securitized – it opens up dangers. While granting that the emphasis
of new normative frameworks for intervention (such as the ‘responsibility to protect’) is on
prevention, it risks undermining a genuinely holistic analysis of how insecurity processes
are interrelated and threats arise, and must thus be addressed. It also risks associating human
security too closely with intervention, which in the post-Iraq era has become controversial
and, in some quarters, deeply suspect as a new liberal version of imperialism. Indeed, in
some liberal and just war accounts, controversial new arguments for intervention are made
in terms of human rights and security, even as they affirm Western security agendas and
statist ethical frameworks (see Ethics and International Affairs 2005, and International Relations
2005). It is likewise possible for governments to absorb human security rhetorics into
(otherwise unchanged) statist frameworks, to reduce human security to a political discourse
of disciplinary power rather than an ethically rich series of emancipatory practices. Human
security, I would argue, should not be reduced to ‘good international citizenship’; it is far
more challenging and transformative than that.
Good international citizenship, or what the Australian Labor government of Kevin
Rudd calls ‘multilateral realism’, is merely a modification of a statist ethic which prioritizes
national security, and only seeks human security and sustainable development when it is
in the national interest, or can be constructed as a form of cost-free altruism (Burke 2008:
232–3). While national security cannot be disregarded, good international citizenship
blocks the transformative potential of human and critical security approaches and obscures
the profundity of their demand. It also obscures the advantages for national security of the
sustained stability that human security provides. At the policy level, human security needs
to be thought as broadly as possible, uniting the freedom from fear and want agendas.
It demands a genuinely holistic and long-term policy approach that builds interlocking
systems in which the basic underpinnings of security – from systems of health and social
security, stable and sustainable economies, the prevention and resolution of conflict within
and across borders, to the protection and promotion of human and cultural rights – can
be assured in forms that preserve human dignity rather than lock people into new, and
ethically suspect, webs of disciplinary power. This latter concern, as I explore below, is
something of which post-structuralists rightly warn.
In this way it is unsurprising, and welcome, that critical security approaches emphasize a
cosmopolitan reconstruction of world order so it is less violent and more just. Booth argues
that reconceiving security as emancipation offers ‘a theory of progress for society and a
politics of hope for a common humanity’ (2005: 181), and in his Theory of World Security
seeks to imagine a vision of transnational political community that is not ‘synonymous
with homogeneity’. This, in Gandhi’s terms, is a challenge to ‘reconcile the singular I with
plural We’s’. ‘World security’, writes Booth, ‘asks us to celebrate the possibility of human
equality; this alone, if put into consistent and universal political practice, offers hope of
eradicating universal human wrongs’ (2007: 138, 140). How as a world we construct and
enact such cosmopolitanism, and the challenges of power and responsibility it would bring
into play, open up a whole series of new ethical dilemmas in turn.
Mitigating and preventing damage to natural ecosystems, animal and plant life, and
human activities that are vulnerable to environmental disturbances, has also become an
ever more important part of a broadened security agenda. Examples include a UN Security
Council meeting in February 2007 that discussed climate change, the US Congress’s
commission of a national intelligence estimate on the implications of climate change for
US security, and the UK government’s listing of climate change as a major security threat
(Burke 2007a; Eckersley 2009). Yet we have a right to ask how well the discourse about
human security has been ‘deepened’, and whether aligning environmental with human
security really captures the ethical challenge it evokes.
The ethical problem raised by environmental security is twofold. First, to what security
‘referents’ is the environment linked: states, individuals or ecosystems? And secondly, what
are the implications of ‘securitizing’ environmental problems: will they be effectively
addressed or are new injustices possible? Arguments that climate change poses a serious
threat to international security do have great weight (Dupont and Pearman 2006), but
there is an all too real danger that they will simply be statized: that they will stimulate
militarist and exclusivist responses, rather than the far more profound transformation of
policy and concepts that they demand. The report that stimulated Congress’s vote for
the NIE, by a group of retired admirals and generals for the CNA Corporation, made
a number of important warnings about how US defence policy would need to change
(CNA Corp 2007). However it was narrowly focused on US security and interests: on
the instability posed by climate refugees and environmental stresses on weak states, rather
than on their human security impacts and attendant ethical obligations. Reflecting such
thinking, Al Gore said in his speech to the 2008 Democratic convention that ‘military
experts warn us our national security is threatened by massive waves of climate refugees
destabilizing countries around the world’ (Gore 2008). If Western policy aims merely to
block, warehouse and detain refugees, and to quarantine unstable regions, the climate
change future will be one of both physical and moral disaster.
In this light, the holistic and cosmopolitan impulses of critical security approaches have
rarely been more relevant: dangerous climate change must be prevented by a profound
transformation of global energy economies, land use patterns and approaches to growth
– and where it cannot be prevented, we must manage its impacts humanely and without
regard for harsh Westphalian norms, rather than wall ourselves into violently defended
islands of greater and lesser security. In turn, we can ask whether climate change and other
forms of environmental degradation do not demand an ethical displacement of both the
national and human egos? Here, animals and ecosystems become prime objects of moral
concern, and the post-Cartesian human pretence to mastery and calculation yields to a
sense of debt to, and interconnection with, nature (Dalby 2002: 160).
Post-structuralism: Security Politics and the Human
Post-structural writings raise important questions for security studies through what might
be called an ethics of conceptual destabilization. Here the ‘linguistic turn’ in twentiethcentury social science is linked with a ‘deconstructive’ tradition in continental philosophy
that places the entire structure of concepts under scrutiny, and excavates practices from the
claims to necessity and truth that naturalize them.
Where traditional and critical approaches settle on a fixed set of referents for security,
seeing it as a thing and an end, post-structural writings see a politics: a struggle for power,
and a process for constructing social life and thus shaping the possibilities for determining
ethics. As Jef Huysmans argues, ‘security knowledge is a political technique of framing
policy questions in logics of survival with a capacity to mobilise politics of fear in which
social relations are structured on the basis of distrust’. This, as he and others have argued,
is to see security as a form of Foucauldian ‘governmentality’ and a ‘political technology’
that links strategies of ‘individualising’ and ‘totalising’ power into a working whole (Burke
2007b: 1–53, 2008: 8–12; Huysmans 2006: pp. xii, 9, 97). This enables us to identify and
critique concrete practices that claim to promote and defend security, by denaturalizing
them and questioning their ethical effects.
In addition, post-structural theory has pointed out how, more deeply, claims about
security are embedded in ahistorical claims about the nature of politics and life. As R. B. J.
Walker has written, security is linked to a ‘constitutive account of the political’ that reduces
humanity to citizenship (1997: 69–71). In this light, security is a metaphysical discourse,
an overarching political goal and practice that guarantees political existence as such, that
makes the possibility of the world possible. However since the settlement of Westphalia was
developed into a hard series of norms structuring international society, alongside a tradition
of statist political thought deriving from the social contract theories of Hobbes, Locke and
Rousseau, these possibilities have been limited to an exclusive moral community existing
in fundamental (and often violent) alienation from a range of internal and external Others
(Burke 2007b: 28; Dillon 1996: 13; Neocleous 2008: 11–38). This is where post-structural
thought can challenge the way that the idea of security itself shapes the terrain upon which
we can think and achieve the ethical.
Both of these moves have been very productive in enabling a sociological analysis of the
way in which security has naturalized violence, cramped the ethical potential of political
community and structured the creation of a ‘security politics’ in which human bodies and
minds can be appropriated, constructed, harmed, disciplined, utilized and discarded in both
‘normal’ and ‘emergency’ practices of bureaucracy, war, justice, intervention and capital
accumulation. Following Foucault (1987) and Agamben (1998), this process – termed
biopolitics – has been placed under sustained scrutiny because of the seizure of life at its
roots that it implies. Profound ethical problems are raised by even the routine question
of how, in the national security state, subjects can find the agency to negotiate daily webs
of power; when coercive powers are legitimated and lives are placed directly at risk, the
ethical stakes are raised even further.
Warfare obviously raises such issues, but even apparently routine practices of immigration
policy and population management highlight the ethical dangers of biopolitics. These
were put powerfully by Elizabeth Dauphinée, in her conclusion to The Logics of Biopower
and the War on Terror, where she links her own story of immigration processing – ‘a process
through which one becomes politically human in the United Kingdom’ – to the suicide of
an Angolan asylum seeker, Manuel Bravo. Bravo had killed himself both out of despair and
as a tactic to ensure that his 13-year-old son Antonio could not be deported. In this ‘final,
astonishing act of parental love’, and in his ‘instrumental use of his own bare-life-in-death,
Manuel Bravo exercised political subjectivity in a place where he was presumed to have
none and thus reclaimed, for a time at least, the subjectivity of his son’ (Dauphinée and
Masters 2007: 230–2). Experiences of suicide, self-harm, brutality and depression were also
common for those confined in immigration prisons in Australia, where they were victims
both of the securitization of migration and the very structure of the political – national
sovereignty and the social contract – in which modern systems of security are embedded
(Burke 2008: 207–21). How then can a better ethical response be conceived?
It would take many books to adequately answer such a question, but, put briefly, in seeking
an answer it is possible to see both affinities and tensions between those critical approaches
informed by post-structuralism and post-Kantian critical theory. Post-structuralists have
generally been reluctant to speak the languages of emancipation and cosmopolitanism,
but it also seems clear that both traditions have been palpably concerned with systematic
injustices, evils and human wrongs that cause grave and avoidable suffering. While post-structuralists would do well to (critically) reconnect with cosmopolitanism, by drawing
on ethical traditions from Jacques Derrida, Emmanuel Levinas, Martin Buber and others
they have conducted a profound rethinking of security, sovereignty and being; one that
displaces the sovereign Cartesian ego of the post-Westphalian state in favour of a vision of
political being and transnational community based on a fundamental interdependence, and
structure of responsibility, between selves and others (Burke 2007b; Butler 2004; Campbell
1998; Dauphinée 2007).
This offers two forms of ethical promise. The first is a route out of the system of
existential alienation that is the social contract and the anarchic world of power-seeking
states. This would break Kant’s vision of a pacific federation of liberal republics into
something more complex: to reduce national sovereignty to a construct of legal equality
in international affairs and a principle of self-determination, while dramatically softening,
if not abandoning, its claim to make and remake being, which is now thought in terms of
complex relations of commonality and difference in a culturally, economically, politically
and ecologically interconnected world. We may still make our identity in relation to the
local and the near, but never in abandonment of our fundamental, and global, relation to
Others, without whom we do not meaningfully exist.
The second is a perspective on biopolitics. Post-structural thought has done the great
service of making biopolitics visible, and thus making it challengeable as a system of
power. Harder is the question of whether, and how, a decent politics of the human can
be detached from its systems of making human being political. Any such project must
negotiate a minefield of contradictions. Traditional efforts to codify and enforce universal
human rights standards are worth pursuing, but are often attenuated, in both domestic and
international law, by sovereign ‘interests’. At the same time the era of counter-terrorist wars,
humanitarian interventions and liberal imperialism has shown that biopolitics operates
even where ‘humanity’ is the object of concern. On the other hand, it seems self-defeating
to devalue the human as such, out of fear of its ever-potential reducibility to ‘bare life’, or
to hope for a world where biopolitics – and thus power itself – disappears as a possibility
(Owens 2008: 45). Every criticism of a biopolitical abuse rests implicitly on an ethic of the
human and a humanism of the ethical, and the dark connection between what Michel
Foucault called ‘life insurance and death command’ cannot be escaped (2002: 405). It can
only be made subject to a more stringent ethic of responsibility: its powers contained in a
critical political economy where life is affirmed, emancipated and protected on the most
general basis possible. Whether this should take the name of ‘security’ remains an open
question.
War and Security: Strategy and Just War Theory
By exposing combatants, civilians and – increasingly – entire populations to the danger of
death, injury and displacement, war raises profound ethical questions that exhaust what can
be discussed here. Here I wish to evaluate how critical and post-structuralist orientations
to ethics and security raise some tough questions for the most influential ethical approach
to war: the just war tradition.
This tradition – which specifies principles for legitimate resort to war (jus ad bellum)
and conduct in war (jus in bello) – has stimulated both an enormous literature in moral
and political philosophy, and been influential in shaping the international law of war and
international security norms, along with the ways governments justify war and militaries
conduct operations (Coady 2008; Nardin 1996). What is striking about this literature is
how rarely it relates its analysis systematically to a theory or practice of security, even in
writers who have written incisively about security (Bellamy 2006). The notable exception
here is Laura Sjoberg, who anchors her revision of just war theory in a ‘security ethic’ that
‘gives attention to individual and collective security at the political margins’. This ethic,
importantly, seeks to synthesize a Kantian ‘ethic of justice’ and a relational ‘ethic of care’ for
Others, in opposition to an ethic that would secure the autonomous, egoistic self (whether
that of a nation or individual) against a range of threatening others (2006: 45–6).
To be sure, just war theory has much to recommend it and has certainly helped to
reduce civilian suffering in war. However the link of a given just war orientation (and
they are diverse) to a given security paradigm is rarely clear. They are most commonly
marked by a disdain for an ethics of the Kantian or Levinasian kind, resting instead in a
realist ontological foundation in which the state is the privileged structure of political
community and war remains a politically (and morally) satisfying way of resolving conflict
in the last resort. Moral and generous behaviour is possible, and force may be a way of
performing it, but peace is not (Elshtain 2004, 2005). In this way it moderates Clausewitz’s
amoral vision of strategy, but does nothing to challenge his modernist assumption that
force can be an effective and cost-free way to achieve political ends.
Just war doctrines are thus nested in the same fundamental ontology of alienation that
post-structuralists have been at pains to unpack in the national security state. Hence they
tend to add an ethical supplement to national security policy, rather than support a more
profound reorientation of security as holistic, human-centred and emancipatory. Broader
questions of the antecedents to and aftermath of war have thus received little attention,
and where just war argument has been utilized to support liberal interventions, it risks
moralizing war and politics with exaggerated claims about truth and justice, at the same
time as some of its principles – such as the double-effect and proportionality – excuse the
killing of innocents (Elshtain 2003, 2004, 2005). Here just war is complicit with biopolitics,
and both critical and post-structuralist approaches have questioned the ethical effects of
a schematic, modernist application of moral principles that lack the important moral
qualities of self-critique and respect for undecidability (Burke 2007b: 160–5; Campbell
and Shapiro 1999; Zehfuss 2007).
We could, perhaps, ask that just war theorists account for their arguments in security
terms. However if this takes place uncritically, a new danger appears: the securitization of
morality.
Conclusion: An Ethic of Progress, or Perpetual Critique?
There is a final feature of security analysis that has ethical implications: its attitude to
history and time. It is my feeling that most theories of security fail to adequately grapple
with the radical break with the past that modernity makes, with the tremendous ability to
destroy, make and remake worlds that is the stuff of modern politics and constitutes both
the nightmare and possibility of security as a set of practices and powers. For example, Ole
Wæver (1995) appeals to a tradition in which state security is the only one with historical
continuity, and must thus limit our ethical horizon; Elshtain (2005: 95) offers a vision of
historical recurrence where alienation and war are ever tragic possibilities, and perpetual
peace is a ‘solipsistic dream’ that would eliminate politics as such; and, at the other pole,
Ken Booth insists on the normative necessity of an idea of progress – both moral and
historical – even as he insists it must be self-reflexive and not be read in teleological terms
(2007: 124–33).
While Booth’s insistence on agency and responsibility is welcome, when used at such a
metaphysical level progress retains its historicist force, and risks underplaying the dangers
of acting in the service of even the best ends. Thus it is intriguing, in the wake of the
critique of biopolitics, to read Booth advocating an open orientation to what humanity
might become: emancipation as ‘inventing humanity’ (2007: 256). I have trouble seeing
progress as much more than the most cynical of signifiers, always obscuring the concrete
contradictions of its past. However enlightenment can continue to inspire us, if it changes
from a dialectic of history, however critical, to what Foucault called ‘a permanent critique
of our historical era’ (1984: 42). The terrible powers mobilized under the name of security
suggest that we dare invent a better humanity only if we perpetually criticize the ways
humanity is being invented. How security is about the human thus becomes the crucial
question of ethics for security.
Journal of Management Information Systems
ISSN: 0742-1222 (Print) 1557-928X (Online) Journal homepage: https://www.tandfonline.com/loi/mmis20
Organizations’ Information Security Policy
Compliance: Stick or Carrot Approach?
Yan Chen , K. Ramamurthy & Kuang-Wei Wen
To cite this article: Yan Chen , K. Ramamurthy & Kuang-Wei Wen (2012) Organizations’
Information Security Policy Compliance: Stick or Carrot Approach?, Journal of Management
Information Systems, 29:3, 157-188, DOI: 10.2753/MIS0742-1222290305
To link to this article: https://doi.org/10.2753/MIS0742-1222290305
Published online: 09 Dec 2014.
Organizations’ Information Security Policy
Compliance: Stick or Carrot Approach?
Yan Chen, K. (Ram) Ramamurthy, and Kuang-Wei Wen
Yan Chen is a visiting assistant professor at the College of Business Administration,
University of Wisconsin–La Crosse. She received her Ph.D. in management science,
specializing in management information systems, from the University of Wisconsin–
Milwaukee. Her work has focused on information security, security tool interface
and issues, and e-commerce. Her research has been published in the International
Journal of Electronic Business and a number of refereed conference proceedings.
Her paper was nominated for the best student paper award at the Sixth International
Conference on Design Science Research in Information Systems and Technology
(DESRIST 2011). She is a member of the Association for Information Systems and
the Decision Sciences Institute.
K. (Ram) Ramamurthy is a professor and Roger L. Fitzsimonds Distinguished Scholar
in management information systems at the Sheldon B. Lubar School of Business,
University of Wisconsin–Milwaukee. He received his Ph.D. in business with a concentration in MIS from the University of Pittsburgh. He has 20 years of industry experience, having held several senior technical and executive positions. He served as an
associate editor of MIS Quarterly for four years. His current research interests include
electronic commerce and business; adoption, assimilation, and diffusion of modern
IT; data resource management and data warehousing; systems security compliance;
IT business value; IT outsourcing; decision and knowledge systems for individuals
and groups; systems security; and total quality management, including software
quality. He has published 50 research articles in major scholarly journals, including
MIS Quarterly, Journal of Management Information Systems, IEEE Transactions on
Software Engineering, IEEE Transactions on Systems, Man and Cybernetics, Deci‑
sion Sciences, European Journal of Information Systems, Decision Support Systems,
Information & Management, Journal of Organizational Computing and Electronic
Commerce, International Journal of Electronic Commerce, and IEEE Transactions on
Engineering Management, and over 29 articles in refereed conference proceedings.
He is a charter member of the Association for Information Systems.
Kuang-Wei Wen is a professor and chair of Information Systems Department at the
University of Wisconsin–La Crosse, the department he established in 1999. He received
his Ph.D. in architecture, decision and information systems from Carnegie Mellon University, and an M.S. from the University of Wisconsin–Milwaukee. His main
research focuses on global study of e-value creation among small and medium-sized
enterprises, information security management, application of artificial intelligence to
Web site customization, and effective use of social computing. His research has been
published in scholarly journals, including Management Science, European Journal
of Operational Society, and International Journal of Electronic Business, plus more
than 30 refereed articles in conference proceedings.
Journal of Management Information Systems / Winter 2012–13, Vol. 29, No. 3, pp. 157–188.
© 2013 M.E. Sharpe, Inc. All rights reserved. Permissions: www.copyright.com
ISSN 0742–1222 (print) / ISSN 1557–928X (online)
DOI: 10.2753/MIS0742-1222290305
Abstract: Companies’ information security efforts are often threatened by employee
negligence and insider breach. To deal with these insider issues, this study draws on
the compliance theory and the general deterrence theory to propose a research model
in which the relations among coercive control, which has been advocated by scholars
and widely practiced by companies; remunerative control, which is generally missing in both research and practice; and certainty of control are studied. A Web-based
field experiment involving real-world employees in their natural settings was used
to empirically test the model. While lending further support to the general deterrence theory, our findings highlight that reward enforcement, a remunerative control
mechanism in the information systems security context, could be an alternative for
organizations where sanctions do not successfully prevent violation. The significant
interactions between punishment and reward found in the study further indicate a
need for a more comprehensive enforcement system that should include a reward
enforcement scheme through which the organizational moral standards and values
are established or reemphasized. The findings of this study can potentially be used
to guide the design of more effective security enforcement systems that encompass
remunerative control mechanisms.
Key words and phrases: coercive control, compliance theory, general deterrence theory,
information security policy, punishment, remunerative control, reward.
It has long been a well-recognized fact that companies’ information security efforts
are threatened by employee negligence and insider breach (e.g., [44]). Information
security cannot be assured by using technological solutions alone. To deal with the
“insider” issues, companies have started to focus on various management and control
mechanisms such as security policies, procedures, and enforcement in addition to continually updating their security technologies [13, 18, 34, 61]. In the meantime, under
increasing pressure from various stakeholder action groups interested in security and
privacy concerns, the U.S. government and a few security-conscious industries
have stepped in by introducing specific regulations and standards. As a result of their
intervention, having a security policy in place is quite common among companies that
are required to comply with regulations and mandates such as the Payment Card Industry Data Security Standard (PCI DSS), Gramm–Leach–Bliley Act, Sarbanes–Oxley
Act, and Health Insurance Portability and Accountability Act (HIPAA) [30]. Despite this trend, however, human beings remain the weakest link in the information
security chain. A recent survey of over 500 security professionals in U.S. corporations,
government agencies, medical institutions, and universities that was conducted by the
Computer Security Institute [56] reported that the average monetary loss per respondent
was $288,618, and that 44 percent of the respondents reported insider security-related
abuse, making it the second-most frequently occurring computer security incident
(virus and malicious software infection incidents being the most frequent). Similar
results were found by the 2008 information security breaches survey sponsored by the
Department for Business, Enterprise, and Regulatory Reform (BERR) in the United
Kingdom [34]. The average cost experienced by a UK company’s single security-
compromised incident was between £10,000 and £20,000. For very large businesses,
this cost was between £1 million and £2 million. Furthermore, 62 percent of the worst
security incidents had an internal cause [34]. Clearly, employees could just bypass their
company’s security policies in order to get their job done more conveniently even if
they were aware of their company’s published security policies [72].
Employees do not seem motivated to follow security policies and procedures. They
appear to more often follow their well-honed habits and day-to-day routines and are
resistant to behavioral changes [10, 33]. They often use “neutralization” techniques
or make excuses for their policy violations [61]. Since effective information security
requires employees to comply with established security policies and procedures, the
area of information security management that focuses on issues such as effectiveness
and cost of security policy enforcement, balance between productivity and strict security, and between security level and information technology (IT) budget has become
one of the top areas of security concerns for businesses [56].
To address the compliance concern, different strategies for effective security policy
enforcement have been proposed. Drawing on the general deterrence theory (GDT) [65,
66], scholars usually advocate the negative enforcement strategy—punishment. The
GDT proposes that as punishment certainty and severity increase, unwanted behaviors
can be deterred [33, 65, 66]. But, borrowing from theories in organizational literature,
some scholars support the positive enforcement strategy—reward. Some argue that
reward provides needed incentive and motivation for compliance [10] and that reward
combined with sanction is one of the important factors that can influence individual
employees’ rational cost–benefit assessment of compliance vis-à-vis noncompliance
behaviors [13].
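The rational cost–benefit assessment described above can be illustrated with a toy expected-utility sketch. This is not a model from the article; the function names and all parameter values are hypothetical, chosen only to show how sanction certainty and rewards can each tip the compliance calculus.

```python
# Illustrative sketch only: a toy expected-utility view of an employee's
# cost-benefit assessment of compliance vs. violation. All names and
# numbers are hypothetical, not taken from the study.

def violation_payoff(convenience_gain: float,
                     sanction_severity: float,
                     sanction_certainty: float) -> float:
    """Expected payoff of violating the policy: convenience gained minus
    the expected sanction (severity weighted by enforcement certainty)."""
    return convenience_gain - sanction_certainty * sanction_severity

def compliance_payoff(reward_size: float, reward_certainty: float) -> float:
    """Expected payoff of complying: the expected reward for compliance."""
    return reward_certainty * reward_size

def complies(convenience_gain, sanction_severity, sanction_certainty,
             reward_size, reward_certainty) -> bool:
    """A 'rational' employee complies when compliance pays at least
    as much as violation under this toy model."""
    return (compliance_payoff(reward_size, reward_certainty)
            >= violation_payoff(convenience_gain, sanction_severity,
                                sanction_certainty))

# With lax enforcement and no reward, violation is attractive ...
print(complies(convenience_gain=5, sanction_severity=10,
               sanction_certainty=0.1, reward_size=0,
               reward_certainty=0.0))   # False
# ... but raising enforcement certainty alone tips the balance.
print(complies(convenience_gain=5, sanction_severity=10,
               sanction_certainty=0.6, reward_size=0,
               reward_certainty=0.0))   # True
```

The same flip can be produced by holding certainty fixed and adding a sufficiently certain reward, which is the intuition behind treating reward and punishment as substitutable control levers.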
From a control perspective, both reward and punishment are control mechanisms
to achieve organizational goals [21]. To be effective, such control mechanisms need
to tie into the certainty of how often those control mechanisms are enforced or materialized. Certainty of control, referring to the probability of the enforcement strategy
materializing, has been an influential factor that may contribute to the effectiveness
of the enforcement strategy of policy compliance (e.g., [5, 33, 65, 66]). However, to
the best of our knowledge, no prior studies in information systems (IS) have examined
the interaction effects between punishment and reward for enforcing security policy
compliance. Moreover, the empirical findings regarding the influence of reward on
security policy compliance in the IS security literature are inconsistent: rewards
were not found to affect compliance intention or actual compliance in some studies
(e.g., [10, 51]), whereas rewards were found to significantly influence an employee’s
belief in the benefit of security policy compliance (e.g., [13]). Thus, the difference
in the effectiveness of these two enforcement strategies is far from clear in the IS
field. In addition, although the choice between punishment and reward has long
been an important and interesting topic in other fields such as social psychology and
organizational management, even in these well-established fields, research findings
are still not in agreement. In particular, the joint effects of punishments and rewards
are even more unclear [4, 24]. Finally, the interaction effects between punishment/
reward and certainty of control are also not clear in the IS security literature, although
some recent attention has been given to the joint effects of punishment and certainty
of control (e.g., [61]).
Motivated by those shortcomings in the literature, we believe it will be revealing to
understand the different effects and interaction effects (if any) of the two enforcement
strategies as well as the main and interaction effects (if any) between the two enforcement strategies and certainty of control in the context of security policy compliance.
Drawing on the compliance theory [22] and GDT [65, 66], this study investigates
these two enforcement strategies and their interaction in the context of security policy
compliance. The main variables of interest are severity of punishment, significance of
reward, and certainty of control. Our central research questions are
RQ1: How does punitive enforcement affect employees’ security policy
compliance?
RQ2: How does rewarding enforcement affect employees’ security policy
compliance?
RQ3: How does enforcement certainty affect employees’ security policy
compliance?
RQ4: What is the combined effect of punitive and rewarding enforcement on
employees’ security policy compliance?
The rest of the paper is organized as follows. In the next section, a literature review
related to reward and punishment in organizations and in IS security is discussed,
and the significance of this study is elaborated. The third section draws on the major
elements of the general deterrence and compliance theories to develop the study’s
six hypotheses. In the fourth section, the research design for examining the study’s
hypotheses is presented. The data analysis and results are presented in the fifth section and discussed in the sixth section. The final section discusses the contributions
and implications of this study for research and practice in IS security as well as its
limitations and future research extensions.
Literature Review
Using negative stimuli (punishment or sanction) to discourage undesirable behaviors
or using positive stimuli (reward) to encourage desirable behaviors has long been a
topic in fields such as education, social psychology, and organization. Nevertheless,
studies in these fields have so far not reached a consistent conclusion about the effectiveness of punishment or reward on the investigated behavior. Some scholars argue
that incentives/rewards do not work and that punishment is a better choice for deterring commitment of a deviant act, whereas others believe in “the redemptive power
of reward” [40, p. 54]. On the punishment side, Arvey and Ivancevich [5] pointed out
that there were two different points of view about punishment in the organizational
behavior and management literature. Some studies have shown that punishment is
not a high priority choice for managerial application because the presumed negative
consequences may outweigh any benefits it renders [59]. It is reasoned that the use
of punishment by an organization would result in undesirable emotional side effects
such as anxiety, aggressive acts, or withdrawal. Moreover, employees might display
hostility toward and retaliate against the punishing agent in the organization. However,
empirical evidence found in other studies indicates that these presumed side effects
are particularly weak and might occur only in situations where the punishing agent
administers punishment indiscriminately. Arvey and Ivancevich [5] further pointed
out that punishment is a frequent and naturally occurring event in all of our lives and
that it shapes a large part of our psyche and behavior. Therefore, a careful examination of punishment, particularly factors influencing the effectiveness of punishment,
is necessary. Sims [59] argued that reward tends to have a much stronger effect on
employee performance and that punishment tends to be more of a result than a cause
of employee behavior. Proponents of punishment argue that punishment may serve
to uphold social norms within an organization, signal appropriate and inappropriate
behaviors to employees, and deter deviant acts [70]. Therefore, punishment as a deterrent strategy can actually result in positive outcomes.
Researchers and practitioners in the IS literature also recommend the use of deterrent
strategy against undesirable behavior such as computer abuse and noncompliance of
security policy. Drawing on theories in criminology, the GDT has been used in studies
on preventing and reducing computer abuse in organizations (e.g., [16]). Computer
abuse is a major source of security incidents that accounts for 50 percent to 75 percent of
all incidents originating from within an organization, and it causes significant financial
losses to the organization [16]. It has been found that direct punishment associated
with computer abuse leads to a decrease in abuse intention on the part of employees
when the perceived certainty of enforcement and perceived severity of punishment
increase. Straub [65] surveyed 1,211 organizations and found that besides preventive
security software, deterrent administrative procedures that focused on disincentives or
sanctions against computer abuse resulted in significantly lower computer abuse. In
two subsequent studies, Straub and his colleague [64, 66] provided similar suggestions
that punishment or disciplinary procedures can deter computer abuse.
At the same time, to promote desirable behaviors and improved performance, employers often use rewards. But, as with punishment, the effect of reward has been challenged
by scholarly research. According to the control theory, control is an important facet of
organizational design. A critical aspect of exercising control is a formally documented
statement articulating desirable behaviors or outcomes. Control can be accomplished
through evaluation and reward. Reward signals to employees that their work or behaviors meet the expectations of the organization [10, 21]. Eisenhardt [21] argued that in
organizations, reward is implicit. She noted that an organization’s emphasis on rewards
can capture “the reward linkage of control arrangements” [21, p. 138].
Reward can be viewed as a contract through which an organization can exercise
its control through intangible rewards (e.g., potential promotion, honor of being the
employee of the month) and tangible rewards (e.g., bonus and vacation). Eisenhardt
further pointed out that “the contracting emphasis makes rewards explicit” [21,
p. 138]. Not surprisingly, many scholars in organizational management believe that
the proper use of rewards as a means of controlling and managing behaviors and
performance can benefit organizations in various ways, such as directing employees’
behaviors, motivating employees, promoting excellence, attracting and retaining talent, and increasing job satisfaction (e.g., [28, 77]). Furthermore, when compared to
punishment, reward is capable of creating harmonious instead of hostile relations in
organizations.
However, the positive effects of reward have also been challenged by critics (e.g., [2,
40]) for a number of reasons. First, rewards may just facilitate temporary compliance.
It is a version of extrinsic motivators that seldom alter the attitudes that underlie
employees’ behaviors and do not create a lasting commitment [40, 58]. Once rewards
are gone, employees may revert to their old behaviors. Second, rewards have punitive
side effects. Employees may experience feelings of being controlled or manipulated
by managers. As a result, rewards could create a controlling, not a motivating, work
environment [40]. Rewards could also generate tense or hostile relations in an organization. Because outcomes or performance are not easily programmable or measurable given the complexity of tasks in organizations, a phenomenon of “divergence of
preferences (i.e., people side of control)” could occur [21, p. 136] when rewards are to
be decided. Determining how to reward is a judgment call by managers that depends
on their perspectives, values, and experiences. As a result, individual employees could
often be rewarded for the wrong things or not rewarded for the right things because
of divergence of preferences [6]. One of the worst situations could be a manager
rewarding employees not based on performance but on his or her personal relationship with those employees. In addition, employees often perceive rewards as being
drawn from a fixed or scarce resource pool; more for one person often means less for
another [77]. Therefore, reward could produce damaging reactions. Finally, rewards
can push employees to aim at individual gains instead of organizational goals [9]. Suppose the performance of the chief executive officer (CEO) of a company is evaluated
in terms of the stock price of the company and that his or her benefits, reputation, and
annual bonus depend on the performance. The CEO may manipulate the stock price
in order to have “look-good” performance, whereas his or her action actually may
be hurting the company. So, performance-based corruption control is emerging as an
organizational challenge [27].
Researchers and practitioners in the IS security literature also recommend reward as
a control mechanism for compliance. Boss et al. [10] pointed out that persistent issues
regarding compliance to security policies and procedures indicate that not all employees
of an organization regard those policies and procedures as mandatory and, therefore,
do not comply with them. In addition, the fact that security policies and procedures
are put in place does not necessarily mean that employees will interpret and comply
with those policies and procedures collectively and continue to adhere to them over
time. Rewards can send strong and additional signals to employees that compliance
with security policies and procedures is mandatory [10]. Recent research [13] also
suggests that reward, along with other types of benefits and costs, is an influencing
factor when employees make a rational choice of compliance or noncompliance. Thus,
rewards can help to enforce security policy compliance.
Because of undesirable side effects of punishments alone or rewards alone, many
organizations use both coercive and remunerative mechanisms in conjunction with
other control mechanisms to enforce compliance [22]. Many scholars (e.g., [4])
also believe that rewards and punishments are distinct forces to promote desirable
behaviors in social exchanges and that rewards and punishments interact. However,
findings regarding the joint effects of rewards and punishments remain inconsistent
in previous research. Andreoni et al. [4] found that punishments and rewards jointly
have a significant influence on cooperation, but punishments alone or rewards alone
have little or no influence on it. Nevertheless, Fehr and Schmidt [24], surprisingly, did
not find a significant interaction of bonuses and fines on the agents’ effort to carry
out the principals’ contracts.
Given the importance of security policy compliance for securing organizations
from various types of attacks and that reward and punishment are two common policy
enforcement strategies, it is necessary to understand if the effectiveness of these
strategies is different within the context of security policy compliance and, if so, in
what ways. In addition, inconsistent findings about the effectiveness of reward and
punishment from studies in other fields also suggest that we need further research on
the issue to provide more empirical evidence for organizational management to make
the right decision on security policy enforcement strategy.
Theoretical Frames and Hypotheses Development
The compliance theory of Etzioni [22] points out that compliance, which is a central element in organizations, refers to members of organizations acting as per their
organizational directives. To enforce compliance, organizations in general exercise
three types of control: coercive, remunerative, and normative [22]. In coercive control, organizations use threats and punishments (“the stick”) as a means to regulate
compliance and punish noncompliance. Remunerative control refers to a policy instrument by which organizations use some forms of economic incentives (“the carrot”),
such as bonus, promotion, and commissions, in exchange for members’ compliance.
When it comes to normative control, symbolic and moral reasoning behind compliance and values of compliance are emphasized [8, 22, 48]. Etzioni’s [22] definition
of coercive control is limited to the application or threat of physical force and pain,
and he defined remunerative control as the power of controlling material resources.
More recently, from a resource dependence perspective [54], researchers argue that
coercive control means using negative reinforcement strategy—depriving resources
valuable to employees upon noncompliance, while remunerative control rests on positive reinforcement strategy—gratifying employees with the addition of resources in
exchange for compliance [67]. Following this more practical perspective, we view
that coercive control is control of punishment and remunerative control is control of
rewards [15]. Organizations generally do not depend on just one type of control to
enforce compliance. Indeed, most organizations employ all three types of control, while
each organization may apply a different amount of each type of control. The choice of
organizational control to enforce compliance is complicated since, along with many
other organizational factors, the three types of control mechanisms themselves may
interact with each other to affect compliance.
Figure 1. Research Model
In the field of information security policy compliance, coercive control with its
underpinning in the general deterrence theory (GDT) from criminology dominates research and practice (e.g., [16,
35, 64, 65, 66]), while the effect of normative control, such as moral reasoning, on
employees’ information security policy compliance has also been found (e.g., [48]).
Remunerative control, in general, is missing in the field of information security policy
compliance, although it has drawn some attention recently (e.g., [10, 13]). Moreover,
we know little about the different effects of reward, punishment, and their interaction
on compliance intention when both of these control mechanisms are in place.
Conceptually, both coercive control and remunerative control are formal forms
of control, while normative control can be termed an informal form of control. In
formal control, written documents in the form of rules, goals, procedures, and regulations
are in place to specify desirable behaviors, whereas in normative control, organizational
values, norms, and culture are emphasized to influence compliance [17]. Since formal
control is dominant in organizations, we take it as the focus of this study.
With the above reasoning, we synthesize the compliance theory with the GDT by
introducing remunerative control—that is, reward—in this study. We propose that
both punishment and reward, certainty in enforcing these options, and the interactions
of these factors influence employees’ intention to comply with information security
policy. The research model is shown in Figure 1.
Punishment
As noted, the GDT suggests that sanctions or punishments could serve as a deterrence
mechanism against deviant behavior. This theory assumes that when potential violators are aware of organizational efforts to control undesirable behaviors, they are less
likely to commit a deviant act. Sanction or punishment efforts are measured by
two subconstructs: severity of punishment and certainty of punishment [65]. Severity of punishment refers to the degree of punishment associated with not complying
with security policy, and certainty of punishment refers to the probability of being
punished. If potential violators realize that the likelihood of being punished is high and
penalties for violation are severe, they are more likely to be deterred from engaging in
undesirable acts and to adhere to desirable ones. Otherwise, they may engage in such
deviant acts because the benefits of pursuing them may be great. For instance, surfing
the Internet in the workplace is more enjoyable than doing work, and sharing
a password among project team members is convenient. Findings in punishment
research also suggest that for punishment to be effective in organizations, it must start
out at a relatively severe level (e.g., [5]): in organizational contexts,
moderate or severe punishment may be more effective in curbing undesirable
behaviors than mild or no punishment. Hence,
Hypothesis 1: The level of punishment for not complying with security policies is
positively associated with the intention to comply with security policies.
Reward
There is both theoretical and empirical evidence that rewards can motivate employees
to improve performance, productivity, creativity, and compliance (e.g., [20, 22, 43]).
According to agency theory, agents (employees) are rational and self-interested
and, therefore, may act to maximize their own outcomes without extending effort
toward achieving the principal’s (the organization’s) goals [9, 43]. Reward structures,
when properly designed, can facilitate harmonizing the goals of agents and their
principal. Thus, rewards can be useful for altering the agents’ behaviors to realize the
principal’s goals.
Control theory also suggests tying rewards to desired behaviors [10, 21, 38]. Even if
security policies are stated and employees’ compliance is evaluated, compliance could
be poor in the absence of proper rewards for compliance. Employees could infer
that security policies are neither important nor mandatory because compliance or noncompliance makes no difference [10]. It may be noted that compliance with security
policies and procedures is traditionally not a part of merit-pay schemes that assess
performance [10] and rewarding security policy compliance may not yet be common
in organizations [13]. However, the importance of reward in promoting compliance
intentions and behaviors (e.g., [13]) and in signaling the moral standard of compliance [10]
has increasingly been discussed in the IS field. In addition, prior research on ethical
conduct and compliance management has found that even if performance of ethical
conduct and compliance is hard to measure, employees’ perceptions that ethical conduct and compliance are valued and would be rewarded are critical to creating an ethical
culture that can significantly improve the effectiveness of compliance programs [71].
Similarly, to promote compliance in organizations, using rewards to demonstrate
that desirable behaviors are recognized may create a security compliance culture and
thus significantly improve compliance [26]. Therefore, we would argue that without
rewards, the control signal for compliance to security policies and procedures could
be weak, and thus, desirable behaviors are not reinforced. A reward system tied to
security compliance sends a strong signal that compliance is mandatory, and thus
increases the intention to comply with stated security policies.
Hypothesis 2: The level of reward for complying with security policies is positively
associated with the intention to comply with security policies.
Certainty of Control
In both the punishment and reward literature, prior research suggests that certainty of
punishment or reward could directly affect the effectiveness of punishment or reward.
Employees analyze and infer the level of certainty based on how they interpret the
timing, schedule, and contingency of punishment and reward. Arvey and Ivancevich [5]
argue that the timing and schedule of punishment are two important determinants of
the effectiveness of punishment. Punishment deters undesirable behavior more
effectively when it is imposed immediately and consistently after each observed
violation than when it is delayed or applied inconsistently. In other words, if employees realize that their noncompliance
behaviors are continuously monitored and punished immediately and consistently,
their intention to comply with security policies will increase.
The IS security literature also indicates that to effectively deter noncompliance behaviors, monitoring such behaviors and imposing penalties upon detection
are necessary [33, 65]. Simply having security policies in place will do little to
change employees’ noncompliance behaviors if they believe those policies are not
enforced [53]. An organization’s deterrence efforts directly influence the employees’
corresponding compliance behaviors. If employees are aware that their organization
never really values their compliance behavior and never investigates their noncompliance behaviors, they may persist in current noncompliance behaviors because the
chance of being caught is low. High certainty of control sends signals to employees of
the organizational efforts to monitor, evaluate, and punish noncompliance behaviors.
Consequently, their intentions to comply will increase because the chance of being
caught and being punished is high.
Further, in a rewards policy context, organizational research has found that "instrumentality," the belief about the likelihood that an employee will obtain the
reward if he or she meets the performance expectation, is an influential factor in
motivation [74]. Policies stating the certainty that performance will result in rewards
augment the instrumentality [74]. Thus, we argue that linking certainty to reward is
often essential to shaping and maintaining desirable behaviors and attitudes toward
such behaviors.
From a control perspective, both reward and punishment are control mechanisms
to achieve organizational goals [21]—specifically, in this study, compliance with
security policies. Certainty of control is the probability of the enforcement strategy
materializing. If employees believe there is high certainty of control associated with
compliance or noncompliance, their intention to comply with security policies will
increase. Hence,
Hypothesis 3: Certainty of control will positively influence the intention to comply
with security policies.
Organizations’ Information Security Policy Compliance
Interactions: Punishment × Certainty of Control and
Reward × Certainty of Control
Theories of decision making under uncertainty suggest that when people make a
decision associated with an event, they consider not only the event itself but also its
probability: their cognitive process tries to capture both the impact and
the likelihood of the event, and they decide based on a utility function [55].
Applied to the context of this study, when employees make a decision on whether to
comply with their organization’s security policies, they evaluate and implicitly factor
the potential effects on their utility function of a loss (punishment) or gain (reward).
However, they evaluate not only the potential positive (negative) effect of reward
(punishment) but also the likelihood of reward (punishment). Moreover, organizations,
as complex systems, exhibit nonlinear patterns such as interaction effects because in
organizations almost every influential factor or event carries a probability of occurrence [3]. To specify such patterns, it is critical to assign a probability of occurrence
to the focal factor and examine the interaction terms [3].
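To make this utilitarian calculus concrete, it can be written out as a toy expected-utility model; the formalization below is ours, not the authors' (B, S, R, and p are illustrative symbols, not constructs measured in the study):

```latex
% Toy expected-utility sketch (our notation, not the article's):
% B = benefit of noncompliance, S = severity of punishment,
% R = reward for compliance, p = certainty of control.
EU(\text{violate}) = B - p\,S, \qquad EU(\text{comply}) = p\,R .
% A utilitarian employee intends to comply when
EU(\text{comply}) - EU(\text{violate}) = p\,(R + S) - B > 0 .
```

The product term p(R + S) is where the hypothesized interactions live: when p is near zero, even large changes in R or S barely move the decision, which mirrors the moderation logic behind the hypotheses that follow.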
Research in criminology points out that deterrence theory is based on a utilitarian
perspective, and an “interaction hypothesis is more consistent with the utilitarian
perspective” [31, p. 473] because sanction severity will have little or no effect on
those who do not perceive they will be caught. Inconsistent findings about the impact
of punishment severity and certainty from prior IS security research may further
indicate the existence of an interaction effect between the two factors. Similarly, it
has long been observed that the effect of rewards is moderated by the probability
that rewards will actually be offered [19]. To promote compliance, employees need
to realize that no matter the sanctions or rewards, enforcement is real and compliance
or noncompliance behaviors are acted upon [71]. Thus, we argue that the enforcement certainty influences an employee’s judgment about the effectiveness of reward
or punishment: the impact of the magnitude of difference in reward or punishment
will be moderated by the certainty of control. Hence, we offer the following two
hypotheses:
Hypothesis 4: The impact of punishment on the intention to comply with security
policies is moderated by the certainty of control: the difference in impact on
intention to comply between high and low levels of punishment contexts in high
certainty of control environments is smaller than in low certainty environments.
Hypothesis 5: The impact of reward on the intention to comply with security
policies is moderated by the certainty of control: the difference in impact on intention
to comply between high and low levels of reward contexts in high certainty of
control environments is smaller than in low certainty environments.
Notice that although it is tempting to simplify our model by multiplying reward and
punishment by their respective certainties, thereby circumventing the factor interaction
issue, we do not adopt this strategy because it would impose unsupported
assumptions about employees' risk posture. Once reward (or punishment) is multiplied
by the certainty factor to yield an expected value, continuing to use this value in the
decision calculus would necessitate the assumption of risk neutrality in the preference
of the employee. As such, the employee would be indifferent between a large reward
with low certainty and a small reward with high certainty because both prospects bring
the same expected value. Such indifference is, however, rare among ordinary people in
real life. We believe the imposition of risk neutrality might be
acceptable in cases where group preferences are modeled or the nature of risk aversion
is not the central focus of decision making. For example, in Siponen and Vance’s [61]
study, the central focus was to build and validate the neutralization theory-based
compliance model. Their use of expected penalty to simplify the deterrence theory
components that exist only for nomological completeness of modeling appears to
be of no real concern in their study. The GDT, however, is at the center of our model, and
the interplay of reward/punishment with certainty must be explicitly explored in the
absence of any assumption about the employee's risk posture.
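The risk-neutrality argument above can be illustrated numerically; this sketch is ours, with made-up prospect values, not data or code from the study:

```python
# Two prospects with equal expected value: a risk-neutral employee is
# indifferent between them, which is exactly the assumption the authors avoid.
def expected_value(reward: float, certainty: float) -> float:
    """Reward multiplied by its probability (risk-neutral calculus)."""
    return certainty * reward

def expected_utility(reward: float, certainty: float, u) -> float:
    """Expected utility under a utility function u encoding risk posture."""
    return certainty * u(reward)

large_uncertain = (100.0, 0.10)  # large reward, low certainty
small_certain = (10.0, 1.00)     # small reward, high certainty

# Equal expected values: 0.10 * 100 == 1.00 * 10 == 10.0
assert expected_value(*large_uncertain) == expected_value(*small_certain)

# Under a concave (risk-averse) utility such as sqrt, the sure small reward
# is strictly preferred, so the two prospects are no longer equivalent.
risk_averse = lambda x: x ** 0.5
print(expected_utility(*large_uncertain, risk_averse))  # 0.10 * 10 = 1.0
print(expected_utility(*small_certain, risk_averse))    # sqrt(10) ~= 3.16
```

Multiplying magnitude by certainty collapses both prospects to the same number only under risk neutrality, which is why the model keeps certainty as a separate moderating factor.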
Interaction: Punishment × Reward
Organizations as complex systems seldom use coercive control alone or remunerative control alone, but often use both to increase compliance [22]. It has long been
observed that organizations as well as individuals often use a combination of rewards
and punishments to enforce desirable relationships in social exchanges [4]. When both
reward and punishment are in the policy enforcement scheme, the joint effect is not as
simple as the two effects adding up or canceling each other out. In many cases, punishment
and reward interact with each other [22]. Prior research has found that punishments
alone or rewards alone have little or no influence on cooperation, but jointly they have
significant effects on cooperation [4]. Previous studies also found that punishment
could cause retaliation and hostile emotional reactions and that these reactions can
lead to strong resistance to compliance [46]. Sometimes, punishment is interpreted
by employees as “duress” so that their “perceptions of dispositional causation” are
diminished [32, p. 419]. Thus, adding a reward scheme to the punishment enforcement
can reduce such strong emotional reactions since reward can encourage cooperation
and boost self-esteem [4]. Furthermore, although many organizations predominantly
use coercive control, those coercive control mechanisms do not result in desirable
compliance behaviors unless they are used in conjunction with remunerative control
mechanisms [22, 62]. Ethical and compliance programs might be more effective if
they incorporate a reward system while having coercive control mechanisms in place
to follow up and punish noncompliance [71]. In other words, the effect of punishment
depends on reward. Further, if a person is threatened with punishment for noncompliance, his or her decision to comply is constrained by that threat: to
a certain extent, he or she has to comply. But if a
person is offered a reward for compliance, the decision is less constrained:
he or she retains the option of giving up the reward in exchange for noncompliance [32].
Thus, when the reward level is low, the level of punishment more dominantly influences
compliance intention than when the reward level is high. Hence, we offer the
following hypothesis:
Hypothesis 6: The impact of punishment on the intention to comply with security
policies is moderated by reward: the difference in impact on intention to comply
between mild and severe levels of punishment contexts in low levels of reward
environments is greater than in high levels of reward environments.
Research Methodology
Experiment Design
Given the nature of the hypotheses underlying this study, a Web-based experiment
involving real-world employees in their natural settings was deemed the most appropriate. We used a 2 × 2 × 2 mixed design. The first factor, punishment, was administered
at two levels (severe and mild). The second factor, reward, was also administered at
two levels (high and low). The third factor, certainty of control, was varied at two levels
(high and low). The first two factors, punishment and reward, are “within-subjects”
factors, and the third factor, certainty of control, is a "between-subjects" factor. A set of
eight (four each for high and low certainty of control) scenarios was designed to test
the main effects and interaction effects of these three factors. The participants were
randomly assigned to two groups: one exposed to the four high certainty of
control scenarios and the other to the four low certainty of control scenarios (between-subjects factor). Each set of four scenarios reflected the combinations of the levels
of the first two within-subject experimental factors, punishment and reward, as shown
in Table 1. For example, Scenario 1 describes the manipulation of a low level of reward
and the mild level of punishment. Scenario 2 describes the manipulation of the low
level of reward and the severe level of punishment, and so forth. To control for any
order effects due to repeated trials, we used a Latin square design to create the
design matrix shown in Table 1. Each participant in the corresponding group was
randomly assigned one of the four presentation orders in the Latin square design matrix.
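The cyclic Latin square used for counterbalancing is easy to generate programmatically; the sketch below is our own illustration (the function names are hypothetical, not from the study's materials):

```python
import random

def latin_square(n: int) -> list[list[int]]:
    """Cyclic n x n Latin square: row i presents scenario ((i + j) mod n) + 1
    in position j, so each scenario appears once per row and once per column."""
    return [[(i + j) % n + 1 for j in range(n)] for i in range(n)]

def assign_presentation_order(square: list[list[int]], rng=random) -> list[int]:
    """Randomly assign a participant one of the counterbalanced orders."""
    return rng.choice(square)

# The 4 x 4 square reproduces the matrix in Table 1:
# [1, 2, 3, 4], [2, 3, 4, 1], [3, 4, 1, 2], [4, 1, 2, 3]
```

Because every scenario occupies every serial position exactly once across the four orders, averaging over participants cancels first-order position and carryover effects.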
As to the experiment procedure, details of the security policies related to password,
e‑mail use, and Internet use of a hypothetical company (iCorp) were first presented to
all of the participants. The participants were asked to assume that they were employees of this company and to thoroughly read and understand the policies. This was
then followed by a series of four different case scenarios (for one of the two levels
of certainty of control noted earlier to which they were assigned); the participants
were asked to go through each scenario and then answer questions about their intention to comply with the security policies imposed in this hypothetical company, as
well as to answer the manipulation check questions (see the Appendix, statements
MANI‑C1, MANI‑C2, MANI‑P, and MANI‑R) after each case scenario. Finally, the
participants were asked to answer a set of questions related to the control variables
and demographic profiles.
Table 1. Latin Square Design Matrix

Order_1: Scenario 1, Scenario 2, Scenario 3, Scenario 4
Order_2: Scenario 2, Scenario 3, Scenario 4, Scenario 1
Order_3: Scenario 3, Scenario 4, Scenario 1, Scenario 2
Order_4: Scenario 4, Scenario 1, Scenario 2, Scenario 3

Note: Scenario 1, low reward and mild punishment; Scenario 2, low reward and severe punishment; Scenario 3, high reward and mild punishment; Scenario 4, high reward and severe punishment.
As noted, a hypothetical scenario technique was employed in this experiment design.
This research method has been widely used in IS research on a diverse range of topics such as software piracy [47], ethical IT use behavior [42], project escalation [37],
conveying bad news to project managers [63], IT outsourcing risks [69], and risk
perceptions in business process outsourcing [29]. Scenario-based techniques have
been commonly used in studying ethics-related security behaviors such as security
policy violation and computer abuse (e.g., [16, 61]). We made use of the scenario
analysis technique for a number of reasons. One primary reason is that real-world
companies are reluctant to allow their employees to divulge information
security–related information, for competitive and credibility reasons.
Another major reason is to avoid potential evaluation apprehension bias that
prompts respondents to provide ethically or socially desirable answers rather than
reveal their “unethical” intentions and behaviors (if any). Hypothetical scenarios that
tell another person’s story may help respondents to drop their guard and reveal their
true intentions [61]. We chose a design of multiple scenarios per respondent because
each scenario is associated with a relatively small number of survey items [36]. Following suggestions for scenario development in the literature [25, 75, 76], we used
a fractional design in which each participant is given four scenarios to avoid possible
information overload and fatigue. At the same time, we ensured that each participant
was exposed to an adequate number of scenarios so that we could properly manipulate
our independent variables [75]. We controlled the possible order and carryover effects
by using a Latin square design matrix for the random assignment of scenarios [75].
We extensively examined information security policy practices prevailing in
industry [35] and surveyed the existing literature to ensure that our scenarios were
realistic, familiar, and succinct, and that our corresponding findings were generalizable
based on the scenarios. In addition, the scenarios were pilot tested and commented on
twice by eight information security professionals and experts (see the Data Collection
section for details on the pretests). Since no “optimal” number of scenarios has been
suggested in the literature [76], we pilot tested the number of scenarios used in the
study to ensure its adequacy.
Finally, to ensure validity and reliability [11] as well as compatibility with extant
literature, the dependent variable, intention to comply with information security
policy, was measured by three seven-point Likert scale items adopted from Herath
and Rao [33], Ryan [57], and Venkatesh et al. [73] (see the Appendix).
Control Variables
It was necessary to control for influences of a number of variables to identify the true
effects of the study variables considered here. Previous research in the IS security
literature, for instance, suggests that individual characteristics such as age and gender
are related to security policy compliance intention [41, 66]. Therefore, we included
gender, age, and education as our control variables. Organizational security culture was
also included as a control variable because of potential differences in security policy
compliance among employees in different organizations [7, 16, 60]. For instance, in
financial and health-care institutions, because of the overall critical nature of information security to the business, organizational security cultures may exist within which
the value of information security is continually reinforced through daily practices and
routine training. However, this might not be the case for some firms in the manufacturing sector that may place less value on information security. Even within the same
industry (e.g., financial industry), financial institutions can vary in organizational
security cultures. Therefore, we felt it necessary to consider and control for the organizational security culture, measured by eight seven-point Likert scale items adapted
from Knapp et al. [39] (see the Appendix). In addition, we used organizational security
culture to control for a possible normative control effect, although it is not the focus
of this study. For the same reason, we controlled for the influences of organizational
security practices—measured by organizational security policy, security training, and
security monitoring, with four, four, and six seven-point Likert scale items, respectively,
adopted from D’Arcy et al. [16] (see the Appendix).
Data Collection
Following good research principles and practices, we conducted two pilot tests in two
U.S. Midwestern companies in the financial industry with eight information security
professionals who are responsible for their company’s information security policy
implementation. In these pilot tests, we validated each of the three two-level experimental factors as
well as the eight case scenarios and questionnaire design. Necessary modifications
and refinements based on the results of the two pilot tests were incorporated to ensure
robustness of the research design. Then, we conducted a third pilot test with three IT
professionals enrolled in an MBA class at a major university in the Midwest, and further
modifications and refinements were made based on the results of that test.
We recruited our participants from the same two Midwestern companies where the first
two pilot tests were done. A total of 50 employees, 25 from each company, participated
in our Web-based experiment; none of the pilot test participants were in this set. Each
participant was given four trials that followed the previously mentioned design. Thus,
the overall sample size for this research is 200, and each of the eight case scenarios
has 25 observations.
172
Chen, Ramamurthy, and Wen
Analyses and Results
Demographic and descriptive statistics of the participants show that there was a
good distribution of age and educational background of the participants; the median
age was about 35 years, and over 50 percent had at least an undergraduate degree.
The participants had, on average, been with their firms for 7 years and in the profession for over 15 years. This profile suggests that the participants in this study
were mature, educated, experienced, and knowledgeable; thus, their responses can
be considered dependable and used with confidence. Because the number
of female participants (N_female = 33) was more than twice the number of male participants (N_male = 14; three employees did not reveal their gender), we tested for
any significant difference in compliance intention between genders and found no
significant difference (p = 0.485).
The convergent and discriminant validity of the four control constructs and the
dependent construct were assessed by carrying out exploratory factor analyses (EFA)
with varimax rotation of the extracted factors; this was followed by testing the reliability of
the constructs by examining Cronbach's alpha values. Following guidelines laid down
in previous research [14, 45], we dropped indicator items with low loadings (less than
0.60) and with high cross-loadings (greater than 0.40). The final loadings and cross-loadings
matrix from the EFA as well as the eigenvalues and Cronbach's alpha values
of the constructs are shown in Table 2.
The results of the emergent 5-factor structure met the above criteria, with all the
predefined items loading on their corresponding latent variables (2 out of 27 indicators measuring the 5 constructs were dropped during scale refinement), supporting
discriminant validity [45]. Among the 5 constructs, the minimum eigenvalue was 1.57
(for compliance intention), greater than the recommended value of 1, which verifies
the convergent validity of each construct. All Cronbach's alpha values exceeded the
cutoff value of 0.70 [50], thus supporting the reliability of all 5 constructs.
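The item-retention rule described above (drop items loading below 0.60 on their intended factor or above 0.40 on any other factor) can be sketched as a simple filter; this is our illustration, not the authors' analysis code:

```python
def retain_items(loadings: dict[str, list[float]],
                 primary_factor: dict[str, int],
                 min_loading: float = 0.60,
                 max_cross_loading: float = 0.40) -> list[str]:
    """Keep items whose loading on their intended factor is high enough
    and whose loadings on all other factors are low enough."""
    kept = []
    for item, loads in loadings.items():
        p = primary_factor[item]
        if abs(loads[p]) < min_loading:
            continue  # weak loading on the intended factor
        if any(abs(l) > max_cross_loading
               for j, l in enumerate(loads) if j != p):
            continue  # cross-loads too heavily on another factor
        kept.append(item)
    return kept
```

Applying such a filter to a loadings matrix like Table 2 flags weakly loading or cross-loading indicators for the kind of scale refinement the authors describe (2 of their 27 indicators were dropped).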
Manipulation checks of the independent variables—reward, punishment, and certainty—and of the order effect were performed by running one-way ANOVAs (analyses
of variance). We first ran three one-way ANOVAs on the manipulation check questions
of punishment (MANI-P), reward (MANI-R), and certainty (mean of MANI-C1 and
MANI-C2) by the two levels of reward, punishment, and certainty, respectively (see
the Appendix for the details of manipulation statements MANI-P, MANI-R, MANI-C1, and MANI-C2). As shown in Table 3, the results provide strong evidence that the
manipulations of the three independent variables were correctly interpreted by the
participants as originally anticipated. The differences between the manipulations were
all significant (p < 0.001). We then ran a one-way ANOVA on the dependent variable
of compliance intention and manipulation check questions of reward, punishment,
and certainty by the order shown in Table 1. The results presented in Table 4 show
that the order of presenting the four scenarios had no significant effects on the major
variables of this study (p > 0.05), except for perceived severity of punishment. The
post hoc analysis shows that only Orders 2 and 3 were marginally different from each
other (p = 0.069). Therefore, we argue that our manipulations are successful and that
the order effect is not an issue in this study.
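The one-way ANOVAs used for these manipulation checks reduce to comparing between-group and within-group variance; the following is a minimal stdlib-only sketch of that computation (ours, not the authors' code):

```python
def one_way_anova_f(groups: list[list[float]]) -> tuple[float, tuple[int, int]]:
    """Return the F statistic and (df_between, df_within) for a one-way ANOVA."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n_total - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, (df_between, df_within)
```

With the study's two certainty groups of 100 participants each, k = 2 and the degrees of freedom are (1, 198), matching those reported in Table 3.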
Table 2. Validity (Joint Factor Analysis) and Reliability Test Results

                               F1:       F2:         F3:       F4:       F5:
                               Security  Security    Security  Security  Compliance
Indicator items                culture   monitoring  policy    training  intention
Compliance_Intent1              0.030    –0.009      –0.041     0.040     0.934
Compliance_Intent2              0.034     0.031      –0.005     0.059     0.968
Compliance_Intent3              0.009     0.146       0.029    –0.005     0.932
Security_Culture1               0.756     0.263       0.136     0.231     0.090
Security_Culture2               0.769     0.187       0.147     0.295     0.055
Security_Culture3               0.817     0.097       0.165     0.114     0.179
Security_Culture4               0.902     0.132       0.058     0.082    –0.031
Security_Culture5               0.848     0.128       0.023     0.321    –0.063
Security_Culture6               0.669     0.334      –0.032     0.503     0.175
Security_Culture7               0.922     0.119       0.157     0.009    –0.021
Security_Culture8               0.811    –0.067       0.071    –0.072     0.002
Security_Policy1                0.148     0.020       0.936     0.088     0.040
Security_Policy2                0.344     0.087       0.779     0.235    –0.009
Security_Policy3               –0.075     0.273       0.695     0.092    –0.033
Security_Policy5                0.055     0.045       0.866     0.192     0.031
Security_Training2              0.239     0.349       0.027     0.744     0.107
Security_Training3              0.117     0.207       0.287     0.717     0.036
Security_Training4              0.250    –0.110       0.191     0.697    –0.011
Security_Training5              0.160     0.261       0.377     0.642    –0.294
Security_Monitoring1            0.208     0.616       0.187     0.297    –0.080
Security_Monitoring2            0.150     0.734       0.103     0.111    –0.015
Security_Monitoring3           –0.030     0.681       0.295     0.326     0.153
Security_Monitoring4            0.142     0.857      –0.007    –0.094     0.140
Security_Monitoring5            0.099     0.791       0.101     0.299     0.117
Security_Monitoring6            0.278     0.816      –0.031     0.091    –0.143
Eigenvalue                     10.239     3.549       3.129     2.823     1.571
Variance explained (percent)   35.31     12.24       10.67      9.94      5.42
Cumulative variance (percent)  35.31     47.55       58.33     68.07     73.49
Cronbach’s alpha                0.950     0.882       0.863     0.823     0.956

Note: Items of a common factor appear together in boldface.
A repeated-measures ANOVA with a between-subjects factor was performed to test
our hypotheses. The results in Table 5 show that the main effect of punishment was
significant (F(1, 48) = 5.07, p = 0.029), supporting H1 that the severity level of the punishment enforcement policy has a significant effect on policy compliance intention. The
results strongly support H2 that the level of reward can significantly affect employees’
compliance intention (F(1, 48) = 12.73, p = 0.001). H3, which tests the main effect of
enforcement certainty, was supported as well (F(1, 48) = 6.07, p = 0.017). The two-way
interaction between punishment and certainty of control was significant (F(1, 48) = 3.12,
p = 0.084). Further, Plot A in Figure 2 indicates that the impact difference between the
high and low levels of punishment in the high certainty of control condition
was smaller than in the low certainty condition. Therefore, H4 was supported.

Table 3. Manipulation Checks of Independent Variables

                                  Low certainty   High certainty
                                  (n = 100)       (n = 100)
Study variables                   Mean (SD)       Mean (SD)       F-value (df)         Significant difference
Perceived severity of punishment  3.51 (1.76)     5.66 (1.68)     77.90*** (1, 198)    Yes
Perceived significance of reward  3.48 (1.83)     5.81 (1.28)     108.72*** (1, 198)   Yes
Perceived enforcement certainty   4.75 (1.46)     5.74 (1.23)     26.78*** (1, 198)    Yes

Notes: SD = standard deviation; df = degrees of freedom. *** p < 0.01.

The two-way interaction between punishment and reward was also significant
(F(1, 48) = 7.67, p = 0.008), strongly supporting H6.
Plot B in Figure 2 provides further graphical demonstration, supporting H6 that the
impact of punishment on the intention to comply with security policies is greater when
reward is low than when reward is high. However, the two-way interaction between
reward and certainty of control (H5) was not supported (F1,48 = 0.68, p = 0.414), as
shown in Table 5. Plot C in Figure 2 graphically shows that the impact difference
between the high and low reward on compliance intention is statistically the same
at the high and low levels of certainty of control. Note that since we pooled data
from two organizations, we also ran a one-way ANOVA on the dependent variable
of compliance intention and manipulation check questions of reward, punishment,
and certainty level by organization type (the two organizations were coded as 2 and
3, respectively). The results show that participants from the two organizations were
not significantly different on the major variables of this study (p > 0.05) except for
perceived certainty of control. However, we still controlled for organization type in
our analysis. All the control variables as well as organization type were initially input
as covariates, and the results show that their main effects were insignificant; therefore,
we did not include them in further analysis.
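The manipulation checks reported in Table 3 are one-way ANOVA F-tests comparing a low and a high condition group (df = 1, 198). As a minimal sketch of how such an F statistic is computed — using made-up group scores, not the study's data — in pure Python:

```python
# Minimal one-way ANOVA F statistic for independent groups.
# Illustrative only: the numbers below are synthetic, not the study's data.

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: size-weighted squared mean deviations
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Two hypothetical "low" and "high" manipulation-check groups:
low = [3.0, 4.0, 3.5, 2.5, 4.5]
high = [5.5, 6.0, 5.0, 6.5, 5.0]
f, df1, df2 = one_way_anova_f([low, high])
print(round(f, 2), df1, df2)  # 21.0 1 8
```

With only two groups, this F equals the square of the independent-samples t statistic, which is why the manipulation checks report a single F per variable.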
Table 4. Manipulation Check—Scenario Presentation Order by Study Variables

Study variables                     Order-1 (n = 48)   Order-2 (n = 72)   Order-3 (n = 36)   Order-4 (n = 44)   F-value (df)          Significant difference¹
                                    Mean (SD)          Mean (SD)          Mean (SD)          Mean (SD)
Perceived severity of punishment    6.19 (1.24)        6.19 (1.24)        6.37 (1.14)        6.19 (0.80)        0.24 n.s. (3, 196)    None
Perceived significance of reward    4.81 (1.79)        4.07 (2.14)        5.11 (1.97)        4.75 (2.00)        2.73* (3, 196)        2–3
Perceived enforcement certainty     4.38 (2.11)        4.75 (2.03)        4.64 (1.82)        4.77 (1.82)        0.43 n.s. (3, 196)    None
Compliance intention                5.02 (1.75)        5.42 (1.34)        5.04 (1.38)        5.37 (1.20)        1.10 n.s. (3, 196)    None

Notes: SD = standard deviation; df = degrees of freedom. ¹ Bonferroni as well as Scheffe tests of paired contrasts. * p < 0.1; n.s. = not significant.

Table 5. Summary of ANOVA Results and Hypotheses Test Results

Hypothesis                               Mean square   F-value   p-value      Support?
H1: Punishment × Intention               2.35          5.07      0.029**      Yes
H2: Reward × Intention                   7.61          12.73     0.001***     Yes
H3: Certainty × Intention                31.21         6.07      0.017**      Yes
H4: Punishment × Certainty × Intention   1.45          3.12      0.084*       Yes
H5: Reward × Certainty × Intention       0.41          0.68      0.414 n.s.   No
H6: Punishment × Reward × Intention      2.21          7.67      0.008***     Yes

Notes: df = 1, 48. * p < 0.1; ** p < 0.05; *** p < 0.01; n.s. = not significant.

Figure 2. The Plots of Interaction Terms

Discussion

Overall, the results confirm support for five of the six hypotheses, substantively supporting our theoretical model. As hypothesized, we found that the main effects of severity of punishment, significance of reward, and certainty of control were all significant. Beyond lending further support to the GDT claim that severity and certainty of punishment deter employees from security policy violation, the study's findings highlight that reward enforcement, a remunerative control mechanism in the IS security context, could be an alternative for organizations where sanctions do not
successfully prevent violation. Indeed, our respondents' answers to an open, voluntary question at the end of the survey revealed that they dislike the "unmotivated atmosphere" caused by their organization's purely sanction-based enforcement policy (see the comment below as one example). Such sentiments may indicate the ineffectiveness of coercive control in current IS security policy compliance efforts:
Our organization doesn’t reward good practices on the computer, but will punish
you. Very unmotivated atmosphere.
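The two-way interactions reported above (e.g., punishment × reward, H6) amount to a difference of differences across cell means: the effect of punishment when reward is low, minus its effect when reward is high. A small sketch of that reading, with hypothetical cell means chosen only to mirror the H6 pattern (they are not values from this study):

```python
# Difference-of-differences reading of a 2x2 interaction.
# Cell means below are hypothetical, illustrating the H6 pattern that
# punishment raises compliance intention more when reward is low.

def interaction_contrast(hh, hl, lh, ll):
    """(punishment effect at low reward) - (punishment effect at high reward).

    hh: high punishment, high reward    hl: high punishment, low reward
    lh: low punishment, high reward     ll: low punishment, low reward
    """
    effect_at_low_reward = hl - ll
    effect_at_high_reward = hh - lh
    return effect_at_low_reward - effect_at_high_reward

# Hypothetical compliance-intention cell means on a 7-point scale:
contrast = interaction_contrast(hh=5.8, hl=5.5, lh=5.4, ll=4.2)
print(contrast)  # positive -> punishment matters more when reward is low
```

A positive contrast corresponds to Plot B's pattern; a contrast near zero would correspond to the null H5 result for reward × certainty.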
Another important finding is support for the interaction effect between severity of
punishment and certainty of control. As two important factors of the GDT, their direc...