Application 1 – Analysis and Synthesis of Prior Research

Requirements Discovery, Systems Modeling, and Architectural Design 

At professional conferences, blocks of time may be set aside for what are termed “poster sessions.” A hotel ballroom or large open area will be ringed with individuals who use displays such as posters or electronic presentations displayed via projectors. These sessions provide an opportunity to share one’s research in an intimate setting, with a small group gathered around who share a similar interest. The seminar format of this course is very similar to this academic exchange. During one set of paired weeks, you will be appointed as a Group Leader. If you are one of the Group Leaders for this week, you are to prepare an academic presentation, much like a poster session.

Your presentation should present analysis and synthesis of prior research and will begin the interaction with your colleagues. You will prepare an academic paper of 5–7 pages in APA format, as well as a PowerPoint presentation of 7–10 slides. This analysis will be an open-ended introduction to relevant topics of study regarding systems design, analysis, and implementation. Your goal, as the presenter, should be to persuade your discussants that the approach(es) you have analyzed and synthesized is/are a sound means for discovering new methods in the field. You should acknowledge that there are other models or means to study various types of systems, but you should strive to be as persuasive as possible that the specific concepts you have reviewed are exciting research avenues and that they are potentially breakthrough areas for advancing the understanding of systems development.

Your paper and presentation should contain the following elements:

  • An incorporation and analysis of at least 5 of the Required Resources from this pair of weeks;
  • The incorporation and analysis of 5 additional resources from the Walden Library;
  • An identification of principal schools of thought, tendencies in the academic literature, or commonalities that define the academic scholarship regarding your topic;
  • An evaluation of the main concepts with a focus on their application to management practice and their impact on positive social change;
  • Direct evidence of addressing the learning outcomes from this pair of weeks.

In addition to the above elements, the Group Leader(s) for this week will focus thematically on:

  • Articulating the relationship between requirements and requirements specification;
  • Identifying the role of management in requirements elicitation, analysis, and validation;
  • Conceptualizing the differences among the four software system modeling perspectives: external, interaction, structural and behavioral;
  • Analyzing the role of software architecture and architectural design in organizing and designing a software system.

HOW INTUITIVE IS OBJECT-ORIENTED DESIGN?

By Irit Hadar and Uri Leron

Intuition is a powerful tool that helps us navigate through life,
but it can get in the way of more formal processes.

The object-oriented programming paradigm was created partly to deal with the ever-increasing complexity of software systems. The idea was to exploit the human mind's natural capabilities for thinking about the world in terms of objects and classes, thus recruiting our intuitive powers for building formal software systems. Indeed, it has commonly been assumed that the intuitive and formal systems of objects and classes are similar and that fluency in the former helps one deal efficiently with the latter. However, recent studies show that object-oriented programming is quite difficult to learn and practice [1, 3, 7]. In this article, we document several such difficulties in the context of experts participating in workshops on object-oriented design (OOD). We use recent research from cognitive psychology to trace the sources of these difficulties to a clash between the intuitive and analytical modes of thinking.


Recent research in cognitive psychology shows that people consistently make mistakes on simple everyday tasks, even when the subjects are knowledgeable, intelligent people, who undoubtedly possess the necessary knowledge and skills to perform correctly on those tasks. The source of these mistakes is often shown to be the insuppressible influence of intuitive thinking. This research, the heuristics and biases program, has been carried out by Kahneman and Tversky and others during the last 30 years, and has led to Kahneman's receiving the 2002 Nobel Prize in economics.1

1 Tversky unfortunately died several years earlier.

In his Nobel Prize lecture, Kahneman opened with the following story:

A baseball bat and ball cost together one dollar and 10 cents. The bat costs one dollar more than the ball. How much does the ball cost?

Almost everyone reports an initial tendency to answer "10 cents" because the sum $1.10 separates naturally into $1 and 10 cents, and 10 cents is about the right magnitude. Indeed, many intelligent people yield to this immediate impulse: 50% (47/93) of Princeton students and 56% (164/293) of students at the University of Michigan gave the wrong answer [2, 4].

What are our mind's mechanisms that may account for these empirical findings? One current influential model in cognitive psychology is Dual-Process Theory [4, 10, 11]. According to this theory, our cognition and behavior operate in parallel in two quite different modes, called System 1 (S1) and System 2 (S2), roughly corresponding to our common sense notions of intuitive and analytical thinking.

These modes operate in different ways, are activated by different parts of the brain, and have different evolutionary origins (S2 being evolutionarily more recent and, in fact, largely reflecting cultural evolution). S1 processes are characterized as being fast, automatic, effortless, unconscious, and inflexible (difficult to change or overcome). In contrast, S2 processes are slow, conscious, effortful and relatively flexible. In addition, S2 serves as monitor and critic of the fast automatic responses of S1, with the "authority" to override them when necessary. In many situations, S1 and S2 work in concert, but there are situations (such as the ones concocted in the heuristics and biases research) in which S1 produces quick automatic non-normative responses, while S2 may or may not intervene in its role as monitor and critic.
A brief analysis of the bat-and-ball data can demonstrate the usefulness of dual-process theory for the interpretation of empirical data. According to this theory, we may think of this phenomenon as a "cognitive illusion" analogous to the famous optical illusions from cognitive psychology. The surface features of the problem cause S1 to jump immediately with the answer of 10 cents, since the numbers one dollar and 10 cents are salient, and since the orders of magnitude are roughly appropriate. The roughly 50% of students who answer 10 cents simply accept S1's response uncritically. For the rest, S1 also jumps immediately with this answer, but in the next stage, S2 interferes critically and makes the necessary adjustments to give the correct answer (five cents).
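For readers who want the arithmetic spelled out, write b for the price of the ball in dollars, so the bat costs b + 1.00 and the two conditions of the problem give:

```latex
b + (b + 1.00) = 1.10 \quad\Longrightarrow\quad 2b = 0.10 \quad\Longrightarrow\quad b = 0.05
```

The ball therefore costs five cents and the bat $1.05; the intuitive answer of 10 cents would make the bat cost $1.10 and the total $1.20.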
Recently, a similar phenomenon has been found in advanced mathematical thinking, with college students learning abstract algebra [6]. While it seems natural that people in everyday situations prefer (however unconsciously) quick approximate responses that come easily to mind over careful systematic rule-bound reasoning, students solving mathematical problems during a university course would be expected to consciously train their methodological thinking to check, and override if necessary, their immediate intuitive responses. From these findings we may understand the strong influence intuition, especially its tendency to be influenced by surface clues, has on our thinking. In this article we demonstrate that a similar phenomenon—and a similar explanation—may also hold for OOD tasks carried out by experts in industry.
A note on terminology: We follow Kahneman and
other cognitive psychologists in using “intuition” in
its folk meaning of everyday thinking. This meaning
is elaborated in the description of System 1, and is
mainly used in contradistinction to analytical thinking or to reasoning. The title of this article should
thus be understood as an inquiry into the nature of
the gap between the everyday “natural” meaning of
objects and categories vs. their formal meaning in
OOD. It should further be noted that intuition may
have different meanings in different contexts. For
example, our use of the term is quite different from
the way a mathematician might use it when he or she
says: “I had the intuitive idea of the proof long before
I was able to complete the formal proof.”

INTUITIVE THINKING IN OOD
OOD is a complex domain, requiring formal training and effortful thinking, which is just the kind of process System 2 would be expected to appropriate. However, our research indicates that here too the automatic, quick, and effortless operation of System 1 may hijack software developers' attention and lead them to decisions that are not adequate and may even clash with their own knowledge.

We discuss several examples of this phenomenon exhibited by experienced software developers in industry while practicing design activities, and explain them in light of the dual-process theory. We invoke this theory in the domain of OOD in an attempt to understand the relatively elementary mistakes we observed in the responses of intelligent capable professionals, even in cases when they have the necessary knowledge to avoid such mistakes.
Our observations took place within advanced UML workshops [8] conducted in the industry. During these workshops the participants were asked on several occasions to analyze simple design tasks. The participants worked on these tasks either individually or in small groups, and their solutions were subsequently discussed within the whole group. Our data includes the written solutions of the participants in the workshops, documentation of their group discussions as observed and documented by the researchers, and transcripts of class discussions. The research population included 41 software developers with experience of 2–12 years in OO development. Because our objective was to describe a complex situation in its natural settings and its full complexity, we have used the qualitative research paradigm [12], which focuses on case studies for obtaining specific insights rather than on large populations, simplified experiments, and statistical methods for discovering universal laws. (This is analogous to the methods used by anthropologists studying unfamiliar cultures.) During the research we documented, videotaped, and analyzed many relevant incidents and processes. The data analysis included coding the data obtained, and characterizing and classifying it into emerging categories. The full research findings and evidence will be described elsewhere. Here, we provide a selection of examples to demonstrate our findings.
Confusing the direction of inheritance. A design task concerning a hotel reservation system was presented for discussion to a group of experienced engineers participating in a UML workshop. The instructor suggested using three classes (email, fax, and phone) to represent the three corresponding modes of entering reservations. The possibility then arose of using inheritance relations between concrete classes to exploit shared functionality and features, such as checking the availability of a room. For example, the class fax could inherit from the class email, since a fax object requires more handling (such as scanning and digitizing), hence has more functionality, than an email object.

Instructor: Under the restriction that for now we
only use these three classes, can any of these
classes inherit from another class? Can we use the
fact that they have many things in common?
[The participants hesitate]…
Instructor: For example, fax is like email, only with
a few more tests.

Dan: Email inherits from fax, because email is the
same as fax, only with fewer tests.

Instructor: So, email has less functionality than…
Dan [hastily]: Oh, right, it should be the other way
around.

In view of this and similar observations, we presented a group of 10 software developers with a similar question in order to check this phenomenon more directly. The answers were divided 5:5 between the two possible directions of inheritance. Significantly, as in the bat-and-ball and in Dan's case, the participants who chose the wrong direction required only a small nudge (with no informational or explanatory content) to quickly change their mind.

Analysis: All the participants in the research have several years of experience in OO software development. Why do intelligent and experienced professionals have difficulties with such an elementary issue? We propose that the same mechanism used by Kahneman to explain the bat-and-ball phenomenon is also in operation here. Specifically, S1 with its quick and effortless operation "hijacks" the thinking process and produces a response that seems roughly appropriate, while the slow and effortful S2 remains dormant. This analysis gets additional support from the observation that the small cue offered by the instructor didn't teach the participant anything new, only served to wake up S2; the necessary knowledge was there all along, but the dual system analysis is needed to explain why it was not mobilized.

Why would S1 and S2 clash about the meaning of inheritance? In people's everyday intuition (S1), inheritance is about transferring "stuff" (such as property or money), and the direction is usually from the person who has more to the one who has less. For example, in an informal poll we asked students, in the context of OOD, what is the relation between a doctor and a paramedic in an ambulance? A typical reaction was, "paramedic inherits from the doctor because the doctor has more qualifications." Similarly, we predict that most people would say that a student "inherits" from the professor (because the professor has more knowledge) and not vice versa. But in the OOD formalism (S2), the reverse is true: the class with more functionality inherits from the one with less.
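To make the formal direction concrete, here is a minimal Java sketch of the instructor's email/fax example. The class and method names (EmailReservation, checkAvailability, scanAndDigitize) are illustrative stand-ins, not code taken from the workshops.

```java
// The reservation mode with the *least* functionality is the base class.
class EmailReservation {
    boolean checkAvailability(String roomType) {
        // shared behaviour: check whether a room of this type is free
        return true; // placeholder
    }
}

// A fax reservation needs more handling (scanning, digitizing), so in the
// OOD formalism it is fax that inherits from email -- the opposite of the
// everyday intuition that "the one who has more" passes things down.
class FaxReservation extends EmailReservation {
    void scanAndDigitize(byte[] faxImage) {
        // extra functionality that only fax reservations need
    }
}
```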
Difficulties in identifying objects. One of the first tasks in OOD is "carving a given scenario at its joints" in terms of objects and classes. In one of the workshops the participants were asked to design an authorization system that will route users as follows:

• An existing user will login into the system.
• A new user will register and receive authorization.

A typical design would look like the accompanying figure. [Figure: A design of an authorization system, showing a Client class (attribute Name; operations Login and Register) and a Server class (operations ValidateUser and InsertNewUser).] The following discussion took place while the participants were working in pairs on the task.

Ron: Let’s define login and register as objects.
Sharon: Do login and register seem like objects to
you?

Ron: Why not?
Sharon: An object is a client, for example.

Ron: Client is also an object. Login and register are
activated and operate within the system; therefore
they can be defined as objects.

Sharon: I’ve never seen an object login.
Ron: Don’t worry, it will be okay. You’ll see how I
design the system; it will be just fine.

Sharon [hesitates, at last reluctantly giving in]:
Okay, fine, although it doesn’t sound good.

Analysis: Ron's decision is a typical S1 behavior, similar to that observed in the bat-and-ball task. In searching for objects he is influenced by the surface features of the task (the salience of the terms login and register in the task description) rather than its essential (though implicit) components. Unlike the bat-and-ball phenomenon, Ron requires more than a nudge to change his mind, which seems to imply that his S2 knowledge in this regard is not too firm either.

Sharon, in contrast, seems to have a firmer sense of the right objects, but this too is S1 knowledge, in the sense that she cannot explain her choice. Her attempts at convincing Ron involve expressions like "I've never seen an object login," and "it doesn't sound good," which show that she relies on her vast past experience (S1) rather than on analytical rule-based reasoning (S2). Sharon's example, in contrast to the other examples presented in this article, demonstrates how using intuition may in fact contribute positively, even in situations of formal problem solving.
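A minimal Java sketch of the design in the figure, with login and register modelled as operations of a Client object rather than as objects in their own right; the signatures and bodies are hypothetical and only illustrate the point under discussion.

```java
// Client and Server are the objects; login and register are behaviours.
class Server {
    boolean validateUser(String name, String password) { return true; }   // placeholder check
    void insertNewUser(String name, String password) { /* persist the new user */ }
}

class Client {
    private final String name;

    Client(String name) { this.name = name; }

    // "Login" and "Register" from the task description become methods,
    // not classes: they are things a client does, not things that exist.
    boolean login(Server server, String password) {
        return server.validateUser(name, password);
    }

    void register(Server server, String password) {
        server.insertNewUser(name, password);
    }
}
```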

CONCRETIZING ABSTRACT CLASS
Confusing characteristics of abstract and concrete classes. An abstract class is a class with at least one virtual function. Thus one can't instantiate concrete objects directly from an abstract class, but only through a (concrete) inheriting class. In this example, Rebecca chose to define an abstract class car, and the following discussion ensued.


Rebecca: Let's say car is an abstract class. Then, in one design I can inherit from it Chevrolet and Rolls-Royce, and in another design I will instantiate an object car with manufacturer value Chevrolet.

Instructor: Is car an abstract class?
Rebecca: No, yes, that’s not the point…

In a subsequent interview with Rebecca, the
researcher probed the matter further.

Researcher: Rebecca, what did you mean by the car
example?

Rebecca: I just tried to show that there are two
design possibilities using an abstract class, but I
got mixed up.

Researcher: What was the problem?
Rebecca: I wanted to show that you can instantiate objects with parameters instead of using inheritance tree… but it didn't work out.

Researcher: Why?
Rebecca: Because the moment I instantiate objects, I
cannot define the class as abstract.

We note that this was not an isolated case. While it seems that the participants in this study recognize the distinction between abstract and concrete classes in theory, several cases were observed where they referred to abstract classes as if they had the characteristics of concrete classes. Even in some written solutions, we found cases where an abstract class was defined but was subsequently used as a concrete class.
Analysis: Rebecca knows the difference between concrete and abstract classes, but this is S2 knowledge. Our interpretation of how S1 worked in this example follows from the dual nature of the relationship between the natural and the formal conceptual framework concerning categories and objects. On the one hand, OOD builds on the intuitions of the natural concepts, but on the other hand, the natural system sometimes clashes with the formal one. We propose that this is what happened in Rebecca's case. Specifically, in the natural categorization system [5], there is no parallel for the formal OOD concept of abstract class (a class from which no concrete objects can be instantiated). Hence, when Rebecca's S2 was not on guard, her S1 took over and slipped from abstract to concrete class. As before, a small nudge was enough to wake up S2 and lead Rebecca to make the necessary distinction.
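Rebecca's two alternatives can be made concrete in a short Java sketch; the class and member names are hypothetical. The point is that the two designs exclude each other: only the non-abstract version of car can be instantiated directly.

```java
// Design 1: Car is abstract, so objects can only be created through
// concrete subclasses such as Chevrolet or RollsRoyce.
abstract class Car {
    abstract int passengerCapacity();
}
class Chevrolet extends Car {
    int passengerCapacity() { return 5; }
}
class RollsRoyce extends Car {
    int passengerCapacity() { return 4; }
}

// Design 2: Car is concrete and the make is just a parameter value.
class ParameterizedCar {
    final String manufacturer;
    ParameterizedCar(String manufacturer) { this.manufacturer = manufacturer; }
}

class Demo {
    public static void main(String[] args) {
        Car c1 = new Chevrolet();                                 // fine
        // Car c2 = new Car();                                    // compile error: Car is abstract
        ParameterizedCar c3 = new ParameterizedCar("Chevrolet");  // fine, but this car class is not abstract
    }
}
```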
Identifying software development with coding. Coding is an important software development activity, but other, no less important, activities contribute to software development, such as requirements analysis, design, and testing. We observed participants underweighting these other activities, to the extent of identifying software development with coding.2 The following discussion occurred in an interview regarding time invested in different activities:

Ann: Most of the time I was occupied with development.

Researcher: What do you mean development?
Ann: You know, writing the code. For me coding and developing are the same thing, even though I know this is not correct.

Analysis: Ann's first automatic response, that developing is the same as coding, is an S1 response. S1 consists of what is most accessible and what comes most easily to mind; here the view of development as coding comes to mind, presumably because the code is the final and most tangible product of the whole process, while the other components (such as design and requirement analysis) are less conspicuous. The interviewer's question served as a nudge that woke up S2, hence the utterance: "though I know it is not correct." In fact, her second pronouncement is a good demonstration of an actual clash between the two systems: S1 expressing the view that "coding and developing are the same thing," but simultaneously, S2 objecting that "I know it is not correct."

HOW INTUITIVE IS OO DESIGN?
So, how intuitive is OOD? Well, in a certain sense it indeed is intuitive: our cognitive system certainly makes extensive use of objects and categories, on which this paradigm is built. However, as often happens in the evolution of formal systems, this relationship has a flip side [9]. Under the demands of abstraction, formalization, and executability, the formal OO paradigm has come to sometimes clash with the very intuitions that produced it. Thus, while objects, classes, and inheritance certainly have an intuitive flavor, their formal version in OOD is different in important ways from their intuitive origins.

Dual-process theory, imported from contemporary cognitive psychology, highlights the underlying mechanism of those situations where our intuitions clash with our more disciplined knowledge and reasoning. Or, put in Kahneman's words [4]: "Highly accessible features will influence decisions, while features of low accessibility will be largely ignored. Unfortunately, there is no reason to believe that the most accessible features are also the most relevant to a good decision."

2 This observation was obtained in a joint study with Peleg Yiftachel.

Indeed, we have seen that, under the force of these general cognitive mechanisms, deciding on appropriate objects, classes, and relations is sometimes influenced by irrelevant surface clues or everyday meanings of these concepts, thus leading to inappropriate choices. Intuition is a powerful tool, which helps us navigate successfully through most everyday tasks, but may at times get in the way of more formal processes. We hope this article may contribute to better understanding of this problem, and point the way to thinking about its resolution.

REFERENCES
1. Armstrong, D.J. The quarks of object-oriented development. Commun. ACM 49, 2 (Feb. 2006), 123–128.
2. Gilovich, T., Griffin, D., and Kahneman, D., Eds. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press, 2002.
3. Holmboe, C. A cognitive framework for knowledge in informatics: The case of object-orientation. ITiCSE'99 Conference Proceedings (June 1999), 17–20.
4. Kahneman, D. (Nobel Prize Lecture). Maps of bounded rationality: A perspective on intuitive judgment and choice. In Les Prix Nobel, T. Frangsmyr, Ed. (2002), 416–499; www.nobel.se/economics/laureates/2002/kahnemann-lecture .
5. Lakoff, G. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. The University of Chicago, 1987.
6. Leron, U. and Hazzan, O. The rationality debate: Application of cognitive psychology to mathematics education. Educational Studies in Mathematics 62, 2 (2006), 105–126.
7. Morris, M.G., Speier, C., and Hoffer, J.A. An examination of procedural and object-oriented systems analysis methods: Does prior experience help or hinder performance? Decision Sciences 30, 1 (Winter 1999), 107–136.
8. OMG Object Management Group. UML Notation Guide, Version 1.3, 1999.
9. Paz, T. and Leron, U. The slippery road from actions on objects to functions and variables. Journal of Research in Mathematics Education; http://edu.technion.ac.il/Faculty/uril/papers/Paz_Leron_Actions_vs_Functions .
10. Stanovich, K.E. and West, R.F. Individual differences in reasoning: Implications for the rationality debate. Behavioural and Brain Sciences 23 (2000), 645–726.
11. Stanovich, K.E. and West, R.F. Evolutionary versus instrumental goals: How evolutionary psychology misconceives human rationality. Psychology Press, 2003, 171–230.
12. Strauss, A. and Corbin, J. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Sage, Newbury Park, 1990.

Irit Hadar (hadari@mis.haifa.ac.il) is a lecturer at the
Department of MIS, University of Haifa, Israel.
Uri Leron (uril@technion.ac.il) is a Churchill Family Professor
(Emeritus) of Science and Technology Education at the Technion—
Israel Institute of Technology, Haifa, Israel.


© 2008 ACM 0001-0782/08/0500 $5.00

DOI: 10.1145/1342327.1342336


A Template for Communicating Information about Requirements and their Realization

Mira Kajko-Mattsson and Jaana Nyfjord

Abstract—Structured and disciplined communication is a prerequisite for effective management of requirements. In this paper, we investigate what requirements management information is communicated within a software development cycle. We do this by studying the management of requirements information within one Canadian organization. Our results show that most of the information designated in our template is recorded by the organization studied.

Index Terms—requirements specification, tool, lightweight and heavyweight software development.

I. INTRODUCTION
To aid in maximizing the quality of the development process, one should provide guidance for what information to collect about requirements management and how to structure it. Usually, such guidance is provided in the form of templates.

In this paper, we investigate what requirements
management information is communicated both within
lightweight and heavyweight software development. We do
this by creating a template of information required for
describing and managing software requirements within a
development cycle and by finding out how it is implemented
in one Canadian company. We call our template Software
Requirements Management Template (SRMT). Our primary
goal is to elicit information that is needed for communicating
information about requirements and their implementation
within the development cycle. However, we do not aim at
distinguishing which information is used in different
development approaches. Our secondary goal is to find out
the state of practice within the organization studied using the
SRMT template as a basis.

The remainder of this paper is organized as follows. Section 2
describes the research method taken when conducting our
study. Section 3 briefly presents the SRMT template covering
information required for communicating1 requirements and
their realization. Section 4 describes the requirements
information communicated within the organization studied.
Finally, Section 5 provides concluding remarks.

Manuscript received January 7, 2008.
Mira Kajko-Mattsson is with the Department of Computer and Systems Sciences, Stockholm University/Royal Institute of Technology, Forum 100, SE-16440 Kista, Sweden (phone: +46-8-162000; fax: +46-8-7039025; e-mail: mira@dsv.su.se).
Jaana Nyfjord is with the Department of Computer and Systems Sciences, Stockholm University/Royal Institute of Technology, Forum 100, SE-16440 Kista, Sweden (e-mail: jaana@dsv.su.se).

1 By communication, we mean both oral and written communication.

II. RESEARCH METHOD
This section describes the research method taken during our study. Section II.A lists and describes the research steps. Section II.B describes the organization studied.

A. Research Steps
As a first step, we decided to create the SRMT. Hence, we
started our work by studying current literature in search of
publications suggesting any templates. Unfortunately, we
were not very successful. The only publications we could
find were [1] [2] [3] [5] and the templates they suggested were
quite coarse-grained. They mainly concentrated on
suggesting general templates for how to describe
requirements in the initial development phases, but not on
how to communicate them during the whole development
process cycle. Hence, these publications did not provide us
with enough support for describing and managing
requirements within development. They only constituted a
starting point for outlining the first out of eight clusters of our
preliminary template (see General Requirements Description
cluster in Figure 1). This preliminary template was then
complemented with the information found in various
publications such as [4] [8] [9].

As a next step, we created a questionnaire. As illustrated in
Figure 2, the questionnaire was open-ended and
semi-structured. It focused on finding out the type of
requirements information that was managed in our company.

The questionnaire consisted of two groups of questions,
(1) introductory questions and (2) questions concerning the
management of requirements information.

To cover the template, 130 questions were created. Due to space restrictions, we cannot list them all. However, the majority of them were structured according to the following pattern: (1) does your organization record this information (attribute)?; (2) could you please provide an example?; (3) if yes/no, please motivate why.

Not all types of information (attributes) were amenable to this pattern. Hence, the pattern had to be complemented with questions specific to each attribute studied. Examples of these questions can be found in Section B under Complementary Questions for specific fields in Figure 2.

Figure 2. Our questionnaire

As a next step, we interviewed one representative from our
Canadian software company. For confidentiality reasons, we
do not name this organization. It is however briefly described
in Section II.B. The results from the interview have helped
verify the usefulness of our template.


Figure 1. Our software requirement management template

B. Organization Studied
Regarding the organization studied, we interviewed one representative (a process owner) of one Canadian systems development organization. The company was selected according to its relative ease of access, i.e. by the convenience sampling method [7]. The company develops products ranging from ground stations for satellite radar systems to e-commerce applications. It uses both lightweight and heavyweight software development processes.

III. TEMPLATE
The SRMT consists of eight clusters of information, each dedicated to a particular requirement aspect. As listed in Figure 1, each cluster covers a set of attributes bearing on coherent information. Below, we briefly describe the clusters; a small code sketch of one possible representation follows the list.

• General Requirement Description describes basic requirement information needed for identifying, understanding, and classifying requirements [1] [3].

• Requirement Evaluation Data describes the data essential for evaluating and prioritizing the requirements [4] [5].

• Other Description Data provides the context of the requirement and its management process [4]. It covers data regarding products, methods, projects, and the like.

• Requirement Reporting Data records when and by whom the requirement has been identified and to whom it has been assigned [4].

• Requirement Management Data communicates information about the requirement management process. It covers both planned and actual actions taken to implement the requirement, identifies roles involved in these actions, records effort required for implementing the requirement, and the effectiveness of the implementation activities [1] [3].

• Requirement Management Progress tracks the status of the requirement implementation process essential for monitoring and controlling requirements [4]. It records the status value, the date when the requirement changes status values, the overall requirement implementation progress status value, and the requirement age.

• Requirement Completion Data covers information about
the completion of the requirement implementation process
[4] [8]. It records planned and actual completion date, roles
involved in approving and signing off the completion, and
the total effort spent on requirement implementation.

• Post Implementation Data holds information on the
post-mortem analysis of the requirement implementation
process. The analysis results should provide an important
feedback for improving the future requirements
management.
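To make the structure of the template more tangible, the following Java sketch represents a subset of the clusters as simple records; it is one possible reading of Figure 1 and the descriptions above, not the authors' implementation, and the attribute lists are abridged and partly hypothetical.

```java
// Illustrative subset of the SRMT clusters; attribute lists are abridged.
record GeneralRequirementDescription(String id, String title, String description,
                                     String type, String rationale) {}

record RequirementEvaluationData(String businessValue, String priority,
                                 String acceptanceCriteria) {}

record RequirementReportingData(String reportingDate, String originatedBy,
                                String owner) {}

record RequirementManagementProgress(String statusValue, String statusChangeDate,
                                     String requirementAge) {}

// A managed requirement groups (some of) the clusters together.
record ManagedRequirement(GeneralRequirementDescription description,
                          RequirementEvaluationData evaluation,
                          RequirementReportingData reporting,
                          RequirementManagementProgress progress) {}
```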

IV. INTERVIEW RESULTS
In this section, we present the results of our study. Our presentation is structured according to the eight clusters and their attributes as outlined in the SRMT template in Figure 1.

The organization studied documents the requirements in two ways: in a requirements management tool and in a separate document called the requirements specification. The requirements specification mainly describes the requirements, but not the information about their realization. The tool, on the other hand, does both. Hence, some of the information in these two sources overlaps. When presenting our results, we present the results as recorded in the tool.

Many of the attributes identified in Figure 1 are not always explicitly distinguished in the form of a field in the tool. They may however be recorded in free text together with other information (other attributes). When presenting our results, we will point this out by stating that the attribute is recorded in free text.

A. General Requirement Description
Because the organization studied follows the IEEE standard guidelines [2], it records the majority of the attributes listed in the General Requirement Description cluster. The only attributes that they do not record are Rationale, Budget Constraints, Resource Constraints, Customer Satisfaction, and Customer Dissatisfaction. Some of these attributes may however be recorded under different guises in later phases. Below, we report on how the attributes are managed.
• Requirements ID: All the requirements are uniquely
identified with an ID. Usually, the ID corresponds to a
numerical value. Some of the requirements however may be
identified with an alphanumerical value, where a letter
indicates the requirement type (functional, non-functional, or
other).
• Requirements Title: In addition to an ID value, each
requirement is identified with a title in the organization
studied. A title is a short name of the requirement. It usually
consists of several keywords. It is very helpful in doing
manual searches in the tool. It allows one to quickly browse through the requirements list without having to read the whole requirements description.
• Requirements Description: The organization studied describes its requirements in free text in a field explicitly dedicated to this purpose. The organization does not pose any restrictions on the description. The only restrictions they have concern the wording and the description length. The descriptions should use the words Shall or May. They should be short: one sentence, or at most two to three sentences, per requirement. If the description is longer, then the requirement probably has to be broken down further.
• Requirements Type: The organization studied classifies their requirements into three categories: Functional, Non-functional, and Specialty. Specialty requirements concern specific aspects of the system, such as domain, construction, and other system requirements. The descriptions of the three requirement types do not differ much; they merely follow the same pattern. However, the relationship between them is not formally managed. It is recorded in free text in the original functional requirement, e.g. "this function has to perform according to XX specifications".
• Rationale: The attribute for describing the rationale behind
each requirement is not used by the organization studied. The
interviewee did not even recognize this attribute and its
purpose.
• Event/Use Case ID and Reference Documents: The use cases are always identified. Generally, the use cases and the Operational Concept [6] are produced first. They then provide a basis for specifying the requirements. Together with other relevant documents, they are identified as Reference Documents.
• Related requirements: Relationships among the
functional requirements are always identified. In the tool, the
related requirements are identified as a link. In a
requirements specification document, a high-level
requirement is described in one section. Its related
lower-level requirements are described in its subsections.
• Conflicting requirements: The organization studied
manages information about the conflicting requirements.
Usually, however, they start identifying the conflicting
requirements in the design phase where they encounter
conflicts. If a conflict occurs, then a comment is added in the
free text describing the requirement and the conflict.
• Constraints: The organization studied only indicates
design or technical constraints. Budget and resource
constraints are not very applicable on the requirements
specification level. They are however more applicable in
other higher level documents such as the Operational
Concept [6].
• Intended Users: Information about the Intended Users is
common, especially in IT type projects. The identification of
the end-users is however implicitly provided by linking use
cases to requirements.
• Customer Satisfaction and Customer Dissatisfaction:
The organization studied does not collect information about
customer satisfaction and dissatisfaction. To satisfy the
customers, they mainly prioritize the requirements (as
Critical or Optional) and create acceptance test
specifications. The acceptance test specifications however
are not recorded together with the requirement specifications.

Finally, we would like to point out that the organization studied distinguishes between two types of requirements: Original and Derived. As illustrated in Figure 3, the original requirements correspond to high-level requirements as provided by the customer or another internal role within the organization. The derived requirements, on the other hand, correspond to system requirements. They are derived from the original requirements. They correspond to the developers' understanding and interpretation of the user requirements. Because they are for internal use only, they are expressed in technical terms.

Figure 3. Relationship between original and derived requirements

In order to trace the derived requirements to the original requirements, the organization studied relates them in a parent-child relationship. As can be seen in Figure 3, the original requirement is a parent, whereas the derived requirements are the children.

The original requirement description is kept unchanged. The reason is to create a fallback opportunity so that one can follow the history of a change. By seeing what was changed over time and why, one may avoid misinterpretation later on in the project. Therefore, the original requirements should not be modified; they should always stay intact. All modifications to them must undergo a formal change and approval process. This is because changes to the original requirements may impact customer satisfaction, project scope, budget, or other factors. Hence, such changes generally require more formalism.
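A minimal Java sketch of the parent-child relationship described above, under the assumption that it can be modelled as an immutable original requirement holding references to its derived children; the field and class names are illustrative, not taken from the organization's tool.

```java
import java.util.ArrayList;
import java.util.List;

// An original (customer-level) requirement; its description is never edited in place,
// so it can serve as the fallback and history anchor described in the text.
final class OriginalRequirement {
    final String id;
    final String description;
    private final List<DerivedRequirement> children = new ArrayList<>();

    OriginalRequirement(String id, String description) {
        this.id = id;
        this.description = description;
    }

    void addChild(DerivedRequirement child) { children.add(child); }
    List<DerivedRequirement> children()     { return List.copyOf(children); }
}

// A derived (system-level) requirement, expressed in technical terms and
// traceable to exactly one parent.
final class DerivedRequirement {
    final String id;
    final String technicalDescription;
    final OriginalRequirement parent;

    DerivedRequirement(String id, String technicalDescription, OriginalRequirement parent) {
        this.id = id;
        this.technicalDescription = technicalDescription;
        this.parent = parent;
        parent.addChild(this); // register the child with its parent for tracing
    }
}
```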

B. Requirement Evaluation Data
The organization studied uses only Requirements Priority and Acceptance Criteria in the Requirements Evaluation Data cluster. The other three attributes, Business Value, Other Values, and Fit Criteria, are not used. They may however be managed in other forms. Below, we report on the results for each of the attributes:
• Business Value is not recorded in the requirement document. The business value is recorded in the business case, which is a separate document produced at the business and product planning levels [6]. However, it strongly affects the value of the requirement priority.
• Other Values: This attribute is not used at all. The interviewee could not think of any other values that might be recorded in the requirement document.
• Requirements Priority: The organization studied prioritizes all their requirements by assigning either required or optional values to them. These values constitute a basic and minimum level of stating the priority. The organization also uses an additional way of prioritizing requirements: assigning to them an implementation priority value. This value depends on various aspects, such as whether the requirement is critical for initial operation, whether it provides a basis for negotiating the scope with the customer, and other aspects.
• Acceptance Criteria: The organization studied manages information about the Acceptance Criteria. This attribute however is not part of the current requirements management tool; another tool is used for this purpose. At a minimum, the acceptance criteria correspond to the descriptions of the acceptance procedures. These procedures may correspond either to analysis, inspection, test, or demonstration.

C. Other Description Data
The organization studied uses only two attributes in the Other Description Data cluster: System Data and Interfacing System ID. Using them, one identifies the system, subsystem, or component affected by the requirement, and the interfacing systems. The attributes Environment and Assumptions are not used within the organization studied.

D. Requirement Reporting Data
None of the attributes in the Requirement Reporting Data cluster are fully utilized within the organization studied. The Reporting Data attribute and its sub-attributes are only used in a few cases depending on the project team and their needs. Regarding Requirement Ownership, it is implicitly implied by other information. A requirements owner is the role2 who owns an entire system component rather than an individual requirement. All the requirements allocated to that component are automatically owned by this role. Usually, a component gets allocated to one team.

2 A role may correspond to one or several persons.

E. Requirement Management Data
The organization manages most of the attributes in the Requirements Management Data cluster. By and large, all the planning in the studied organization is done on a component and not on a requirement level. A component represents a group of related requirements or part of a system. Once one has allocated requirements to components, one starts planning their implementation using the attributes designated in the Requirements Management Data cluster, as listed in Figure 1.

F. Requirement Management Progress Data
The organization studied does not record the information as defined in the Requirement Management Progress cluster. It only tracks the status of the requirement implementation progress via the existing requirements management tool.

G. Requirement Completion Data
The organization records all the information in the Requirement Completion Data cluster. However, the information is recorded on a component level, not on an individual requirement level. For each component, they record the planned and actual completion dates, roles involved in approving and signing off the completion, and the total effort spent on the component implementation.

If they wish to track the completion data for individual requirements, then they have to do it manually. Doing so, however, does not belong to their ordinary procedure. Once they have analyzed the requirements and assigned them to the components, they just keep track of the status of the components.

H. Post Implementation Data
Regarding the information in the Post Implementation Data cluster, the organization studied does not record post-mortem analysis or lessons learned for individual requirements. The information may however be recorded for components.

The organization conducts post-mortem analysis on the project level once the project is completed. Part of this analysis involves tracking which areas of the requirements have changed, e.g. in comparison to other areas where the requirements were quite stable, and whether the scope was managed successfully. The analysis results provide important feedback for improving future requirements management for the same type of requirements.

They also identify lessons learned. The lessons learned are continuously considered, especially in projects of an iterative and agile nature.

All the information resulting from the analysis is recorded in a report that is kept in a common repository so that others can go back and read the lessons learned.

V. CONCLUDING REMARKS
In this paper, we have created a preliminary template, the Software Requirements Management Template (SRMT), covering information about software requirements and their realization during a software development cycle. We then evaluated it within one Canadian company. This has helped us to evaluate our template and establish a state of practice within this company.

Our results show that all the attributes suggested in our template are highly relevant within both heavyweight and lightweight software development. Many of them, however, were not explicitly recorded. They might instead be implicitly provided in other forms or in other documents or tools. This concerns attributes such as Business Value, Requirements Ownership, Lessons Learned, Post-Mortem Analysis, and others.

Some of the important attributes that have been suggested in many well-known standards and models were either not implemented or not recognized by the organization studied. This concerns Rationale, Customer Satisfaction and Customer Dissatisfaction, Fit Criteria, Assumptions, and some of the attributes in the Requirement Reporting Data and the Requirement Management Progress clusters.

Despite the fact that these attributes are not used within the company studied, we do not modify our template. We motivate this with the following:
• Rationale: Many times, one needs to understand why a certain requirement needs to be implemented. Hence, its raison d'être needs to be provided. This helps the organization understand the reason and intent behind the requirement and thereby assign the right priority value to it [10].
• Customer Satisfaction and Customer Dissatisfaction: This attribute indicates the degree of customer satisfaction/dissatisfaction if the requirement is/is not implemented. It indicates the customer priority, the value on which the development organization bases its own development priority value (see Requirement Priority in the Requirement Evaluation Data cluster). It also provides a basis for creating acceptance tests and evaluating the fulfillment of the requirements [11].
• Fit Criteria: Software developers must be provided with a
set of criteria aiding them in assuring that they are building
the right product. Hence, it is important to record fit
(acceptance) criteria. Together with the Customer
Satisfaction and Customer Dissatisfaction values, they
constitute a basis for creating tests and evaluating the
fulfillment of the requirements [11].
• Assumptions: In many large systems, the operational
domain is unbounded. The software system, on the other
hand, is finite [10]. Hence, there is a gap between the system
and its operational domain. It must be bridged by
assumptions. These assumptions help understand how one
reasoned when developing the system.
• Requirement Reporting Data: This cluster contains
attributes such as Reporting Date and Originated By. Both of
them are very important for managing requirements. The
reporting date indicates the age of the requirement and
together with the requirement priority value, it constitutes an
important basis for planning development. Regarding the
Originated By attribute, it identifies the stakeholder who
originated the requirement. Admittedly, the organization
studied identifies the originator via use cases. However, due
to its importance, we believe that this information should be

more visible. It (1) facilitates contact with the requirement
originator to resolve any conflicting issues, (2) enables the
delivery of the implementation to the right customer, and
finally (3) it substantially increases customer satisfaction [4].
• Requirement Management Progress: The development process is usually divided into several phases. To enable effective planning and monitoring of the development process, each development phase should be thoroughly identified and assigned a status value. This enables (1) determination of the development progress, (2) control of the amount of work that has been done and that remains to be done for a certain release, (3) control of the workload of each engineer/team, (4) improved process discipline, (5) comparison of the planned and actual results, and other important controls [4].
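As a small illustration of the status tracking argued for in the last point, the following Java sketch assigns a requirement a phase-based status value and records the date of each change; the particular phase names are hypothetical examples, not values prescribed by the template.

```java
import java.time.LocalDate;

// Hypothetical status values, roughly one per development phase.
enum RequirementStatus { REPORTED, ANALYZED, DESIGNED, IMPLEMENTED, TESTED, COMPLETED }

class TrackedRequirement {
    final String id;
    private RequirementStatus status = RequirementStatus.REPORTED;
    private LocalDate lastStatusChange = LocalDate.now();

    TrackedRequirement(String id) { this.id = id; }

    // Recording the date of each status change supports progress monitoring
    // and computing the age of the requirement.
    void moveTo(RequirementStatus newStatus) {
        this.status = newStatus;
        this.lastStatusChange = LocalDate.now();
    }

    RequirementStatus status()   { return status; }
    LocalDate lastStatusChange() { return lastStatusChange; }
}
```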

VI. EPILOGUE
This study was conducted within only one company. Still, it provides valuable feedback for a preliminary evaluation of the usefulness of the SRMT template in industry. It also provides a basis for further studies of requirements management information. Hence, we cordially invite the software community to conduct similar studies in order to extend and evaluate our template.

ACKNOWLEDGEMENT

Many thanks to the anonymous company and the interviewee for contributing valuable data to this study.

REFERENCES

[1] Atlantic Systems Guild, "Volare Requirements Specification Template". [Online] Available at: http://www.systemsguild.com/GuildSite/Robs/Template.html. Accessed in December 2007.
[2] IEEE, IEEE Guide to Software Requirements Specifications (Std 830-1993). The Institute of Electrical and Electronics Engineers Inc., New York, NY, 1993.
[3] Higgins, S.A. et al., "Managing Product Requirements for Medical IT Products". Proceedings of the Joint International Conference on Requirements Engineering, 2002, pp. 341-349.
[4] Kajko-Mattsson, M., Corrective Maintenance Maturity Model: Problem Management. Doctoral Thesis, Stockholm University/Royal Institute of Technology, Sweden, 2001.
[5] Managing Requirements, "Templates and Guidance". [Online] Available at: http://www.jiludwig.com/Template_Guidance.html. Accessed in December 2007.
[6] Nyfjord and Kajko-Mattsson, "Degree of Agility in Pre-Implementation Process Phases". Technical report, Department of Computer and Systems Sciences, Stockholm University/KTH, Sweden.
[7] Robson, C., Real World Research. Blackwell Publishing, 2002.
[8] Texas Department of Information Resources, Software Requirements Specification Template, DIR Document 25SR-T1-0. [Online] Available at: http://www.dir.state.tx.us/pubs/framework/gate2/sdlc/srs/25SR-T1-0.doc. Accessed in December 2007.
[9] Wiegers, K.E., "Software Requirements Specification for Project". Available at: http://www.processimpact.com/process_assets/srs_template . Accessed in December 2007.
[10] Lehman, M.M. and Ramil, J.F., "Software Evolution in the Age of Component Based Software Engineering", IEEE Software, Vol. 147(6), 2000, pp. 249-255.
[11] Sommerville, I., Software Engineering, 7th Ed. Addison Wesley, 2006.


SPECIAL ISSUE – RE'07 BEST PAPERS

Exploring how to use scenarios to discover requirements

Norbert Seyff · Neil Maiden · Kristine Karlsen · James Lockerbie · Paul Grünbacher · Florian Graf · Cornelius Ncube

Received: 5 September 2008 / Accepted: 29 January 2009 / Published online: 24 February 2009
© Springer-Verlag London Limited 2009
Requirements Eng (2009) 14:91–111. DOI 10.1007/s00766-009-0077-9

N. Maiden (✉) · K. Karlsen · J. Lockerbie
Centre for HCI Design, City University London, London EC1V 0HB, UK
e-mail: N.A.M.Maiden@city.ac.uk

N. Seyff · P. Grünbacher · F. Graf
Systems Engineering and Automation, Johannes Kepler University, 4040 Linz, Austria

C. Ncube
Software Systems Research Centre, Bournemouth University, Fern Barrow, Talbot Campus, Poole, Dorset BH12 5BB, UK

Abstract This paper investigates the effectiveness of different uses of scenarios on requirements discovery using results from requirements processes in two projects. The first specified requirements on a new aircraft management system at a regional UK airport to reduce its environmental impact. The second specified new work-based learning tools to be adopted by a consortium of organizations. In both projects scenarios were walked through both in facilitated workshops and in the stakeholders' workplaces using different forms of a scenario tool. In the second project, scenarios were also walked through with a software prototype and creativity prompts. Results revealed both qualitative and quantitative differences in discovered requirements that have potential implications for models of scenario-based requirements discovery and the design of scenario tools.

1 Different scenario uses

Scenarios are sequences of events with a narrative structure [2]. They are simple, human things [2], and walking through them is one of the more effective means by which stakeholders discover requirements. Studies have reported stakeholders walking through different types of scenarios, from simple stories to surface new requirements [8] to system simulations to discover emergent system properties [1, 10]. However, despite reported successes, we still lack data with which to determine what are the more effective uses of scenarios for discovering requirements. This paper reports results from two scenario-driven processes to discover requirements for an air traffic management system called VANTAGE (Validation of Network-Centric, Technology Rich ATM System Guided by the Need for Environmental Governance) and a system to support work-integrated learning in organizations called APOSDLE (Advanced Process-Oriented Self-Directed Learning Environment).

ART-SCENE is a software environment for discovering and documenting stakeholder requirements [17] by walking through scenarios that are automatically generated from use case specifications. We run what we call scenario workshop walkthroughs for same-time same-place discovery of requirements using the desktop version of ART-SCENE [17]. A workshop is a structured meeting that is run by a trained facilitator to ensure effective stakeholder input to the meeting. The facilitator drives the walkthrough process whilst a scribe documents requirements and scenario changes in ART-SCENE. Similar uses of scenarios are reported in [9, 26]. Stakeholders walk through one ART-SCENE-generated scenario displayed to them on a large screen. In the VANTAGE project we ran four scenario workshop walkthroughs to discover requirements.

Walking through software prototypes and scenarios together has been shown to improve requirements completeness [28]. In the APOSDLE project a first prototype of the work-integrated learning system had already been developed. Therefore, we ran eight scenario workshop walkthroughs that walked through ART-SCENE scenarios and the APOSDLE prototype together. Each scenario workshop walkthrough was supplemented by results from an earlier creativity workshop held in the project [15] that provided additional vision, requirements and design features for future versions of the APOSDLE system.

However, bringing stakeholders together in workshops

can be difficult and time-consuming, whilst removing them

from their workplace can miss important contextual trig-

gers for requirements. One alternative is to walk through

scenarios in the workplace using mobile technologies.

Previously we developed a version of ART-SCENE called

the Mobile Scenario Presenter (MSP) to run on a Personal

Digital Assistant (PDA). Evaluations of the MSP [19, 20]

revealed that it could be used to discover requirements in

the workplace; however analysts found it difficult to doc-

ument requirements using the stylus when moving and/or

communicating with the observed stakeholders. Therefore

we also walked through one ART-SCENE scenario in the

VANTAGE project and four different ART-SCENE sce-

narios in the APOSDLE project in the workplace to

provide empirical data about their effectiveness. We call

these walkthroughs scenario workplace

walkthroughs.

We used data from the scenario workshop

walkthroughs

and scenario workplace walkthroughs in the VANTAGE

and APOSDLE projects to answer three research questions:

Q1 Can a scenario workshop walkthrough, supported with software prototypes and design features, trigger a larger number of requirements than the walkthrough of the scenario on its own?

Q2 Can a scenario workplace walkthrough trigger requirements that might not be discovered with a scenario workshop walkthrough?

Q3 Can a scenario workplace walkthrough trigger a larger number of requirements than an equivalent scenario workshop walkthrough?

The first research question Q1 explored whether sup-

plementing scenarios with design knowledge in the

software prototypes and creativity prompts would increase

the number of requirements generated. We distinguished

between Q2 and Q3 to investigate whether scenario

workplace walkthroughs generated different requirements

from scenario

workshop walkthroughs.

In the remainder of the paper, Sect. 2 reports the dif-

ferent uses of scenarios that were applied in VANTAGE

and APOSDLE. Section 3 reports a model of scenario-

based discovery that informs the three research

questions.

Sections 4 and 5 report results from the scenario walk-

throughs to answer the research questions. Section 6 uses

the results to answer the three research questions and

explore their validity. Sections 7 and 8 present lessons

learned for running future scenario walkthroughs and

report related work. Section 9 outlines future research to

improve scenario use in requirements processes.

2 ART-SCENE scenario walkthroughs

The VANTAGE and APOSDLE scenarios were generated

and walked through using the ART-SCENE environment.

2.1 ART-SCENE scenarios

The big idea that underpins ART-SCENE scenario walk-

throughs is very simple—that people are better at

identifying errors of commission rather than omission [4].

From this general trend in human cognition for recall to be

weaker than recognition, ART-SCENE scenarios in the

ART-SCENE software environment offer stakeholders

recognition cues in the form of automatically generated

alternative courses. If the alternative course is relevant to

the system being specified but not yet handled in the

specification, then a potential omission has been identified,

and ART-SCENE guides the analysts to specify and doc-

ument the relevant requirements.

ART-SCENE automatically generates scenarios in two

steps [17]. In the first it generates different normal course

scenarios from ordering rules in the use case specification.

Each different possible ordering of normal course events is a

different scenario. In the second the algorithm generates

candidate alternative courses, which are expressed as ‘what-

if’ questions for each normal course event, by querying a

database that implements a simple model of over 40 abnor-

mal behaviours and states in socio-technical systems

independent of the domain of interest. Some class hierarchies

were derived from definitions of scenario concepts such as

events and actions. Others were derived from error taxono-

mies in the cognitive science, human-computer interaction

and safety-critical disciplines [25]. ART-SCENE allows

specialization of these classes to selected domains such as air

traffic management and learning, as reported in [21].
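The two generation steps can be pictured with a small sketch. The code below is illustrative only, using hypothetical function names and a three-item stand-in for the database of abnormal behaviours and states; it is not the ART-SCENE algorithm itself.

from itertools import permutations

ABNORMAL_CLASSES = [
    "the actor lacks the necessary knowledge",
    "the actor exhibits some abnormal behaviour",
    "the required equipment is unavailable",
]  # stand-in for the database of 40+ abnormal behaviours and states

def generate_normal_courses(events, ordering_rules):
    # Step 1: every ordering of normal course events permitted by the use
    # case ordering rules becomes a separate normal course scenario.
    return [list(order) for order in permutations(events)
            if all(rule(order) for rule in ordering_rules)]

def generate_alternative_courses(normal_course):
    # Step 2: for each normal course event, candidate alternative courses
    # are expressed as 'what-if' questions drawn from the abnormal classes.
    return {event: [f"What if {abnormal}?" for abnormal in ABNORMAL_CLASSES]
            for event in normal_course}

# Toy usage: one ordering rule requiring approach before landing.
events = ["aircraft approaches", "aircraft lands", "aircraft taxis to stand"]
rules = [lambda order: order.index("aircraft approaches") < order.index("aircraft lands")]
for course in generate_normal_courses(events, rules):
    what_ifs = generate_alternative_courses(course)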

ART-SCENE provides scenarios to stakeholders to

recognize events and discover requirements in two ways—

in scenario workshop walkthroughs and scenario work-

place walkthroughs. Each is described in turn.

2.2 Scenario workshop walkthroughs

In scenario workshop walkthroughs stakeholders interact

with one ART-SCENE component called the

Scenario

Presenter. Figure 1 depicts one scenario generated for

VANTAGE and presented using the Scenario Presenter. A

scenario workshop walkthrough typically lasts half a day

[17]. A facilitator guides the stakeholders to recognise

which scenario normal and alternative course events—

what-if capabilities—the new system must handle. The

facilitator then uses simple heuristics to discover one or

more requirements that, if satisfied, will enable the system

to avoid, respond to or mitigate the effects of the event.


The scribe documents all requirements in ART-SCENE

and links them to the scenario events that triggered them.
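The avoid/respond/mitigate heuristics lend themselves to a simple illustration. The function below is a hypothetical sketch of how such prompts could be derived from a recognized event; it is not the facilitator guidance built into ART-SCENE.

def requirement_prompts(event: str) -> list[str]:
    # For a recognized scenario event, prompt for requirements that would let
    # the system avoid, respond to, or mitigate the effects of the event.
    return [
        f"What shall the system do to avoid '{event}'?",
        f"What shall the system do to respond to '{event}'?",
        f"What shall the system do to mitigate the effects of '{event}'?",
    ]

# Example: requirement_prompts("the dispatcher lacks the necessary knowledge")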

In a scenario workshop walkthrough the facilitator

walks the stakeholders through an ART-SCENE scenario,

but these scenarios can be supplemented with other arte-

facts. In APOSDLE we ran scenario workshop walkthroughs in which we walked through related design

features of an existing work-integrated software prototype

using an electronic whiteboard that stakeholders could

manipulate directly by touch to sketch and document

changes to supplement the requirements documented in

ART-SCENE. We also searched for and presented results

from an earlier creativity workshop [15] on a third display

to reuse visions, requirements and design ideas that had

been generated by stakeholders earlier in the requirements

process. As a result, at any time during each workshop,

stakeholders were interacting with the ART-SCENE sce-

nario, existing software prototype and documented ideas

from earlier activities.

However, one limitation of scenario workshop walk-

throughs is that stakeholders need to take time out of their

workplace to participate. As well as restricting access to

stakeholders who might not have the time to participate,

these sessions take place out of the workplace, thus

potentially diminishing the effectiveness in the require-

ments discovery process [30].

2.3 Scenario workplace walkthroughs

The MSP [19] is a PDA-based ASP.NET web application

that uses a mobile browser and wireless access to connect

to server-side ART-SCENE scenario and requirements

databases. The tool is optimized for Microsoft’s Pocket

Internet Explorer included with Microsoft’s Pocket PC OS.

The MSP allows its user to discover and document

requirements systematically in the workplace using struc-

tured scenarios generated by ART-SCENE. The MSP user

walks through scenarios of future system behaviour and

observes current system behaviour at the same time. What-

if capabilities—generated candidate alternative courses for

each event—enable the user to follow-up and ask questions

about abnormal and unusual behaviour in different work-

places, thus leading to more complete requirements

discovery. Figure 2 shows some normal and alternative

Fig. 1 One VANTAGE scenario (VA2 Approach/Arrival Sequence Control) in the desktop version of ART-SCENE used during a scenario
workshop walkthrough


course events of a second VANTAGE scenario, VA4 On-stand Operations, that we walked through using the MSP.

One important enhancement to the MSP used in VAN-

TAGE was audio recording of requirements. Earlier

versions captured new requirements in typed form using the

PDA stylus [9]. However, this was less successful than

expected due to the effort needed to type requirements.

Therefore, the reported scenario workplace walkthroughs

took place with audio recording of spoken requirements. To

deliver this capability the MSP implemented a new plug-in

solution on top of the full-screen browser application. The

solution was integrated into the menu of the full-screen

browser. The capability could be started using one touch

with the stylus once the plug-in solution was enacted. Audio

files of generated requirements were stored using common

file formats so that analysts could continue their work on the

desktop ART-SCENE after synchronization. All created

multimedia files were linked to the underlying MSP data-

base using special IDs included in the filename.
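The filename-based linking between recordings and requirements can be sketched as follows; the function names and the 'REQ_' prefix are assumptions made for illustration, not the MSP plug-in's actual conventions.

from pathlib import Path

def audio_filename(requirement_id: str, suffix: str = "wav") -> str:
    # Embed the MSP database ID in the recorded file's name.
    return f"REQ_{requirement_id}.{suffix}"

def link_recordings(folder: Path) -> dict[str, Path]:
    # After synchronisation, recover each requirement ID from its filename so
    # the desktop tool can attach the recording to the right database record.
    return {audio.stem.removeprefix("REQ_"): audio
            for audio in folder.glob("REQ_*.*")}

# Example: link_recordings(Path("synced_recordings"))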

Another change undertaken in VANTAGE and APOS-

DLE was the process of the scenario

workplace

walkthrough. In previous applications we replaced work-

shops with a two-stage process—one-on-one observations

and fact gathering by a single analyst in the workplace

followed by project-wide interpretation sessions using the

desktop Scenario Presenter. However, this procedure was

problematic. A single analyst was not able to observe the

workplace, navigate the scenario, communicate with

stakeholders and document requirements using a stylus

because earlier audio-recording features were too cum-

bersome to use [19]. Therefore, in VANTAGE and

APOSDLE, we introduced the two roles of facilitator and

scribe. Whilst the scribe used the MSP to navigate between

scenario events and to document requirements and com-

ments, the facilitator observed the workplace, asked

questions about it, and filtered the raw data to generate

first-cut requirement statements, based on recognition cues

provided by the MSP.

3 A model of scenario-based requirements discovery

We investigated the scenario-driven requirements pro-

cesses in VANTAGE and APOSDLE to answer the three

research questions reported in Sect. 1 about the effective-

ness of different scenario walkthrough types. The questions

were grounded in a logical task model that describes

essential cognitive tasks that stakeholders undertake during

a scenario walkthrough to specify a future system using

tools such as the Scenario Presenter.

The model describes scenario-based requirements dis-

covery as an iteration of tasks. Stakeholders read scenario

event descriptions and recognize one or more as possible

and to be handled by the system. Recognizing whether an

event is relevant is an essential pre-requisite to discovering

requirements. Because stakeholders are better at recog-

nizing incorrect events rather than recalling missing ones,

the model predicts that stakeholders will discover more

requirements using scenario events that are presented to

them to recognize as relevant than they will from unaided

recall of such events. For each event recognized as possi-

ble, stakeholders generate one or more requirements.
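As a rough sketch, this iteration can be written as a loop over presented events, with recognition gating requirement generation; the callables here are placeholders rather than part of the published model.

def scenario_based_discovery(scenario_events, is_relevant, elicit_requirements):
    # Stakeholders read each presented event; recognition (rather than unaided
    # recall) determines which events they go on to generate requirements for.
    discovered = []
    for event in scenario_events:
        if is_relevant(event):                             # event recognized as possible
            discovered.extend(elicit_requirements(event))  # one or more requirements
    return discovered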

Fig. 2 The VANTAGE scenario VA4 On-stand Operations in the MSP version of ART-SCENE used in a scenario workplace walkthrough


We developed and validated the model for scenario

workshop walkthroughs with tools such as the ART-

SCENE Scenario Presenter. However, extending our sce-

narios, such as with prototypes, creativity prompts, and

walking through scenarios in the workplace challenges the

assumptions behind the model. For example, software

prototypes and creativity cues explicitly describe design

knowledge with which to infer requirements about the new

system, which might generate more requirements than

simply recognizing unhandled events in a scenario. On the

other hand, design decisions embedded in the software

prototype might lead to more requirements that encapsulate

design decisions and assumptions. Therefore we explored

the effect of recognition cues from scenario events, soft-

ware prototypes and creativity prompts to answer Q1 using

data from the APOSDLE scenario walkthroughs. Likewise,

walking through scenarios in the workplace might expose

analysts and stakeholders to more event recognition cues.

However the timing of these events cannot be controlled,

thus making it more difficult for analysts to capture and

specify new requirements. We also explored the effect of

combined recognition cues from scenarios and the work-

place and audio requirements recording techniques to

answer Q2 and Q3 with data from the scenario workplace

walkthroughs.

The types of dependent variable data that were used to

answer the three research questions are listed in Table 1.

The generated requirements data was drawn directly from

ART-SCENE’s scenario and requirements databases, and

scenario walkthrough data was taken from observational

notes recorded by the facilitator and scribe during the

scenario walkthroughs.

We did not investigate stakeholder perceptions about the

walkthroughs and the scenarios due to difficulties capturing

such data in the VANTAGE and APOSDLE

projects.

Stakeholders such as pilots and business consultants were

simply not available to be debriefed at the end of sessions,

and the availability of other stakeholders was restricted

during both projects.

The next two sections report the scenario walkthroughs

and results for the VANTAGE and APOSDLE projects,

respectively.

4 Walking through scenarios in VANTAGE

We walked through scenarios to discover requirements for

VANTAGE Phase-1, a system to reduce the environmental

impact of aircraft movements in and around airports. The

2-year project, funded by the UK’s Department of Trade

and Industry, integrated new technologies into the opera-

tions of regional airports in the United Kingdom to reduce

their environmental impact, measured as noise and gas

emissions. Partners who include Thales and Qinetiq intro-

duced new technologies at Belfast City Airport, BCA.

We walked stakeholders through scenarios as part of

RESCUE (Requirements Engineering with Scenarios in a

User-Centred Environment), a scenario-driven require-

ments process [18]. Prior to the walkthroughs the

requirements team had discovered requirements for the

new VANTAGE system using brainstorming sessions and

a creativity workshop, and generated requirements auto-

matically from i* models. A use case model specified 20

core use cases that specified how the future VANTAGE-

enhanced airport operations at BCA should behave during

landing, taxiing, on-stand operations and take-off, as well

as to support airport management such as producing daily

flight schedules. Requirements on the VANTAGE system

were associated with behaviour specified in the use cases.

However, we still needed VANTAGE stakeholders, who

included the BCA environmental manager, environmental

experts, technology specialists, and airline pilots and

operational staff, to discover complete requirements on

VANTAGE using projections of future operations at BCA.

This is where the scenario walkthroughs came in.

The scenario walkthroughs took place with real-world

project constraints such as time and availability of stake-

holders that could not be controlled in the walkthroughs

Table 1 Types of data used to answer three research questions

Generated requirements data:
- Total number of requirements generated in a scenario walkthrough
- Requirements description text
- Assigned requirement type
- Scenario that triggered generation of the requirement
- Scenario event that triggered generation of the requirement

Scenario walkthrough data:
- Duration of the scenario walkthrough
- Number of stakeholders present in the scenario walkthrough


without disrupting the project or invalidating its results.

Therefore, we adopted an action research approach, gen-

erating and analyzing data in context, taking care when

drawing conclusions from the results.

4.1 The scenario walkthrough schedule

We generated a scenario walkthrough schedule from the

VANTAGE use case model. VANTAGE stakeholders

prioritized the use cases for the potential of the specified

behaviour to minimize environmental impact. Not sur-

prisingly, given aircraft movements are the main source of

noise and gas emissions, the priority use cases specified

VANTAGE requirements on the arrival, turnaround and

departure of an aircraft from the airport.

Time and resource constraints meant that only five use

cases could be investigated. We set up the walkthrough

schedule in Table 2 and used the ART-SCENE scenario

generation algorithm to generate one scenario per use case.

We chose to run scenario workshop walkthroughs to

walk through scenarios that describe aircraft movement,

such as VA2 Approach and arrival sequence control and

VA11 Coordinate flight departures. Not only were scenario

workplace walkthroughs of such scenarios on the airport

difficult (not to say dangerous!), but scenario workshop

walkthroughs were tractable because representatives of

stakeholders such as airlines (pilots and dispatchers), air

traffic controllers and managers, the airport environment

manager, and solution experts from VANTAGE technical

partners were all available to attend them.

Figure 1 shows part of the ART-SCENE scenario for

VA2 Approach and arrival sequence control. Typical

events included dispatcher communicates with ramp staff

and BCA approach controller makes decisions about

landing approach. Each scenario workshop walkthrough

took place in a meeting room with one facilitator and one

scribe. Each was designed to run for 2–4 h, depending on

the number of events in the scenario.

The one scenario in which aircraft do not move was

more amenable to walk through in the workplace. The VA4

On-stand operations scenario specifies all behaviour related

to the turnaround of an aircraft from when it parks to its

pushback. Part of the scenario is shown in Fig. 2. Typical

events included refuel the aircraft and disembark passen-

gers. The walk through of this scenario took place over 4 h

on a weekday. The analyst who facilitated the scenario

workshop walkthroughs also facilitated the scenario

workplace walkthrough. A different scribe operated the

MSP. The facilitator observed on-stand operations on dif-

ferent aircraft turned around by airlines and the local

service operator. He asked questions to airport and airline

staff being observed. All spoken requirements and com-

ments were recorded in the MSP.

Figure 3 shows the facilitator and scribe walking

through the scenario in the workplace. The left-hand side

depicts the scribe in the cockpit of an A321 aircraft whilst

the facilitator asked questions about ground system-aircraft

uploads, whilst the right-hand side shows the facilitator

asking questions of the operator during aircraft refueling.

Note the bad weather!

4.2 Generating the scenarios

In VANTAGE we generated one scenario for each of the

five prioritized use cases. Each scenario normal course

event sequence specified the expected event ordering dur-

ing aircraft approach, landing, taxiing, turnaround and

takeoff. For each normal course event in the five scenarios

the algorithm generated one or more candidate alternative

courses for each normal course event. Alternative courses

Table 2 VANTAGE scenario walkthrough schedule

Date | Scenario | Type of walkthrough
11/09/2006 | VA2: approach/arrival sequence control | Workshop
11/09/2006 | VA3: ground movement control arrivals | Workshop
17/10/2006 | VA1: ground movement control departure | Workshop
17/10/2006 | VA11: coordinate flight departures | Workshop
29/11/2006 | VA4: on-stand operations | Workplace

Fig. 3 Two uses of the MSP when walking through the VA4 On-stand operations scenario in the workplace


were generated using the domain-independent version of

the ART-SCENE algorithm [17] not tailored to air traffic

control. VANTAGE scenario alternative courses expressed

abnormal behaviours and states that were domain-inde-

pendent, such as what if the dispatcher lacks the necessary

knowledge? and what if the aircraft exhibits some abnor-

mal behaviour?, rather than what if the aircraft uses the

wrong taxiway when arriving at the terminal? A generated

scenario specified on average 20 normal course events and

8 alternative course events per normal course event. The

shortest scenario was VA11 Coordinate flight schedules

with 8 normal course and 41 alternative course events. The

longest was VA4 On-stand operations with 35 normal

course and 271 alternative

course events.

4.3 Scenario walkthrough results

All scenario walkthroughs took place as planned. Each

scenario workshop walkthrough lasted between 2 and 4 h.

The scenario workplace walkthrough lasted 4 h. Between 6

and 10 stakeholders attended each workshop depending on

availability—commercial airline pilots and air traffic con-

trollers were often unable to commit time to scenario

workshop walkthroughs, hence the workshop schedule was

a compromise between stakeholder availability and dead-

lines. Nonetheless all four included at least one

representative from BCA (e.g. an air traffic controller), an

airline (e.g. a BMI dispatcher and pilot), a solution tech-

nology provider (e.g. Thales and Raytheon), and an

environmental researcher from an academic partner in

VANTAGE. Many stakeholders participated in more than

one scenario workshop walkthrough.

Results from the scenario walkthroughs are reported in

Table 3. The five walkthroughs generated 147 require-

ments. All requirements were documented in ART-SCENE

using four attributes—the requirement description, ratio-

nale, type and source [23]. During scenario workshop

walkthroughs the scribe entered these fields using a desktop

computer keyboard, whilst during the scenario workplace

walkthrough the scribe selected the requirement type from

a pull-down menu, then the facilitator recorded the spoken

requirement description, rationale and source in the MSP.

After the walkthrough the facilitator and scribe transcribed

all spoken requirements, comments and scenario changes,

then the documented requirements were validated with

source stakeholders. In the scenario workshop walk-

throughs, the stakeholders were able to validate

requirements as the scribe entered them into the displayed

ART-SCENE requirements form.
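A requirement record with these four attributes, plus its link to the triggering scenario event, might look like the sketch below; the field names are illustrative and not ART-SCENE's actual schema.

from dataclasses import dataclass

@dataclass
class RequirementRecord:
    description: str    # the 'shall' statement
    rationale: str
    req_type: str       # e.g. FR (functional), AR (availability), UR (usability)
    source: str         # stakeholder or observation that supplied it
    trigger_event: str  # scenario event that triggered the requirement

example = RequirementRecord(
    description="The dispatcher shall be able to see and hear refueling activities",
    rationale="Dispatcher observed reacting to the sound of the refueling truck",
    req_type="FR",
    source="VA4 On-stand operations workplace walkthrough",
    trigger_event="Refuel the aircraft",
)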

Results reveal that walking through VA4 On-stand

operations in the workplace generated 59 requirements,

whilst the most requirements generated during one sce-

nario workshop walkthrough was 32, for VA2 Approach

and arrival sequence control. Each scenario workshop

walkthrough generated on average 22

requirements per

scenario. The average number of requirements generated

per scenario normal course event is also reported in

Table 3, but there was no discernible association between

scenario length and the number of requirements generated.

Alternative course events appeared to have little effect

on requirements discovery—only four requirements were

associated with alternative course events in the 5 scenarios.

One possible reason for this was the number of normal

course events in each scenario and limited time available in

each walkthrough. Each scenario workshop walkthrough

walked through the normal course events before the alter-

native course events. In most walkthroughs the time

available allowed stakeholders to walk through all normal

course events but not most alternative course ones.

Previous studies revealed that requirements

documented

with the MSP and stylus were shorter than requirements

documented via keyboards [19]. In VANTAGE, require-

ments transcribed from the recordings of the spoken

requirements in the MSP were similar in length to

Table 3 The total number of requirements documented during each scenario, types of event for which requirements were documented, and average number of requirements generated per scenario normal course event, in the VANTAGE project

Scenario and walkthrough type | Total number of requirements documented | Number of requirements on use case and normal course event behaviour | Number of requirements on alternative course behaviour | Average number of requirements per normal course event
VA2: approach and arrival sequence control (workshop) | 32 | 30 | 2 | 1.19
VA3: ground movement control arrivals (workshop) | 28 | 27 | 1 | 2.44
VA1: ground movement control departure (workshop) | 16 | 15 | 1 | 0.70
VA11: coordinate flight departures (workshop) | 12 | 12 | 0 | 1.5
VA4: on-stand operations (workplace) | 59 | 59 | 0 | 1.55
Total | 147 | 143 | 4 |


requirements typed by the scribe in the four scenario

workshop walkthroughs.

Table 4 reports the totals of requirement generated by

type. Walking through the scenario in the workplace led to

generation of more availability- and usability-type

requirements than during the scenario workshop

walkthroughs.

Occasionally the walkthroughs resulted in changes to

the scenarios themselves in response to stakeholder com-

ments and facilitator observations. The scenario workplace

walkthrough of VA4 On-stand operations resulted in 11

changes, including the addition of two observed new nor-

mal course events (e.g. the engineer enters the aircraft) and

nine observed alternative course events, for example what

if the stand is occupied? In contrast the four scenario

workshop walkthroughs resulted in just one change—an

event is undertaken by an air traffic support assistant rather

than a controller.

4.4 Qualitative requirements analysis

We investigated all 147 requirements to detect qualitative

differences between requirements associated with the two

types of scenario walkthrough.

4.4.1 Requirements subjects

The first characteristic was the subject of each requirement.

ART-SCENE mandated that all requirements were

expressed using shall statements with a common structure

[3] that leads to specification of properties of one or more

actors. These actors were the subjects of the requirements.

Results are reported in Table 5. Most requirements gen-

erated during the scenario workshop walkthroughs were

requirements on the VANTAGE software system. In con-

trast the VA4 On-stand operations scenario workplace

walkthrough generated 20 requirements on the dispatcher

and 12 on the dispatch coordinator, both actors in on-stand

activities observed by the facilitator and scribe. And yet the

four scenario workshop walkthroughs only generated a

total of six requirements on the dispatcher, in spite of the

presence of dispatchers in each. This indicates that the type

of the scenario walkthrough, rather than dispatcher avail-

ability, influenced the generation of these 20 requirements.

The 12 requirements on the dispatch coordinator were

important in VANTAGE. In spite of earlier modeling of

airport operations with i*, the focus on dispatchers working

for airlines such as BMI rather than the airport services

operator led the project to overlook the dispatch coordi-

nator. As a result the scenario workshop walkthroughs did

not include representatives of the air services operator, and

no requirements on the dispatch coordinator were gener-

ated. In contrast, the walkthrough of one aircraft

turnaround scenario in the workplace took place in front of

the dispatch coordination office. After asking about the

office the facilitator negotiated access to it and asked

questions to the dispatch coordinator about their work,

problems, resources and needs. Some of the 12 require-

ments generated for the dispatch coordinator specified the

need to support effective local working practices, for

example the dispatch coordinator using VANTAGE shall

maintain important information about aircraft in a stats

sheet. Other requirements revealed basic problems with

airport operations that had consequences, not known

beforehand to the requirements process, on environmental

impact. One such problem was a lack of functioning radios

that meant that dispatchers could not communicate effec-

tively during aircraft turnarounds. The inference was that

working radios would improve the efficiency of aircraft

turnarounds and, as a result, contribute to aircraft

Table 4 The total number of requirements documented during each scenario by type

Scenario and walkthrough type | AR FR LFR IR PR RR SR TR UR

VA2 (workshop) 0 17 0 0 8 0 1 0 2

VA3 (workshop) 0 10 0 0 6 0 0 0 0

VA1 (workshop) 1 26 0 0 2 2 0 0 1

VA11 (workshop) 0 10 0 0 1 0 0 0 1

VA4 (workplace) 9 28 0 1 6 1 0 1 13

AR availability, FR functional, LFR look-and-feel, IR inter-operabil-
ity, PR performance, RR reliability, SR safety, TR training, UR
usability

Table 5 Totals of requirements with subject actor, generated per VANTAGE scenario walkthrough

Requirement subject actor | Scenario workshop walkthroughs (VA2 VA3 VA1 VA11) | Scenario workplace walkthrough (VA4)

ATCO 3 1 7 2 0

BCA 0 0 0 0 3

VANTAGE system 14 8 16 4 19

Dispatcher 1 2 2 1 20

Ramp staff 2 1 2 1 1

Support services staff 2 0 0 0 0

Customer services agent 1 0 0 0 1

Dispatch coordinator 0 0 0 0 12

Pilot/aircraft 2 0 1 0 1

Airline 0 0 3 0 0

Airport operations staff 1 3 1 3 0

Passengers/general public 1 0 0 0 2

Stand guidance system 0 1 0 0 0

Other 1 0 0 0 0


movements that would reduce noise and gas emissions. The

facilitator generated requirements such as AR108 dispatchers, the dispatch coordinator and the boarding staff shall have sufficient communication resources to enable two-way communication between these actors at all times.

4.4.2 Requirements themes

A second characteristic was the theme of each requirement.

The scenario workplace walkthroughs generated require-

ments with themes that were not generated during the

scenario workshop walkthroughs. Fourteen requirements

specified how VANTAGE should respond to bad weather

conditions. The walkthrough took place in bad weather

conditions depicted in Fig. 3. During a follow-up interview

the conditions prompted the facilitator to ask how they

affected aircraft turnaround. One dispatcher provided paper

documentation that specified bad weather restrictions on

turnaround equipment use, from which the facilitator

generated requirements such as AR112 the VANTAGE sys-

tem shall support airport operations in all possible adverse

weather conditions that can arise at BCA and FR250 the

VANTAGE system shall recognize the maximum operable

wind speed of 50 knots for stand parking of the A320/1

aircraft.

The scenario workplace walkthrough revealed another

theme—that dispatch work was highly mobile—which led

the facilitator to ask about requirements about mobile

working. One dispatcher reported previous experiences

with mobile tools at the nearby Belfast International Air-

port that revealed an opportunity for mobile computing for

dispatchers at BCA. An example availability requirement

was AR111 the VANTAGE system shall connect with dif-

ferent mobile computing platforms at all airside locations.

Again, dispatchers in the scenario workshop walkthroughs

did not generate requirements on this theme.

4.4.3 Physical features in requirements

A third characteristic was description of geographical and

physical features of the airport in each requirement. We

might expect the scenario workplace walkthrough (in

which the analyst moves about the airport) to include more

references to geographical and physical features than

requirements generated during the workshops. Table 6

reports the totals of requirements that described

geographic

features by scenario. Not surprisingly scenario VA4 On-

stand Operations, which was walked through in the

workplace, generated the largest number of such require-

ments. Some requirements make reference to the

complexities of taxiing and parking that emerges from the

topological layout of the airport, for example FR363 The

VANTAGE system shall include rules which incorporate

the specification of stands and FR369 The BCA tower

ATCO shall have access to information about the optimum

taxi route to allocated stand based on aircraft type and

loading. The walkthrough generated all of the requirements

that described air bridges, dispatch offices and departure

gates, for example AR106 BCA shall have sufficient airside

staff on duty at any one time to stand behind an aircraft

and stop road traffic during its pushback. The scenario

workshop walkthroughs did not generate requirements

describing the airport geography in the same level of detail.

4.5 The scenario workplace walkthrough

The facilitator and scribe adapted the VA4 On-stand

Operations scenario walkthrough to the workplace. It was

difficult to observe one aircraft turnaround from start to end

because more than one aircraft turned around at the same

time. Furthermore event timings and action durations were

unpredictable. Some scenario events related to a single

aircraft were simultaneous and described concurrent

actions that were difficult to observe, for example ramp

staff insert chocks, passenger steps and plug-in the aircraft

to ground power at the same time. Other actions lasted a

long time but revealed little to observe, for example the

crew clean the aircraft. Therefore the facilitator and scribe

walked through selected scenario events with one aircraft

before jumping to other scenario events with another

aircraft.

4.5.1 What triggered requirements generation

During the walkthrough the facilitator recognized more

event triggers to generate new requirements from the

workplace than from scenario events presented on the

MSP. For example requirement FR235 The dispatcher

shall be able to see and hear refueling activities on the

aircraft for which the dispatcher is responsible was gen-

erated in response to observations of a dispatcher’s reaction

Table 6 Totals of requirements that reference geographic features, generated per VANTAGE scenario walkthrough

Requirement geographic reference | Scenario workshop walkthroughs (VA2 VA3 VA1 VA11) | Scenario workplace walkthrough (VA4)

Stand 8 11 3 1 7

Air bridge 0 0 0 0 2

Air side 0 0 0 1 8

Dispatch office 0 0 0 0 2

Departure gate 0 0 0 0 4

Road way 0 0 0 0 1


to the (loud) sound of the refueling truck stopping when

refueling stopped. We identified three possible reasons for

this dominance of workplace triggers. One was the richness

of the triggers in a complex and dynamic workplace such

as an airport. The facilitator was an experienced analyst

who drew on his experience to ask requirements discovery

questions in response to observed events that he knew the

VANTAGE system needed to respond to. A second reason

was the small screen size of the MSP. It was difficult for

the facilitator and scribe to read the MSP scenario at the

same time. Therefore, during the walkthrough, the facili-

tator and scribe developed a workaround. The scribe would

read out recognized events—typically alternative course

events—then the facilitator would investigate these events

as he was able. However, this workaround meant that the

more experienced facilitator could not browse and recog-

nize scenario events directly. A third possible reason was

the mismatch between the large number of alternative

course events generated in the scenario and browsed in the

MSP, and the relatively small number of real-world events

that might conceivably happen in the airport environment.

The effort needed by the scribe to scroll and read alterna-

tive course events in the MSP was greater than that needed

to observe the workplace. Hence the facilitator took the

simpler option during the walkthrough.

4.5.2 How requirements were documented

The procedure with which the facilitator specified

requirements also varied. In response to some events the

facilitator was able to document a requirement at the time

of the scenario event using information gathered from short

interactions with observed stakeholders. For example the

requirement AR105 BCA shall have sufficient airside staff

on duty at any one time not to delay the off-block time of

departing aircraft was generated in response to asking a

dispatcher why she had to stand next to aircraft during

pushback, and what would happen if she did not do it—an

interaction of no more than 20 s.

Other requirements could not be specified during the

scenario event because the event was too short, dangerous

or difficult, so the scribe marked the events to follow-up

when relevant stakeholders were available. Between

observations of scenario events, the facilitator conducted

structured interviews, sometimes lasting as long as 10 min,

with stakeholders to discuss observed events and document

stakeholder requirements. All were audio-recorded in full

using the MSP. For example, an interview with an expe-

rienced dispatcher led to requirement TR76 All dispatchers

who dispatch aircraft that load passengers using the air

bridge shall be trained to use aircraft cockpit instructions

to inform themselves of refueling status… based on
observations of a busy dispatcher entering the aircraft

cockpit whilst embarking passengers. In one case—when

exploring requirements for dispatcher mobile working—

the facilitator prompted a mini-brainstorm, using the PDA

on which the MSP was running, to demonstrate possible

uses.

4.5.3 Scenario walkthrough productivity

The VANTAGE scenario walkthroughs consumed airport

and airline resources including pilots and air traffic con-

trollers. Therefore, we computed the estimates of

stakeholder time needed to generate a requirement in each

scenario workshop walkthrough and the scenario work-

place walkthrough. On average each of the four scenario

workshop walkthroughs involved eight stakeholders, lasted

2.5 h and generated 22 requirements. From this data we

compute that 1.1 requirements were generated per hour of

stakeholder participation. Stakeholder involvement in the

scenario workplace walkthrough was more difficult to

estimate. The walkthrough timetable was reviewed to

reveal that the walkthrough consumed approximately 7.2

person-h, including 5 h of time from the BCA environ-

mental manager who acted as a chaperone for the

walkthrough. The scenario workplace walkthrough gener-

ated 59 requirements. From this data, we compute that 8.2

requirements were generated per hour of stakeholder

involvement. The estimates, although crude, do suggest

that the VANTAGE scenario workplace walkthrough was

more productive in terms of stakeholder time.
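These estimates can be reproduced directly from the figures reported above; the short check below simply restates the arithmetic.

# Workshops: 4 walkthroughs, each ~8 stakeholders for ~2.5 h, generating ~22 requirements.
workshop_rate = (4 * 22) / (4 * 8 * 2.5)   # 88 requirements / 80 stakeholder-hours = 1.1
# Workplace: 59 requirements for roughly 7.2 person-hours of stakeholder time.
workplace_rate = 59 / 7.2                  # ~8.2 requirements per stakeholder-hour
print(round(workshop_rate, 1), round(workplace_rate, 1))   # 1.1 8.2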

5 Walking through scenarios in APOSDLE

We also walked through ART-SCENE scenarios to dis-

cover requirements for APOSDLE, a system to support

work-integrated learning by knowledge workers such as

aeronautic engineers, systems developers and consultants

working for chambers of commerce in Germany. The 4-

year project, funded by the European Commission, inclu-

ded application partners from the aerospace multinational

EADS, a German software development organization

called CNM, a business consultancy called ISN, and IHK,

the Darmstadt Chamber of Commerce.

Again we walked stakeholders through scenarios as part

of the RESCUE scenario-driven requirements process [18].

Prior to the walkthroughs the requirements team had dis-

covered requirements on the new APOSDLE system using

a creativity workshop and pair-wise use case authoring. A

use case model specified 15 core use cases that specified

how the future APOSDLE-enhanced work-integrated

learning should take place at different organizations.

Requirements on APOSDLE were associated with behav-

iour specified in the use cases. However, again, we still


needed APOSDLE stakeholders to discover complete

requirements on APOSDLE using projections of its future

use with scenario walkthroughs.

5.1 The scenario walkthrough schedule

APOSDLE stakeholders prioritized use cases for their

potential impact on work-integrated learning. The work-

shop walkthrough schedule is reported in Table 7. ART-

SCENE was used to generate one scenario for each of the

selected eight use cases. These eight scenarios were walked

through once in a scenario workshop walkthrough that was

attended by application and technology partners from dif-

ferent stakeholder partners. Selected scenarios were then

walked through again using the MSP with scenario work-

place walkthroughs at two application partner sites—ISN

and CNM. Six such scenario workplace walkthroughs took

place. Several scenarios, such as AP24 Use learning event

and AP12 Collaborate, were therefore walked through

three times—once in a workshop and twice in the work-

place at two application partner sites.

Figure 4 shows part of the ART-SCENE scenario for

AP24 Use learning event used in one of the scenario

workshop walkthroughs. Typical events included the

Table 7 The APOSDLE scenario walkthrough schedule

Date | Scenario | Type of walkthrough
30/04/2007 | AP8: monitor work context | Workshop
01/05/2007 | AP24: use learning event | Workshop
01/05/2007 | AP4a: find and contact relevant knowledge workers | Workshop
02/05/2007 | AP9b: store information exchanged | Workshop
02/05/2007 | AP12: collaborate | Workshop
03/05/2007 | AP6: make relevant knowledge artefact relevant to APOSDLE tools | Workshop
04/05/2007 | AP22: trigger learning | Workshop
04/05/2007 | AP23: construct and select learning event | Workshop
10/05/2007 | AP24: use learning event | Workplace at ISN
10/05/2007 | AP12: collaborate | Workplace at ISN
16/05/2007 | AP12: collaborate | Workplace at CNM
16/05/2007 | AP4a: find and contact relevant knowledge workers | Workplace at CNM
16/05/2007 | AP8: monitor work context | Workplace at CNM
16/05/2007 | AP24: use learning event | Workplace at CNM

Fig. 4 One variation of the APOSDLE scenario (AP24 Use Learning Event) in the desktop version of ART-SCENE used during one scenario workshop walkthrough


learning event is shown to the user and the user clicks on

the search refinement button. Each of the eight scenario

workshop walkthroughs took place in a meeting room with

one facilitator, one scribe who controlled ART-SCENE and

a second scribe who provided access to previous creativity

workshop results that were also displayed on a large screen.

The facilitator and stakeholders interacted directly with the

APOSDLE prototype displayed on an electronic white-

board through the touch screen. Each scenario workshop

walkthrough was designed to run for 2–4 h, depending on

the number of events in the scenario. Figure 5 shows a

facilitator interacting with and annotating the software

prototype during requirements discovery, and stakeholders

during the workshop surrounded by design prompts from

creativity workshop outcomes.

The six scenario workplace walkthroughs took place

over 2 days at the sites of two APOSDLE application

partners in Germany and Austria. One analyst, a native

German speaker who had not been present in the scenario

workshop walkthroughs, facilitated the scenario workplace

walkthroughs. A different scribe, also a German speaker,

operated the MSP. The facilitator observed work-based

learning behaviour and asked questions to staff being

observed. All spoken requirements and comments were

recorded in text and audio form in the MSP. Figure 6

shows the scenario workplace walkthrough at the consul-

tancy company ISN in Graz, Austria.

5.2 Generating the scenarios

Each generated scenario normal course specified the

expected order of events during APOSDLE’s generation,

use and management of learning material. For each nor-

mal course event in the eight scenarios, the algorithm

generated one or more candidate alternative courses for

each normal course event. This time alternative courses

were generated using the domain-independent ART-

SCENE algorithm [17] extended with a domain-specific

version that included class hierarchies of abnormal

behaviour and state derived from published learning

Fig. 5 Images from one APOSDLE scenario workshop walkthrough, the left-hand side showing annotation of software prototypes on an electronic whiteboard, the right-hand side showing stakeholders surrounded by outputs from the earlier creativity workshop

Fig. 6 An APOSDLE scenario workplace walkthrough, showing the use of the MSP at consultancy organization ISN


literature. Domain-independent alternative courses inclu-

ded what if the knowledge worker lacks the necessary

knowledge? and what if the expert exhibits some abnor-

mal behaviour? Domain-dependent alternative courses

included what if the user’s environment is inappropriate

for learning? and what if the communication medium used

is inappropriate? Other examples are shown on the right-

hand side of Fig. 4. Each generated scenario specified on

average 13 normal course events and 19 alternative course

events per normal course event. The shortest scenario was

AP6: Make relevant knowledge artefact relevant to

APOSDLE tools with five normal course and a total of 78

alternative course events. The longest was AP24 Use

learning event with 19 normal course and a total of 457

alternative course events in the principal normal course

and four variation scenarios.

5.3 Scenario walkthrough results

All scenario walkthroughs took place as planned. Each of

the eight scenario workshop walkthroughs lasted between 2

and 4 h. Between three and eight technology and end-user

stakeholders attended each workshop, and each included at

least one technology partner and one application partner.

The six scenario workplace walkthroughs lasted a total of

about 10 h not including time taken for tool setup and

coffee breaks. The mobile analysts observed and interacted

with end-users from the application partners rather than the

technology developers.

Results from the scenario walkthroughs are reported in

Table 8. The eight scenario workshop walkthroughs

generated 228 requirements that included 46 requirements

for AP24 Use learning event. The average number of

requirements generated per scenario workshop walk-

through was 28.5. As in VANTAGE these requirements

were documented using ART-SCENE. Scenario work-

place walkthroughs generated 160 requirements, mostly during walkthroughs of scenarios AP12 and AP24.

The scenario workplace walkthroughs at ISN in Graz

discovered 52 requirements for AP12 and 19 require-

ments for AP24. At CNM in Dortmund the scenario

workplace walkthroughs generated 24 requirements for

AP12 and 36 requirements for AP24. Additionally, at

CNM we gathered requirements for two other scenarios,

which were not the focus of the inquiry, so less time was

spent walking through them. The average number of

requirements generated per scenario workplace walk-

through was 26.7.

Requirements were again analyzed by type as reported

in Table 9. Results revealed that, unlike VANTAGE, the

scenario workshop walkthroughs generated more usability-

, performance- and maintainability-type requirements than

did the scenario workplace walkthroughs. In contrast the

scenario workplace walkthroughs generated 18 security-

Table 8 The total number of requirements documented during each scenario, types of event for which requirements were documented, and average number of requirements generated per scenario normal course event, for the APOSDLE project

Scenario and walkthrough type | Total number of requirements documented | Number of requirements on use case and normal course event behaviour | Number of requirements on alternative course behaviour | Average number of requirements per normal course event
AP8: monitor work context (workshop) | 36 | 25 | 11 | 4.0
AP24: use learning event (workshop) | 46 | 32 | 14 | 2.0
AP4a: find and contact relevant knowledge workers (workshop) | 30 | 28 | 2 | 3.0
AP9b: store information exchanged (workshop) | 32 | 20 | 12 | 4.0
AP12: collaborate (workshop) | 20 | 16 | 4 | 1.82
AP6: make relevant knowledge artefact relevant to APOSDLE tools (workshop) | 10 | 10 | 0 | 2.0
AP22: trigger learning (workshop) | 22 | 18 | 4 | 0.81
AP23: construct and select learning event (workshop) | 32 | 27 | 5 | 4.57
AP12: collaborate (workplace at ISN) | 52 | 44 | 8 | 2.73
AP24: use learning event (workplace at ISN) | 19 | 13 | 6 | 1.72
AP12: collaborate (workplace at CNM) | 24 | 21 | 3 | 2.18
AP4a: find and contact relevant knowledge workers (workplace at CNM) | 7 | 5 | 2 | 0.7
AP8: monitor work context (workplace at CNM) | 22 | 20 | 2 | 2.75
AP24: use learning event (workplace at CNM) | 36 | 30 | 6 | 1.89
Total | 388 | 309 | 79 |


type and 11 interoperability-type requirements, in contrast

to low numbers of these requirement type generated in the

scenario workshop walkthroughs. However, no overall

pattern of requirement generation by type emerged.

Although most of the 228 requirements generated in the

scenario workshop walkthroughs were expressed in text

form, the screen capture capability of the electronic

whiteboard enabled the facilitator and stakeholders to

annotate and save images of the software prototype asso-

ciated with these requirements. Fourteen of these 228

(6.1%) APOSDLE requirements were documented in this

manner. Figure 7 shows three examples of these enhanced

requirement descriptions.

The six scenario workplace walkthroughs generated a

total of 58 observations (an average of 9.67 per scenario)

recorded as comments. Of these 58 comments, 31 were

documented for AP12 Collaborate at the CNM site where

observation time was significantly higher than the time

spent interviewing stakeholders during the walkthroughs.

5.4 Qualitative requirements analysis

We investigated the APOSDLE requirements to detect

qualitative differences between requirements based on

requirement subject, theme and inclusion of physical fea-

tures generated with the different types of scenario

walkthrough. To enable this investigation we first analyzed

the requirements to remove duplicates that arose from

walking through the same scenario multiple times with

different stakeholders. Of the 160 requirements generated

during scenario workplace walkthroughs, 22 had already

been specified once before in a walkthrough and 1 had been

specified twice before. Of the 22 duplicated requirements,

12 had been originally specified in scenario workshop

walkthroughs and 10 in scenario workplace walkthroughs.

The removal of these 24 repeating requirements resulted in

a total of 364 unique APOSDLE requirements that were

investigated for their subjects, themes and description of

physical features in the work context.
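The unique-requirement count follows directly from the totals above; a one-line check restates the arithmetic.

total_generated = 228 + 160        # workshop + workplace walkthroughs
repeats = 22 + 1 * 2               # 22 specified once before, 1 specified twice before
unique_requirements = total_generated - repeats
print(unique_requirements)         # 364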

Table 9 The total number of requirements documented during each scenario by scenario walkthrough type

Scenario AR FR LFR IR PR RR SR TR UR BUS LG MR DR

Scenario workshop
walkthroughs

AP8 0 23 0 1 2 1 0 0 4 2 1 2 0

AP24 0 26 1 0 0 0 0 0 15 3 0 0 1

AP4a 0 30 0 0 0 0 0 0 0 0 0 0 0

AP9 0 29 0 0 0 0 0 0 2 0 0 1 0

AP12 0 18 0 1 0 0 0 1 0 0 0 0 0

AP6 0 7 0 0 0 0 0 0 3 0 0 0 0

AP22 0 14 0 0 2 0 0 0 5 0 0 1 0

AP23 0 18 0 0 0 1 0 0 12 0 0 1 0

Scenario workplace

walkthroughs at ISN

AP24 0 17 0 0 0 0 2 0 0 0 0 0 0

AP12 0 28 0 7 0 2 5 1 2 7 0 0 0

Scenario workplace

walkthroughs at CNM

AP12 0 14 0 3 0 0 5 0 0 2 0 0 0

AP4a 0 6 0 0 0 0 0 0 0 1 0 0 0

AP8 0 13 0 0 0 0 6 0 1 1 1 0 0

AP24 0 28 0 1 0 0 0 0 1 6 0 0 0

AR availability, FR functional, LFR look-and-feel, IR inter-operability, PR performance, RR reliability, SR safety, TR training, UR usability, BUS
business goals, LG legal, MR maintainability, DR device

Fig. 7 Three APOSDLE screen shots taken from requirements discovered during the AP12 Collaborate, AP24 Use learning event and AP4a Find and contact relevant knowledge workers scenario workshop walkthroughs


5.4.1 Requirements subjects

The first characteristic was the subject of each requirement.

As in VANTAGE, ART-SCENE mandated that all

requirements were expressed using shall statements with a

common structure [3] that highlighted the subjects of

requirements as the actor upon which the requirement was

specified. Results are reported in Table 10. Most require-

ments were on the APOSDLE system and its users

independent of the type of walkthrough that generated

them (e.g., The APOSDLE system shall generate a learning

goal from the user question). Compared to VANTAGE,

there were fewer differences regarding requirement sub-

jects between scenario workshop walkthroughs and

scenario workplace walkthroughs. The scenario workplace

walkthroughs generated requirements on the learner, col-

laboration transcript and collaboration document not

generated in the scenario workshop walkthroughs.

5.4.2 Requirements themes

A second characteristic was the theme of each requirement

reported in Table 11. There was one main requirement

theme generated during the scenario workplace walk-

throughs—privacy. Although eight requirements with this

theme had been generated in scenario workshop walk-

throughs, the scenario workplace walkthroughs generated

17 additional privacy-type requirements such as the

APOSDLE system shall delete context monitoring history

after a short time. However, compared to the VANTAGE

Table 10 Totals of requirements with different subject actors, generated per APOSDLE scenario walkthrough

Requirements subject actor | Scenario workshop walkthroughs (AP8 AP24 AP4a AP9 AP12 AP6 AP22) | Scenario workplace walkthroughs at ISN (AP12 AP24) | Scenario workplace walkthroughs at CNM (AP12 AP4a AP8 AP24)

APOSDLE system 22 32 18 12 8 7 20 26 8 14 4 18 23

Collaboration participant 0 0 0 3 1 0 0 5 0 1 0 0 0

User 12 13 6 17 8 3 2 8 2 1 0 1 9

Expert 0 1 6 0 1 0 0 0 0 0 1 0 0

Collaboration transcript 0 0 0 0 0 0 0 3 0 0 0 0 0

Collaboration document 0 0 0 0 0 0 0 3 0 0 0 0 0

Learner 0 0 0 0 0 0 0 2 6 0 0 0 0

Customer 0 0 0 0 0 0 0 0 0 1 0 0 0

Administrator 1 0 0 0 0 0 0 0 0 0 0 0 0

Collaboration tool 0 0 0 0 2 0 0 0 0 0 0 0 0

Knowledge engineer 1 0 0 0 0 0 0 0 0 0 0 0 0

Table 11 Totals of requirements by requirements themes, generated per APOSDLE scenario walkthrough

Requirements themes | Scenario workshop walkthroughs (AP8 AP24 AP4a AP9 AP12 AP6 AP22) | Scenario workplace walkthroughs at ISN (AP12 AP24) | Scenario workplace walkthroughs at CNM (AP12 AP4a AP8 AP24)

APOSDLE effect on users 4 3 0 0 0 0 4 8 0 0 1 1 2

APOSDLE effect on current work practices 2 3 0 0 0 0 1 4 0 0 0 1 2

APOSDLE support for mobility 0 2 0 0 0 0 0 3 0 0 0 0 0

APOSDLE user profile 2 0 9 2 1 0 5 0 1 0 1 0 2

Knowledge artefact 1 10 0 19 2 4 0 5 0 0 0 0 0

Context monitoring 14 0 0 0 1 0 1 2 0 1 0 4 0

Privacy issues 2 1 1 3 1 0 0 6 2 4 0 5 0

Create knowledge artefact 1 1 0 3 3 0 1 4 1 7 0 5 1

Availability for collaboration 0 1 4 0 0 0 0 2 7 0 3 3 0

Learning process 4 23 0 0 1 5 9 3 4 0 0 0 24

Collaboration process 0 2 16 3 10 0 0 7 0 5 0 0 0

Other 0 0 0 0 0 1 0 3 1 0 0 0 1

System administration 4 0 0 2 1 0 1 0 0 0 0 0 0


project, there were fewer differences in the themes of

requirements generated in the scenario workshop walk-

throughs and scenario workplace walkthroughs.

5.4.3 Physical features in requirements

The third characteristic was the description of physical

features of the ISN and CNM office in each requirement. In

contrast to the VANTAGE project, we did not discover

requirements that referred to any physical features. One

possible reason is that physical features are less important

for a desktop based learning support system than for an

airport management system.

5.5 Scenario walkthrough productivity

Again we computed the estimates of APOSDLE stakeholder

time needed to generate a requirement in a scenario workshop

walkthrough and a scenario workplace walkthrough. On

average each of the seven workshops involved 3.9 stakeholders

(not including facilitator and scribe), lasted 2.45 h and gen-

erated 28.4 requirements. From this data we compute almost

3.0 requirements were generated per hour of stakeholder par-

ticipation, which was higher than the rate of generation during

the four VANTAGE scenario workshop walkthroughs.
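The per-hour rate quoted above follows from the reported workshop averages; a minimal check:

aposdle_workshop_rate = 28.4 / (3.9 * 2.45)   # requirements per stakeholder-hour
print(round(aposdle_workshop_rate, 1))        # ~3.0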

Calculating stakeholder time spent on the APOSDLE

scenario workplace walkthroughs was again more difficult.

At ISN stakeholders participated in walkthroughs that lasted

a total of 3.8 h. We generated 71 requirements including the

eight duplicate ones. Both analysts interacted with the

stakeholders for 2.8 h—the remainder of the time was spent

observing them. From this data, we computed 13.2 require-

ments were generated per hour of stakeholder participation,

higher than the rate of the VANTAGE scenario workplace

walkthroughs. At CNM we spent 3.2 h (55%) on interactions

with stakeholders, while 2.6 h (45%) were spent on obser-

vations. During the scenario workplace walkthroughs at

CNM 89 requirements were generated (73 without dupli-

cates). We computed that 17.7 requirements were generated

per hour of stakeholder participation, again a rate higher than

the VANTAGE scenario workplace walkthroughs.

Furthermore, on average, the scenario workshop walk-

throughs generated 5.8 requirements per hour of analyst

and scribe participation. Scenario workplace walkthroughs

at ISN generated 8.3 requirements per hour of analyst

participation. Due to the increased observation time at

CNM the scenario workplace walkthrough generated 6.2

requirements per hour.
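To make the rate calculations above easier to follow, the short sketch below (Python, added here purely for illustration) reproduces the reported workshop figures under the assumption that "requirements per hour of participation" means requirements divided by the product of participants and session length; the inputs (3.9 stakeholders, 2.45 h, 28.4 requirements, one facilitator plus one scribe) come from the text.

```python
# Back-of-the-envelope check of the reported walkthrough productivity figures,
# assuming "requirements per hour of participation" means
# requirements / (number of participants * session duration in hours).

def reqs_per_participant_hour(requirements, participants, hours):
    """Requirements generated per hour of participant time."""
    return requirements / (participants * hours)

# Average APOSDLE scenario workshop walkthrough (figures from the text):
# 3.9 stakeholders, 2.45 h, 28.4 requirements.
stakeholder_rate = reqs_per_participant_hour(28.4, 3.9, 2.45)   # ~2.97, "almost 3.0"

# Facilitator and scribe (2 analysts) in the same 2.45 h workshop.
analyst_rate = reqs_per_participant_hour(28.4, 2, 2.45)         # ~5.8

print(f"stakeholder rate: {stakeholder_rate:.1f} req/h")
print(f"analyst rate:     {analyst_rate:.1f} req/h")
```

Under that reading the sketch yields roughly 3.0 requirements per stakeholder hour and 5.8 per analyst hour, matching the figures reported above.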

5.6 The scenario workplace walkthroughs

Most tasks that were walked through in the workplace were

standard office tasks (e.g., checking e-mail) that could be

mapped to scenario normal course events. As a conse-

quence analysts were able to interrupt stakeholders to ask

questions as well as ask follow-up questions after unin-

terruptible tasks (e.g., talking to a customer on the phone).

The facilitator and scribe rotated the roles to increase their

chances of recognizing cues provided by the MSP. Over

time both became familiar with the scenarios, which

reduced time to navigate them.

5.6.1 What triggered requirements generation?

After the scenario workplace walkthroughs the two ana-

lysts reflected that most requirements were triggered by scenario events rather than by

events in the workplace, for two reasons. The first was that,

during APOSDLE, both the analyst and scribe were

equipped with the MSP tool, which allowed them to read

the scenarios at the same time. The second was that the

office environment was simpler, less dynamic and therefore

provided fewer triggers than the airport environment.

5.6.2 How requirements were documented

In contrast to VANTAGE most requirements were docu-

mented in text rather than audio form using the MSP stylus

and keyboard. One reason was that, because the analyst and

scribe were equipped with PDAs, there was no need to

communicate scenario information to the facilitator, which

in turn gave more freedom to the scribe. The less dynamic

and mobile office environment also gave the analysts more

time to type requirements into the MSP, a luxury not

available in previous scenario workplace walkthroughs

with the MSP [19]. During one scenario workplace walk-

through the scribe even replaced the MSP with the desktop

Scenario Presenter running on a notebook computer to

document requirements due to the time available and

chance to sit at a desk.

6 The research questions revisited

The scenario walkthroughs in VANTAGE and APOSDLE

were a success. They led to generation of 147 and 338 new

requirements, respectively. The use of ART-SCENE was

also a success, in that we effectively applied a research

prototype to two challenging requirements problems. We

extended the use of the desktop Scenario Presenter in

facilitated scenario workshop walkthroughs to support

software prototype walkthroughs. The use of the MSP in

VANTAGE is one of the first reported effective uses of

mobile requirements tools on large projects [20]. Simple-

to-use audio recording of spoken requirements overcame

the usability problems reported in [19], whilst giving the

MSP to experienced analysts realized its potential in


different settings. That said, problems remained, such as

difficulties encountered by two people browsing scenario

events with a single MSP. We reviewed the VANTAGE

and APOSDLE results to answer the three research

questions.

6.1 Effect on scenario walkthroughs from a software

prototype?

The answer to question Q1—does a workshop walkthrough

of a scenario supported with a software prototype and

creativity prompts generate more requirements—is a ten-

tative yes based on data from the scenario workshop

walkthroughs. Not only did the APOSDLE scenario

workshop walkthroughs with the software prototype gen-

erate more requirements per scenario—28.5 to 26.7—than

the VANTAGE scenario workshop walkthroughs, but the

APOSDLE scenarios were shorter. In APOSDLE 2.51

requirements were generated per normal course event, as

opposed to 1.05 requirements per VANTAGE scenario.

Fourteen APOSDLE requirements were supported with

annotated screenshots and images not available in VAN-

TAGE due to the absence of a prototype.

The quantitative results provide preliminary evidence

that software prototypes can provide additional recognition

cues with which to generate and specify

new requirements.

However, the answer does need to be interpreted with care

as other variables such as the domain, degrees of stake-

holder participation and expertise, and requirements

specified previously in the process clearly may all have

influenced the result. Threats to validity of the findings are

discussed later.

6.2 Different requirements from scenario workplace

walkthroughs?

The answer to question Q2—does walking through ART-

SCENE scenarios in the workplace lead to generation of

different requirements to workshops—is also a tentative

yes. One scenario workplace walkthrough generated

requirements on VANTAGE actors that the scenario

workshop walkthroughs did not generate requirements on.

It acquired requirements from new VANTAGE stake-

holders not identified during earlier analyses. It acquired

requirements from stakeholders who had attended the

scenario workshop walkthroughs but not specified these

requirements during them. And it generated requirements

of different types on important themes not identified during

the VANTAGE scenario workshop walkthroughs. In

APOSDLE the scenario workplace walkthroughs at two

different sites revealed different, potentially conflicting

requirements that did not emerge clearly during the earlier

scenario workshop walkthroughs. These walkthroughs also

generated important observations not captured during the

scenario workshop walkthroughs related to the require-

ments that were specified.

There are several possible reasons for these results. The

first is the use of scenario workplace walkthroughs to do a

stakeholder analysis—discovering then involving all of the

important actors in VANTAGE. This was true for the

dispatch coordinator role. A second possible reason was

that observing the workplace enabled the facilitator to act

as an apprentice and learn about actors’ work, as supported

in contextual inquiry [6]. Indeed, some periods of the

scenario workplace walkthrough were a form of

ethnographic observation, albeit structured using the sce-

nario in the MSP. This learning then enabled the facilitator

to ask more informed questions during observations and

structured interviews, as well as to infer more correct and

complete VANTAGE requirements. In contrast the facili-

tator and scribe in the scenario workshop walkthroughs

often did not have access to the domain knowledge needed

to complete the specification of requirements because

knowledge about the workplace was not available to them.

A third possible reason—an important one—is that the

workplace provided different event recognition cues to

discover requirements on different themes. Indeed, rather

than to trigger event recognition, the facilitator used the MSP

scenario in the scenario workplace walkthroughs primarily

to generate requirements and requirements-related data in

the context of the observed normal course event. This had

two important advantages. The first is that related

requirements and material could be reviewed during and

between walkthroughs, thus enabling the facilitator to ask

more informed structured interview questions. The second

is that, during post-walkthrough analyses, analysts could

review the requirements and related material in context,

thus providing cues to recall the observed event and more

information with which to infer new requirements.

6.3 More requirements from scenario workplace

walkthroughs?

The answer to question Q3—does walking through ART-

SCENE scenarios in the workplace lead to generation of

more requirements than in workshops—is also a tentative

yes. The one VANTAGE scenario workplace walkthrough

generated a larger number of requirements than any single

scenario workshop walkthrough. All VANTAGE and

APOSDLE scenario workplace walkthroughs were more

productive in terms of stakeholder time than the workshop

equivalent. The repeated APOSDLE scenario workplace

walkthroughs also generated new requirements not

described in the scenario workshop walkthroughs.

Of course repeating walkthroughs of the same scenario

in different workplaces risked the duplication of


requirements that needed significant analyst effort to detect

and remove. Although just over 18% of the APOSDLE

requirements generated during the scenario workplace

walkthroughs were semantic duplicates that needed to be

removed from the requirements specification, over 80% of

the requirements from walking through the same scenario a

second or third time were new to the process, providing

results with which to answer yes to the research question.

There are at least two possible reasons for the greater

productivity of the VANTAGE scenario workplace walk-

through. The first is the role of the facilitators who, in the

workplace, directly inferred and documented more

requirements than in the workshops because of the limited

communication that was possible with stakeholders

engaged in other tasks. The outcome was that the facili-

tators were able to infer more requirements than they were

able to acquire from stakeholders in the workshops. This

has implications for redesigning the scenario workshop

walkthroughs to enable the facilitators to infer and propose

new requirements.

Conversely, a second reason is that the scenario work-

place walkthroughs increased the requirements

communication bandwidth. Whereas stakeholders did not

bring written material to the workshops, the facilitators in

the workplace were able to collect documents such as the

bad weather operation document, as well as observe

workplace artifacts such as the dispatch coordinator’s stats

sheet and take photographs. This material added to the

spoken requirements recorded in the MSP and provided a

richer data corpus that the analyst used to infer larger

numbers of requirements than in the workshops. Again this

raises the need to design scenario workshop walkthroughs

to encourage inference of requirements from different

information sources.

6.4 Threats to validity

We report the pragmatic use of different scenario walk-

through types to solve the VANTAGE and APOSDLE

requirements problems. Our decision not to balance inde-

pendent variables and control dependent ones across a low

number of walkthroughs means that all results need to be

interpreted with care. For example, the effectiveness of the

scenario workplace walkthroughs could also have been

influenced by the design of the walkthroughs, the stake-

holder participation in them, and the reporting of the

results. The VANTAGE and APOSDLE scenario work-

place walkthroughs always occurred after the scenario

workshop walkthroughs, hence facilitator behaviour might

have been informed by domain knowledge obtained in the

earlier workshop walkthroughs. The influence on stake-

holders was less because, in both projects, most

stakeholders observed in the scenario workplace

walkthroughs did not participate in the scenario workshop

walkthroughs. Implicit biases might also have arisen from

the desire of the facilitator and scribe to see the scenario

workplace walkthrough succeed, especially in light of

problems reported in earlier uses [19]. However, the effort

needed to set up and run the scenario walkthroughs under

challenging conditions in the VANTAGE and APOSDLE

projects, we believe, reduced the likelihood of such implicit bias, because the analysts were focused on undertaking the requirements tasks and on making both types of walkthrough succeed as well as possible.

7 Lessons learned

The following lessons were learned about the design and

running of scenario walkthroughs in requirements projects

from our experiences in APOSDLE and VANTAGE.

The most obvious lesson is that mixing and matching

different types of scenario walkthroughs, in our case sce-

nario walkthroughs in facilitated workshops and in the

workplace, generated more requirements on different

actors and about different themes. In simple terms, differ-

ent scenario walkthroughs increased the completeness of

the resulting requirements specification over sole use of one

walkthrough type. A related lesson was to walk through the

same scenarios more than once with different stakeholders.

Although this led to duplicate requirements being specified,

over four in every five requirements generated during the

walkthroughs were original and valid. Some requirements

duplication may be an acceptable price for ensuring more

complete requirements specification.

One unexpected outcome of the VANTAGE scenario

workplace walkthrough was the discovery of one new

stakeholder with important requirements on the new sys-

tem. It provides direct evidence for the effectiveness of

scenario workplace walkthroughs for stakeholder analysis.

Although stakeholder analysis techniques are available

[16], scenario workplace walkthroughs earlier in the

requirements process, using simple scenarios that outline

key events without exploring alternative course events, can

complement existing analysis techniques and validate a

current stakeholder model. Such walkthroughs earlier in

the requirements process can make the scenarios more

complete for later walkthroughs in workshops, as results

showed that scenarios were edited and commented more

frequently in the scenario workplace walkthroughs.

Results also provide lessons for designing scenario

walkthroughs to be more effective. In particular the

VANTAGE scenario workplace walkthrough allowed the

analyst to generate new requirements that stakeholders

later accepted, in strong contrast to scenario workshop

walkthroughs in which the analyst encouraged stakeholders


to generate requirements. The workshop walkthrough

process has been extended to provide periods in which the

analyst can propose speculative new requirements to be

accepted or rejected by the stakeholders present. The sce-

nario workplace walkthroughs in both projects

demonstrated the value of interleaving the walkthrough

with more detailed stakeholder interviews and analysis of

documentation available in the workplace, although this

was not part of the original walkthrough protocol. The

protocol has been changed to allow for documentation

collection, brainstorming and interview sessions, and

guidelines given to stakeholders before scenario workshop

walkthroughs have been extended to encourage stake-

holders to bring relevant documentation to workshops.

Furthermore, results indicate that different types of

scenario walkthrough with different types of stakeholders,

sometimes in different workplaces, can influence the sub-

jects and themes of the generated requirements. We

recommend more a priori design of scenario walkthrough

schedules that takes into account the acquisition of sets of

requirements using information about the workplace and

stakeholders. A requirements framework that structures

requirements by subject and theme can inform the design

of such a schedule.

Results from the APOSDLE scenario workshop walk-

throughs indicated another unexpected lesson. Whilst the

walkthroughs revealed weak evidence that the software

prototype and creativity cues might have increased the

number of requirements specified, one expected outcome

was annotation of the prototype to illustrate requirements

graphically using electronic whiteboards. Such illustrations

can communicate requirements to stakeholders and

designers more effectively, as well as lead to more

requirements generation in walkthroughs.

Another lesson emerges from the productivity results.

Scenario workplace walkthroughs were more efficient in

terms of stakeholder participation time to generate

requirements. If time is short, we recommend running more

scenario workplace walkthroughs rather than scenario

workshop walkthroughs.

One final lesson relates to the scenarios walked through

in both projects. Stakeholders lacked the time needed to

walk through alternative course events automatically gen-

erated by ART-SCENE. Although results do not indicate

that this has led to requirements incompleteness in either project, analysts perhaps need to specify and generate

scenarios with fewer normal course events, to allow more

effective walking through of the normal and alternative

course events.

To conclude, the lessons indicate some comparative

strengths and weaknesses of scenario workshop walk-

throughs and scenario workplace walkthroughs when

supported with different versions of the ART-SCENE

scenario environment. The next section places the strengths

of scenario workplace walkthroughs in a wider context.

8 Related work

Ethnographical methods have been used in requirements

projects to provide an adequate understanding of the cur-

rent work practice to be changed by specified systems.

Several researchers used ethnographical methods to inform

requirements engineering in various domains including air

traffic control [5] and underground control rooms [11]. In

some cases ethnographical methods were combined with

existing requirements techniques, such as viewpoints to

structure the results of an ethnographic study [13]. Viller

and Sommerville [27] reported different uses of ethno-

graphical methods during requirements processes.

Contextual inquiry is an approach influenced by eth-

nography that supports system development. In contrast to

other ethnographic methods, an analyst with a technical

background is in charge of analyzing existing work prac-

tice [7, 29]. Contextual inquiry is based on observation and

the contextual interview, the key activity to gather design

relevant data in the stakeholders’ work environment. The

interview is structured following the principles of contex-

tual inquiry [6]. Contextual inquiry has been successfully

applied in various projects in the software engineering

domain [6]. Holtzblatt [12] concludes that building a

design upon field data was essential for the success of these

projects.

Ethnographical methods and contextual inquiry support

analysts’ understanding of the workplace. However, prob-

lems have been highlighted [13, 22, 27]. Most still use a

paper and pencil-based approach and lack on-site tool

support for guiding on-site analysts and for documenting

the gathered information. Contextual inquiry techniques

are only weakly integrated with existing requirements

methods and tools. Moreover the volume of information

gathered is often unfocused, which makes the information

difficult to use in the requirements process. There is also a

lack of a theoretical structure underpinning the observation

process [22] and due to a lack of focus these approaches are

confined to relatively small-scale environments (e.g., con-

trol rooms) [14]. The introduction of mobile tools for

walking through scenarios in the workplace reported in this

paper was designed to overcome some of these reported

problems.

9 Future research

Future research is in three directions. The first will extend

the model of scenario-based requirements discovery with


new physical tasks such as perceive recognition cues in the

work context and perceive possible design features, and

cognitive tasks such as infer new requirement, which will

relate to models of creativity in requirements engineering.

Secondly, we will then apply the model to redesign the

scenario workshops to support analysts to infer candidate

requirements and propose them to stakeholders. ART-

SCENE will be extended with pattern-based requirements

generation that can recommend outline requirements

automatically. We will also build on existing methods [24]

to develop new walkthrough processes, techniques and

protocols to manage the effective use of scenario proto-

types that provide effective additional recognition cues for

discovering requirements during facilitated workshops.

Whilst MSP audio recording of spoken requirements

overcame earlier usability problems, its use here revealed

new challenges to solve. One is the provision of scenario

event cues to both the facilitator and scribe as we explored

in the reported APOSDLE walkthroughs. Screen size is

dictated by available PDA devices, so one solution is to

synchronize scenario walkthroughs on two devices. Whilst

the scribe navigates the scenario on one device running the

current MSP, a selected subset of scenario events, for

example one normal course event and alternative course

events associated with it, are displayed on the second

device to provide the facilitator with manageable, context-

specific event recognition cues. Analysts could also use this

feature to select between generated alternative course

events to present to the facilitator, to reduce information

overload. One possible further refinement is to use context-

aware devices to filter scenario events dynamically

according to proximity to a location or actor. We look

forward to reporting these outcomes in the near future.

Acknowledgments Work reported in this paper was funded in part
by the UK DTI-funded VANTAGE Phase-1 project and in part by the
EU-funded FP6 027023 APOSDLE project.

References

1. Agentsheets web site: http://agentsheets.com/

2. Alexander IF, Maiden NAM (eds) (2004) Scenarios, stories and

use cases. John Wiley, New York

3. Alexander IF, Stevens R (2002) Writing better requirements.

Addison-Wesley, Reading

4. Baddeley AD (1990) Human memory: theory and practice.

Lawrence Erlbaum Associates, Mahwah

5. Bentley R, Hughes JA, Randall D, Rodden T, Sawyer P, Shapiro

D, Sommerville I (1992) Ethnographically-informed systems

design for air traffic control. In: Proceedings ACM conference on

computer supported cooperative work (CSCW), pp 123–129

6. Beyer H, Holtzblatt K (1998) Contextual design: defining con-

sumer-centered systems. Morgan-Kauffman, San Francisco

7. Blomberg J, Burrell M, Guest G (2002) An ethnographic

approach to design. In: Jacko JA, Sears A (eds) The human–

computer interaction handbook: fundamentals, evolving

technologies and emerging applications. Lawrence Erlbaum

Associates, Mahwah, pp 964–986

8. Carroll JM (2000) Making use: scenario-based design of human–

computer interactions. MIT Press, Cambridge

9. Gottesdiener E (2004) Running a use case/scenario workshop.

In: Alexander I, Maiden NAM (eds) Scenarios, stories, use cases:

through the systems development life-cycle. John Wiley, New

York, pp 81–101

10. Haumer P, Heymans P, Jarke M, Pohl K (1999) Bridging the gap

between past and future in re: a scenario-based approach. In:

Proceedings of the 4th IEEE international symposium on require-

ments engineering. IEEE Computer Society Press, pp 66–73

11. Heath C, Luff P (1992) Crisis management and multimedia

technology in London underground line control rooms. J Comput

Support Cooperative Work 1(1):24–48

12. Holtzblatt K (2004) The role of scenarios in contextual design:

from user observations to work redesign to use cases. In: Alex-

ander IF, Maiden N (eds) Scenarios, stories, use cases: through

the systems development life-cycle. John Wiley & Sons, New

York, pp 179–209

13. Hughes J, King V, Rodden T, Andersen H (1995) The role of

ethnography in interactive systems design. Cooperative Systems

Engineering Group, Lancaster University, Technical report

CSEG/8/1995

14. Hughes J, King V, Rodden T, Andersen H (1994) Moving out

from the control room: ethnography in system design. In: Pro-

ceedings of the ACM conference on computer supported

cooperative work (CSCW), pp 429–439

15. Jones SV, Lynch P, Maiden NAM, Lindstaedt S (2008) Use and

influence of creative ideas and requirements for a work-integrated

learning system. In: Proceedings of the 16th IEEE international conference on requirements engineering. IEEE Computer Society Press

16. Macaulay L (1993) Requirements capture as a cooperative

activity. In: Proceedings of the IEEE international symposium on

requirements engineering. IEEE Computer Science Press,

pp 174–181

17. Maiden NAM (2004) Systematic scenario walkthroughs with

ART-SCENE. In: Alexander I, Maiden NAM (eds) Scenarios,

stories, use cases : through the systems development life-cycle.

John Wiley, New York, pp 161–178

18. Maiden NAM, Jones SV, Manning S, Greenwood J, Renou L

(2004) Model-driven requirements engineering: synchronising

models in an air traffic management case study. In: Proceedings

of CaiSE’2004. LNCS, vol 3084, pp 368–383. Springer, Berlin

19. Maiden NAM, Seyff N, Grunbacher P, Otojare O, Mitteregger K

(2006) Making mobile requirements engineering tools usable and

useful. In: Proceedings of the 14th IEEE international conference

on requirements engineering. IEEE Computer Society Press

20. Maiden NAM, Seyff N, Grunbacher P, Otojare O, Mitteregger K

(2007) Determining stakeholder needs in the workplace. IEEE

Softw 27(2):46–52

21. Mavin A, Maiden NAM (2003) Determining socio-technical

systems requirements: experiences with generating and walking

through scenarios. In: Proceedings of the 11th IEEE international

conference on requirements engineering. IEEE Computer Society

Press

22. Maxwell C, Millard N (1999) Integrating ethnographic field

observations into requirements engineering. http://www.comp.lancs.ac.uk/computing/research/cseg/projects/coherence/workshop/Maxwell.html. Workshop—an industrial approach to work analysis and software design. http://www.comp.lancs.ac.uk/computing/research/cseg/projects/coherence/workshop.html

23. Robertson S, Robertson J (1999) Mastering the requirements

process. Addison-Wesley, Longman, Reading, London

24. Sutcliffe AG (1997) A technique combination approach to

requirements engineering. In: Proceedings of the 3rd international



symposium on requirements engineering. IEEE Computer Society Press

25. Sutcliffe AG, Maiden NAM, Minocha S, Manuel D (1998)

Supporting scenario-based requirements engineering. IEEE Trans

Softw Eng 24(12):1072–1088

26. Uchitel S, Chatley R, Kramer J, Magee J (2004) Fluent-based

animation: exploiting the relationship between goals and sce-

narios for requirements validation. In: Proceedings of the 12th

international IEEE requirements engineering conference. IEEE

Computer Society, pp 208–217

27. Viller S, Sommerville I (1999) Social analysis in the require-

ments engineering process: from ethnography to method. In:

Proceedings of the IEEE international symposium on require-

ments engineering, pp 6–13

28. Weidenhaupt K, Pohl K, Jarke M, Haumer P (1998) Scenario

usage in systems development: a report on current practice. IEEE

Softw 15(2):34–45

29. Whiteside J, Wixon D (1988) Contextualism as a world view for

the reformation of meetings. In: Proceedings of the ACM con-

ference on computer-supported cooperative work (CSCW),

pp 369–376

30. Zachos K, Maiden NAM, Tosar A (2005) Rich media scenarios

for discovering requirements. IEEE Softw 22(5):89–97



By Deborah J. Armstrong

A two-construct taxonomy is used to define the essential elements of
object orientation through analysis of existing literature.

THE QUARKS OF OBJECT-ORIENTED DEVELOPMENT

Even though object-oriented development was introduced in the late 1960s
(beginning with the Simula programming language), OO development has not
yet lived up to its promises. A major stumbling block to reaping the promised
benefits is learning the OO approach (see [6]). One reason that learning OO is
so difficult may be that we do not yet thoroughly understand the fundamental
concepts that define the OO approach. When reviewing the body of work on
OO development, most authors simply suggest a set of concepts that characterize
OO, and move on with their research or discussion. Thus, they are either taking
for granted that the concepts are known or implicitly acknowledging that a uni-
versal set of concepts does not exist.

Several authors, asserting there is no clear definition of the essence of OO, have
called for the development of a consensus [9]. While a few have tried to develop
such a consensus [4], to date a thorough review of the literature and identification
of the fundamental concepts of the OO approach1 has been lacking. The goal of
this article is twofold: to identify and describe the fundamental concepts, or
quarks,2 of object-oriented development, and identify how these concepts fit
together into a coherent scheme.

1. This article focuses on the fundamental concepts that define the OO development approach and not a specific OO methodology such as the Rational Unified Process, or application such as OO programming.

2. A quark is a fundamental particle that represents the smallest known unit of matter. These particles are the basic building blocks for everything in the universe.


Understanding what concepts
characterize OO is of paramount
importance to both practitioners
in the midst of transitioning to the
OO approach and researchers
studying the transition to OO
development. How can we hope
to achieve the productivity gains
promised by the OO development
approach, effectively transition
software developers, or conduct
meaningful research toward these
goals, when we have yet to identify
and understand the basic phe-
nomena?

SAMPLE AND METHOD
To address this question material
related to OO development pub-
lished from 1966–2005 was
reviewed using the keyword search
‘object-oriented development’. A
wide variety of sources (journals,
trade magazines, books, and con-
ference proceedings), viewpoints
(computer science, information
systems) and emphases (program-
ming, methodologies, modeling,
and databases) were reviewed for
the sampling frame.3 The analysis
consisted of reviewing each source
document for the identification of
specific concepts as the OO con-
cepts. For example, Morris, Speier,
and Hoffer [6] list abstraction,
attribute, class, encapsulation,
inheritance, message passing,
method, object, polymorphism,
and relationships as central OO concepts; whereas
Rosson and Alpert [9] list abstraction, class, encapsu-
lation, information hiding, inheritance, instance,
message passing, method, object, object model, and
polymorphism as the OO concepts. Of the 239 sources reviewed, 88 asserted that a specific set of
concepts characterize the OO approach. Those 88
sources are used as the data set in this article. For this
study, both the concepts directly listed and implied
(concepts used in the explanation of other concepts)
from the 88 papers were included as potential fun-
damental concepts.

Once a paper was selected for inclusion in the data

set, the concepts from the 88
sources were recorded. There
were 39 concepts mentioned as
those comprising the OO
approach (such as abstraction
and dynamic binding). Of the
39 concepts, eight were identi-
fied by the majority of the
sources: inheritance, object,
class, encapsulation, method,
message passing, polymor-
phism, and abstraction. A dis-
cussion and definition of the
concepts in the order of their
frequency of listing by the
sources follows (see Table 1 for
the numerical data). Interest-
ingly, many of the remaining
concepts with lower levels of
agreement (such as instance) are
either similar to, or can be sub-
sumed under, the more agreed-
on concepts (for example,
object).

WHAT ARE THE OO QUARKS?
Inheritance. The concept of
inheritance was introduced in
1967 in the Simula program-
ming language and further
developed in the Smalltalk lan-
guage [3]. Inheritance is fre-
quently cited as a new concept
introduced by OO [7], and it
has been suggested that inheri-
tance is the only unique contri-

bution of the OO approach [4].
Inheritance has been defined as a mechanism by

which object implementations can be organized to
share descriptions [11]. Another conceptualization of
inheritance is as a relation between classes that allows
for the definition and implementation of one class to
be based on that of other existing classes [10]. Inher-
itance has also been explained (in association with the
class hierarchy concept) as a mechanism by which
lower levels of the hierarchy contain more specific instances of the abstract concepts at the top of the
hierarchy [6]. Based on these definitions, inheritance
is: a mechanism that allows the data and behavior of one
class to be included in or used as the basis for another
class.
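As a concrete, hedged illustration of this working definition (the article itself gives no code; the language and class names below are illustrative assumptions), the sketch shows the data and behavior of one class being included in and used as the basis for another:

```python
# Minimal sketch of the inheritance definition above: the data and behavior
# of one class (Account) are included in, and used as the basis for, another
# class (SavingsAccount). Class and attribute names are illustrative only.

class Account:
    def __init__(self, owner, balance=0.0):
        self.owner = owner          # data inherited by subclasses
        self.balance = balance

    def deposit(self, amount):      # behavior inherited by subclasses
        self.balance += amount


class SavingsAccount(Account):      # inherits Account's data and behavior
    def __init__(self, owner, balance=0.0, rate=0.02):
        super().__init__(owner, balance)
        self.rate = rate            # adds data specific to the subclass

    def add_interest(self):         # adds behavior specific to the subclass
        self.deposit(self.balance * self.rate)
```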

Object. The concept of an object was also intro-
duced in the Simula programming language as a new
concept.

Table 1. Concept count (n = 88 sources).

Inheritance: 71 (81%); Object: 69 (78%); Class: 62 (71%); Encapsulation: 55 (63%); Method: 50 (57%); Message Passing: 49 (56%); Polymorphism: 47 (53%); Abstraction: 45 (51%); Instantiation: 31 (35%); Attribute: 29 (33%); Information Hiding: 28 (32%); Dynamic Binding: 13 (15%); Relationship: 12 (14%); Interaction: 10 (12%); Class Hierarchy: 9 (10%); Abstract Data Type: 7 (8%); Object-Identity Independence: 6 (7%); Collaboration: 5 (6%); Aggregation: 4 (5%); Association: 4 (5%); Object Model: 4 (5%); Reuse: 3 (3%); Cohesion: 2 (2%); Coupling: 2 (2%); Graphical: 2 (2%); Persistence: 2 (2%); Composition: 1 (1%); Concurrency: 1 (1%); Dynamic Model: 1 (1%); Extensibility: 1 (1%); Framework: 1 (1%); Genericity: 1 (1%); Identifying Objects: 1 (1%); Modularization: 1 (1%); Naturalness: 1 (1%); Safe Referencing: 1 (1%); Typing: 1 (1%); Virtual Procedures: 1 (1%); Visibility: 1 (1%).

3. While extensive efforts were made to gain complete coverage of the OO approach, it is not claimed that the sources reviewed are exhaustive.

The object is both a data carrier and executes

actions [8]. An object has
been defined simply as
something that has state,
behavior, and identity [1],
and as an identifiable item,
either real or abstract, with a
well-defined role in the
problem domain [6]. By far
the most common reference
to an object is as an instance
of a class [1]. Based on these
definitions, an object is: an
individual, identifiable item,
either real or abstract, which contains data about itself
and descriptions of its manipulations of the data.

Class. Also introduced in the Simula programming
language, a class is a set of objects described by the
same declaration [8, 9] and is the basic element of
OO modeling. Some have conceptualized a class as an
encapsulation of data and procedures, which can be
instantiated in a number of
objects [3]. Others have
defined a class as a set of
objects that share a common
structure and common
behavior [1].

A class does several
things: at runtime it pro-
vides a description of how
objects behave in response to
messages; during develop-
ment it provides an interface
for the programmer to inter-
act with the definition of
objects; in a running system
it is a source of new objects
[8]. Based on these defini-
tions, a class is: a description
of the organization and
actions shared by one or more
similar objects.
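A minimal sketch may help separate the two ideas: the class as the shared description, and objects as the identifiable instances created from it. The language and names below are illustrative assumptions, not taken from the article.

```python
# Sketch of the class/object distinction above: the class describes the
# organization (data) and actions (methods) shared by similar objects; each
# object is an identifiable instance holding its own data. Names are illustrative.

class Customer:                      # the description (class)
    def __init__(self, name):
        self.name = name             # organization: data each object carries

    def greeting(self):              # actions shared by all Customer objects
        return f"Hello, {self.name}"

alice = Customer("Alice")            # two distinct, identifiable objects
bob = Customer("Bob")                # instantiated from the same class
assert alice.greeting() != bob.greeting()
```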

Encapsulation. Encapsulation is one of the most
debated concepts in OO development. The concept
of encapsulation has been said to exist prior to the
introduction of OO [7] while others assert it was a
new concept introduced in the Simula programming
language [1]. Still others assert that encapsulation is a
new term for the already existing information-hiding
concept [4, 12].

There are three primary conceptualizations of
encapsulation in the literature. The first conceptual-
ization of encapsulation is as a process used to pack-
age data with the functions that act on the data [11].
The second, most common conceptualization of

encapsulation, is that
encapsulation hides the
details of the object’s
implementation so that
clients access the object
only via its defined
external interface [1,
11]. The third concep-
tualization includes
both of the previous def-

initions and can be summarized as: information about an object and how that information is processed are kept strictly together and separated from everything else [6]. Bringing these conceptualizations together,
encapsulation is: a technique for designing classes and
objects that restricts access to the data and behavior by
defining a limited set of messages that an object of that
class can receive.
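The following sketch (again an illustrative assumption, not from the article) shows the definition in miniature: the object's data sits behind a limited set of messages that callers are expected to use.

```python
# Sketch of the encapsulation definition above: the object's data is kept
# behind a small, defined set of messages (methods); callers are not meant to
# touch the internal state directly. Names are illustrative.

class Thermostat:
    def __init__(self, target=20.0):
        self._target = target            # internal state, conventionally private

    # The limited interface: the only messages this class is designed to receive.
    def set_target(self, celsius):
        if not -30.0 <= celsius <= 40.0:
            raise ValueError("target out of range")
        self._target = celsius

    def target(self):
        return self._target

t = Thermostat()
t.set_target(22.5)                       # allowed: goes through the interface
# t._target = 999                        # possible in Python, but breaks the design
```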

Method. The concept of a procedure as a unit of
software has existed within software development for

a long time [8]. The
concept of a method
(also called a procedure
or operation) as tied
to/or inseparable from
an object emerged from
the Smalltalk program-
ming language [1].
Methods typically involve
accessing, setting, or
manipulating the object’s
data [8]. In most cases, a
discussion of methods is
intertwined with the con-
cept of messages. A
method is the fundamen-
tal element of an object

program. Typically, a method sends messages to other
objects that invoke that object’s methods [9]. Based on
these definitions, a method is: a way to access, set or
manipulate an object’s information.

Message Passing. Some authors state that a message
is merely a procedure call from one function to another
[5]. Others assert that function calls are used to apply
some standard process to data whereas messages
invoke a specific object activity [8]. The distinction
could be summed up as a procedure call takes an
action and message passing makes a request.

Within the group of authors that see message pass-
ing as a distinct concept, there seem to be two fairly
consistent emphases in the definition of message pass-
ing. The first emphasis is on the invocation of a
method. This group sees message passing as a signal
from one object to another that requests the receiving


Table 2. Comparison of OO taxonomies.

Rosson and Alpert [9]:
1. Communicating Objects: Object, Message Passing, Method
2. Abstraction: Information Hiding, Encapsulation, Data Abstraction, Polymorphism
3. Shared Behavior: Inheritance, Class, Instance
4. Problem-Oriented Design: Object Modeling

Henderson-Sellers [4]:
1. Encapsulation, Information Hiding
2. Abstraction, Class, Object
3. Inheritance, Polymorphism

Table 3. OO taxonomy.

Structure construct:
Abstraction: Creating classes to simplify aspects of reality using distinctions inherent to the problem.
Class: A description of the organization and actions shared by one or more similar objects.
Encapsulation: Designing classes and objects to restrict access to the data and behavior by defining a limited set of messages that an object can receive.
Inheritance: The data and behavior of one class is included in or used as the basis for another class.
Object: An individual, identifiable item, either real or abstract, which contains data about itself and the descriptions of its manipulations of the data.

Behavior construct:
Message Passing: An object sends data to another object or asks another object to invoke a method.
Method: A way to access, set, or manipulate an object's information.
Polymorphism: Different classes may respond to the same message and each implement it appropriately.

object to carry out one of its methods [2]. The second
group looks at message passing as objects making
requests for actions and passing information to each
other [6, 8]. Bringing together these definitions, mes-
sage passing is: the process by which an object sends data
to another object or asks the other object to invoke a
method.
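A small sketch (illustrative Python, not from the article) shows the definition in practice: one object sends data to another and asks it to invoke one of its methods.

```python
# Sketch of the message-passing definition above: one object asks another to
# invoke one of its methods (and may pass data with the request), rather than
# calling a free-standing procedure. Names are illustrative.

class Printer:
    def print_document(self, text):      # the method invoked on receipt of the message
        print(f"printing: {text}")

class WordProcessor:
    def __init__(self, printer):
        self.printer = printer

    def send_to_printer(self, text):
        # The WordProcessor object sends a message (with data) to the Printer
        # object, asking it to invoke its print_document method.
        self.printer.print_document(text)

WordProcessor(Printer()).send_to_printer("draft.txt")
```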

Polymorphism. The polymorphism concept was
used in software development prior to the introduc-
tion of OO. The most basic conceptualization of
polymorphism appears to be the ability to hide dif-
ferent implementations behind a common interface
[12]. Some have conceptualized polymorphism as
the ability of different objects to respond to the same
message and invoke different responses [10]. Others
have thought of polymorphism as the ability of dif-
ferent classes to contain different methods of the
same name, which appear to behave the same way in
a given context; yet different objects can respond to
the same message with their own behavior [5, 6].
The literature appears to inconsistently apply the
concept of polymorphism with some likening poly-
morphism to late binding or dynamic binding [2].
Bringing together these conceptualizations, poly-
morphism is defined as: the ability of different classes
to respond to the same message and each implement the
method appropriately.
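The sketch below (illustrative, not from the article) shows the definition directly: two different classes respond to the same message and each implements the method appropriately.

```python
# Sketch of the polymorphism definition above: different classes respond to the
# same message ("area") with their own implementations. Names are illustrative.

import math

class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

# The same message is sent to objects of different classes.
for shape in (Circle(1.0), Rectangle(2.0, 3.0)):
    print(type(shape).__name__, shape.area())
```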

Abstraction. The earliest application of structural abstraction (which encompasses both generalization and aggregation) to programming languages was in the 1950s with symbolic assemblers. Data abstraction is possible in classical development, but it is enforced in the OO approach [9]. Many authors define abstraction in a generic sense as a mechanism that allows us to represent a complex reality in terms of a simplified model so that irrelevant details can be suppressed in order to enhance understanding [4, 5, 12]. Others have conceptualized abstraction as the act of removing certain distinctions between objects so that we can see commonalities [6]. Based on these definitions, abstraction is: the act of creating classes to simplify aspects of reality using distinctions inherent to the problem.
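As an illustrative sketch (not from the article), abstraction in this sense simply means keeping only the distinctions the problem cares about:

```python
# Sketch of the abstraction definition above: only the distinctions that matter
# to the problem (here, lending books) are represented; everything else about a
# real book is deliberately suppressed. Names are illustrative.

class LibraryBook:
    def __init__(self, title, isbn):
        self.title = title        # relevant to lending
        self.isbn = isbn          # relevant to lending
        self.on_loan = False      # relevant to lending
        # paper weight, typeface, shelf position, etc. are irrelevant and omitted

    def check_out(self):
        self.on_loan = True
```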

OO Taxonomy. Recall the dual purposes of this
article: to identify the fundamental concepts, or
quarks, of object-oriented development, and deter-
mine how these concepts fit together into a coherent
scheme. What has been presented are the eight OO
concepts that a quorum of sources cited as concepts
that characterize the OO approach. These concepts
have been discussed and defined within the context of
the literature reviewed. Now that the fundamental


concepts have been identified we can address how the
concepts may fit together to create the OO approach.

In addition to a lack of consensus on the funda-
mental concepts, the software development field lacks
an understanding of how OO concepts can be classi-
fied to characterize the OO approach. There seems to
be an absence in the literature of an OO taxonomy
with the exception of two studies. In the first study,
Rosson and Alpert [9] present a taxonomy in which
they list four constructs of OO: communicating
objects (object, message passing, method), abstraction
(information hiding, encapsulation, data abstraction,
polymorphism), shared behavior (inheritance, class,
instance), and problem-oriented design (object mod-
eling). In the second study, Henderson-Sellers [4] pre-
sents a taxonomy using the idea of the object-oriented
triangle. Each construct is represented by a corner of
the triangle (which unfortunately he does not name).
The first corner of the triangle includes the concepts
of encapsulation and information hiding. The second
corner includes the concepts of abstraction, class, and
object. The third corner includes the concepts of
inheritance and polymorphism.

Looking at the two taxonomies (see Table 2) there
seems to be a large overlap in the concepts included,
but little overlap between the two classifications of the
concepts. While the concepts identified in these tax-
onomies substantiate the OO quarks identified in this
work, they also identify the need for a simplified con-
ceptualization of the OO approach.

In an attempt to reconcile the taxonomies pre-
sented in Table 2 with the information found in this
study, a new taxonomy is proposed composed of the
eight fundamental OO concepts placed into two con-
structs labeled structure and behavior. This taxonomy
uses an OO perspective to classify the quarks of OO
development. The first construct, structure, includes
the abstraction, class, encapsulation, inheritance, and
object concepts. These concepts are focused on the
relationship between classes and objects and the
mechanisms that support the class/object structure. In
essence a class is an abstraction of an object. The
class/object encapsulates data and behavior and inher-
itance allows the encapsulated data and behavior of
one class to be based on an existing class. The second
construct, behavior, includes the message passing,
method, and polymorphism concepts. This construct
is loosely based on the communicating objects con-
struct found in the Rosson and Alpert [9] taxonomy
and is focused on object actions (behavior) within the
system. Message passing is the process in which an
object sends information to another object or asks the
other object to invoke a method. Polymorphism enacts
behavior by allowing different objects to respond to

the same message and implement the appropriate
method for that object. Table 3 presents the taxonomy
discussed.

So how do these two constructs fit together?
Within the OO approach, behavior (the actions and
reactions of the system) is a way of manipulating
structure (objects and/or processes within a system
and their relations). However, behavior must also
support the actions of the system. So this taxonomy
emphasizes the interconnected nature of OO devel-
opment. Compared to the two previous taxonomies,
this taxonomy is simpler (only two constructs) and
uses an OO perspective (structure and behavior) to
frame the concepts. The two constructs are consistent
with the OO approach, and simplify the process of
organizing the OO quarks into a coherent conceptual
scheme. Therefore, rather than complicating the
learning process with a complex conceptual scheme,
this taxonomy will both simplify the learning process
and reinforce the OO approach.

DISCUSSION AND IMPLICATIONS
This study indicates the first step toward mastering
the OO approach by presenting the eight funda-
mental OO concepts (quarks) consistently identified
by a heterogeneous sample of sources from the OO
literature. In addition to identifying and defining
the fundamental OO concepts, this study has pre-
sented a taxonomy for the eight OO quarks by orga-
nizing them into two constructs labeled structure
and behavior. This is the first taxonomy that has
used an OO perspective to classify the quarks of OO
development.

Why has there been no consensus around the fun-
damental concepts that define the OO approach? Per-
haps differences in perspectives (conceptual versus
implementation), emphasis in life cycle (analysis,
design, implementation), and orientation (computer
science versus information systems) provide insur-
mountable obstacles. If there has been no consensus
to date, why is a definitive set of OO quarks and orga-
nizing taxonomy so important? Select any five OO
books off your shelf and you’ll get five different con-
ceptualizations of the fundamental OO concepts.
This is very confusing, especially for a developer learn-
ing the OO approach. Recall that one stumbling
block to obtaining the benefits of OO is learning the
OO approach [6]. One way to decrease the confusion
and ease the learning process is to establish a standard
set of OO quarks. Developing a consensus around the
fundamental concepts that define the OO approach
will aid developer learning by providing a common
language and knowledge base.

By putting these findings into practice, an immediate impact can be felt in both academia and the
software development industry. Both universities and
organizations can use the concepts demonstrated in
this study to design learning materials and techniques
aimed at clarifying these fundamental OO concepts
for the learner. The simplified OO taxonomy pre-
sented here may reinforce OO thinking by helping
learners see how the concepts come together into the
OO approach via the structure and behavior con-
structs. This may aid training programs and the
retraining of developers moving to the object-ori-
ented approach. In addition, an established set of fun-
damental OO concepts within a taxonomy may
enhance the maturity of the OO development disci-
pline through standardization, and increase the porta-
bility of developers across organizations and
environments.

References
1. Booch, G. Object Oriented Analysis and Design with Applications. Ben-

jamin/Cummings, Redwood City, CA, 1994.
2. Byard, C. Object-oriented technology a must for complex systems.

Computer Technology Review 10, 14 (1990), 15–20.
3. Dershem, H.L. and Jipping, M.J. Programming Languages: Structures

and Models. PWS Publishing Company, Boston, MA, 1995.
4. Henderson-Sellers, B. A Book of Object-Oriented Knowledge. Prentice-

Hall, Englewood Cliffs, NJ, 1992.

5. Ledgard, H. The Little Book of Object-Oriented Programming. Prentice-
Hall, Upper Saddle River, NJ, 1996.

6. Morris, M.G., Speier, C., and Hoffer, J.A. An examination of proce-
dural and object-oriented systems analysis methods: Does prior experi-
ence help or hinder performance? Decision Sciences 30, 1 (Winter
1999), 107–136.

7. Page-Jones, M. and Weiss, S. Synthesis: An object-oriented analysis
and design method. American Programmer 2, 7–8 (1989), 64–67.

8. Robson, D. Object-oriented software systems. Byte 6, 8 (Aug. 1981),
74–86.

9. Rosson, M. and Alpert, S.R. The cognitive consequences of object-ori-
ented design. Human Computer Interaction 5, 4 (1990), 345–379.

10. Stefik, M. and Bobrow, D.G. Object-oriented programming: Themes
and variations. The AI Magazine 6, 4 (Winter 1986), 40–62.

11. Wirfs-Brock, R.J. and Johnson, R.E. Surveying current research in
object-oriented design. Commun. ACM 33, 9 (Sept. 1990), 104–124.

12. Yourdon, E., Whitehead, K., Thomman, J., Oppel, K. and Never-
mann, P. Mainstream Objects: An Analysis and Design Approach for
Business. Yourdon Press, Upper Saddle River, NJ, 1995.

Deborah J. Armstrong (darmstrong@walton.uark.edu)
is an assistant professor in the Information Systems Department in the
Sam M. Walton College of Business at the University of Arkansas.

Permission to make digital or hard copies of all or part of this work for personal or class-
room use is granted without fee provided that copies are not made or distributed for
profit or commercial advantage and that copies bear this notice and the full citation on
the first page. To copy otherwise, to republish, to post on servers or to redistribute to
lists, requires prior specific permission and/or a fee.

© 2006 ACM 0001-0782/06/0200 $5.00


Architectural Decisions as Reusable Design Assets
Olaf Zimmermann, IBM Research–Zurich

// A novel decision-modeling framework for service-oriented architecture supports the evolution of architectural decisions from documentation artifacts to design guides. //

SOFTWARE ARCHITECTS MAKE many decisions when creating designs. The importance of getting key architectural decisions right is well documented.1–3 However, it can be difficult to generalize what the key decisions are, let alone when and how to make them. In the past, architectural decisions have been characterized as the subset of design decisions that are hard to make4 and costly to change.5

To help clarify these issues, the following definition adds several qualification heuristics:6

Architectural decisions capture key design issues and the rationale behind chosen solutions. They are conscious design decisions concerning a software-intensive system as a whole or one or more of its core components and connectors in any given view. The outcome of architectural decisions influences the system's nonfunctional characteristics including its software quality attributes.

According to this definition, choosing a programming language, architectural pattern, application container technology, or middleware asset are all architectural decisions. For instance, integration patterns such as Broker describe the many forces confronting distributed systems, including location independence and networking issues.7 These forces qualify as decision drivers, so adding Broker to an architecture is an architectural decision that should be justified.

State-of-the-art software engineering methods, such as the IBM Unified Method Framework, call for architectural decision logs to document and justify key decisions in a single place.8 The logs help preserve design integrity in allocating functionality to system components. They support an evolving system by ensuring that the architecture is extensible. They also provide a reference for new people joining a project to avoid reconsideration of issues already decided.

The logs capture architectural decisions after the fact. Creating such logs is a documentation activity with many long-term but few short-term benefits.9
If we relax the assumption of documen-
tation rigor on a particular project and
assume instead that multiple projects in
an application genre follow the same
architectural style—that is, share the
same principles and patterns—we can
consider the option of upgrading archi-
tectural decisions from documentation
artifacts to design guides. These guides
can help architects working in a par-
ticular application genre and architec-
tural style understand decision-making
needs and solution options on the basis
of peer knowledge applied successfully
in similar situations.

In this way, recurring architectural
decisions become reusable assets, just
as methods and patterns are. This gives
rise to novel usage scenarios. For in-
stance, recurring issues can serve as re-
view checklists, help prioritize design
and development work items, and im-
prove communication between enter-
prise and project architects.

SOA Decision-Modeling
Framework
The first step in giving recurring architectural decisions a guiding role during design is to effectively capture and generalize related project experience. This is a knowledge engineering activity.

Service-Oriented Architecture (SOA) Decision Modeling (SOAD) is a knowledge management framework that supports this activity.10 SOAD provides a technique to systematically identify the decisions that recur when applying the SOA style in a particular genre, such as enterprise applications. SOAD enhances existing metamodels and templates,8,11 specifically by distinguishing decisions required from decisions made. It establishes a multilevel knowledge organization that separates platform-independent from platform-specific decisions. On the conceptual level, the design alternatives reference architectural patterns, such as those defined by Martin Fowler12 and others.7,13,14

The SOAD framework lets knowledge engineers and software architects manage decision dependencies, so they can check model consistency and prune irrelevant decisions. A managed-issue list guides the decision-making process. Supported by the framework, architects can update design artifacts according to decisions made by injecting decision outcomes into model transformations.6

In support of reuse, the SOAD metamodel defines two model types:

• guidance models to identify decisions required and
• decision models to log decisions made.

Figure 1 shows the relations and internal structures of these model types.6

A guidance model is a reusable asset containing knowledge about architectural decisions required when applying an architectural style in a particular application genre. The model is based on knowledge captured from already-completed projects that employed the architectural style in that genre. As Figure 1 shows, an issue informs the architect that a particular design problem exists and requires an architectural decision. Issues present decision driver types, such as quality attributes, and reference alternative potential solutions along with their advantages (pros), disadvantages (cons), and known uses in previous applications. A knowledge engineer documents the issues and alternatives, writing in the future tense and a tone that a technical mentor would choose in a personal conversation.

The guidance model feeds project-specific architectural decision models in a tailoring step that might involve deleting irrelevant issues, enhancing relevant ones, or adding new issues. The decision model is an architecture documentation artifact that contains knowledge not only about architectural decisions required but also about architectural decisions made. An outcome is a record (log) of a decision actually made on a project, along with its justification. In SOAD, outcomes represent a form of design workshop minutes that software architects capture in the present or past tense.

A decision model can reuse one or more guidance models. It can feed information about decisions made back to the guidance model after project closure via asset harvesting activities that might include informal or formal lessons-learned reviews.
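The split between guidance models (decisions required) and decision models (decisions made), and the tailoring step between them, can be made concrete with a small data-model sketch. The following Python is purely illustrative and is not the SOAD tooling; the class and attribute names are my assumptions, chosen to mirror the metamodel attributes named above (issues, decision driver types, alternatives with pros, cons, and known uses, recommendations, and outcomes with justifications).

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Alternative:
    # A potential solution, with the attributes the guidance model records.
    name: str
    pros: List[str] = field(default_factory=list)
    cons: List[str] = field(default_factory=list)
    known_uses: List[str] = field(default_factory=list)

@dataclass
class Issue:
    # A decision required: problem statement, drivers, candidate solutions.
    name: str
    problem_statement: str
    decision_drivers: List[str] = field(default_factory=list)
    alternatives: List[Alternative] = field(default_factory=list)
    recommendation: Optional[str] = None

@dataclass
class Outcome:
    # A decision made on a concrete project, with its justification.
    issue_name: str
    chosen_alternative: str
    justification: str

@dataclass
class GuidanceModel:
    # Reusable asset: decisions required for a style in a genre.
    style: str
    issues: List[Issue] = field(default_factory=list)

@dataclass
class DecisionModel:
    # Project artifact: open and resolved issues plus logged outcomes.
    project: str
    issues: List[Issue] = field(default_factory=list)
    outcomes: List[Outcome] = field(default_factory=list)

def tailor(guidance: GuidanceModel, project: str,
           relevant_issue_names: List[str]) -> DecisionModel:
    # Tailoring step: keep only the issues relevant to this project.
    kept = [i for i in guidance.issues if i.name in relevant_issue_names]
    return DecisionModel(project=project, issues=kept)

Asset harvesting would run in the opposite direction, folding outcomes that recur across projects back into the next version of the guidance model.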

In SOA design, for instance, an insurance company's business process model might state that back-end systems must implement and integrate three business activities and corresponding service operations: customer inquiry, claim check, and risk assessment. The architect must select an integration style for this purpose, such as one of the four alternative patterns that Gregor Hohpe and Bobby Woolf identified for this issue: File Transfer, Shared Database, RPC, or Messaging.13 The architect must also select an integration technology, such as HTTP and Java Message Service (JMS), that lets the business activities interact with other systems. A problem statement ("Which technology will be used to let the business activities and service operations in the business process communicate with other components, such as legacy systems?") and decision drivers ("interoperability, reliability, and tool support") are the same for all three service operations.

FIGURE 1. The Service-Oriented Architecture (SOA) Decision Modeling (SOAD) framework. The SOAD metamodel is instantiated into a guidance model that identifies the decisions required for a particular architectural style, such as SOA. A guidance model (a reusable asset) contains issues (decisions required) with their decision driver types and alternatives (potential solutions), each with pros, cons, known uses, and a recommendation. Architects can tailor the guidance model to create an initial decision model for a project; the decision model (a project artifact) records open and resolved issues, considered alternatives, and outcomes (decisions made) with chosen alternatives and justifications, and the decision log can be harvested for the next version of the guidance model.

Project-specific decision outcomes, such as the chosen alternative and its justification, depend on each operation's individual requirements. For example, "For customer inquiry, we selected RPC and HTTP because Java and C# components must be integrated in a simple and interoperable manner, and we value the available Web services tool support." Or, "For risk assessment, we selected Messaging and JMS because some of the involved back-end systems are known to have poor availability and we cannot afford to lose messages."
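Expressed against such a structure, the two example outcomes could be logged roughly as follows. The field names and the exact wording are assumptions that paraphrase the example above; this is a sketch of a decision-log entry, not an artifact from the cited case studies.

# Illustrative decision-log entries for the insurance example (sketch only).
decision_log = [
    {
        "issue": "Integration style and technology",
        "scope": "customer inquiry",
        "chosen_alternative": "RPC over HTTP (Web services)",
        "justification": "Java and C# components must be integrated in a "
                         "simple, interoperable manner; Web services tool "
                         "support is valued.",
    },
    {
        "issue": "Integration style and technology",
        "scope": "risk assessment",
        "chosen_alternative": "Messaging over JMS",
        "justification": "Some involved back-end systems have poor "
                         "availability and messages must not be lost.",
    },
]

for entry in decision_log:
    print(entry["scope"], "->", entry["chosen_alternative"])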

From General Issues to a SOA Guidance Model
I will use the insurance company example to generalize and extend the two decisions required—that is, the integration style and integration technology issues.

The first step is to add general issues that occur in layered client-server architectures to a generic component-and-connector diagram. In Figure 2, the components and connectors are generalizations of service consumers (clients) and providers (servers) in a layered system. For instance, layer n might be instantiated into the presentation, business-logic, and persistence layers of a service-oriented enterprise application.12 In such an instantiation, the three business activities and service operations from the insurance example (customer inquiry, claim check, and risk assessment) are components in the architecture's business-logic layer. Every question fragment (for example, in Figure 2, "Interface signature?") then suggests a general issue that has to be investigated when refining the design into a concrete architecture.

The second step is to combine the general issues with SOA patterns. Several authors have described such SOA patterns—for example, Uwe Zdun and his colleagues.14 The SOAD framework uses the following SOA definition:6

From an architecture design perspective, SOA introduces a Service Consumer (requestor), a Service Provider, and a Service Contract. These patterns promote the architectural principles of modularity and platform transparency. A composite architectural pattern, ESB (Enterprise Service Bus), governs the service consumer-provider interactions and physical distribution in support of principles such as protocol transparency and format transparency. The Service Composition pattern organizes the processing logic, adhering to the principles of logical layering and flow independence. The Service Registry pattern defines how service providers are looked up; related principles are location transparency and service virtualization.

Instantiating the generic client-server components into a functional architecture overview, Figure 3 illustrates how these patterns and their building blocks interact in a SOA.6

FIGURE 2. General issues in generic component-and-connector architectures. The diagram shows layers n + 1, n, and n – 1 with components, connectors, utilities, and upstream (provider) and downstream (consumer) request-reply interfaces. General issues (decisions required) attached to these design model elements include "Interface signature?", "Interface usage?", "Interface QoS?", "Internal layer structure?", "Component interactions?", "Component life-cycle management?", "Layer activity logging?", "Layer access control?", "Overall layer organization?", "State?", "Host?", and error handling. Each component and connector yields concrete issues derived from general issues. Transitions to next realization levels include, for example, conceptual to specified to implementation component models.

According to the figure, the essence of the SOA style is the decoupling of service consumer and service provider via the service contract, ESB messaging, and the service registry. The ESB pattern comprises three other patterns: Mediator, Router, and Adapter. To separate platform-independent from platform-specific design, this patterns-based characterization of SOA omits Web services or other technologies.

Combining the general issues from Figure 2 with the SOA patterns from Figure 3 leads to concrete recurring issues. Identifying issues and alternatives this way allows knowledge engineers to harvest decision drivers, pros, cons, and recommendations from project experience with the patterns.
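One way to picture this combination step is as a pruned cross-product: each SOA pattern instantiates the general issues that apply to its building blocks, and a knowledge engineer then discards combinations that make no sense and enriches the rest with drivers, pros, cons, and recommendations. The pairing below is a minimal sketch under my own assumptions about which issues and patterns to combine; it is not the published SOA guidance model.

from itertools import product

# General issues from a generic component-and-connector view (see Figure 2).
general_issues = ["Interface signature?", "Interface usage?",
                  "Component interactions?", "Component life-cycle management?"]

# SOA patterns that instantiate the generic components (see Figure 3).
soa_patterns = ["Service Contract", "ESB", "Service Composition",
                "Service Registry"]

# Derive candidate concrete issues; a knowledge engineer would prune and
# enrich this list with drivers, pros, cons, and recommendations.
concrete_issues = ["{}: {}".format(p, i) for p, i in product(soa_patterns,
                                                             general_issues)]

for issue in concrete_issues[:4]:
    print(issue)  # e.g. "Service Contract: Interface signature?"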

Executive Decisions
Assuming SOA is the preferred architectural style, which is an executive decision in its own right, the selection of a particular SOA reference architecture is an executive-level decision. It requires agreement on terminology, such as layer and component names, and identification of relevant pattern languages. Architectural principles—for example, to prefer open source assets or certain software vendors and server infrastructures—might also take the form of executive decisions.

The corresponding general issue in Figure 2 is "Overall layer organization?"

Conceptual, Platform-Independent Design Issues
A service designer must decide on the granularity of service contracts in terms of operation signatures for request and response message parameters. These signatures specify the service's invocation syntax as well as the message payload structures. The issue deals with service contract design. The service contract, a SOA pattern according to Figure 3, is the SOA instantiation of an interface; hence, the general issue called "Interface signature?" applies in this design context, according to Figure 2.
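To make the granularity question tangible, the sketch below contrasts a fine-grained and a coarse-grained (document-style) contract for the same customer-inquiry capability. The operation names and payload fields are invented for illustration; they are not taken from the article or from any real insurance system.

from dataclasses import dataclass, field
from typing import List, Protocol

@dataclass
class CustomerInquiryRequest:
    customer_id: str
    include_policies: bool = False

@dataclass
class CustomerInquiryResponse:
    name: str
    policies: List[str] = field(default_factory=list)

class FineGrainedCustomerService(Protocol):
    # Many small operations: flexible, but chatty across the ESB.
    def get_customer_name(self, customer_id: str) -> str: ...
    def get_customer_policies(self, customer_id: str) -> List[str]: ...

class CoarseGrainedCustomerService(Protocol):
    # One document-style operation: fewer round trips, larger payloads.
    def customer_inquiry(
            self, request: CustomerInquiryRequest) -> CustomerInquiryResponse: ...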

The detailed ESB design and configuration trigger another set of issues. Architects must select message exchange patterns, such as one way and request-reply (asynchronous versus synchronous communication). They must also detail usage of the Router, Mediator, and Adapter patterns, describing how to maneuver messages from service consumers to service providers (Router), how to transform message content while it's transported on the ESB (Mediator), and how to integrate non-SOA systems and components (Adapter).13
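A minimal, purely illustrative sketch of how the three ESB building blocks divide this work is shown below; the message shape, the routing rule, and the transformation are assumptions made for the example.

from typing import Callable, Dict

Message = Dict[str, str]  # assumed shape: {"operation": ..., "payload": ...}

def mediator(msg: Message) -> Message:
    # Mediator: transform message content while it travels on the bus.
    return {**msg, "payload": msg["payload"].upper()}

def legacy_adapter(msg: Message) -> str:
    # Adapter: bridge to a non-SOA back-end system's native call.
    return "LEGACY_CALL({}, {})".format(msg["operation"], msg["payload"])

def router(msg: Message, endpoints: Dict[str, Callable[[Message], str]]) -> str:
    # Router: maneuver the message to the right provider endpoint.
    return endpoints[msg["operation"]](msg)

endpoints = {"riskAssessment": lambda m: legacy_adapter(mediator(m))}
print(router({"operation": "riskAssessment", "payload": "claim 4711"}, endpoints))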

Architects following the SOA style must also refine the service composition design (if they select this SOA pattern). The choice of a central Process Manager,13 such as a workflow engine (as opposed to distributed state management in individual applications or components), is an important related architectural decision regarding the internal structure of the business-logic layer. Design time versus runtime registry lookup is an example design issue regarding service registries. "Component life-cycle management?" is the related general issue (see Figure 2).

FIGURE 3. SOA patterns and their collaborations and functional decomposition. Service consumers and providers communicate via the ESB pattern, which comprises Router (routeMessage), Mediator (accessMessage, transformMessage), and Adapter building blocks; the Adapter integrates back-end systems. The Service Consumer/Provider/Contract pattern exposes a service contract with a functional interface (operation1 ... operationN), service semantics, and qualities of service. The Service Composition pattern organizes business processes (workflows) and business (process) activities via a process manager. The Service Registry pattern lets providers publish (publishProvider) and consumers look up (lookupProvider) the service contracts and providers that are available to service consumers.


Platform-Related Design Issues
None of the conceptual design issues deals with technology standards or their implementations. However, architects that select one or more SOA patterns must also resolve such platform-related issues—for example, selecting and profiling implementation technologies such as WS* Web services for the integration technology. Once the technologies have been chosen, the architect must select and configure implementation platforms. Many SOA patterns are implemented in commercial and open source middleware. The architect must decide whether to procure such middleware and, if so, how to install and configure it.

Reusable Design Asset: SOA Guidance Model
All the SOA design issues I've described qualify as architectural decisions according to the definition presented in the introduction. For instance, a service's operation signature influences quality attributes such as performance and interoperability. Moreover, these issues recur. Whenever a project applies SOA patterns, it must resolve the corresponding issues one or more times. Knowledge reuse is therefore desirable.

I've compiled 500 such recurring issues in a SOA guidance model.6,10

SOAD Case Study Reports
Practicing architects have applied SOAD and the SOA guidance model successfully to more than 10 projects. These projects dealt with a pension plan management application for a European country's social security agency, customer- and order-management solutions for a telecommunications company, and business-to-business applications for a multichannel retailer. The case studies confirmed that many issues recur. Participating architects assessed the knowledge in the SOA guidance model to be both relevant and actionable. They reported improved speed and quality in design activities and general appreciation for the SOAD vision and approach.6,9

Feedback for the SOAD Framework
During the case study projects, I interacted with several hundred architects to obtain their feedback regarding the value and usability of SOAD. Only one of them disagreed openly with the fundamental SOAD hypothesis that architectural decisions recur when the same architectural style is employed on multiple projects in an application genre, and this objection turned out to be a misunderstanding. SOAD doesn't claim that a decision always has the same outcome; it claims only that the issue, expressing the need for a decision, recurs.

Case study participants saw the SOAD metamodel's attributes as intuitively understandable, conveying useful and sufficient information to help make key decisions. They suggested a few additional attributes. They also suggested different ways of structuring the guidance models—including the organizational dimensions defined by enterprise architecture frameworks.

Participants saw decision-dependency management as an important advantage of decision modeling because managing dependencies in text-based decision logs is difficult. They also pointed out that mature design methods exist already and that any additional method must align with these. They saw SOAD as a supporting asset—a decision-making technique embedded in a general-purpose method—rather than a stand-alone method.

Feedback for the SOA Guidance Model
Case study participants appreciated the guidance model's content and level of detail. They saw it as appropriate in terms of being not obvious, relevant to SOA industry projects, and clearly documented.

There was some confusion regarding proactive versus retrospective decision modeling. One user simply copied the issue descriptions and recommendation attributes from the guidance model to outcome justifications in the decision log. This provoked negative comments from a senior architect in a team-internal technical quality assurance review. In conclusion, the expectations regarding the use of SOAD must be managed. SOAD doesn't intend to make architectural thinking obsolete.

Usage Scenarios and Discussion
SOAD is a decision-centric approach to guiding design work. One of its benefits is that the target audience, software architects, already knows the core concept of architectural decisions from a different usage scenario—namely, architecture documentation. This simplifies SOAD applications to other scenarios:

• IT users could maintain control over their application landscape by asking suppliers to deliver a standardized decision log along with their software solution or product. Users could structure the decision logs according to the SOAD metamodel and populate them from a shared guidance model.

• Companies that develop multiple software-intensive products or product lines could ask their enterprise architects to create a company-wide guidance model. Method and tool groups could support guidance modeling activities by adopting a company-specific SOAD metamodel. This approach would shorten time to market and help preserve architectural consistency across products.

• Software vendors with complex portfolios could reduce training, customization, and support efforts by sharing technical knowledge in guidance models that are annotated with best-practice recommendations.

• In professional services, communities of practice that value explicit knowledge management and reusable assets could create guidance models to support a shift from labor-based to asset-based delivery models (strategic reuse).

• Trainers could use guidance models as a systematic way of teaching patterns and technology best practices.

• Analysts and auditors who want to evaluate middleware and enterprise applications in a repeatable, efficient way could base standardized, domain-specific questionnaires on recurring design issues. They could model these issues according to the SOAD metamodel.

SOAD assumes that many issues recur. If they don't, a guidance-model asset won't provide sufficient value to justify its creation. If multiple projects employ the same architectural style, the assumption that issues recur will likely hold. However, using SOAD to describe the issues and alternatives involves a commitment to knowledge engineering. A guidance model must meet higher editorial standards than project-specific decision logs, so a decision to create such a model must support a knowledge management strategy. It needs a funding model as well as a review, approval, and maintenance process.

My results over the course of three years' experience with SOAD showed that, on average, knowledge engineers can fully model one issue in one person-day. Architects can already benefit from incompletely modeled knowledge, such as issue checklists articulating problem statements in question form. Moreover, tools can partially automate asset harvesting—for instance, mining tools extracting architectural knowledge from project artifacts.

From a tool-design perspective, the amount of information displayed and the context-specific filtering and ordering capabilities are key success factors. Architects typically spend much of their time communicating with external and internal stakeholders, so they might not be willing to read a guidance model end to end (although some of my colleagues have done just that). Tools can trim the guidance model down to the issues and alternatives that are relevant in a given design context, first during the tailoring step and then throughout the project. The SOAD metamodel supports such tool development, for example, by giving issues a scope attribute and by calling out the project phase in which an issue typically is resolved.

SOAD was originally created to support enterprise application and SOA design, but the use of guidance models as reusable assets also applies to other application genres and architectural styles. It supports application scenarios such as education, knowledge exchange, design method, and governance. Next steps for its development include extending it to other business and technology domains as well as target audiences, such as business people in addition to software architects. Other plans address guidance modeling challenges such as knowledge visualization and maintenance.15

By promoting the reuse of architectural knowledge in the form of guidance models that compile recurring issues and options, SOAD lets architects share best practices in a problem-solution context. We may learn best from mistakes, but who said all the mistakes must be our own?

References
1. L. Bass, P. Clements, and R. Kazman, Software Architecture in Practice, 2nd ed., Addison-Wesley, 2003.
2. N. Rozanski and E. Woods, Software Systems Architecture: Working with Stakeholders Using Viewpoints and Perspectives, Addison-Wesley, 2005.
3. P. Eeles and P. Cripps, The Process of Software Architecting, Addison-Wesley, 2010.
4. M. Fowler, "Who Needs an Architect?" IEEE Software, vol. 20, no. 5, 2003, pp. 2–4.
5. G. Booch, internal conference presentation to IBM Academy of Technology, 27 Apr. 2009.
6. O. Zimmermann, "An Architectural Decision Modeling Framework for Service-Oriented Architecture Design," PhD thesis, Univ. of Stuttgart, 2009.
7. F. Buschmann, K. Henney, and D. Schmidt, Pattern-Oriented Software Architecture, Vol. 4: A Pattern Language for Distributed Computing, Wiley, 2007.
8. IBM Unified Method Framework, work product description (ARC 0513), IBM, 2009.
9. M. Ali Babar et al., eds., Software Architecture Knowledge Management: Theory and Practice, Springer, 2009.
10. O. Zimmermann et al., "Managing Architectural Decision Models with Dependency Relations, Integrity Constraints, and Production Rules," J. Systems and Software, vol. 82, no. 8, 2009, pp. 1246–1267.
11. J. Tyree and A. Ackerman, "Architecture Decisions: Demystifying Architecture," IEEE Software, vol. 22, no. 2, 2005, pp. 19–27.
12. M. Fowler, Patterns of Enterprise Application Architecture, Addison-Wesley, 2003.
13. G. Hohpe and B. Woolf, Enterprise Integration Patterns, Addison-Wesley, 2004.
14. U. Zdun, C. Hentrich, and S. Dustdar, "Modeling Process-Driven and Service-Oriented Architectures Using Patterns and Pattern Primitives," ACM Trans. Web, vol. 1, no. 3, 2007, article no. 3; doi:10.1145/1281480.1281484.
15. M. Nowak, C. Pautasso, and O. Zimmermann, "Architectural Decision Modeling with Reuse: Challenges and Opportunities," Proc. 2010 ICSE Workshop Sharing and Reusing Architectural Knowledge (SHARK 10), ACM Press, 2010, pp. 13–20.

ABOUT THE AUTHOR

OLAF ZIMMERMANN is a research staff member at IBM Research–Zurich. His research interests focus on application and integration architecture, SOA design, architectural decisions, and frameworks for service and knowledge management. Zimmermann has a PhD in computer science from the University of Stuttgart. He's an Open Group Distinguished IT Architect, IBM Executive IT Architect, and author of Perspectives on Web Services (Springer, 2003). Contact him at olz@zurich.ibm.com.



IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 5, September 2010
ISSN (Online): 1694-0814, www.IJCSI.org

Software Architecture, Scenario and Patterns
R.V. Siva Balan1, Dr. M. Punithavalli2

1Department of Computer Applications, Narayanaguru College of Engineering, Kanyakumari, India.
2Director & Head, Department of Computer Applications, Sri Ramakrishna College of Arts and Science for Women, Coimbatore, India.

Abstract

Studies of software engineering projects [22, 23] reveal that a large number of usability-related change requests are made after deployment. Fixing usability problems during the later stages of development often proves to be costly, since many of the necessary changes require changes to the system that cannot be easily accommodated by its software architectural design. These costs are high for practitioners and prevent the developers from finding all the usability requirements, resulting in systems with less than ideal usability. The successful development of a usable software system therefore must include creating a software architecture that supports the optimal level of usability. Unfortunately, no architectural design usability assessment techniques exist. To support software architects in creating a software architecture that supports usability, we practice a scenario-based assessment technique that leads to successful application of pattern specification. Explicit evaluation of usability during architectural design may reduce the risk of building a system that fails to meet its usability requirements and may prevent the high costs of adaptive maintenance activities once the system has been implemented.

Keywords: use-case, patterns, usability, scenarios, pattern specifications

1. Introduction

Scenarios have been gaining increasing popularity in both Human Computer Interaction (HCI) and Software Engineering (SE) as 'engines of design'. In HCI, scenarios are used to focus discussion on usability [3] issues. They support discussion to gain an understanding of the goals of the design and help to set overall design objectives. In contrast, scenarios play a more direct role in SE, particularly as a front end to object-oriented design. Use case driven approaches have proved useful for requirements elicitation and validation. The aim of use cases in Requirements Engineering is to capture systems requirements. This is done through the exploration and selection of system user interactions to provide the needed facilities. A use case is a description of one or more end-to-end transactions involving the required system and its environment. The basic idea is to specify use cases [8] that cover all possible pathways through the system functions. The concept of use case was originally proposed in Objectory [8] but has recently been integrated in a number of other approaches including the Fusion method and the Unified Modeling Language [7].

In the software design area, the concept of design patterns has been receiving considerable attention. The basic idea is to offer a body of empirical design information that has proven itself and that can be used during new design efforts. In order to aid in communicating design information, design patterns focus on descriptions that communicate the reasons for design decisions, not just the results. They include descriptions of not only 'what' but also 'why'. Given the attractiveness and popularity of the patterns approach, a natural question for RE is: how can requirements guide a patterns-based approach to design? A systematic approach to organizing, analyzing, and refining nonfunctional requirements can provide much support for structuring, understanding, and applying design patterns during design.

2. Software architecture

The challenge in software development is to develop software with the right quality levels. The problem is not so much to know whether a project is technically feasible concerning the functions required, but whether a solution exists that meets the software quality requirements, such as throughput and maintainability.

Traditionally the qualities of the developed software have, at best, been evaluated on the finished system before delivering it to the customer. The obvious risks of having spent much effort on developing a system that eventually did not meet the quality requirements have been hard to manage. Changing the design of the system would likely mean rebuilding the system from scratch at nearly the same cost. The result from the software architecture design activity is a software architecture. But the description of that software architecture is far from trivial. One reason is that it is hard to decide what information is needed to describe software architecture, and hence, it is very hard to find an optimal description technique.

The paper by Perry and Wolf [2] that laid the foundations for the study of software architecture defines software architecture as follows:

Software Architecture = {Elements, Form, Rationale}

Thus, software architecture is a triplet of (1) the elements present in the construction of the software system, (2) the form of these elements as rules for how the elements may be related, and (3) the rationale for why the elements and the form were chosen. This definition has been the basis for other researchers, but it has also received some critique for the third item in the triplet. In [15] the authors acknowledge that the rationale is indeed important, but argue that it is in no way part of the software architecture. The basis for their objection is that when we accept that all software systems have an inherent software architecture, even though it has not been explicitly designed to have one, the architecture can be recovered. However, the rationale is the line of reasoning and motivations for the design decisions made by the designer, and to recover the rationale we would have to seek information not coded into the software.
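Read as a data structure, the triplet makes the point that rationale is first-class information that cannot, in general, be recovered from the elements and form alone. The representation below is only one possible reading of the definition, not a notation proposed by Perry and Wolf or by this paper.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SoftwareArchitecture:
    elements: List[str]        # the elements present in the construction
    form: List[str]            # rules for how the elements may be related
    rationale: Dict[str, str]  # why each element or rule was chosen

arch = SoftwareArchitecture(
    elements=["client", "server", "message bus"],
    form=["clients reach servers only via the message bus"],
    rationale={"message bus": "decouples deployment and failure handling"},
)
print(arch.rationale["message bus"])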

Software system design consists of the activities needed to specify a solution to one or more problems, such that a balance in fulfillment of the requirements is achieved. A software architecture design method implies the definition of two things: (i) a process or procedure for going about the included tasks, and (ii) a description of the results, or type of results, to be reached when employing the method. From the software architecture point of view, the first of these includes the activities of specifying the components and their interfaces, specifying the relationships between components, making design decisions, and documenting the results to be used in detailed design and implementation. The second is concerned with the definition of the results, i.e. what a component is and how it is described.

The traditional object-oriented design methods, e.g. OMT [12], Booch [6], and Objectory [8], have been successful in their adoption by companies worldwide. Over the past few years the three aforementioned methods have jointly produced a unified modeling language (UML) [7] that has been adopted as the de facto standard for documenting object-oriented designs.

3. Scenarios

Scenarios serve as abstractions of the most important requirements on the system. Scenarios play two critical roles, i.e. design driver and validation/illustration. Scenarios are used to find key abstractions and conceptual entities for the different views, or to validate the architecture against the predicted usage. The scenario view should be made up of a small subset of important scenarios. The scenarios should be selected based on criticality and risk. Each scenario has an associated script, i.e. a sequence of interactions between objects and between processes [13]. Scripts are used for the validation of the other views, and failure to define a script for a scenario discloses an insufficient architecture.

Fig. 1 4+1 View model design method

The 4+1 View Model presented in [17] was developed to address the problem of software architecture representation. Five concurrent views (Fig. 1) are used; each view addresses concerns of interest to different stakeholders. To each view, the Perry/Wolf definition [2] is applied independently. Each view is described using its own representation, a so-called blueprint. The fifth view (+1) is a list of scenarios that drives the design method.
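As a reminder of what the five views cover, the sketch below pairs each view of Kruchten's 4+1 model with the stakeholder concern it typically addresses; the one-line wording of the concerns is mine, not a quotation from [17].

# The 4+1 views and typical stakeholder concerns (wording is illustrative).
four_plus_one = {
    "logical":        "functionality offered to end users",
    "process":        "concurrency, distribution, and performance",
    "development":    "module organisation for programmers and managers",
    "physical":       "mapping of the software onto hardware nodes",
    "scenarios (+1)": "key use cases that drive and validate the other views",
}

for view, concern in four_plus_one.items():
    print("{:15s} -> {}".format(view, concern))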

4. Usability concerns

The work in this paper is motivated by the fact that this also applies to usability. Usability is increasingly recognized as an important consideration during software development; however, many well-known software products suffer from usability issues that cannot be repaired without major changes to the software architecture of these products. This is a problem for software development because it is very expensive to ensure a particular level of usability after the system has been implemented. Studies [21, 22] confirm that a significant part of the maintenance costs of software systems is spent on dealing with usability issues. These high costs can be explained because some usability requirements will not be discovered until the software has been implemented or deployed.

5. Patterns

Software engineers have a tendency to repeat their
successful designs in new projects and avoid using
the less successful designs again. In fact, these
different styles of designing software systems could
be common for several different unrelated software
engineers. This has been observed in [18] where a
number of systems were studied and common
solutions to similar design problems were
documented as design patterns.

Fig. 2 Usability Framework

The concept has been successful and today most software engineers are aware of design patterns. The concept has been used for software architecture as well: first by describing software architecture styles [16] and then by describing software architecture patterns [5] in a form similar to the design patterns. The difference between software architecture styles and software architecture patterns has been extensively debated. There are two major viewpoints: one holds that styles and patterns are equivalent, i.e. either could easily be written as the other; the other holds that they are significantly different, since styles are a categorization of systems whereas patterns are general solutions to common problems.

Either way, styles and patterns make up a common vocabulary. It also gives software engineers support in finding a well-proven solution in certain design situations.

The design and use of explicitly defined software
architecture has received increasing amounts of
attention during the last decade. Generally, three
arguments for defining an architecture are used [14].
First, it provides an artifact that allows discussion by
the stakeholders very early in the design process.
Second, it allows for early assessment of quality
attributes [29,25]. Finally, the design decisions
captured in the software architecture can be
transferred to other systems.

Our work focuses on the second aspect: early assessment of usability. Most engineering disciplines provide techniques and methods that allow one to assess and test quality attributes of the system under design. For example, for maintainability assessment, code metrics [23] have been developed. In [3] an overview is provided of usability evaluation techniques that can be used during software development. Some of the more popular techniques, such as user testing [9], heuristic evaluation [10] and cognitive walkthroughs [1], can be used during several stages of development. Unfortunately, no usability assessment techniques exist that focus on assessment of software architectures. Without such techniques, architects may run the risk of designing a software architecture that fails to meet its usability requirements. To address this problem we have defined a scenario-based assessment technique (SALUTA).

The Software Architecture Analysis Method (SAAM)
[20] was among the first to address the assessment of
software architectures using scenarios. SAAM is
stakeholder centric and does not focus on a specific
quality attribute. From SAAM, ATAM [19] has
evolved. ATAM also uses scenarios for identifying
important quality attribute requirements for the
system. Like SAAM, ATAM does not focus on a
single quality attribute but rather on identifying
tradeoffs between quality attributes. SALUTA can be
integrated into these existing techniques.
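At its core, a scenario-based assessment scores how well the proposed architecture supports each selected scenario and flags the poorly supported ones for redesign or mitigation. The scoring scheme below is a generic sketch of that idea under assumed support levels; it is not the published SAAM, ATAM, or SALUTA procedure.

from typing import Dict, List

def assess(scenarios: List[str], support: Dict[str, int]) -> Dict[str, str]:
    # Map each scenario to a coarse verdict from an architect-assigned
    # support level (0 = unsupported ... 4 = fully supported).
    verdicts = {}
    for scenario in scenarios:
        level = support.get(scenario, 0)
        verdicts[scenario] = "accept" if level >= 3 else "redesign or mitigate"
    return verdicts

usability_scenarios = [
    "user cancels a long-running claim check",
    "user undoes the last edit to a customer record",
]
support_levels = {
    "user cancels a long-running claim check": 2,
    "user undoes the last edit to a customer record": 4,
}
print(assess(usability_scenarios, support_levels))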

6. Pattern Specifications

Pattern Specifications (PSs) [25, 26] are a way of formalizing the structural and behavioral features of a pattern. The notation for PSs is based on the Unified Modeling Language (UML) [26]. A Pattern Specification describes a pattern of structure or behavior and is defined in terms of roles. A PS can be instantiated by assigning modeling elements to play these roles. The abstract syntax of UML is defined by a UML metamodel. A role is a UML metaclass specialized by additional properties that any element fulfilling the role must possess. Hence, a role specifies a subset of the instances of the UML metaclass. A PS can be instantiated by assigning UML model elements to the roles in the PS. A model conforms to a pattern specification if its model elements that play the roles of the pattern specification satisfy the properties defined by the roles. Pattern specifications can be defined to show static structure or dynamic behavior. Here we are concerned with specifications of behavior, but it should be noted that any class roles participating in pattern specifications must be defined in a Static Pattern Specification (SPS), which is the PS equivalent of a class diagram.

An Interaction Pattern Specification defines a pattern
of interactions between its participants. It consists of
a number of lifeline roles and message roles which
are specializations of the UML metaclasses Lifeline
and Message respectively. The IPS in Fig. 4
formalizes the Observer pattern. Role names are
preceded by a vertical bar to denote that they are
roles.

Fig. 3 Pattern Specification Process Model

Each lifeline role is associated with a classifier role, a
specialization of a UML classifier. Fig. 4 shows an
example of an IPS and a conforming sequence
diagram.
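The conformance idea, namely that bound model elements must satisfy the properties of the roles they play, can be sketched with a generic checker. The element representation and the property predicates below are assumptions for illustration; they are not the UML metamodel or the notation of the cited pattern-specification work.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Role:
    # A role constrains which model elements may play it.
    name: str
    properties: List[Callable[[dict], bool]]

def conforms(binding: Dict[str, dict], roles: List[Role]) -> bool:
    # A binding (role name -> model element) conforms if every role is
    # bound and each bound element satisfies all properties of its role.
    return all(
        role.name in binding
        and all(prop(binding[role.name]) for prop in role.properties)
        for role in roles
    )

# Hypothetical roles for the Observer pattern; the vertical bar marks a role.
subject_role = Role("|Subject", [lambda e: "notify" in e.get("operations", [])])
observer_role = Role("|Observer", [lambda e: "update" in e.get("operations", [])])

binding = {
    "|Subject": {"operations": ["notify", "attach"]},
    "|Observer": {"operations": ["update"]},
}
print(conforms(binding, [subject_role, observer_role]))  # True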

The separation of specification concerns is maintained at the state machine level: because the functional and non-functional requirements are composed from the scenario level, the state machines need never be seen by the requirements engineer. Composition is specified purely in terms of scenario relationships, and the composed state machines generated for the execution and cancellation of a requirement can be hidden. This has advantages for requirements engineers not trained in state-based techniques.

Fig. 4 Conforming Sequence Diagram

An IPS can be instantiated by assigning concrete
modeling elements to the roles.

7. Functional and non-functional patterns

Non-functional requirements (NFRs) are pervasive in descriptions of design patterns. They play a crucial role in understanding the problem being addressed, the tradeoffs discussed, and the design solution proposed. However, since design patterns are mostly expressed as informal text, the structure of the design reasoning is not systematically organized. In particular, during the design phase, many of the quality aspects of a system are determined. System qualities are often expressed as non-functional requirements, also called quality attributes, e.g. [28, 29]. These are requirements such as reliability, usability, maintainability, cost, and development time, and they are crucial for system success. Yet they are difficult to deal with since they are hard to quantify and often interact in competing or synergistic ways.

Fig. 5 Non-functional patterns as Requirements

During design such quality requirements appear in design tradeoffs when designers need to decide upon particular structural or behavioral aspects of the system. Applying a design pattern may be understood as transforming the system from one stage of development to the next. A good design needs the identification of architectural design decisions that improve usability, such as identification of usability patterns [29].

8. Conclusion

Use cases are a popular requirements modeling technique, yet people often struggle when writing them. They understand the basic concepts of use cases, but find that actually writing useful ones turns out to be harder than one would expect. One factor contributing to this difficulty is that we lack objective criteria to help judge their quality. Many people find it difficult to articulate the qualities of an effective use case. We have identified approximately three dozen patterns that people can use to evaluate their use cases. We have based these patterns on the observable signs of quality that successful projects tend to exhibit. Construction guidance is based on use case model knowledge and takes the form of rules which encapsulate knowledge about types of action dependency, relationships between actions and flow conditions, properties of objects and agents, etc. Based on this knowledge, the rules help discover incomplete expressions, missing elements, exceptional cases, and episodes in the use case specification through pattern specification. They support the progressive integration of scenarios into a complete use case specification.
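One such rule can be imagined as a simple check over a structured use case description, as sketched below; the step representation and the specific completeness rule are invented for illustration and are not the authors' rule set.

from typing import Dict, List

def missing_exception_handling(use_case: Dict[str, List[dict]]) -> List[str]:
    # Flag steps that interact with an external agent but have no
    # associated exceptional case -- one example of a completeness rule.
    covered = {exc["step"] for exc in use_case.get("exceptions", [])}
    return [step["action"] for step in use_case["steps"]
            if step.get("external_agent") and step["action"] not in covered]

claim_check = {
    "steps": [
        {"action": "submit claim to back-end", "external_agent": True},
        {"action": "show confirmation", "external_agent": False},
    ],
    "exceptions": [],
}
print(missing_exception_handling(claim_check))  # ['submit claim to back-end']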

References
[1] C. Wharton, J. Rieman, C. H. Lewis, and P. G. Polson, "The Cognitive Walkthrough: A Practitioner's Guide," in Usability Inspection Methods, J. Nielsen and R. L. Mack, eds., John Wiley and Sons, New York, NY, 1994.
[2] D. E. Perry and A. L. Wolf, "Foundations for the Study of Software Architecture," Software Engineering Notes, Vol. 17, No. 4, pp. 40-52, October 1992.
[3] E. Folmer and J. Bosch, "Architecting for Usability: A Survey," Journal of Systems and Software, Elsevier, 2002, pp. 61-78.
[4] F. Buschmann, R. Meunier, H. Rohnert, and M. Stahl, Pattern-Oriented Software Architecture: A System of Patterns, John Wiley & Sons, 1996.
[5] F. Buschmann, R. Meunier, H. Rohnert, and M. Stahl, Pattern-Oriented Software Architecture: A System of Patterns, John Wiley & Sons, 1996.
[6] G. Booch, Object-Oriented Analysis and Design with Applications (2nd edition), Benjamin/Cummings Publishing Company, 1994.
[7] G. Booch, J. Rumbaugh, and I. Jacobson, The Unified Modeling Language User Guide, Object Technology Series, Addison-Wesley, October 1998.
[8] I. Jacobson et al., Object-Oriented Software Engineering: A Use Case Approach, Addison-Wesley, 1992.
[9] J. Nielsen, "Heuristic Evaluation," in Usability Inspection Methods, J. Nielsen and R. L. Mack, eds., John Wiley and Sons, New York, NY, 1994.
[10] J. Nielsen, Usability Engineering, Academic Press, Inc., Boston, MA, 1993.
[11] J. Bosch, Design and Use of Software Architectures: Adopting and Evolving a Product Line Approach, Pearson Education (Addison-Wesley and ACM Press), Harlow, 2000.
[12] J. Rumbaugh, M. Blaha, W. Premerlani, F. Eddy, and W. Lorensen, Object-Oriented Modeling and Design, Prentice Hall, 1991.
[13] K. Rubin and A. Goldberg, "Object Behaviour Analysis," Communications of the ACM, September 1992, pp. 48-62.
[14] L. Bass, P. Clements, and R. Kazman, Software Architecture in Practice, Addison Wesley Longman, Reading, MA, 1998.
[15] L. Bass, P. Clements, and R. Kazman, Software Architecture in Practice, Addison Wesley, 1998.
[16] M. Shaw and D. Garlan, Software Architecture: Perspectives on an Emerging Discipline, Prentice Hall, 1996.
[17] P. B. Kruchten, "The 4+1 View Model of Architecture," IEEE Software, pp. 42-50, November 1995.
[18] E. Gamma et al., Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1995.
[19] R. Kazman, M. Klein, M. Barbacci, T. Longstaff, H. Lipson, and J. Carriere, "The Architecture Tradeoff Analysis Method," Proceedings of ICECCS'98, 1998.
[20] R. Kazman, G. Abowd, and M. Webb, "SAAM: A Method for Analyzing the Properties of Software Architectures," Proceedings of the 16th International Conference on Software Engineering, 1994.
[21] R. S. Pressman, Software Engineering: A Practitioner's Approach, McGraw-Hill, NY, 1992.
[22] T. K. Landauer, The Trouble with Computers: Usefulness, Usability and Productivity, MIT Press, Cambridge, 1995.
[23] W. Li and S. Henry, "OO Metrics that Predict Maintainability," Journal of Systems and Software, Elsevier, 1993, pp. 111-122.
[24] R. France, D. Kim, S. Ghosh, and E. Song, "A UML-Based Pattern Specification Technique," IEEE Transactions on Software Engineering, Vol. 30(3), 2004.
[25] D. Kim, R. France, S. Ghosh, and E. Song, "A UML-Based Metamodeling Language to Specify Design Patterns," Proceedings of the Workshop on Software Model Engineering (WiSME), at UML 2003, San Francisco, 2003.
[26] Unified Modeling Language Specification, version 2.0, January 2004, OMG, http://www.omg.org; J. Warmer and A. Kleppe, The Object Constraint Language: Getting Your Models Ready for MDA, 2nd Edition, Addison-Wesley, 2003.
[27] B. W. Boehm, Characteristics of Software Quality, North-Holland Pub. Co., Amsterdam/New York, 1978.
[28] T. P. Bowen, G. B. Wigle, and J. T. Tsai, Specification of Software Quality Attributes (Report RADC-TR-85-37), Rome Air Development Center, Griffiss Air Force Base, NY, 1985.
[29] "Architecting for Usability: A Survey," http://segroup.cs.rug.nl.

Prof. Dr. M. Punithavalli is currently the Director & Head of the Department of Computer Applications, Sri Ramakrishna College of Arts and Science for Women, Coimbatore, India. She is actively working as an Adjunct Professor in the Department of Computer Applications of Ramakrishna Engineering College, India.

Lect. R. V. Siva Balan is currently working as a Lecturer in the Department of Computer Applications, Narayanaguru College of Engineering, India. He is a research scholar at Anna University Coimbatore, India.


The Journal of Systems and Software 83 (2010) 2211–2226
Contents lists available at ScienceDirect: The Journal of Systems and Software
Journal homepage: www.elsevier.com/locate/jss

Software architecture awareness in long-term software product evolution
Hataichanok Unphon*, Yvonne Dittrich
IT University of Copenhagen, Rued Langgards Vej 7, DK-2300, Copenhagen S, Denmark

Article history: Received 15 October 2009; Received in revised form 23 April 2010; Accepted 27 June 2010; Available online 17 July 2010.

Keywords: Software products; Long-term evolution; Cooperative and human aspects; Software architecture; Architecture knowledge management; Qualitative empirical studies

Abstract

Software architecture has been established in software engineering for almost 40 years. When developing and evolving software products, architecture is expected to be even more relevant compared to contract development. However, the research results seem not to have influenced the development practice around software products very much. The architecture often only exists implicitly in discussions that accompany the development. Nonetheless many of the software products have been used for over 10, or even 20, years. How do development teams manage to accommodate changing needs and at the same time maintain the quality of the product? In order to answer this question, a grounded theory study based on 15 semi-structured interviews was conducted in order to find out about the wide spectrum of architecture practices in software product developing organisations. Our results indicate that a chief architect or central developer acts as a 'walking architecture', devising changes and discussing local designs while at the same time updating his own knowledge about problematic aspects that need to be addressed. Architecture documentation and representations might not be used, especially if they replace the feedback from on-going developments into the 'architecturing' practices. Referring to results from Computer Supported Cooperative Work, we discuss how explicating the existing architecture needs to be complemented by social protocols to support the communication and knowledge sharing processes of the 'walking architecture'.

1. Introduction

Software products are programs that are used by more than one organisation. They are often configured and customised to fit with the specific use context. They are long-living; often evolving over several decades. Bug fixes and upgrades are delivered on a regular basis. Though especially for software products that need to evolve to keep up with technical and application domain developments the architecture should be an important asset, our previous research showed that the companies we have been in contact with did not have a formal architecture process (Unphon and Dittrich, 2008). Nonetheless, the software has been successful over long periods of use and evolution. So our questions are: how do these companies manage to maintain the evolvability of their products over a long life-time? Are the development teams aware of the architecture of their product? If yes, how does the architecture knowledge become influential in the development, and how is the architecture updated in the on-going evolution? The goal of the article is to better understand the architecturing practices software product developing teams employ when evolving their product. We especially are interested to understand whether the practices we observed previously are unique or are an instance of a wider spread phenomenon.

* Corresponding author. Tel.: +45 7218 5000; fax: +45 7218 5001. E-mail addresses: unphon@itu.dk (H. Unphon), ydi@itu.dk (Y. Dittrich).

The questions aim at understanding how developers relate to architecture as the context of their activity and how architecture is used to support mutual awareness in order to coordinate parallel activities in the team. 'Awareness' has been discussed in the Computer Supported Cooperative Work (CSCW) discourse for over two decades. The notion of awareness developed by Heath and Luff (1992), based on the analysis of the cooperation of line controllers and Divisional Information Assistants in line control rooms of the London underground, is cited as a seminal study. Through both monitoring common displays and each other's activity, they managed the common tasks – advising train drivers and informing passengers about delays – with little explicit coordination. Even the reactions of each colleague were monitored so that the missing of a necessary reaction could be caught, which then caused either a more emphasised behaviour or, if necessary, explicit coordination. Since then, such heedful situated coordination has been reported from a range of activities and has resulted in specific awareness support in groupware applications (Gutwin and Greenberg, 2002). Schmidt (2002) remarks in an article for a special issue of the CSCW journal: awareness is a substantiation of an attribute of an activity. Someone is acting in awareness of the activity of others and of changes in the context. The issue then is to understand what the coordination or awareness mechanisms are in play, and how they can be


supported. In other words, what are the clues a co-operator is reading, how does he or she make their own actions accountable to the surroundings, and what are the means and protocols in play?

Applying the concept of awareness as a lens to understand software architecture practices implies a focus on the situated use of architectural knowledge when (a) making changes to a module that might have implications on another's code – that is, changes to the interface – visible, (b) monitoring changes that are relevant for the task at hand, and (c) monitoring changes to the code, the requirements, and the context that make it necessary to change the architecture and thus change the design and implementation of the different modules.

The article reports results of an interview study. In previous research we observed informal sharing and maintenance of architectural knowledge and started to understand the rationality behind this practice (Unphon, 2009). The design of the interview study is based on these observations. Awareness mechanisms in action need to be observed in situ. Interviews, however, provide the possibility to compare the reported practices of different companies, and thus give an indication of whether or not we are looking at a wider spread phenomenon. The interview guideline was based on our previous studies and on the concept of awareness as a theoretical lens. We interviewed members of eight software developing organisations in five countries (i.e., Belgium, China, Denmark, Germany, and Switzerland). Each organisation has on-going software product development. The interviews were done and analysed in a grounded theory manner. The motivation for applying a grounded theory approach was to not only collect information about what is going on in the companies, but also about what motivates different practices, and how they depend on each other. The interviews both confirmed and deepened our previous understanding.

Our results indicate that the industrial practice in most cases is not what is recommended by applicable textbooks. Nonetheless, the structure of software products is regarded as an important asset of development. Rather than documenting it in a formal way, most companies rely on a practice for which we have coined the concept of a 'walking architecture.' This is a key person, or a number of key persons, who maintain and update the structure of the software, and are involved in discussions of changes motivated in the development, or by new requirements, and who introduce newcomers to the structure of the software. Representations of the architecture thus are temporary and partial, e.g., sketches on whiteboard and scrap paper used in a specific situation. The result of this practice is not only the distribution of architectural knowledge to the development team, but also an update of the chief architect's knowledge on the issues the developers meet when they develop.

We argue in our discussion that software architecture literature, so far, has underestimated this feedback, and that here maybe we can find reasons for the lack of appreciation of recommended software architecture methods in industry. By using the awareness concept from CSCW when discussing our findings, we highlight the importance of focusing not only on documentation and tools when improving architectural practices, but also on the development of social protocols around such methods and tools.

In the next section, we discuss related literature on software architecture and software evolution, but also on knowledge management in software engineering and the concept of awareness that originated in the discourse of Computer Supported Cooperative Work and has been adopted in research on distributed software engineering. Section 3 describes the research methodology. Section 4 elaborates on the interviewees and their companies, then briefly describes the interview guideline. Section 5 presents an analysis of software architecture awareness. Section 6 is the discussion. Section 7 presents conclusions.

architect has been discussed in many sites and portals on software architecture; well-known examples include Bredemeyer's site (Bredemeyer, 2010), the Software Engineering Institute (SEI) architecture website (SEI, 2010), and Wikipedia's software archi-

s and Software 83 (2010) 2211–2226

2. Architecture, knowledge, and awareness

This section introduces the research we build upon and con-
tribute to. The section starts with discussing the notion of software
product architecture and evolution, then discusses knowledge
management in software engineering and the notion of awareness
which stems from the discourse on Computer Supported Coopera-
tive Work.

2.1. Software architecture

In programming, the term architecture has been used since the
late 1960s (Brooks and Iverson, 1969). In the early 1970s, Par-
nas contributed many of the fundamental tenets and principles
behind software architecture (Parnas, 1971, 1972, 1974, 1976). The
report and book by Garlan and Shaw (Garlan and Shaw, 1994; Shaw
and Garlan, 1996) not only redefined software architecture overall,
but also introduced a number of architectural styles and reference
architectures. They introduced the notions of components, connec-
tors and constraints. More and more researchers, e.g., Jansen and
Bosch (2005), Tyree and Akerman (2005), Kruchten et al. (2006),
van der Ven et al. (2006), emphasised the notion of design rationale, and illustrated how architectural representations can improve the understanding of complex software systems.

Software architecture is meant to serve a number of purposes:
as it decomposes the software into components, it helps to handle
complexity in a divide and conquer manner; the decomposition
serves also as a base to structure the implementation work into
manageable chunks assigned to individuals or small teams; it pro-
vides a base to analyse and assess non-functional requirements of
the software to be built, or of the changes introduced (Bass et al.,
2003; Kruchten, 1995; IEEE 1471-2000 standard1).

The proposed practices for creating an architecture rely heavily on written representations and, when necessary, (semi-)formal notations (e.g. Allen and Garlan, 1997, 1992; Booch et al., 1998;
Dunsire et al., 2005; Feiler et al., 2006; Garlan et al., 1997; Gasparis
et al., 2008; Luckham, 1996; Luckham and Vera, 1995; Medvidovic
et al., 1999; Shaw et al., 1995; The Open Group, 2009; Weilkiens,
2008). The notations provide an explicit way of specifying the ele-
ments and their connections used in the architecture.2 Over time, as
the software evolves, the code structures become less tightly cou-
pled with the design architecture, aka the code view vs. the module
view (Hofmeister et al., 2000). The design architecture has layers,
modules and dependencies, but the source code architecture con-
tains folders and files, as well as static and dynamic relationships
between different classes. Keeping the correspondence between
design architecture and code architecture alive requires a rigorous
engineering discipline (Bischofberger et al., 2004).
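One way to exercise the rigorous discipline referred to here is to automate a small conformance check between the module view and the code view. The following Java sketch is not taken from the paper or from any of the studied companies; the package names, the source-tree path, and the single layering rule it enforces (the 'ui' module must not import from 'persistence') are assumptions made purely for illustration.

// Minimal, hypothetical sketch of a design-to-code conformance check.
// It scans a source tree and reports imports that violate an assumed layering rule.
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Stream;

public class LayeringCheck {

    // Assumed design rule: code under the 'ui' module must not import from 'persistence'.
    private static final String SOURCE_ROOT = "src/main/java/com/example/ui";
    private static final String FORBIDDEN_IMPORT = "import com.example.persistence.";

    public static void main(String[] args) throws IOException {
        try (Stream<Path> files = Files.walk(Paths.get(SOURCE_ROOT))) {
            files.filter(p -> p.toString().endsWith(".java"))
                 .forEach(LayeringCheck::checkFile);
        }
    }

    private static void checkFile(Path file) {
        try {
            List<String> lines = Files.readAllLines(file);
            for (int i = 0; i < lines.size(); i++) {
                if (lines.get(i).trim().startsWith(FORBIDDEN_IMPORT)) {
                    System.out.printf("Layering violation in %s (line %d): %s%n",
                            file, i + 1, lines.get(i).trim());
                }
            }
        } catch (IOException e) {
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }
    }
}

Run as part of a build, such a check makes drift between the design architecture and the code architecture visible early, instead of leaving the correspondence to individual discipline alone.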

2.2. The role of the software architect

It would be somewhat misleading to place the main responsibil-
ity for the maintenance of the architectural structure of a software
product with the software architect. The role of the software architect has been discussed in many sites and portals on software architecture; well-known examples include Bredemeyer's site (Bredemeyer, 2010), the Software Engineering Institute (SEI) architecture website (SEI, 2010), and Wikipedia's software architect website (Wikipedia, 2010).

1 IEEE 1471-2000 standard is now ISO/IEC 42010 standard.
2 Tools developed on top of those notations generate part of the implementa-

tion based on the architecture. In this way, the organisation of source code—when
developed that way from scratch—conforms to major design elements and the rela-
tionships among them. Model Driven Development (MDD) aims at supporting the
evolution through these tools (Czarnecki et al., 2000; Greenfield et al., 2004).




A recent survey on ‘what architects really do’ (Farenhorst and de Boer, 2009) confirmed that architects are mostly busy with taking architectural decisions. Fowler (2003) categorised architects' roles into two types, Architectus Reloadus and Architectus Oryzus, based on their decision-making approaches. Architectus Reloadus is an architect who makes all the important decisions. An architect of this type does this because a single mind is needed to ensure a system's conceptual integrity, and perhaps because the architect does not think that the team members are sufficiently skilled to make those decisions. Often, such decisions must be made early on so that everyone else has a plan to follow. Architectus Oryzus is an architect who must be very aware of what is going on in a project, looking out for important issues and tackling them before they become a serious problem. The most noticeable part of the work of Architectus Oryzus is the intense collaboration. The most important activity of Architectus Oryzus is mentoring the development team to raise their level so that they can take on more complex issues. Improving the development team's ability gives an architect much greater leverage by utilizing the entire team rather than being the sole decision maker and running the risk of becoming an architectural bottleneck. This leads to the rule of thumb that an architect's value is inversely proportional to the number of decisions he or she makes.

Kruchten (1999) listed the roles and responsibilities of an architect or an architecture team: (i) defining the architecture of the system; (ii) maintaining the architectural integrity of the system; (iii) assessing technical risks; (iv) working out risk mitigation strategies/approaches; (v) participating in project planning; (vi) proposing the order and content of iterations; (vii) consulting with design, implementation, and integration teams; and (viii) assisting product marketing and future product definitions. The definition of software architecture includes all the usual technical activities associated with design: understanding requirements and qualities; extracting architecturally significant requirements; making choices; synthesizing a solution; exploring alternatives and validating them; etc. For certain challenging prototyping activities, architects may have to use the services of software developers and testers. The maintenance of the architectural integrity takes place through regular reviews, writing guidelines, etc., and presenting the architecture to various parties at different levels of abstraction and technical depth. For many effort estimation aspects, or for the planning of distributed development, managers need the assistance of architects. Because of their technical expertise, architects are drawn into problem-solving and fire-fighting activities that go beyond solving strictly architectural issues. The architects have insights into what is feasible, doable, or ‘science fiction’, and their presence in a product definition or marketing team may be very effective. However, good architects should bring a good mix of domain knowledge, software development enterprise, and communication skills.

Later on, Kruchten (2008) recommended a simple time-management practice for architects based on more than 10 years of his experience. The recommended time ratio allocates 50% to internal, 25% to inward, and 25% to outward activities. The internal activities focus on architecting per se (architectural design, prototyping, evaluating, documenting, etc.). The inward and outward activities refer to cooperation and communication with the other stakeholders that the architects interact with. The inward is about getting input from the outside world, for example, listening to customers, users, product managers, and other stakeholders (developers, distributors, customer support, etc.), and learning about technologies, other systems' architectures, and architectural practices. The outward can be seen as providing information or helping other stakeholders or organisations (e.g., communicating architecture, project management, or product definition). The 50:25:25 time-management ratio helps the architects to be aware of the risks of falling into one of the following situations: creating a perfect architecture for the

wrong system, creating a perfect architecture that’s too hard to
implement, architects in their ivory tower, or absent architects.
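As a trivial worked illustration of Kruchten's ratio, the sketch below applies the 50:25:25 split to an assumed 40-hour working week; the week length is an assumption, not a figure from the paper.

// Trivial illustration (not from the paper): Kruchten's 50:25:25 split for an assumed 40-hour week.
public class ArchitectTimeSplit {
    public static void main(String[] args) {
        double weeklyHours = 40.0;                                      // assumed working week
        System.out.printf("internal: %.1f h%n", weeklyHours * 0.50);    // architecting per se
        System.out.printf("inward:   %.1f h%n", weeklyHours * 0.25);    // getting input
        System.out.printf("outward:  %.1f h%n", weeklyHours * 0.25);    // communicating outward
    }
}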

Grinter (1999) presented a study of the roles of architects,
in particular, the work that they do to coordinate design across organisational and institutional boundaries. Bass and Klein (2008) examined the relationship between architects and organisations with respect to consistently producing high-quality architectures. They
proposed models for evaluating and improving architecture com-
petence of software architects, software architect teams, and
software architecture producing organisations. The models are
based on (1) the duties, skills, and knowledge required of a software
architect or architecture organisation, (2) human performance
technology, an engineering approach applied to improving the
competence of individuals, (3) organisational coordination, the
study of how people and units in an organisation share information,
and (4) organisational learning, an approach to how organisations
acquire, internalise, and utilise knowledge to improve their perfor-
mance.

2.3. Software product evolution and architecture

The intrinsic evolutionary nature of real-world computer usage
and of software embedded in its use context was originally recog-
nised in Belady and Lehman (1976), and Lehman (1980). The
dynamism of the real world induces software to be continu-
ally changed, updated, and evolved over its life-time. As the
software evolves, its architectural integrity tends to dilute. For
example, source code architecture drifts from its design archi-
tecture (Murphy et al., 1995). The gap between the source code
and the design architecture hinders program understanding which
leads to development and maintenance activities that are increas-
ingly difficult and highly error prone (Tran et al., 2000; Unphon,
2009). However, the effort of updating the software in order to improve its future maintainability without changing its current functionality, the so-called 'preventive maintenance'
(Lientz and Swanson, 1980), aka anti-regressive activity (Lehman,
1996) or refactoring (Fowler, 1999), is not highly prioritized (Lientz
and Swanson, 1981; Schach et al., 2003). Even though preventive
maintenance offers significant improvements in the simplicity of
conducting maintenance interventions in the long-term, it brings
little to no immediate benefits (Madhavji et al., 2006). As a result
the software becomes more difficult to maintain.

Decisions made during initial software development affect the
ability of organisations to successfully perform software change.
In particular, the selection of architecture could either aid or hin-
der changes made through evolution (Madhavji et al., 2006). Once
the first version of the architecture has been implemented, the
evolution will become the primary activity. In socially embedded
software,3 end-users are able to tailor their software products,
e.g., through configuration, composition, expansion, or extension
(Eriksson, 2008). However, end-users can tailor their software with minimal risk only if their requirements were anticipated in the design of the architecture. To react to unanticipated, evolving needs, the software itself must evolve over time.

2.4. Knowledge management

3 Socially embedded software refers to software that can be modelled intensively
according to the environment and practices of its end-users (Unphon et al., 2009b).
The socially-embedded software, can be seen as E-type programs (Lehman, 1996).
We also refer to it as social software.

Knowledge management has been a major topic in software engineering, even before the concept had been coined by Nonaka at the beginning of the nineties (Nonaka, 1994, 1998).



In their seminal article ‘The rational design process: Why and how to fake it’, Parnas and Clements (1986) argue that, though it is not possible to design software by deriving the design and source code from the requirements, the software team should aim at producing documentation that mirrors such a rational design process. The reason is to document the reasoning and rationale behind design decisions in case revisions become necessary, and in order to have suitable documentation for the maintenance team. Today this argument could be related to what Dingsøyr and Conradi (2002) call codification approaches to knowledge management: such approaches emphasise the codification, digital storage, and retrieval of information representing relevant knowledge. Complementary personalisation-oriented approaches emphasize face-to-face communication between people in-the-know and who need to know. In his article ‘Programming and Theory Building’, Naur (1985) emphasises the importance of participation in the development team to understand the rationale behind a design. Only through participation in design and development can software engineers understand how the software models its problem domain and supports the needs of its users.

Though software engineering from the beginning overstated the importance of documentation for the development process, and therefore exhibited an affinity to codification-oriented approaches to software architecture, empirical research (Section 5; Farenhorst and de Boer, 2009) indicates that a personalisation strategy (Hansen et al., 1999) is at least as important in architectural knowledge sharing. In a dialogue situation, knowledge can be tailored to the context in which it is needed. Through this communication, the software engineer who shares his knowledge also updates his knowledge.

The discussion on software architecture knowledge emphasises the codification, storing and retrieval of information on architecture. However, this codification strategy does not work in practice (Lago et al., 2008). The people involved in the architecting process (who own the knowledge) often do not document it (Harrison et al., 2007). The reasons are a lack of motivation to document and maintain architecture knowledge, as the benefits do not seem substantial enough to justify the effort; the short-term interest in the project becomes more important than the long-term architectural knowledge reuse; developers are absorbed in the creative flow of design and thus don't reflect on the long-term impact of decisions; and a lack of training. Even worse, when the architecture knowledge is documented, it's often not sufficiently shared within the organisation. As examples, Lago et al. (2008) give: (i) the knowledge is not disseminated to the appropriate stakeholders; (ii) the recipients of knowledge don't use it in their own tasks, either intentionally, or because there is no provision in the processes; (iii) it's cumbersome to search and locate the appropriate knowledge and adapt it to one's needs.

2.5. Awareness in software engineering

The concept of awareness as defined in the introduction highlights what can be called a situated socialisation-based knowledge sharing mechanism. Dourish and Bellotti (1992) define: ‘[A]wareness is an understanding of the activity of others which provides a context for your own activities.’4 In software engineering, the notion of awareness is so far mostly used to address coordination of distributed development; for example, Grinter's study of distributed development and integration emphasises the coordination role of architecture when evolving software (Grinter, 2003). Storey et al. (2005) give an overview of different tools that are designed to help programmers monitor changes to the common

4 Highlighting as in the original.


software under development that might become relevant for their
own programming. Particularly, in spatially distributed develop-
ment, parallel on-going work cannot be monitored by means of
‘overhearing’ design discussions taking place in the vicinities. Also,
meetings, such as daily stand-up meetings, that are designed to provide a project team with an overview of development with respect to the common product, cannot fully address this need. Tools visualising
social-technical dependencies, and thus helping to contact the cor-
rect person, e.g., de Souza et al. (2005), are designed to address
this lack. Such tools can be understood as support for fine-grain
knowledge sharing. Recent research, however, indicates that tech-
nical support addresses only one side of the problem: awareness
problems can also occur in cases of mismatched social protocols
(Damian et al., 2007).

The importance of such protocols has already been indicated
in the early studies on the London underground control room.
‘However, it is clear that while certain activities are primarily
accomplished by specific categories of individuals, the in situ
accomplishment of these tasks is sensitive to, and coordinated with,
the actions and responsibilities of colleagues within the immediate
environment. The competent production of a range of specialised
individual tasks within the Control room is thoroughly embedded
in, and inseparable from, a range of socio-interactional demands’
(Heath and Luff, 1992, p. 82, highlighting by the authors). This
interlacing of their own and their co-workers' activities is not only
guided by a codex of explicit rules, but depends on competence
with respect to understanding established practices (Heath and
Luff, 1992, p. 78).

The usefulness of the technology that is the base of this inter-
action ‘relies upon a collection of tacit practices and procedures
through which Controller and Divisional Information Assistant
(DIA) coordinate information flow and monitor each other’s con-
duct' (Heath and Luff, 1992, p. 87; see also Schmidt, 2002).

With the term ‘practice,’ we describe a common way of acting
acknowledged by the community that shares the practice (Hansson
et al., 2006, p. 1296). A group of co-operators maintains the com-
mon practice through reproducing it in their every day actions.
Practice thus is distinguished from ad-hoc behaviour, which as such
is only perceivable by its deviation from both the formalized rules
and the established practice. Such practices as well as explicitly
agreed on procedures have been also called social protocols (Gerson
and Star, 1986; Schmidt and Simone, 1996). They are developed and
maintained through on-going ‘articulation work’ or ‘meta-work’ of
the members and can only to some extent be designed from the
outside.

So far, only one project addresses awareness issues with respect
to software architecture: application programmer interfaces (APIs),
the interfaces that make the functionality of one module avail-
able, are discussed in a way that can be regarded as implementing
the material side of an awareness mechanism (de Souza et al.,
2004). Often when APIs are used to indicate boundaries between
development groups, corresponding social protocols are estab-
lished indicating that software teams who implement functionality
using the module are informed if the API needs to change. The arti-
cle, however, does not refer to the role of the architecture, nor the
everyday work of the software architect.
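To make the idea of an API as the material side of an awareness mechanism concrete, consider the following hypothetical Java sketch. The interface, team names, and the use of deprecation are invented for illustration and are not taken from de Souza et al. (2004) or from the studied companies: the interface marks the boundary between two teams' modules, and a deprecation note is one lightweight, tool-visible way of signalling an upcoming interface change, complementing the social protocol of informing dependent teams.

// Hypothetical module boundary between a 'billing' team and its client teams.
// The interface is the awareness artefact: changes to it are what the other
// teams need to be informed about.
public interface BillingService {

    /** Creates an invoice for the given customer and amount in cents. */
    long createInvoice(String customerId, long amountCents);

    /**
     * Old entry point kept for existing callers. Deprecation makes the pending
     * interface change visible in every client build, alongside the social
     * protocol of announcing the change to dependent teams.
     *
     * @deprecated use {@link #createInvoice(String, long)} instead.
     */
    @Deprecated
    long createInvoice(String customerId, double amountEuros);
}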

3. Research methodology

The study has been designed as triangulation complementing

in-depth, ethnographically informed studies in two organisations (Unphon et al., 2009b; Unphon and Dittrich, 2008). Being aware that architectural practices in situ would best be observed by participatory observation, we nonetheless decided on an interview study. Interviews provide the possibility to compare the reported


practices of different companies, and thus give an indication of whether or not we are looking at a more widespread phenomenon. We carefully designed the interview guideline based both on our previous observations and on the concept of awareness as a lens: we asked not only about document-based architectural knowledge sharing, but addressed especially face-to-face communication and other informal practices. Most work on architecture focuses on what from a research perspective is understood as shortcomings and aims at deriving various remedies. Our research, and thus also the interviews, aims at understanding how software engineers manage.

The interviews aimed at mapping out the architectural practices as well as documents and artefacts and their usage. The questions cover the business contexts of the software products, the development process, the architecture, the dimensions and use of the architecture, cooperation in development, and awareness of change in the software, software product lines, and software evolution. Our interviews focused on understanding concrete practices rather than a polished record, as we focused on both the tools used and concrete occasions. The interview guideline and analysis were done in cooperation. However, it was the first author who did the interviews and the main work of preparing the transcripts and the initial analysis. This section is outlined as follows: Section 3.1 presents grounded theory; Section 3.2 presents interviews as data collection; and Section 3.3 shows the analytic process.

3.1. Grounded theory

Among flexible research strategies, grounded theory claims that it is useful in new, applied areas where there is a lack of theory and concepts to describe and explain what is going on. The grounded theory approach was derived from a combination of Chicago-style Interactionism and Pragmatism (Glaser and Strauss, 1967), in terms of data collection and analysis. Data analysis focuses upon concepts. It is achieved by carrying out three kinds of coding: open coding to find concepts; axial coding to interconnect them; and selective coding to establish core concept(s). The result of this process of data collection and analysis is a substantive theory relevant to a specific problem, issue, or group (Robson, 2002, see also Fowler, 2003).

Data collection and analysis is an iterative process; it continues until no new concepts and relations are found in the new data. In this interview study, we compared data from the different interviews, refined the analysis, and went out to conduct more interviews. Although each interview shows evidence from diverse perspectives, the process still involves converging on concepts, sub-concepts, and relations for structuring the findings.

According to Robson (2002), typical features of grounded theory are that (i) it is applicable to a wide variety of phenomena; (ii) it is commonly interview-based; and (iii) it is a systematic, but flexible research strategy which provides detailed prescriptions for data analysis and theory generation.

The problems in using grounded theory are (i) that it is not possible to start a research study without some pre-existing theoretical ideas and assumptions; (ii) there are tensions between the evolving and inductive style of a flexible study and the systematic approach of grounded theory; (iii) it may be difficult in practice to decide when categories are 'saturated,' or when the theory is sufficiently developed; and (iv) grounded theory has particular types of prescribed categories as components of the theory which may not appear appropriate for a particular study.

3.2. Interviews

For the interviews, we contacted eight software product development organisations in five countries, i.e. Belgium, China, Denmark, Germany and Switzerland. All organisations developed

and evolved software products. To allow for possible differentiation regarding the kind of software and the size of the development project, we tried to interview companies and projects that were as diverse as possible.
tion software, scientific control systems, virus scanners, social
software, identity management systems, search engine, and gov-
ernmental contents management systems. Sizes of the interviewed
organisations ranged from a three-person organisation, to an inter-
national organisation with more than 20,000 employees. The
sample included normal industrial product development as well as
an open source project led by a governmental agency, a research
institution, and a semi-private research and consultancy company.
Interviewees included a managing director, a chief technical officer,
a chief architect, a senior consultant, a group leader, and a number
of software developers. Most of the interviewees did not want to
disclose their name and organisation name, thus, we present them
under assumed names. The interviews were partly carried out at
the workplace, partly via SkypeTM and partly in neutral spaces. We
had to accommodate the interviewees’ constraints. Where possi-
ble, we interviewed more than one member of the development
team. In one case we interviewed members from several product
teams in the same company (see company No. 2).

We prepared an interview guideline with two parts, i.e. a num-
ber of open questions, and a series of multiple-choice questions.
The free response questions addressed the six categories listed in
the introduction of this section. The fixed questions were asked
after the open question. The answers thus quantified what had been
discussed during the interview.

The interviews were conducted from late 2007 until early 2008.
The duration of each interview varied between 30 min and 3 h,
depending on the interviewee. The interviews were audio-taped
and transcribed. The transcription and analysis of the interviews
were checked by the interviewees. Apart from that, we also
provided confidentiality agreements for the interviewed organi-
sations.

3.3. Analytic process

The analysis process started with reading each transcription in
order to get a feeling for what the interviewees were telling us. We came back and listened to the voice recordings of the interviews while reading the transcriptions. The first tentative concepts were recorded as memos and served as a springboard for our analysis. These codes
were then highlighted in the transcription together with words
or terms that supported properties and dimensions of the con-
cept. During this work, higher-level categorisations, or lower-level
explanatory concepts became visible. With this list of concepts
and codes, we proceeded to the next transcription. New concepts
appearing in later interviews were added to the coding scheme
which required returning to previous interviews. This part of the
analysis process continued until we were satisfied that we had
accounted for the contents of the interview.

Relations between different concepts became visible. In order to
fully understand concepts, we explored the semantic and process
context of the concepts in the interviews. For the semantic context,
we explored conditions and relationships between the concepts
the interviewees expressed in the interviews. For the process con-
text, we explored how interviewees responded to concepts through
action, interaction, and emotions. That way, we further explored
the meaning of the concepts and linked the concepts to one another.

We represented the relationship between major categories and

subcategories as the foundation for the theoretical structure that
we iteratively refined by going back to transcripts and memos. Fig. 1
shows such a representation of how the categories referred to each
other. Section 5 presents the central concepts, and also indicates
how they relate to each other. Section 4 provides the background



of the organisations and the results of the closed questions asked at the end of the interview.

Fig. 1. Early diagram over analysis concepts illustrating the analysis process.

3.4. Confidence

The strategies we applied to minimise possible threats to validity (Robson, 2002) and to enhance trustworthiness are as follows:

Triangulation. The term triangulation, borrowed from navigational
science and land surveying, refers to using two or more sources to achieve a comprehensive picture of a fixed point of reference (Padgett, 2008, p. 186). We applied data triangulation in our study.
The data was gathered from 13 interviews (given by 15 intervie-
wees from eight different product developing organisations). The
correspondence with previous and parallel case studies further
supports that the account given corresponds to what actually
takes place in practice.
Member checking. All interviewees were required to check the tran-
scription of their interview before the data was analysed. This
article was reviewed by the interviewees before submission.
Audit trail. All interviews were audio-taped and transcribed.
During data collection and data analysis, drawing artefacts and
diagrams on the whiteboard were photographed. Detailed analytic

and self-reflective memos were documented.
Saturation of categories. For grounded theory the convergence of
the observation and the saturation of categories is important. We
continued with the interviews until the concepts, categories and
theoretical structure were saturated. For each of the categorisa-


tions we found, we have evidence from several interviews. Many
of the concepts are based on all interviews. Though we looked for companies and development organisations with a more structured architecture practice, our interviewees again and again reported and emphasised the informal architectural practices we describe in our analysis.

Based on these measures we are confident that the analysis cap-
tures the software architecture practices as reported and provides
a picture of architectural practices in software product evolution.

4. The companies and their architectural practice

Before presenting the results of the grounded theory analysis, this section presents the companies and provides an overview of
their architectural practices. Section 4.1 presents interviewees and
organisation profiles. Section 4.2 outlines dimensions, sophistica-
tions, and states of architecture.

4.1. Interviewees and organisation profiles

There are 15 interviewees from eight organisations develop-
ing and maintaining their own software products. Table 1 gives
an overview of the interviewees and companies. Note that the order

of interviewees follows the chronological order of the interviews.

Table 1
Summary of sampled companies.

No. | Company name | Software product industry | Total employees | Interviewees
1. | EW | E-government applications | 20–22 | One senior software engineer
2. | ABC | Hydraulic simulation | 800 | Five senior software engineers, and two offshore junior software engineers
3. | OMD | Business identity management | 72 | One junior software engineer and one senior consultant
4. | ARG | CRM and telecommunication | 3 | One managing director
5. | GDT | Computer security | 51–200 | One senior software engineer/chief architect
6. | CO | Visual office solutions | 10 | One chief technology officer
7. | XYZ | Internet searching and organising universal information | 20,000 | One software engineer
8. | DZ | Controlling cryogenic processes for colliders | 1001–5000 | One group leader and one software engineer

The first company is EW, a Belgian government agency in the
Walloon region. The main task of EW is to simplify the lives of citi-
zens and enterprises that need to communicate with public entities.
EW develops and acquires 20 software products and projects for

Total employees Interviewees

20–22 One senior software engineer
800 Five senior software engineers, and two

offshore junior software engineers
72 One junior software engineer and one senior

consultant
3 One managing director
51–200 One senior software engineer/chief architect
10 One chief technology officer

on 20,000 One software engineer
1001–5000 One group leader and one software engineer


e-government, and on-line public administration. EW has a total of 22 employees, ten of whom are in the IT department. EW employs the core contributors for an open source project following a product line approach that is developed together with a number of IT people in different municipalities. We interviewed Gaëtan, a senior developer educated in computer science who has been working at EW for 2 years. He is the main developer of the open source project and cooperates with a Belgian university to explore Model Driven Development (MDD), along with feature modelling techniques. The project started with evolving a product family for advanced meeting management functionality, like meeting workflow specifications and document generation.

The second organisation is ABC, an independent research and consultancy in the field of water, environment, and health. The company has approximately 800 employees, and is based in more than 25 countries worldwide with their headquarters in Denmark. ABC develops more than 15 commercial software products supporting water resources management, with the main expertise in hydrodynamic simulation. Some of the software products have been evolved for more than 20 years. Of the 35 developers, we interviewed five senior developers at their headquarters, and two offshore developers in China. One of the senior developers is the head of development and responsible for all products. The rest are responsible for different products, shown as ABC product #1–5 in Table 2. The five senior developers are educated in hydraulic engineering, while the two offshore developers are educated in IT. One of the interviewed developers is working closely with us on a project that re-engineers a core computational part of three existing products using a software product line approach. However, architectural tools and practices were introduced to the project after this interview study.

The third organisation is OMD, a Danish-founded company that provides advanced role based access control (RBAC) and tools for assuring and reporting compliance between legal requirements and the identity management. Established in 1999, OMD has operations in Europe, Africa, Australia and North America, delivering its solution via a network of skilled partners and system integrators. The company has 72 employees: two in the USA, three in Germany and the rest in Denmark. The IT team consists of 18 people, 12 of whom are developers. The company offers three standard software products of which only two are maintained. We interviewed two employees, hereafter named Santiago and Neeraj. Santiago is a senior consultant, specialised in Business Process Management and educated in Economics. Neeraj is a junior software developer educated in IT.

The fourth organisation is ARG, a private consultancy company in the field of customer relations management and telecommunications. For the past 2 years, ARG has been developing software products on top of call recording systems and CRM systems. We interviewed Ole, a managing director and a founder of ARG. ARG's office and development centre is based in Germany. The company has three employees educated in computer science: Ole, and two developers. Ole invented and designed the prototype of the systems. He is the only person in the company who has domain knowledge in the field of telecommunication.

The fifth organisation is GDT, a security software company headquartered in Germany. The company size ranges between 51 and 200 employees, while 10–11 of them are developers. GDT offers ten security software products for home users and businesses. Those software products have been in development for more than 18 years. We interviewed Hans, a senior software engineer and chief architect at GDT. Hans has been working at GDT for more than 8 years.

The sixth organisation is CO, a European software publisher founded in 1999 by Belgian Internet pioneers who specialize in virtual office solutions. The organisation is a small company with approximately ten employees, three of whom are developers.

We interviewed Guillaume, a back-end developer and a chief tech-
nology officer (CTO) who has been working with the company for
more than 9 years.

The seventh organisation is XYZ, one of the world’s leading inter-
net companies that provides searching, organises information, and
makes it accessible. XYZ has provided dozens of products since
the late nineties. The company has approximately 20,000 employees. Although XYZ is a large company, the size of the development team
is kept between 4 and 6 people. The interaction between teams is
the responsibility of architects and product managers. Team mem-
bers are changed regularly depending on the need of products and
projects. The headquarters is located in the USA, but the company
has a software development centre in Switzerland where we inter-
viewed Marie, a software engineer educated in computer science.
At the time of the interview, she was working on a 2-year-old prod-
uct. Because development processes and practices at XYZ vary
from team to team, the XYZ product shown in Table 2 refers only
to the product that Marie is working with.

The eighth and last organisation is DZ, based in Germany. It
is one of the world’s leading centres for the investigation of the
structure of matter. DZ develops, runs, and uses accelerators and
detectors for photon science and particle physics. The company
ranges between 1001 and 5000 employees who are allocated to
various groups. Each group is responsible for its own projects and
products. The products are all open source and are often developed
together with other companies and institutes worldwide. We inter-
viewed two employees, i.e. Jan and Matthias. Matthias is a group
leader who had proposed his idea to develop the system which was
established as a project before Jan, a software engineer, joined the
group. The project was to evolve two products that needed to be
used together for the system to control cryogenic processes for the
colliders. One of the products had been in use for 20 years, and
another had been in development for 1 year. Currently, both prod-
ucts are developed by 2–2.5 developers and are assembled in 15–20
applications.

4.2. The presence of software architecture

This study is based on interviewees’ perceptions of software
architecture. We wanted to know about their architectural under-
standing and how the software architecture is present in the
development practice: “What is your understanding of software
architecture?” The answers varied. The term software architecture was explained using a variety of buzzwords,
e.g. a blue-print/skeleton of software, components, design of
source code, high-level patterns/abstraction, 4 + 1 views, struc-
turing, assembling building blocks, stack of technology, layering,
dependencies, plug-ins, design patterns, UML diagrams, overall
description of the communication of the software, and overview
for the development team. The implications of what interviewees
understood about software architecture ranged from source code
to human activities.

Table 2 summarises the results with respect to software prod-
ucts that interviewees were working with. Product pseudonyms
are used to represent the software products as company-based
products because of confidential information. The presence of
architecture is categorised into Different dimensions of architecture
representation and usage; To what detail is the architecture repre-
sented?; and How is the architecture expressed?. The cross sign (X)
denotes existence of the item with respect to software product
names. This is rather coarse information and has to be interpreted

together with the qualitative analysis of Section 5.

The first category, different dimensions of architecture representation and usage, is further categorised into In which form is the architecture presented in the process?; How is the architecture represented?; How is the architecture representation used?; and How can the architecture representation be updated?

Table 2
The presence of architecture with respect to software products.
Software products (columns): EW products; ABC product #1; ABC product #2; ABC product #3; ABC product #4; ABC product #5; OMD products; ARG products; GDT products; CO products; XYZ product; DZ products. A cross (X) denotes that an item applies to a product.
Different dimensions of architecture representation and usage:
In which form is the architecture presented in the process? (in somebody's head; documented in folder, binder, or internet; readily available in workspace)
How is the architecture represented? (text; source code; boxes and arrows; classes, packages, and diagrams; Architecture Description Language (ADL); different views)
How is the architecture representation used? (design; communication between developers; communication about changes; communication when designing new features; communication for bug fixing; as feedback for on-going implementation; distribution of work and responsibility; generation of diagrams)
How is the architecture representation updated? (never; regularly controlled; continuously; related to an overall plan and release plan; only when problem occurs)
To what detail is the architecture represented? (overall; classes; styles; patterns; design patterns; various views)
How is the architecture expressed? (implicitly: source code; explicitly: diagram, textual description; requiring substantial knowledge of implementation base/pattern architecture)


For example, the architecture for EW products is presented in the form of in somebody's head, and documented in folder, binder, or internet, but is not readily available in the workspace. The architecture for EW products is represented in text, source code, boxes and arrows, class and package diagrams, and an Architecture Description Language (ADL), and provides different views. The architecture representation for EW products is used in design, communication between developers, communication about changes, communication when designing new features, communication for bug fixing, as feedback for on-going implementation, distribution of work and responsibility, and generation of diagrams. Updates of the architecture representation for EW products are regularly controlled, related to an overall plan and release plan, or made only when a problem occurs. The many marks in EW's column indicate an elaborated practice, which might have been due to the cooperation with the local university.

The next category is To what detail is the architecture represented?, containing six levels: overall, classes, styles, patterns, design patterns, and various views. For example, the architectures of ABC products #1–5 are all detailed at the overall level. The architecture of ABC product #1 is detailed at the levels of overall, classes, styles, patterns, and various views, but not design patterns. The architectures of ABC products #3–4 are detailed at the levels of overall and classes. The architecture of ABC product #5 is detailed at the levels of overall, classes, and patterns.

The last category is How is the architecture expressed?, which is further categorised into implicitly, explicitly, and requiring substantial knowledge of implementation base/pattern architecture. For example, the architectures for OMD products are explicitly expressed using diagrams and require substantial knowledge of the implementation base/pattern architecture.

Based on Table 2, the architecture is mostly present in the process in the form in somebody's head. All our interviewees report that the architecture is represented in the source code, but architecture description languages (ADLs) are hardly ever used to represent the architecture. The architecture representation is commonly used for design, communication between developers, communication about changes, communication when designing new features, and distribution of work and responsibility. Updates of the architecture representation are almost equally distributed between never and regularly controlled. However, the updates rarely relate to an overall plan and release plan. Most of the interviewees confirmed that the architectures are represented at the overall detail; only a few addressed styles and various views. The architectures are almost equally expressed implicitly in source code and explicitly in diagrams, and almost always require substantial knowledge of the implementation base/pattern architecture. However, the architectures are rarely expressed explicitly in textual descriptions.

5. Analysis of interviews

This section presents the results of the grounded theory analysis of the interviews: Section 5.1 begins the analysis with target groups of architecture and their levels of understanding of the architecture; Section 5.2 covers documentation of the architecture; Section 5.3 explains how newcomers learn the architecture of software products; Section 5.4 points out the role of architect(s); Section 5.5 shows communication channels that developers use for updating information about changes in their software products; Section 5.6 addresses the architecture with respect to evolvability and changes; and Section 5.7 presents architectural problems as addressed by our interviewees.

In the presentation, we use citations from the interviews to illustrate and support our analysis. These citations keep as much as possible to the original wording. We minimally edited them for readability and grammar.

5.1. Architecture: who needs it and at what level?

Throughout the software life cycle, development team mem-
bers carry on their tasks depending upon their roles. Different team
members use architecture in different ways in order to collabo-
rate with others. The interviewees distinguished three groups, i.e.
newcomers, developers, and chief architects. The newcomers need
architecture as a springboard to understand the software that they
are going to develop. The developers share and accumulate archi-
tectural knowledge, in particular, the part of architecture that they
are responsible for on a daily basis. The chief architect orchestrates all
architectural activities based on their architectural knowledge.

The different roles refer to the architecture on different levels
of abstraction. These levels define a protocol on how to discuss and
understand the architecture. Marie from XYZ company said: “Usu-
ally, we talk to each other verbally [face-to-face], and we can imagine
[understand] because we know the code basis. When we do it on the
team, we never go to class level. Otherwise, nothing will be done. We
define the interface level, i.e. how do we talk to each other.” But not
everyone will be able to look from a high-level abstraction point-
of-view. Guillaume, the CTO from CO company, told us when he
reviewed his colleague’s work, the code worked fine, but it was not
easy to understand. The main challenge was that it required a lot of
abstraction to interpret. “You don’t have to tell how the code works
in the [function name], but what it does. For example, yesterday, he
[Guillaume’s colleague] wrote a function ‘setStyle’. What the function
does is to change the style. But that is very low-level interpretation.
In fact, at the high-level it is to add background to the menu and the
exact name would be ‘setBackground’, not ‘setStyle’ . . .” The differ-
ence between levels of abstraction became visible again when we
asked Jan and Matthias from DZ to draw and explain their archi-
tecture. They worked on the same product, but Matthias explained
the architecture as the overall picture, while Jan explained what he
was responsible for on a daily basis.
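Guillaume's ‘setStyle’ example can be restated as a small code sketch. The Java code below is a hypothetical reconstruction of the situation he describes (the original system and its language are not named in the interview): the behaviour of both methods is identical, but the second name states the intent at the level of abstraction a reader of the calling code needs.

// Hypothetical reconstruction of the renaming Guillaume describes.
public class Menu {
    private String backgroundColor = "white";

    // Low-level name: says how the code works ("it changes a style property").
    public void setStyle(String color) {
        this.backgroundColor = color;
    }

    // Intention-revealing name: says what the change means at the menu level.
    public void setBackground(String color) {
        this.backgroundColor = color;
    }
}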

5.2. Documentation

Forms of documentation and how it was used was subject to
each of the interviews. The code base presented throughout the
interviews is regarded as the best and only up-to-date representa-
tion of de facto architecture, whereas independent documentation
is regarded as problematic because it is outdated quickly.

5.2.1. Code base as actual documentation
Source code is seen as ‘the’ actual documentation while the

other kinds of documentation are informally produced to sup-
port situated discussion. However, UML diagrams are produced
on a small scale, mostly as needed, for example, to get feedback from
other developers before implementation of a large component.
Direct discussion with developers is more efficient. Many inte-
grated development environments (IDEs) support synchronisation
between UML and source code; however, the developers feel more comfortable starting with programming. The developers have the
impression that understanding and becoming familiar with UML
diagrams takes longer than looking into source code.

A common problem is a lack of documentation on the overview
of a system, in particular a design rationale, and the description of
the main interfaces or functions. A few documents are provided for
newcomers to become familiar with architecture. Newcomers feel
that comprehending systems from documents only, is hopeless, so
they as well prefer to start with programming.

When the source code becomes the actual documentation, the
naming of classes, methods, or interfaces is extremely crucial for
on-going development. If the name is on the correct level of abstrac-
tion, it helps the other developers to understand the concept
behind the name. The citation above by Guillaume shows that archi-


tects are well aware of this need and take care to implement it. Apart from the on-going development, shared distributed development or end-user development also gains this benefit.

5.2.2. The absence of a document
Our interviewees give several reasons for the absence of explicit documentation. Some software products have been used for more than 20 years. When the products were first developed, the main purpose of development was to solve domain problems for a short time; thus, there was no effort on architectural documentation. When developers started with a small feature, they neglected to document it, so when that feature grew bigger, documentation hardly ever existed.

Though a document might have been created, the effort of keeping the document up-to-date leads to maintenance neglect. Developers have the responsibility to document what they have been programming, but they are aware that their documentation will soon be out of date. Marie said: “Documentation is like [. . .] cleaning. You have to clean regularly, but it will get dirty again. When we document, we know that the documentation will be out of date soon.” Maintaining and updating documents is a boring task for developers. The architecture document often has a simple notation or diagram using boxes and arrows, so it's not convenient to update the diagram. Therefore, documentation diverges from what is actually present in the source code.

In a complex and specialised domain area, e.g. hydraulic simulation, domain expertise is strongly required for developing a software product. Often, a developer is a domain expert rather than a software expert because it is a big task for the software expert to get familiar with the domain, so documentation is often neglected by domain experts. This in turn becomes a problem. Our interviewees reported that even a developer with domain expertise could spend up to a year to fully understand and implement a new functionality in a software product. The main reasons are not only the complexity of software products building on top of a stack of technology, but also the unavailability of architecture documents. The architecture exists in somebody's head rather than in a written document. When a developer or an architect doesn't document the current architecture before leaving a company, it causes problems for the other developers who follow that architecture.

5.3. Architecture knowledge acquisition: how newcomers learn the architecture

A well-attuned team might not have any problems with informal architectural practices, so we asked specifically how new developers are introduced to the software architecture. All our interviewees reported on their informal knowledge sharing practices. In this sub-section, we show how architectural knowledge is acquired and developed by team members: discussed with a chief architect, intermixed with programming, and learned by experience.

5.3.1. Discussion with a chief architect
Throughout a software's life cycle, a chief architect discusses with developers in order to keep up to date on the progress of the tasks and to realise the changes in the architecture. Since tasks are clearly distributed based on the architecture, each developer is responsible for his task, for example, making new features or adding some functionality. The chief architect's discussions with developers ensure that they understand the tasks before they begin implementation. The chief architect often draws boxes and arrows or UML-like diagrams on a piece of paper or whiteboard, or makes a PowerPoint presentation of a software prototype and its architecture. If the chief architect's explanation is not precise, bad design decisions could result. In order to avoid that situation, one of the ABC senior developers explained how he transfers architectural knowledge to the

ABC offshore developers: “We go to China and explain this [diagram]
to them. Then we show them how things [architecture] are now and
explain [. . .] what we want to add to this, [the] new feature to put
in. . ..”

The chief architect synchronises architectural knowledge with
other developers by discussion, in particular, team members sit-
uated at the same physical location. The discussion happens in
weekly meetings or daily conversations that take place spon-
taneously in communal areas, like kitchens, canteens, or coffee
corners. The chief architect has the most up-to-date architectural
knowledge, but that knowledge is hardly ever documented. In order
to get that knowledge, one has to update from the chief architect.
When we asked Neeraj, a junior developer at OMD, about a certain
architectural style and patterns currently used for OMD software
products, he said: “You have to ask our system architects. I’m not able
to answer that.”

5.3.2. Intermixed with programming
Training with a chief architect or a senior developer is a

springboard for newcomers to understand architecture. Based on
our interviewee’s experience, a common training technique is
pair-programming, where two developers program together at a
workstation. In the vast majority of cases, the newcomer is assigned
to implement a simple task. The chief architect, or the senior
developers, sit down with the newcomers and allow them to ask
questions. The newcomers get an overview and design rationale of
the software product. They develop a ‘feeling’ – as one of our inter-
viewees expressed it – of how the software product works, how to
implement using provided tools, and how the repository is organ-
ised. The newcomers look into packages, files, and source code,
and at the same time, begin programming, so they gradually learn
how the software is architected. Later on, the newcomers become
developers who are responsible for the architecture of their own sub-systems. This intermixing of architecting and program-
ming facilitates both architecture acquisition and work progress.
However, the chief architect or the senior developers need to con-
tribute his or her time to educate each newcomer.

5.3.3. Learning by doing
A software product can be developed on top of a third-party software product. Often, the third-party architecture is not fully transparent, nor does it provide sufficient architectural documentation to explain how the system is implemented or how new functionality can be attached. Every time the developers and the chief architect begin developing on top of a new third-party software product, they feel like they are ‘opening Pandora’s box.’ Moreover, changes are hardly controlled. When the third party releases a new version, the only way to understand the new architecture is to look directly at the source code. It is always a time-consuming and painful process.

5.4. The role of a chief architect

The analysis so far points to and underlines the importance of the software architect acting as what we have begun to call a ‘walking architecture.’ Not all companies have a ‘chief architect’. However, all products have a person or group of people acting in that role, even though their title might be different, like chief technology officer (CTO), senior developer, product manager, project leader, or system architect. Chief architects have – explicitly, or due to the recognition of their expertise – the responsibility for designing and updating the architecture throughout the software life cycle. The chief architect informs and updates the developers regarding architectural changes on a daily or weekly basis. In their formal and informal meetings, the developers also update the chief architects about architectural issues in the parts of the program for
which they are responsible. The chief architect sometimes creates documents containing a few diagrams to give a good view of the software; however, most developers still prefer talking directly to the chief architect. The developers usually ask about relevant parts of the software that are changing, and even publicly-kept architectural documents. Still, the most updated version of the architecture is ‘stored’ in the head of the chief architect. This is also the understanding of the chief architects themselves, as it becomes apparent in a question put to Hans, the chief architect from GDT, on how he knows about the current or de facto architecture: “I’ve worked in the company for eight years. Most of architectures are my architectures.”

As our interviewees emphasize, a good chief architect has expertise in both software engineering and the domain. This sub-section categorises three main roles of the chief architect, i.e. controlling and communicating architecture within a development team, and interfacing to outward.

5.4.1. Controlling and communicating architecture within a development team

A chief architect is responsible for most design decisions. In initial design discussions, the chief architect sometimes brainstorms with domain and software experts before designing the architecture. This discussion covers data types, quality attributes, design patterns, and platforms for the architecture. Most of the design architectures have clear interfaces and low dependency between components. The design architecture is often used for distributing development tasks and defining social protocols that aim at using the architecture as a coordination mechanism. Examples of such protocols are: when changing an interface, the programmer must contact the relevant developer; the chief architect synchronises the tasks by collecting, reviewing and accepting everything before checking in changes and documenting requirements for the next release; and the chief architect schedules meetings or workshops with developers when the architecture needs to be updated.

During the implementation, some types of problems cannot be resolved by tools automatically, e.g. naming a function at the right level of abstraction. Thus, code review is often part of the tasks of chief architects. When they find a problem in the implementation, the chief architects will talk directly to developers and motivate them to resolve the problem.

Chief architects often train newcomers by assigning simple tasks, e.g. implementing a new component. They know where to add these new components or functionalities without endangering the architecture. When the newcomers have sufficient knowledge, the chief architect will assign them to work on critical parts.

5.4.2. Updating the ‘walking architecture’
Maybe because we did not ask explicitly, the interviewees did not always emphasise the above understanding of the mentioned practices, in that they may also serve the purpose of updating the chief architect’s knowledge of architectural issues that might lead to reconsidering the architecture itself. This became visible in two ways within our interview material; the communication with developers was talked about as a two-way communication, and the problems that arose when the feedback channel did not exist.

Marie from XYZ answered the question of how their team discussed architectural issues: “When it becomes larger, especially [if] it affects a whole sub-team, or the other part/team, or [we need a] sanity check, we set up [a] meeting or talk informally with the people who care about the affect and need to know.” Or as one of the project leaders at ABC said: “Generally, it’s [a] very informal way, [talking] between colleagues that know about this thing.” On the question of how developers get to know about relevant changes in the architecture, another of the ABC architects answered: “Hopefully the one making the changes tell other people.”

These informal updates become problematic when the development becomes distributed. One of ABC’s senior developers reported:
“Sometimes they do not. [. . .] Normally, when developers were in
Prague, most things [were] developed there, [so] they knew what [was]
going on and we [had] weekly meeting with them, talking about dif-
ferent things, and what different people [had] been doing. [. . .] Now,
it’s not the same [as] what we do in China. So, it’s more difficult, now.
Developers do not know anymore what changes in the application.
[. . .] It’s difficult.” One of our interviewees reported: “Sometimes we
have people implementing core components that destroy other compo-
nent. Not so much within the engine group, because we have only two
people. But the other, especially Singapore or Shanghai groups that did
some core components change, because there [was] no documentation
up there. They didn’t know that the components [failed] because they
[relied] on special functionality.”

Some of our interviewees reported on explicit measures to stay up-to-date with the architecturally relevant changes to the software product. A protocol might have been established that developers must inform the chief architect before changing central parts, core components, or data structures.

The integrated development environment can indicate architecture violations if set up in the right way. Also, nightly builds can indicate when new code breaks an interface. In some companies, the chief architect reviews changes to the code and, based on the reviews, discusses the changes with the developers.
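To illustrate the kind of rule such a nightly build or IDE check might enforce, the following is a minimal, hypothetical sketch in Python of a dependency-rule checker. The layer names and the allow-list are invented for illustration and are not taken from the studied companies; a real setup would derive them from the team's own design architecture.

# Minimal sketch of a nightly architecture-compliance check (hypothetical layers/rules).
# It flags imports that cross layer boundaries not permitted by the design architecture.
import ast
from pathlib import Path

# Hypothetical mapping of top-level packages to architectural layers.
LAYER_OF = {"ui": "presentation", "services": "domain", "storage": "infrastructure"}

# Allowed dependencies between layers (the design architecture as an allow-list).
ALLOWED = {
    "presentation": {"presentation", "domain"},
    "domain": {"domain", "infrastructure"},
    "infrastructure": {"infrastructure"},
}

def violations(source_root: Path):
    """Yield (file, imported module) pairs that break the layering rules."""
    for path in source_root.rglob("*.py"):
        layer = LAYER_OF.get(path.relative_to(source_root).parts[0])
        if layer is None:
            continue  # code outside the modelled architecture is ignored
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            names = []
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            for name in names:
                target = LAYER_OF.get(name.split(".")[0])
                if target and target not in ALLOWED[layer]:
                    yield path, name

if __name__ == "__main__":
    for file, module in violations(Path("src")):
        print(f"architecture violation: {file} imports {module}")

Run as part of a nightly job, a report from such a script would give the chief architect the same kind of morning feedback the interviewees describe, without requiring up-to-date written documentation.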

Guillaume from CO keeps up-to-date with changes in the common parts of the software by regularly reading the source code and the common Wiki. “In the Wiki, every change in the software is documented, not with a lot of detail. . . . In the trunk [of the CVS], we document every commit. . . . I take care of [reading] source code, Wiki and the commit.”

5.4.3. Interfacing to outward
Chief architects reported that they need to interface with people working outside of the development team, e.g. people gathering requirements and working with other related products, clients, or end-users. In order to utilise the design architecture, chief architects have to ensure that agreements with people working outside the development team have been made. In a conflict situation, chief architects need to negotiate and compromise with those outside sources.

Chief architects are aware that feedback from outside profes-
sionalises the development. They often discuss expectations of
implementation with their clients before conversations with devel-
opers. They go back and collect feedback from the clients or the
end-users before the next release. If chief architects have no direct
contact with clients or end-users, they have discussions with sales
and marketing people. The feedback from the clients or marketing
people will help the chief architects get a correct understanding of
the requirements. Based on this understanding, they prioritise and
delegate the requirements for the next release.

Due to business or organisational reasons, related software products might be developed in different units. In some cases, more than one team together develops a software product. In such situations, chief architects need to coordinate with other development teams in the same company or from contract companies. In software product lines, changes in one product may cause malfunctions in other products. Thus, chief architects must take heed of the changes. If problems occur, it is their task to find solutions. The solutions vary from collaboration to architecting. Examples are as follows: the chief architect communicates with another development team on the changes’ overall effect; the chief architect develops interfaces (e.g., an API) to express a common concept of another product; or the chief architect designs the architecture in a way that accommodates the interests of both teams. Guillaume, a chief technology officer at CO, told us how he handled recent
changes: “Last week, XYZ company published an API to access the address book [of XYZ product]. There [was] a request to implement a mechanism to import the address book from XYZ to CO. XYZ implemented most of APIs under a common umbrella [XYZ product]. I think I have to take care; I develop[ed an] interface in [a] generic way in order to express the concept of [XYZ product].”

5.5. Communication about changes

Team members need to be updated about changes. In this sub-section, we categorise communication from the current software development practice that covers both human interaction and support tools. Each practice presented below is ranked from the most commonly used to rarely used. Note that each interviewee reports on more than one practice.

Verbal communication or face-to-face communication (i.e., free-form dialogue, explanation, and discussion) with colleagues is the most common practice. It is used by all interviewees and seems to be the simplest way to update them about architectural changes. Just another quote from our interviews complementing the many above: “We don’t have state diagrams and seldom use sequence diagrams. But we need knowledge about what [method is] to be called first or second. That’s not explicitly stated. It is just something that we make use of by asking people that know about this typical sequence of calling.”

5.5.1. Meeting
A development team must meet regularly where each developer goes through all the tasks and updates what the other colleagues are doing, so the team members can synchronise their understanding of the architecture. In some distributed development projects, a whole development team assembles for a longer ‘coding camp’ meeting at the same location in order to brainstorm about new designs, or to finalise a new release.

5.5.2. Nightly builds and testing
Nightly build mechanisms notify developers the next morning if the changes checked in the day before had affected other parts of the software, for instance, breaking interfaces or violating the design rules.

Email, mailing list, and instant messaging are used spontaneously by team members to send messages (e.g., “By the way, you broke our code!”), or inform the other members within a team, or between teams, about changes. Sometimes, messages are automatically sent from an IDE or a support tool.

5.5.3. Concurrent versions system (CVS) and subversion repository
In some cases, everyone in a team has their own branch in the repository to work on as a sand-box. Later, they carefully merge all the changes into a common branch or the trunk in order to rebuild the software. When a developer commits changes in the source code into the trunk, the CVS repository automatically sends an email to the other developers.
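Commit notification of this kind is typically wired into the repository through a hook. The sketch below is purely illustrative and not taken from the studied companies: it shows a Subversion-style post-commit hook in Python that gathers the commit metadata with svnlook and mails a summary to a team address; the mailing-list address and SMTP relay are placeholders.

#!/usr/bin/env python3
# Minimal sketch of a Subversion post-commit hook that emails the team about a commit.
# Assumes a local SMTP relay and an illustrative team address; adjust for a real setup.
import subprocess
import smtplib
import sys
from email.message import EmailMessage

TEAM_ADDRESS = "dev-team@example.org"   # hypothetical mailing list
SMTP_HOST = "localhost"

def svnlook(subcommand: str, repo: str, rev: str) -> str:
    """Run `svnlook <subcommand> -r <rev> <repo>` and return its output."""
    return subprocess.run(
        ["svnlook", subcommand, "-r", rev, repo],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def main() -> None:
    repo, rev = sys.argv[1], sys.argv[2]          # arguments passed by Subversion
    author = svnlook("author", repo, rev)
    log = svnlook("log", repo, rev)
    changed = svnlook("changed", repo, rev)       # added/modified/deleted paths

    msg = EmailMessage()
    msg["Subject"] = f"r{rev} committed by {author}"
    msg["From"] = f"svn@{SMTP_HOST}"
    msg["To"] = TEAM_ADDRESS
    msg.set_content(f"{log}\n\nChanged paths:\n{changed}")

    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    main()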

5.5.4. Rich IDE
Some IDEs provide multi-disciplined team members with an integrated set of tools for architecture, design, development and testing of applications. The IDE can report problems in the architecture and do quality assurance. Ole from ARG told us how his developers knew the relevant parts of the software were changing: “[IDE name] tell us which part of the architecture has problems. . . . [IDE name] is a golden gun. It is a very complicated environment . . . It can do much for software engineering.”

The main advantage of the integration is to handle the changes within a monolithic tool. Guillaume from CO supported this with:
“I can modify code and the code is still consistent for all applications
. . . It is quite easy to handle.”

5.5.5. Code review
Changes in the source code are sometimes reviewed by chief architects before one can commit to the repository. They correct mistakes in the source code and improve the quality of the software while doing code review, and then often discuss the changes and their rationale with the developers.

5.5.6. Wiki
Every change in the software can be documented in a Wiki. Though the documentation is not fully detailed, everybody is aware of the changes and can use the Wiki to inform others about the changes he or she introduced (see also Section 5.4.2). Only a few interviewees report using Wikis to update or inform team members about changes to the source code or the architecture. However, Wikis are sometimes used for collecting ideas and requirements from users and developers.

5.6. Evolution and changes

When the original architecture was first established, people had no intention of ever changing it. However, changes initiated by the use and business contexts of software products resulted in new requirements, which in turn affected the architecture. Typical triggers include: a user request for some functionality that could benefit other users; an intuition from a developer who might even use the software himself; and marketing strategy or competition. Across all organisations, our interviewees confirmed that chief architects have to be involved in or responsible for all changes, in particular architectural changes. Chief architects need to reconcile the change requests with the existing architecture in order to reduce the effect on the architecture. They might decide to add new components or functionalities. If a re-design of the whole architecture is the only solution, they could suggest creating a new software product.

Regarding changes implied in the development, protocols might be put in place to support information and knowledge sharing. For instance, developers are not allowed to change a common part, core component, or data structure without informing the chief architect; the developers should synchronise their changes with their colleagues; or development has to be done based on the latest version.

Although chief architects are initially responsible for establishing the architecture and taking care of changes, it is difficult to keep track when the architecture evolves over long periods of time. Changes in the source code affect other parts of a software product; for example, changing a common part can cause a software malfunction. One of ABC’s senior developers complained: “On the entire ABC product line, if you change something on the core components, you may destroy 95% of the component here. . .. It’s difficult to know exactly what component you are touching by changing the code.” Furthermore, the changes sometimes have effects beyond a company’s boundary. Ole, a managing director at ARG, told us about customising a third-party product: “When they [the third party product developing companies] make changes on the architecture, we know it by the malfunction on our software, not by documentation.”

5.7. The problems of the practitioners

Our interviewees also talked about problems in their architectural practice. We add them here for two reasons: (a) they triangulate and confirm the analysis so far and thus provide additional support for it; and (b) the section gives an
indication of what the problems are from the practitioners’ point of view. The problems address technical infrastructure as well as co-operational aspects of software development. The common problems in the technical context are changes in the technical infrastructure, framework, or standard that a software product builds upon, e.g. changing virus scanners to support a Unicode base, or changing global unique identification mechanisms for CRM systems.

With respect to co-operational aspects, some software design/development approaches are likely to obstruct day-to-day development practices and hinder collaboration. Furthermore, a lack of awareness, a lack of domain or software engineering expertise, or the loss of architecture knowledge often causes architectural problems.

Gaëtan, a senior developer at EW, reported from his daily practice: “we have a problem with the model; it is a binary file, not text file. So it means that if I want to change a part of the model, and, at the same time, the other developer wants to modify another part. It is just one file. So one of the two guys can make modification, the other cannot touch anything. Once one finishe[s] and commit[s] the file, another can check-in. It is not easy to modify the design with more than one [person] at a time. For source code is different; the code is in plenty of files. People can work on separate sets of the files. When people work on the same file, we can ‘make diff’ between elements and see [the code] that a guy works on this part [. . .] and merge [the] changes, or the changes can be merged automatically because it doesn’t touch the same part. It is easier to work [in a] collaborative way with code. But with this [the model] is not possible.” It is reasonable to claim that, apart from being ‘the’ actual document, the code base supports collaborative development better than the model.

Although many existing tools and practices are used for informing about changes and controlling evolution (e.g., nightly builds, unit tests, or regression tests), these tools do not resolve all the problems. Lacking information about changes, as reported in Section 5.4.2, is common, especially in distributed software development. Although documentation problems are addressed here, creating a document might not be the best, or the only, solution. Hans, a chief architect at GDT, said: “If it [the document] would be a good view on the software, I will do it. . . .We don’t need overview for every class.” Therefore, giving and controlling information about changes should be done at the right level.

One of ABC’s senior developers stated: “This kind of dependency graph would tell them we have to tell someone about something, that we [need to] make changes. It helps people be aware that if they change here [a component it] may affect [something] elsewhere. Otherwise, it will be difficult to see by [yourself].”

Finding the right people and keeping them is a challenge in many software developing companies. “One year ago, unfortunately, our excellent developer left our company. It is very hard to find [new] developers. We are looking everywhere,” Guillaume, a chief technology officer at CO, complained.

Developing software products requires both domain and software engineering expertise, but they rarely come together. Ole, a managing director of ARG, addressed his biggest concern: “our developers do not know anything about telecommunication via telephone, although they use it. . . . On the other hand, [third party product] developers understand very well about telecommunication and techniques of voice over IP, or something like that, but they don’t know anything about software factories, processes or design patterns. They know nothing about the architecture and development cycle.”

If newcomers or developers have only software engineering expertise, they will need time to acquire sufficient domain expertise. If the developers have only domain expertise, they will need time to get software engineering expertise. When developers have acquired both, their expertise becomes an important asset for companies. However, companies cannot always hold onto their personnel. Losing a central developer or chief architect can result
in the failure of a software product. One of ABC’s senior developers said: “the software products are going to the dying phase or dead-code when the key programmer left. . . . We don’t know how to write them.”

6. Discussion

This empirical study focuses on the development of software
products. Software products constantly evolve. Changes in the soft-
ware product must be handled consciously in order to prevent
dead-end development. Through our interviews it became clear
that the main issue when developing software products is not
to implement the design architecture in order to assure certain
given qualities, but to maintain the architecture in a viable state
when evolving the software. The software needs to support future
requirements and innovations. The architecture – both the source code architecture and, if it exists, the current design architecture – is a means towards that end. In the discussion below, we highlight the aspects that we consider relevant for developing support for architectural practices in software product development. The importance of the ‘walking architecture’ and the ‘good reasons for bad documentation’ indicate the need to develop social protocols fitting with local practices when introducing architecture representations and documentation, and we finally propose a means to promote architecture awareness.

6.1. Architecture awareness is achieved through ‘walking
architecture’ practices

The analysis indicates that product development teams depend on chief architects or a group of architects who act as what we started to refer to as the ‘walking architecture’ for communicating the architecture to the developers, and in turn communicating problems which might become architectural issues to the software architect. The walking architecture takes most, if not all, design decisions and solves architectural problems throughout on-going development. Architectural issues arise from inside as well as outside the development team, cover technical and social aspects of software development, and require domain as well as software engineering expertise. In order to solve these issues, the chief architect interacts with technical and business people, establishes tools and practices, recruits or trains team members for that expertise, etc. Architecturing is not only a matter of technical design, but also of juggling the social contexts of software development, which makes it almost impossible to automate (Unphon et al., 2009a).

Our analysis both confirms the importance of inward and outward interaction as part of the role of the software architect (Kruchten, 1999), and deepens the understanding of the importance of this interaction. In the interviews, the rationale behind these practices becomes visible. On the one hand, developers need up-to-date knowledge about the architecture, here and now. The architect can explain the structure of the software in relationship to the problem at hand. On the other hand, the chief architect may stay in contact with the development of the source code and become aware of potential issues. The emphasis on face-to-face communication provides a strong indication that whatever methods and tools software engineering research proposes need to be aligned with the practices of knowledge sharing by, and with, the walking architecture.

6.2. Good reasons for bad documentation

A lack of up-to-date architecture documentation is problematic according to our interviewees, and software architecture researchers often refer to the threat of losing architectural
knowledge by relying on the architect. If the lack of documentation phenomenon is so widespread, one might suspect ‘good reasons’ for ‘bad documentation’ (see also Heath and Luff, 1996).

As presented in Section 2, architecture research emphasises written representation (e.g., formal notations or documentation) and codified knowledge. Written representations describing software architecture might suit a researcher’s practice rather than a chief architect’s practice. Based on our empirical evidence, the architecture almost always exists as knowledge of people, applied and communicated in answering situated questions and problems. Documentation – if it exists – tends to play a secondary role. Architecture knowledge management can be described as socialisation-heavy. Face-to-face communication is the state of the practice of architectural knowledge management. For example, we often overhear a team member say to his/her colleague: “I know you worked on this component, please tell me about it,” or “The best person to ask is the architect.” Team members are used to conversing with a chief architect about their work. Through these discussions, chief architects not only educate and inform developers, but also take heed of changes that may cause problems to the architecture later on. They converse with the team members in order to find a solution for an architectural problem. Given these practices, the absence of documents is not a risk for software companies. On the contrary, if documentation were successfully established, even in parts, instead of the aforementioned practice, the chief architect would lose track of what is going on in the architecture. As a consequence, nobody could maintain the architecture anymore, which may in turn result in serious problems.

This confirms Naur’s observation that the communication of design knowledge needs to take place in a situated manner. Our analysis also implies that research needs to look for other ways to address the threat of losing the ‘walking architecture’. Maybe a conscious sharing of the architect role among a group of senior developers might both improve the quality of the architecture and allow for the survival of the product when the original ‘walking architecture’ is no longer available.

If a shared form of documentation is established, social protocols need to be established as well, to make sure that the walking architecture learns about developments in the code and potential architectural issues. One example of such a social protocol is the practice of regularly reading the common wiki-based documentation, the checked-in source code, and the CVS information in order to keep up-to-date with the changes to the code.

Our observations and the challenges they imply for architecture research and method, however, refer to software product development and evolution. Software products are evolved continuously. Practices for contract development (Mitchell et al., 2002) or the development of high-integrity systems (Hinchey and Bowen, 1999) might differ, and the conclusions might need to be adjusted accordingly.

6.3. How to promote architecture awareness

Practitioners and researchers agree on the importance of software architecture being part of everyday software development in order to enhance quality attributes (Naik and Tripathy, 2008), in particular evolvability (Unphon et al., 2009b). Many software companies have successfully evolved their products even though they hardly ever emphasise explicit architectural documentation, or keep architecture documents up-to-date. However, source code is a reification of design. Developers and architects are well aware of the architectural structures: software developers know when to change the source code, where to change it, who to ask, who to inform, etc. Architecture is alive with a walking architecture.

Tools and methods for promoting architecture awareness should support this practice, rather than establishing a diverging
approach. Two promising examples of how to do this are as follows:

One of our interviewees reports on his projects sharing and cooperatively maintaining the architecture knowledge in the form of a common Wiki (see Section 5.4.2). Documentation in a Wiki, albeit not very detailed, can communicate changes to team members, as well as to the chief architect. The documentation in the Wiki becomes one way of communicating about changes continuously that does not hinder, but supports, the chief architect. The chief architect can then read through the changes and be aware of anything that might affect the architecture.

A similar tool is proposed by Solís and Ali (2008). Personal communication (Muhammad Ali Babar) on the usage of this tool indicates that a social protocol similar to the one reported above evolved around its usage.

As part of the research with a product developing company, Unphon (2009) presented the introduction of a ‘build hierarchy’ matching the static architecture as a technique to give developers continuous feedback about whether their code complies with the design architecture. This architecture-based build hierarchy can be implemented as part of the integrated development environment (IDE), or the nightly build infrastructure. That way, the build hierarchy supports on-going architecting through architectural compliance checking between the design architecture and the code architecture. If implemented as part of the IDE, developers can be informed with every compile command whether their changes affect other parts of the software or break the architecture. If changes in the source code break the design architecture, the developers need to revise the changes or discuss them with the chief architects and their colleagues. Through the discussion, the chief architects are updated about development problems that might become architectural issues.
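To make the underlying compliance check concrete, the following is a small, hypothetical sketch (not Unphon's implementation) of how a declared build hierarchy could be compared against the dependencies extracted from the code: a module may only depend on modules that are built before it, so an upward dependency breaks the hierarchy. The module names and dependency data are placeholders.

# Minimal sketch of architecture-compliance checking against a declared build hierarchy.
# The hierarchy and the extracted code dependencies below are illustrative placeholders.

# Declared build hierarchy: a module may only depend on modules built before it.
BUILD_ORDER = ["core", "domain", "services", "ui"]
LEVEL = {module: i for i, module in enumerate(BUILD_ORDER)}

# Dependencies extracted from the code base (e.g., by an import scanner or the IDE).
CODE_DEPENDENCIES = {
    "ui": {"services", "domain"},
    "services": {"domain", "core"},
    "domain": {"core", "ui"},   # 'ui' here is an architecture violation
    "core": set(),
}

def compliance_report(order_level, dependencies):
    """Return (module, dependency) pairs where code depends on a higher build level."""
    return [
        (module, dep)
        for module, deps in dependencies.items()
        for dep in deps
        if order_level[dep] > order_level[module]
    ]

if __name__ == "__main__":
    for module, dep in compliance_report(LEVEL, CODE_DEPENDENCIES):
        print(f"{module} depends on {dep}, which is built later: breaks the build hierarchy")

Wired into the compile step or the nightly build, a report like this gives developers the continuous feedback the technique aims for, and gives the chief architect a concrete occasion for the discussions described above.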

7. Conclusions

This article began by posing the question of what architecture practices software product developing companies apply to keep their products alive, sometimes over several decades. Besides providing a rich picture of industrial architecturing practices, we would like to highlight three major results:

The first is the concept of the ‘walking architecture’ and his or
her practices of knowledge sharing and architecturing. Our study
shows that architecture practices emphasise face-to-face commu-
nication rather than the codification in documents. The analysis
emphasises the importance of a chief architect acting as a walk-
ing architecture who is responsible for maintaining and evolving
the software products’ architecture. Through face-to-face commu-
nication with developers, as part of the everyday development, the
chief architect communicates the software architecture in a form
most suited to help the developers with problems at hand, and at
the same time, becomes aware of potential architectural problems.

The second result is a more differentiated perspective on the effect of documentation. Other research has promoted documentation as a recommended practice for development teams. Based on our analysis, we hesitate to join this chorus. Our interviewees emphasise that documents quickly become outdated if not continuously maintained. Moreover, the documentation could disrupt the practice of the walking architecture: if reading the document replaced discussions, the chief architect would not be informed about potential problems in a timely manner. As there thus might exist ‘good reasons’ for ‘bad documentation’, it does not help to re-iterate the praise of extended documentation. Rather, the introduction of externalisations needs to be carefully devised so that it does not disturb the knowledge exchange in a way that would jeopardise the ability of the ‘walking architecture’ to guide the evolution of the software product.

The third result we want to highlight complements the reflections on the role of documentation: applying the notions of awareness and social protocols as a base for sharing information about a cooperatively achieved task allowed us to take another approach to tools and methods supporting software architecture. If documents and tools are devised, they need to be fitted into everyday development practices, and require a change in the social protocol around architecting. As examples, we discussed the usage of a common Wiki from our field material, and using the build hierarchy as a reification of the design architecture based on related research. With this result, the article confirms the importance of taking cooperative aspects into account when devising solutions for seemingly technical problems.

Given the research above, the reader might become curious about how exactly the in situ communication takes place, and how the social protocols without or around architecture documentation are enacted and maintained. The answer to these questions we need to leave to future research, e.g. an observational study focussing on the day-to-day architecturing in the context of software products.

Acknowledgements

We kindly thank all interviewees for participating in this study. We should like to extend our gratitude to Dr. Wolf-Gideon Bleek for his inspiration and help in setting up this study. The journal reviewers contributed to substantial improvements!

References

Allen, R., Garlan, D., 1997. A formal basis for architectural connection. ACM Trans.
Softw. Eng. Methodol. 6 (3), 213–249.

Allen, R.B., Garlan, D., 1992. A formal approach to software architecture. In: Pro-
ceedings of the IFIP 12th World Computer Congress on Algorithms, Software,
Architecture – Information Processing’92, vol. 1. North-Holland Publishing Co.,
Amsterdam, The Netherlands, pp. 134–141.

Bass, L., Clements, P., Kazman, R., Klein, M., 2008. Models for evaluating and improving architecture
competence. Technical report CMU/SEI-2008-TR-006. Software Engineering
Institute.

Bass, L., Clements, P., Kazman, R., 2003. Software Architecture in Practice, 2nd ed.
Addison-Wesley.

Belady, L., Lehman, M., 1976. A model of large program development. IBM Systems
Journal 15 (1), 225–252.

Bischofberger, W., Kühl, J., Löffler, S., 2004. Sotograph – a pragmatic approach
to source code architecture conformance checking. In: Software Architec-
ture. Vol. LNCS 3047/2004. 1st European Workshop on Software Architecture
(EWSA2004), Springer, Berlin/Heidelberg, pp. 1–9.

Booch, G., Rumbaugh, J., Jacobson, I., 1998. The Unified Modeling Language User
Guide. Addison-Wesley Professional.

Bredemeyer, D., 2010. The Role of the Architect in Software and Systems
Development, Last visited 15 April, 2010 http://www.bredemeyer.com/
Architect/RoleOfTheArchitect.htm.

Brooks, F., Iverson, K., 1969. Automatic Data Processing (System 360 Edition). John
Wiley.

Czarnecki, K., Eisenecker, U., 2000. Generative Programming: Methods,
Tools, and Applications. Addison-Wesley Professional.

Damian, D., Izquierdo, L., Singer, J., Kwan, I., 2007. Awareness in the wild: why com-
munication breakdowns occur. In: Proceedings of the international Conference
on Global Software Engineering, ICGSE, IEEE Computer Society, Washington, DC,
August 27–30, pp. 81–90.

de Souza, C., Froehlich, J., Dourish, P., 2005. Seeking the source: software source
code as a social and technical artifact. In: GROUP’05: Proceedings of the 2005
International ACM SIGGROUP Conference on Supporting Group Work. ACM, New
York, NY, USA, pp. 197–206.

de Souza, C.R.B., Redmiles, D., Cheng, L.-T., Millen, D., Patterson, J., 2004. Some-
times you need to see through walls: a field study of application programming
interfaces. In: CSCW’04: Proceedings of the 2004 ACM Conference on Computer
Supported Cooperative Work. ACM, New York, NY, USA, pp. 63–71.

Dingsøyr, T., Conradi, R., 2002. A survey of case studies of the use of knowledge man-
agement in software engineering. International Journal of Software Engineering
and Knowledge Engineering 2 (1), 391–414.

Dourish, P., Bellotti, V., 1992. Awareness and coordination in shared workspaces. In:
CSCW’92: Proceedings of the 1992 ACM Conference on Computer-Supported
Cooperative Work. ACM, New York, NY, USA, pp. 107–114.

Dunsire, K., O’Neill, T., Denford, M., Leaney, J., 2005. The ABACUS architectural
approach to computer-based system and enterprise evolution. In: ECBS’05:
Proceedings of the 12th IEEE International Conference and Workshops on Engi-

s and Software 83 (2010) 2211–2226 2225

neering of Computer-Based Systems, IEEE Computer Society, Washington, DC,
USA, pp. 62–69.

Eriksson, J., 2008. Supporting the cooperative design process of end-user tailoring.
Ph.D. thesis. Department of Interaction and System Design, School of Engineer-
ing, Blekinge Institute of Technology, Sweden.

Farenhorst, R., de Boer, R.C., 2009. Architectural knowledge management: Sup-
porting architects and auditors. Ph.D. thesis. VU University Amsterdam, ISBN:
978-90-8659-346-0.

Feiler, P.H., Lewis, B.A., Vestal, S., 2006. The SAE Architecture Analysis & Design
Language (AADL) a standard for engineering performance critical systems. In:
Proceedings of the IEEE Computer Aided Control System Design IEEE Interna-
tional Conference on Control Applications IEEE International Symposium on
Intelligent Control, 4–6 October, pp. 1206–1211.

Fowler, M., 1999. Refactoring: Improving the Design of Existing Code. Addison-
Wesley.

Fowler, M., 2003. Who needs an architect? IEEE Software 20 (5), 11–13.
Garlan, D., Monroe, R., Wile, D., 1997. Acme: an architecture description interchange

language. In: CASCON’97: Proceedings of the 1997 Conference of the Centre for
Advanced Studies on Collaborative Research. IBM Press, p. 7.

Garlan, D., Shaw, M., 1994. An introduction to software architecture. Technical
report, Pittsburgh, PA, USA, http://portal.acm.org/citation.cfm?id=865128.

Gasparis, E., Nicholson, J., Eden, A.H., 2008. LePUS3: an object-oriented design
description language. In: Diagrams’08: Proceedings of the 5th International Con-
ference on Diagrammatic Representation and Inference. Springer-Verlag, Berlin,
Heidelberg, pp. 364–367.

Gerson, E.M., Star, S.L., 1986. Analyzing due process in the workplace. ACM Trans.
Inf. Syst. 4 (3), 257–270.

Glaser, B.G., Strauss, A., 1967. The Discovery of Grounded Theory: Strategies for
Qualitative Research. Aldine, Chicago.

Greenfield, J., Short, K., Cook, S., Kent, S., August 2004. Software Factories: Assem-
bling Applications with Patterns, Models, Frameworks, and Tools. Wiley.

Grinter, R.E., 1999. Systems architecture: product designing and social engineering.
SIGSOFT Softw. Eng. Notes 24 (2), 11–18.

Grinter, R.E., 2003. Recomposition: coordinating a web of software dependencies.
Comput. Supported Coop. Work 12 (3), 297–327.

Gutwin, C., Greenberg, S., 2002. A descriptive framework of workspace awareness
for real-time groupware. Comput. Supported Coop. Work 11 (November (3)),
411–446.

Hansen, M.T., Nohria, N., Tierney, T., 1999. What’s your strategy for managing knowl-
edge? Harv. Bus. Rev. 77 (2), pp. 106–16, 187.

Hansson, C., Dittrich, Y., Gustafsson, B., Zarnak, S., 2006. How agile are industrial
software development practices? J. Syst. Softw. 79 (9), 1295–1311.

Harrison, N.B., Avgeriou, P., Zdun, U., 2007. Using patterns to capture architectural
decisions. IEEE Softw. 24 (4), 38–45.

Heath, C., Luff, P., 1992. Collaboration and control: crisis management and multime-
dia technology in London underground line control rooms. Computer Supported
Cooperative Work (CSCW) 1 (March (1–2)), 69–94.

Heath, C., Luff, P., 1996. Documents and professional practice: “bad” organisational
reasons for “good” clinical records. In: CSCW’96: Proceedings of the 1996 ACM Conference on Computer Supported Cooperative Work. ACM, New York, NY,
USA, pp. 354–363.

Hinchey, M.G., Bowen, J.P., 1999. High-Integrity System Specification and Design.
Springer-Verlag, New York, Inc., Secaucus, NJ, USA.

Hofmeister, C., Nord, R., Soni, D., 2000. Applied software Architecture. Addison-
Wesley Longman Publishing Co., Inc., Boston, MA, USA.

IEEE 1471-2000, September, 2000. IEEE Recommended Practice for Architectural
Description of Software-Intensive Systems.

Jansen, A., Bosch, J., 2005. Software architecture as a set of architectural design
decisions. In: WICSA’05: Proceedings of the 5th Working IEEE/IFIP Conference
on Software Architecture. IEEE Computer Society, Washington, DC, USA, pp.
109–120.

Kruchten, P., 1995. The 4 + 1 View Model of Architecture. IEEE Softw. 12 (6),
42–50.

Kruchten, P., February 1999. The architects—the software architecture team. In:
Donohoe, P. (Ed.), Software Architecture: TC2 First Working IFIP Conference on
Software Architecture (WICSA1). Kluwer Academic, San Antonio, TX, USA, pp.
565–583.

Kruchten, P., 2008. Controversy corner: what do software architects really do? J.
Syst. Softw. 81 (12), 2413–2416.

Kruchten, P., Lago, P., van Vliet, H., 2006. Building up and Reasoning about Architec-
tural Knowledge, pp. 43–58. http://dx.doi.org/10.1007/11921998_8.

Lago, P., Avgeriou, P., Capilla, R., Kruchten, P., 2008. Wishes and boundaries for a soft-
ware architecture knowledge community. In: Software Architecture, Working
IEEE/IFIP Conference, pp. 271–274.

Lehman, M., 1980. On understanding law, evolution, and conservation in the large-
program life cycle. J. Syst. Softw. 1 (3), 213–231.

Lehman, M.M., 1996. Laws of software evolution revisited. In: EWSPT’96: Proceed-
ings of the 5th European Workshop on Software Process Technology, Springer-
Verlag, London, UK, pp. 108–124, http://www.ic.ac.uk/∼mml/feast2/papers/pdf/556.

Lientz, B.P., Swanson, E.B., 1980. Software Maintenance Management. Addison-
Wesley Longman Publishing Co., Inc., Boston, MA, USA.

Lientz, B.P., Swanson, E.B., 1981. Problems in application software maintenance.
Commun. ACM 24 (11), 763–769.

Luckham, D.C., 1996. Rapide: a language and toolset for simulation of distributed
systems by partial orderings of events. Technical report, Stanford, CA, USA.


Luckham, D.C., Vera, J., 1995. An event-based architecture definition language. IEEE
Trans. Softw. Eng. 21 (9), 717–734.

Madhavji, N.H., Fernandez-Ramil, J., Perry, D., 2006. Software Evolution and Feed-
back: Theory and Practice. John Wiley & Sons.

Medvidovic, N., Rosenblum, D.S., Taylor, R.N., 1999. A language and environment for
architecture-based software development and evolution. In: ICSE’99: Proceed-
ings of the 21st International Conference on Software Engineering. ACM, New
York, NY, USA, pp. 44–53.

Mitchell, R., McKim, J., Meyer, B., 2002. Design By Contract, By Example. Addison
Wesley Longman Publishing Co., Inc., Redwood City, CA, USA.

Murphy, G.C., Notkin, D., Sullivan, K., 1995. Software reflexion models: bridging the
gap between source and high-level models. In: SIGSOFT’95: Proceedings of the
3rd ACM SIGSOFT Symposium on Foundations of Software Engineering. ACM,
New York, NY, USA, pp. 18–28.

Naik, K., Tripathy, P., 2008. Software Testing and Quality Assurance: Theory and
Practice. John Wiley & Sons, Inc.

Naur, P., 1985. Programming as Theory Building. Microprocess. Microprog. 15,
253–261.

Nonaka, I., 1994. A dynamic theory of organizational knowledge creation. Org. Sci.
5 (February (1)), 14–37.

Nonaka, I., 1998. The knowledge-creating company. In: Harvard Business Review on
Knowledge Management. Harvard Business School Publishing, Boston.

Padgett, D.K., 2008. Qualitative Methods in Social Work Research, 2nd ed. SAGE
Publications.

Parnas, D., 1971. Information distribution aspects of design methodology. In: Pro-
ceedings of the 1971 IFIP Congress, North Holland.

Parnas, D., 1972. On the criteria to be used in decomposing systems into modules.
Commun. ACM 15 (12), 1053–1058.

Parnas, D., 1974. On a ‘Buzzword’: hierarchical structure. In: Proceedings of the 1974
IFIP Congress. Kluwer.

Parnas, D., 1976. On the design and development of program families. IEEE Trans.
Softw. Eng. 2 (1).

Parnas, D.L., Clements, P.C., 1986. A rational design process: how and why to fake it.
IEEE Trans. Softw. Eng. 12 (February (2)), 251–257.

Robson, C., 2002. Real World Research: A Resource for Social Scientists and
Practitioner–Researchers, 2nd ed. Blackwell Publishing, UK.

Schach, S.R., Jin, B., Yu, L., Heller, G.Z., Offutt, J., 2003. Determining the distribution
of maintenance categories: survey versus measurement. Empirical Softw. Eng.
8 (4), 351–365.

Schmidt, K., 2002. The problem with ‘awareness’: introductory remarks on ‘aware-
ness in CSCW’. Comput. Supported Coop. Work 11 (3), 285–298.

Schmidt, K., Simone, C., 1996. Coordination mechanisms: towards a conceptual
foundation of CSCW systems design. Comput. Supported Coop. Work 5 (2–3),
155–200.

SEI, 2010. Duties, Skills, & Knowledge of a Software Architect, Last visited 15 April,

2010. http://www.sei.cmu.edu/architecture/research/competence/duties.cfm.

Shaw, M., DeLine, R., Klein, D.V., Ross, T.L., Young, D.M., Zelesnik, G., 1995. Abstrac-
tions for software architecture and tools to support them. IEEE Trans. Softw. Eng.
21 (4), 314–335.

Shaw, M., Garlan, D., 1996. Software Architecture: Perspectives on an Emerging
Discipline. Prentice Hall.

Solís, C., Ali, N., 2008. ShyWiki-a spatial hypertext Wiki. In: Proceedings of the 2008
International Symposium on Wikis. WikiSym’08. ACM.

Storey, M.-A.D., Cubranic, D., German, D.M., 2005. On the use of visualization to sup-
port awareness of human activities in software development: a survey and a
framework. In: SoftVis’05: Proceedings of the 2005 ACM Symposium on Soft-
ware Visualization. ACM, New York, NY, USA, pp. 193–202.

The Open Group, 2009. ArchiMate 1.0 Specification. Van Haren Publishing.
Tran, J.B., Godfrey, M.W., Lee, E.H., Holt, R.C., 2000. Architectural repair of open source

software. In: International Conference on Program Comprehension, p. 48.
Tyree, J., Akerman, A., 2005. Architecture decisions: demystifying architecture. IEEE

Softw. 22 (2), 19–27.
Unphon, H., 2009. Making use of architecture throughout the software life cycle –

how the build hierarchy can facilitate product line development. In: SHARK’09:
Proceedings of the 2009 ICSE Workshop on Sharing and Reusing Architectural
Knowledge. IEEE Computer Society, Washington, DC, USA, pp. 41–48.

Unphon, H., Babar, M.A., Dittrich, Y., 2009. Identifying and understanding software
architecture evaluation practices. ITU technical report (almost finished).

Unphon, H., Dittrich, Y., 2008. Organisation matters: how the organisation of soft-
ware development influences the development of product line architecture. In:
IASTED International Conference on Software Engineering, Innsbruck, Austria,
pp. 178–183.

Unphon, H., Dittrich, Y., Hubaux, A., 2009b. Taking care of cooperation when
evolving socially embedded systems: the PloneMeeting case. In: CHASE’09:
Proceedings of the 2009 ICSE Workshop on Cooperative Human Aspects
on Software Engineering. IEEE Computer Society, Washington, DC, USA, pp.
96–103.

van der Ven, J., Jansen, A., Nijhuis, J., Bosch, J., 2006. Design decisions: the bridge
between rationale and architecture. In: Dutoit, A.H., McCall, R., Mistrk, I., Paech,
B. (Eds.), Rationale Management in Software Engineering. Springer, Berlin,
Heidelberg, pp. 329–348 (chapter 16) http://dx.doi.org/10.1007/978-3-540-
30998-7 16.

Weilkiens, T., 2008. Systems engineering with SysML/UML: modeling, analysis,
design. Morgan Kaufmann.

Wikipedia, 2010. Software Architect, Last visited 15 April, 2010. http://en.wikipedia.org/wiki/Software_architect.

Hataichanok Unphon is a PhD candidate and a research fellow of the evolvable software product (ESP) project at the IT University of Copenhagen. Her research topic is re-engineering for evolvability that considers social as well as technical requirements for software products. Her research interests to date have focused on software product lines, software architecture, organisation in software development, and qualitative empirical studies.

Dr. Yvonne Dittrich is an associate professor at the IT-University of Copenhagen. Her research interests are use-oriented design and development of software, and software development as cooperative work. She developed the empirical research approach Cooperative Method Development, which is based on problem-oriented software process improvement organised as a learning cycle, both for the industrial partner as well as for the researchers involved. In a recent project, she applied this approach to investigating the development, customization, and appropriation of software products.


An exploratory study of architectural effects on requirements decisions✩

James A. Miller, Remo Ferrari ∗, Nazim H. Madhavji ∗

Department of Computer Science, University of Western Ontario, London, ON, Canada N6A 5B7

Article info

Article history:
Received 31 January 2009
Received in revised form 5 July 2010
Accepted 5 July 2010
Available online 15 July 2010

Keywords:
Software architecture
Requirements engineering
Empirical study
Software quality
Process improvement
Quantitative and qualitative research
Architecture and requirements technology

Abstract

The question of the “manner in which an existing software architecture affects requirements decision-making” is considered important in the research community; however, to our knowledge, this issue has not been scientifically explored. We do not know, for example, the characteristics of such architectural effects. This paper describes an exploratory study on this question. Specific types of architectural effects on requirements decisions are identified, as are different aspects of the architecture together with the extent of their effects. This paper gives quantitative measures and qualitative interpretation of the findings. The understanding gained from this study has several implications in the areas of: project planning and risk management, requirements engineering (RE) and software architecture (SA) technology, architecture evolution, tighter integration of RE and SA processes, and middleware in architectures. Furthermore, we describe several new hypotheses that have emerged from this study that provide grounds for future empirical work. This study involved six RE teams (of university students), whose task was to elicit new requirements for upgrading a pre-existing banking software infrastructure. The data collected was based on a new meta-model for requirements decisions, which is a by-product of this study.

✩ A preliminary version of this paper was published in Miller et al. (2008).
∗ Corresponding authors.
E-mail addresses: rnferrar@csd.uwo.ca (R. Ferrari), madhavji@csd.uwo.ca (N.H. Madhavji).
1 For the rest of the paper, the acronym SA refers to system (or software) architecture as a software artefact.

1. Introduction

No one would deny that if we were to extend an existing edifice, many of its functional and non-functional features would be of central importance in considering new requirements for the extension. Yet, in the software engineering (SE) literature, this is rather an understated issue—that is, consideration of existing system design is not a key factor in engineering new requirements. While in software practice many developers are indeed aware of the need to assess the fitness of new requirements with the existing system design, the approaches are rather subjective and experiential. SWEBOK (IEEE SWEBOK, 2004) – the SE body of knowledge – for example, does not describe any practices to deal with this issue. To explore this issue further, we conducted a preliminary survey of 17 professional requirements engineers and software architects. We found that the average rating of the importance of considering existing system architecture (SA1) when engineering new requirements was 4.5 (on a 1–5 Likert-scale)—implying that the respondents strongly agreed with this concept. Despite this, several respondents noted in the qualitative part of the survey that in actual practice, many organizations neglect this consideration, or

only perform analysis on existing high-level feature descriptions
of the current system, and not the system’s architecture. In many
situations, a lack of consideration for an existing system in the new
requirements work can lead to rework of requirements and design,
incurring extensive costs especially if further downstream in the
development process (Boehm and Basili, 2001).

The uptake of this, architecture-requirements, issue in research
is not impressive either. It was not until 1994 that the role of an
existing SA in requirements engineering (RE) was recognised as
important in a panel session. However, at that time, “we still [did]
not have a clear understanding of [it]” (Shekaran, 1994a). Shortly
thereafter, 5 of the 34 identified indicators of RE success were
found to have links with SA (El Emam and Madhavji, 1995). A few
years later, the question of an architecture’s role in RE was raised again (Nuseibeh and Easterbrook, 2000). While the awareness of
an architecture’s role in the RE process has no doubt increased, to
our knowledge, the effects of an existing SA on RE decisions have
not been scientifically explored. It is not until such studies are con-
ducted, and a dependable body of knowledge created, that practice
can begin to use such knowledge in day-to-day projects. As a first
step in this direction, this paper describes an exploratory case study

on the effects of an existing SA on RE decisions. Specifically, we ask:

“In which manner does an architecture affect requirements
decision-making2?”

2 Decision-making leads from recognition of a problem to be solved to a specifi-
cation of that problem or a solution strategy.


We explore this question on two fronts: (1) the kind of role a SA plays in requirements decision-making and (2) the specific aspects of the architecture that affect RE decisions.

For point (1) above, it has already been suggested that a SA might constrain a RE process (Shekaran, 1994b). For example, while analysts could be eliciting requirements to employ a new technology that requires a specific communication protocol, the current legacy system has long implemented a conflicting communications protocol, thereby constraining the current RE strategy. For point (2) above, while SA aspects are likely largely unique to the domain of these cases, they would give us an indication of which parts of an existing software architecture can affect RE decision-making (e.g., non-functional SA areas outside the focus of an RE agent) and, consequently, which parts of the architecture are critical to document for use by requirements engineers.

Our results indicate that the relationship between SA and RE is more complex than what is intuitively known in the literature. In particular, “SA as a constraint” is only one of the four types of effects observed in our study. The other three types of effects we found are: enabled, influenced and neutral. In short, an enabled effect is where the proposed solution (denoted by the new requirements) is made feasible because of the implemented decisions in the existing system; an influenced effect is where the architectural configuration has an effect on the requirements decision without affecting its feasibility; and a neutral effect is where there is no noticeable architectural effect on the decision. This paper gives quantitative measures on these effects from the study and qualitative interpretation of the findings. Also, in our study, nine architectural aspects were identified across 117 recorded decisions. Again, this paper gives quantitative measures and qualitative interpretations.

A deeper understanding of the role of SA in RE could open up new opportunities for RE and architecting methods, tools and processes. For instance, in the area of planning and risk assessment, the management could make more informed cost estimates of new requirements by considering how the SA has historically affected the various types of requirements. Likewise, in the area of technology improvement, RE and SA tools can be integrated so that analysts and architects can share, access and change requirements and architecture information more easily. We describe several other cases in the paper.

Our empirical study involved six RE teams that gathered new requirements for an existing system and were observed over the course of 2 months. The project was in the banking domain and required the RE teams to elicit and analyse new requirements based on a set of high-level features that needed to be integrated into the current architecture. A requirements decision meta-model was created as a basis for the development of a requirements-tool that served to gather data from the participants during the project on how the requirements decisions they were making were affected by specific aspects of the existing architecture. This paper describes: the study context, participant details, project work involved, the underlying decision meta-model3 for the data that is gathered, use of tools for gathering data, and the various threats to validity.

3 The decision meta-model defines the type of data relevant to this study and is a basis for the tool developed for data gathering. The meta-model and tool are bi-products of this study.

The key results are the quantitative characterization of the different interaction effects mentioned earlier. For example, for this particular system, nine SA aspects affected approximately 60% of the RE decisions. From the findings, we have derived four hypotheses that provide a basis for future studies. A general description of how each of these studies could be conducted is also described.

This paper is structured as follows: Section 2 discusses related work; Section 3 describes the exploratory study; Section 4 presents

the results; Section 5 discusses various implications from the
results; Section 6 discusses future empirical work and emergent
hypotheses from this study, and Section 7 concludes the paper.

2. Related work

This section describes the work that is related to our study.
The section focuses on three key aspects: (i) observations, com-
mentary and empirical work on the relationship between RE and
SA, (ii) technological research spanning RE–SA, and (iii) recent technology-based research on architecture evolution. In Section
2.4, the section concludes with a reflection on the current state of
research described in Sections 2.1–2.3.

2.1. RE and SA relationship

There is an increasing interest in exploring and refining the
transitions between various activities in the software development
process. In particular, the relationship between RE and SA, and their
impact on each other was the focus of a couple of workshops 7–9
years ago (STRAW, 2001, 2003). In fact, even earlier, Jackson argued
in a panel session (Jackson, 1994) for a tight coupling of the RE
and SA processes, suggesting that the most successful developers
are those who are able to move relatively more freely between
stages within the development cycle. In Kozaczynski (2002), the
author discusses that a level of foresight on the part of architects
to focus on those requirements that are architecturally relevant
can help to mitigate development risk in the software process, by
being able to develop the architecture early without all require-
ments being elicited. This early development can then be fed back
to the requirements process to further refine the requirements.

In our earlier work (El Emam and Madhavji, 1995), we
presented an instrument for measuring RE success. Through an
industry field study to design this instrument, we found that in evo-
lutionary work, the level of understanding of the existing software
architecture can have an impact on the success of the RE process. In
understanding the architecture, requirements engineers can pro-
vide requirements solutions that are consistent with the current
technical and corporate orientation of their organization. In turn, this
can lead to better cost/benefit analysis during RE. This early under-
standing, however, did not delve into the type of technical effects
an existing architecture has on RE decision-making; in this paper,
we investigate this issue further.

In Garlan (1994), the author recognises that architectural families con-
strain system requirements. Further, he identifies that solutions
can drive requirements. For example, the architecture of a fam-
ily of systems determines the range of variability allowed in a
product line. Though not explicitly stated, one can interpret this
as not only architectures imposing “constraints” on requirements
decision-making, but also as “enabling” and “influencing” such
decision-making. This is a central aspect of the current paper.

In Bass et al. (2003), the authors discuss that different stakeholders of
the architecture will have different needs for documentation, and
the level of detail provided to them should reflect this. Depend-
ing on the stakeholders’ needs, they can be provided with detailed
information, some details or overview information of the various
architectural views available. The specific architectural aspects that
could be important in RE, however, are not mentioned in Bass et al.
(2003); our study uncovers these details.

Three previous studies of ours, described below, empiri-

cally examine RE–SA interaction issues from the viewpoints of:
architecting-problems rooted in requirements, the effect of using
different types of human agents when architecting, and the
impact of an SA on requirements characteristics. In Ferrari and
Madhavji (2008a), we report on a multiple-case study that investigated requirements-oriented problems that are encountered while architecting. Overall, we found that approximately 35% of the problems encountered during architecting were requirements-oriented. Also, specific problem areas together with their severity were identified (such as quality satisfaction, requirements understanding and quality drivers) as well as the relative frequency of problems occurring in these areas. Implications of this work are on improving methods, tools, and techniques to transition from requirements to architecture. In another study, described in Ferrari and Madhavji (2008b), we report on a controlled study that investigates the impact of software architects having RE knowledge and experience when performing SA. Specifically, two types of study groups were used: the one type of group had previous training and/or experience in RE, and the other type of group did not. Both types of groups conducted the same architecting project given the same initial set of requirements from the banking domain. The results show that the architects with RE knowledge/experience produced a significantly better architecture (10% difference in the overall architectural quality), and the study also highlighted specific architecting areas where these architects performed better. Examples of these areas include: determining architectural tactics, selecting/creating an architectural pattern to satisfy key quality drivers, and interface specification. In a more recent study of ours (Miller et al., 2009), we report on a controlled study that investigates the impact an SA has on the characteristics of newly elicited requirements. Two types of study groups were used and conducted the same requirements project. One type of group had access to a previous SA; whereas, the other type of group did not. The results showed that a multitude of characteristics (e.g., end-user focus, technological focus, abstraction, and importance) were significantly affected by the presence or absence of an SA, and the results also showed the extent of this effect. Implications of this work are on RE process engineering in the contexts of new development and legacy systems, and on post-requirements analysis.

2.2. RE–SA technology

There is a growing body of technological work (e.g., methods, software tools, processes, development paradigms, notations, etc.) that is aimed at bridging the areas of RE and SA (STRAW, 2001, 2003). The study presented in this paper is meant to elicit new findings regarding the RE–SA interplay that could then possibly be used in improving such technologies.

Bass et al.'s stakeholder-centred attribute-driven design (ADD) method (Bass et al., 2003) focuses on iteratively building architectures based on the key architectural drivers of the system. These drivers are composed of key requirements and quality scenarios that shape the architecture. The drivers are input into the process where architectural patterns are created/selected to realize the tactics (i.e., the architectural design choices made) which, in turn, are aimed at satisfying the quality scenarios. Tradeoffs emerge in the patterns between various quality attributes, and the architects and other stakeholders must negotiate a resolution to these tradeoffs (similar in principle to the architecture tradeoff analysis method (ATAM) (Kazman et al., 2000)) to finalize patterns that would represent an architecture that is most suited to meet the system's goals. Recently, a prototype tool, called ArchE (Diaz-Pace et al., 2008), has been developed to provide support to the ADD method. This support is in the form of modelling the functional responsibilities of the architecture, storing the quality scenarios, and through analysis of the architecture and quality scenarios, the tool suggests tactics that can be used to satisfy the quality requirements. To date, the tool supports modifiability and performance quality attributes, but provides plug-in support so users can add reasoning and analysis frameworks for other quality attributes.

In our previous work, we had developed a method that traces
architectural concerns back to the requirements—the architecture-
centric concern analysis method (ACCA) (Wang et al., 2005). The
method uses a concern traceability map (CT-map) that captures
and presents architectural design decisions starting from software
requirements through to the software architecture and these are
then linked to architectural concerns that are identified in the archi-
tecture evaluation phase. Through a visual, decision-based, model
this method aids in identifying potentially problematic, or sen-
sitive, requirements or decisions that resulted in the concerned
architectural parts.
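As an illustration of the kind of traceability a CT-map provides, the sketch below links hypothetical requirement and decision identifiers to architectural concerns; it is our own simplification for this discussion, not ACCA's actual notation or tooling.

```python
# Illustrative sketch of a concern traceability map (CT-map): requirements
# map to the design decisions that realize them, and decisions map to the
# concerns raised at architecture-evaluation time. All IDs are hypothetical.

# requirement -> design decisions that realize it
realized_by = {
    "R12 (3-second transaction time)": ["D3 (cache account balances)"],
    "R20 (web-banking modifiability)": ["D7 (layered web front end)"],
}

# design decision -> concerns identified during architecture evaluation
raises_concern = {
    "D3 (cache account balances)": ["C1 (stale-balance risk)"],
    "D7 (layered web front end)": [],
}

def concerns_for(requirement: str) -> list:
    """Trace a requirement forward to the architectural concerns it touches."""
    concerns = []
    for decision in realized_by.get(requirement, []):
        concerns.extend(raises_concern.get(decision, []))
    return concerns

if __name__ == "__main__":
    # Flags R12 as a potentially "sensitive" requirement.
    print(concerns_for("R12 (3-second transaction time)"))  # ['C1 (stale-balance risk)']
```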

Egyed et al., in their component-bus-system and properties
(CBSP) method (Egyed et al., 2001), use an intermediate language
(and tool support) for expressing requirements in a form that more
closely relates to architecture, where requirements are identified
and categorized based on various properties such as whether they
should be implemented as components, bus, system properties, and
so on. This method is focused on early architecting work and is
not intended for the entire architecting process. In Hofmeister et
al. (2005), the authors deal with the identification and analysis of
global factors—those that take into account more holistic issues
such as the environment in which the system is built, developing
organization, external technological solutions, flexibility or rigid-
ity of requirements, and more. Their two-phase method is a means
to design and describe a high-level architecture, and analyse and
resolve architectural issues introduced by global factors. In partic-
ular, the second phase of their approach (global analysis phase),
explicitly captures alternative high-level architectural strategies
with decomposed design decisions and supporting rationale, and
also provides traceability to the requirements.

In Bruin and Vliet (2003), the authors propose an architec-
tural design method called quality-driven architecture composition
(QAC) where the emphasis is on the reuse of architectural solu-
tions. Their method is iterative and starts with the design of an
architecture – based only on functional features – and where vari-
ability points of the architecture are identified. These variability
points are expected to cater to the non-functional requirements.
The authors call this initial design the “reference architecture”.
Next, the method focuses on the non-functional requirements
by iteratively applying known design solutions (i.e., architec-
tural and design patterns). The feature-set (FS) graph (which
contains pre-existing knowledge about the domain—expressed
as requirements) and the resultant design fragments (with their
accompanying rationale, assumptions, etc.) that can satisfy the
requirements drive this entire process. In Farenhorst et al. (2007),
the authors report on a case study that was conducted to explore
practitioner’s needs for tool support that focuses on architec-
tural knowledge. The study found that practitioner’s require a tool
that provides “just-in-time” architectural knowledge, defined as
access and delivery of the pertinent architectural knowledge to the
right person at any given point in time. Given this broad require-
ment, the authors developed an architectural knowledge-sharing
portal that stores various types of architectural knowledge and
allows for near-instant retrieval through integrated codification
techniques.

In Stoll et al. (2008), the authors present the influencing fac-
tors method that guides architects in transitioning from high-level
stakeholder concerns to preliminary architectural decisions. An
“influencing factor” is any stakeholder concern that is considered to
play an influential role on the architecture. These influencing fac-
tors can be derived from, to name a few, software quality attributes,

business goals, market trends, project experience, etc. The method
itself has three main steps: identification of influencing factors,
which is accomplished through interviews and workshops with
stakeholders; prioritisation of the influencing factors; and lastly,
the factors are analysed with respect to their impact on software quality attributes, which can then aid the architect to make preliminary architectural design decisions.

Cui et al. (2008) present an architectural design approach that is also aimed at transitioning from requirements to architecture through the automatic synthesis of candidate architectural solutions. The authors construct their approach on a meta-model that models issues (architecturally relevant requirements), architectural solutions, rationale, and architectural decisions and their relationships. The authors argue that these elements are the key notions for architecture design and the derivation of target architectures. The approach itself has four phases. In the first phase the system stakeholders elicit all possible issues (i.e., architecturally relevant requirements). In the second phase, the architects derive candidate architectures for each issue. The third phase involves the use of a formal grammar that facilitates the automatic synthesis of the candidate architectures developed in the previous phase. These architectural solutions are then presented to the architects in the final phase, who can then decide to adopt or reject various aspects (or the entire architectures) and provide rationale for their decision, which is then stored for future architectural development iterations.

In Schwanke (2005), the author discusses the “good enough architectural requirements process” (GEAR). This process is meant to further refine an initial set of requirements through architectural means. The process is based on three architectural requirements engineering approaches: model-driven requirements engineering (where elicited candidate-requirements are modelled as use cases, activity diagrams, state charts, etc.), quality attribute scenarios (used to elicit, document and prioritise stakeholder concerns), and global analysis (a general way of organizing information about the problem context that surrounds the architecture). The main purpose of the process is to show where the above approaches overlap and where they complement each other, providing insight into the identification of architectural requirements.

Rapanotti et al. (2004) propose the extension of “problem-frames” into “architecture frames”, which capture information about architectural styles and their interaction with the problem space. The benefit of this mechanism is that in introducing solution-oriented approaches early in development, one can refine problem analysis.

2.3. Architecture evolution

An area of research that is related to our work is architecture evolution, in particular from the viewpoint of methods, processes, and tools development. In the following section we highlight recent research in this area; later in Section 5, we discuss how our study can benefit architectural evolution research.

In Keuler et al. (2008), the authors propose an approach for performing quality impact analysis on an SA. Their approach uses an aspect-oriented solution to automate the integration of specific concerns (e.g., performance) into architectural models, providing specific quality impact evaluations. This approach is structured in four phases, the first two of which can be executed concurrently: (1) architectural styles are applied to create an initial style that is specific to the product architecture and (2) quality models for the key quality drivers are created, along with their accompanying evaluation models. In the third phase, aspects are used to automatically connect the quality models to the existing architecture; and, in the fourth phase, quality-specific views are extracted from the integrated architectural models and are assessed against the evaluation models from (2). The output of this approach is an identification of the specific parts of the architecture that are affecting the achievement of quality attributes. The architect can then use this information for planning changes to the architecture as appropriate.

In LaMantia et al. (2008), the authors provide case study results
from two architecture evolution projects examined over multi-
ple releases where, in each project, architectural modeling was
aided by design structure matrices and in accordance with Bald-
win and Clark’s design rule theory (Baldwin and Clark, 2000). Design
rule theory is a formal theory that explains how design rules
(such as splitting or substituting modules) can be used to resolve
interdependencies and create modular architectures by specifying
the interface between modules. Design structure matrices were
designed to support this theory, and are a means of formally model-
ing interactions between modules of engineered systems. In short,
a design structure matrix is a square matrix, in which each mod-
ule corresponds both to a row and a column of the matrix. A cell
is checked if and only if the design decision corresponding to its
row depends on the design decision corresponding to the column.
Based on the two case studies results, the authors argue that the
use of design structure matrices and design rule theory improved
the modifiability of the systems by (1) allowing for different con-
current levels of evolution in different modules with no negative
consequence on the system or development process and (2) facilitating
the substitution of risky components with newly proposed com-
ponents without substantial change to other parts of the system.
The authors conclude that the functionality of design rule theory
and design structure can be expanded to provide prescriptive and
predictive power in software evolution; specifically, the tech-
nology could be used to proactively plan for system refactoring.
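The design structure matrix itself is easy to picture in code; the sketch below uses hypothetical module names from a banking-style system and is illustrative only, not LaMantia et al.'s tooling.

```python
# Minimal sketch of a design structure matrix (DSM). Module names are
# hypothetical; this illustrates the structure described above, nothing more.

modules = ["ATM", "Internet", "Telephone", "CoreLedger"]

# dsm[i][j] is True iff the design decision for modules[i] depends on
# the design decision for modules[j].
dsm = [
    [False, False, False, True],   # ATM depends on CoreLedger
    [False, False, False, True],   # Internet depends on CoreLedger
    [False, False, False, True],   # Telephone depends on CoreLedger
    [False, False, False, False],  # CoreLedger depends on nothing here
]

def depends_on(a: str, b: str) -> bool:
    """Return True if module a's design depends on module b's design."""
    i, j = modules.index(a), modules.index(b)
    return dsm[i][j]

def dependents_of(b: str) -> list:
    """Modules whose design would be touched if b's interface changed."""
    j = modules.index(b)
    return [modules[i] for i in range(len(modules)) if dsm[i][j]]

if __name__ == "__main__":
    print(depends_on("ATM", "CoreLedger"))   # True
    print(dependents_of("CoreLedger"))       # ['ATM', 'Internet', 'Telephone']
```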

In Shen and Madhavji (2006), the authors propose a method for
developing evolutionary scenarios that provide information con-
cerning the impact different types of historical changes (e.g., those
related to specific functionality, or those related to external con-
cerns such as security, performance, availability, or those due to
internal concerns such as maintainability, system defects, etc.) have
had on the quality of software architectural elements of interest.
Software maintainers, in particular software architects, can use
this information when planning system changes. For example, if
the maintainer receives a request for a performance modification,
they can consult the scenarios to determine the past impact of
performance modifications on the modifiability of the system. The
scenarios could suggest, for example, that a major refactoring job
was required for most past performance modifications, so the main-
tainer can then plan accordingly the resource and time allocation
to complete the performance modification and any accompanying
changes. The evolutionary scenarios focus on the different types
of changes that have historically affected a given architectural ele-
ment at different times in the evolution of the system. This effect is
indicated by measures of the quality of the component, for exam-
ple performance, fault-proneness, level of maintainability, etc. The
scenarios also provide information on which component sets
have been affected by a given type of change at different times in the
evolution of the system. To create these evolutionary scenarios, the
evolutionary scenario development method was designed. This struc-
tured and automated (where possible) method is needed since the
data sources on which the scenarios are constructed can be quite
large. The possible inputs to the method can include: bug reports,
CVS data, source code, change log fixes, architectural design docu-
ments and feature requests. Currently, the method and supporting
technology facilitate building evolutionary scenarios that have a
focus on maintainability and fault-proneness.
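One way to picture an evolutionary scenario is as an aggregation of historical change records by change type; in the sketch below the record fields and the effort measure are assumptions made for illustration, not the method's actual data model.

```python
# Illustrative sketch only: aggregating historical change records into a
# simple "evolutionary scenario" view. The components, change types and the
# refactoring-effort measure are hypothetical.
from collections import defaultdict
from statistics import mean

# (component, change_type, refactoring_effort_in_person_days)
history = [
    ("TelephoneBanking", "performance",   12.0),
    ("TelephoneBanking", "performance",    9.5),
    ("TelephoneBanking", "functionality",  2.0),
    ("WebBanking",       "security",       6.0),
    ("WebBanking",       "performance",    8.0),
]

def scenario(component: str) -> dict:
    """Average past refactoring effort per change type for a component."""
    by_type = defaultdict(list)
    for comp, change_type, effort in history:
        if comp == component:
            by_type[change_type].append(effort)
    return {t: mean(e) for t, e in by_type.items()}

if __name__ == "__main__":
    # A maintainer planning a performance modification to TelephoneBanking
    # can consult the past impact of performance changes on that component.
    print(scenario("TelephoneBanking"))  # {'performance': 10.75, 'functionality': 2.0}
```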

The above work describes research that is focused on performing
“off-line” evolution, which basically assumes that the system can be
shut down to perform and integrate the new changes. Other recent

research in the area of architecture evolution proposes technology
for performing automated run-time architectural evolution.

In Waignier et al. (2007), the authors propose a framework
for performing automated architectural evolution. Specifically,
they detail FIESTA, a framework that aids architects in adding new functionality when performing architectural evolution. Their framework is generic in that it allows an architecture to be specified in any architectural description language. The framework functions by taking as input a formal specification of the new functionality to be added, and the architect then decides where in the existing architecture the new functionality should be integrated. The system then automatically makes the transformation into a modified architecture. In another automated architectural evolution approach, described in Morrison et al. (2007), the authors propose a formal architectural description language called Archware-ADL that facilitates active architectures, namely an architecture that can be evolved automatically during system run-time based on both internal system and external changes. The basic premise of the language is to formally model the architecture as part of the on-going computation, thereby allowing evolution during execution. Developers can express new components, connectors, constraints and evolutionary rules in this notation and initiate integration with the system. The system will then accordingly modify and monitor the system, without any downtime. The authors also propose a set of support technologies to support this language for these evolutionary purposes.

2.4. Reflection on research

The previous three subsections discuss research in the primary areas that are related to our study: current knowledge pertaining to the relationship between RE and SA; technology aimed at transitioning from RE to SA; and, architecture evolution. In this section, we reflect on the current state of research in these areas.

As discussed in Section 2.1, as early as 1994, researchers discussed the importance of the role of an SA in RE (Shekaran, 1994a,b). A few other works have commented on this issue since then (El Emam and Madhavji, 1995; Nuseibeh and Easterbrook, 2000), and also other knowledge-seeking empirical studies have been conducted in the area of the RE–SA relationship (Ferrari and Madhavji, 2008a,b; Miller et al., 2009). However, beyond these works there has been, to our knowledge, sparse research conducted in the area of the role of an SA in RE. When looking at the other direction in the RE–SA relationship, i.e., transitioning from RE to SA, there is an abundance of research work conducted in this area, particularly with a focus on technological approaches. In the RE–SA technological works described in Section 2.2, there is an implicit assumption that the development is starting from “scratch”, i.e., there is no existing system that is being enhanced. In industrial practices, however, software development is largely conducted within evolutionary processes (IEEE SWEBOK, 2004). Conversely, the research work presented in Section 2.3 (architecture evolution) is focused on the improvement of the architecting process in the context of an evolving system. However, this work is solely focused on architecting; the RE process is not explicitly considered during architectural evolution and is treated as a “black-box” process where requirements are simply input into the architecture evolutionary processes. Therefore, there is little to no consideration in this research for the RE–SA interaction as highlighted in the works from Sections 2.1 and 2.2.

Thus, there is a need to consider the current system explicitly when performing RE–SA. Furthermore, there is a lack of empirical evidence regarding the specific interaction effects between RE and SA. The empirical study presented in this paper is meant to present detailed quantitative findings on the effect of the presence of a current architecture when performing RE. Such findings can be fed back into research on state-of-the-art technologies (such as the work described in Sections 2.2 and 2.3) to facilitate improvement in RE–SA evolutionary processes.

Though the importance of conducting empirical studies in software engineering (SE) has been recognised (Tichy et al., 1995;

Wieringa and Heerkens, 2006), Shaw’s analysis (Shaw, 2003) of
research papers submitted at a prominent 2002 SE conference sug-
gests that only 12% were submitted in the category of “Design,
evaluation, or analysis of a particular instance” and 0% in the cate-
gory of “Feasibility study or exploration”. In Ferrari and Madhavji
(2008b), we presented our own analysis of published papers. In the
fields of RE and SA, since the year 2000, only approximately 15%
of the published papers were in the above-mentioned categories,
suggesting that studies such as the work described in this paper
are currently rather rare. Our work is meant to help in filling this
research gap.

3. The study

Exploratory studies are used when the “research looks for pat-
terns, ideas, or hypotheses rather than research that tries to test
or confirm hypotheses” (Vogt, 1993). The current research about
architectural effects on RE decisions has been anecdotal (Nuseibeh,
2001; Shekaran, 1994a), and thus there is not much grounded
theory on this subject. Our study fits the exploratory study char-
acteristics. By having multiple cases, we are able to identify trends
and patterns beyond a single-case study design.

The following sections deal with: the research questions, par-
ticipants, the requirements project, data collection, and threats to
validity.

3.1. Research questions

Recall from Section 1 that the intent of this case study was to
investigate the role of an architecture in requirements decision-
making. We thus have two pertinent research questions:

Q1. How does an architecture affect requirements decision-
making?

This question deals with the impact the presence of an archi-
tecture has on decision-making in RE. This is accomplished by
asking the participants of this study, for every decision that they
make, how has the architecture affected that decision. By having
a quantitative profile of various architectural effect-types, we can
investigate improvement to RE and software architecting technol-
ogy with the help of this new information.

Q2. Which aspects of the architecture affect requirements deci-
sions?

This second question is intended to probe into the details of Q1.
Whereas Q1 was aimed more generally at the effect of architec-
ture on requirements decisions, this question aims to characterize
the various architectural aspects that are found to have an effect.
Through characterization of the different architectural aspects, we
can begin to examine improvement opportunities during architect-
ing that can optimize future requirements work on a system.

A purposeful tool was developed to gather the data for both the
research questions Q1 and Q2 above. The tool is discussed in Section
3.4.2.

3.2. Participants

The population of this study is requirements engineers working
in the evolutionary phase (i.e., after the initial release) of a sys-

tem. The participants of the study were 12 graduate and final-year
undergraduate level computer science students at the University
of Western Ontario who were randomly assigned to 6 teams, each
composed of 2 members. The external validity threat from using
students in studies is discussed in Section 3.5.1.

3.3. The RE project

In this study, the participants were given a set of tasks that involved upgrading the requirements for an existing banking system as represented by its architecture. Their work involved both creating new requirements and evolving old ones in order to create a new requirements set that satisfied the requested changes. For this purpose, they were given the pre-existing requirements and architecture documents (described in the following sections) from the previous version of the system. Each team was given the same 4 requirements tasks:

1. Add Interac service to the existing system. It assumes that the transaction is conducted by the bank's employee on behalf of the user. For other services, like Internet banking, this time could be different because of external factors like the user's connection.
2. Create a new wireless banking application which would provide features to the customers to carry out basic banking transactions through their cell phones or PDAs.
3. Reduce the operational cost of the telephone banking system.
4. Increase modifiability in the web banking system.

These tasks were chosen since they constituted a sizeable and complex RE project that would still be feasible within the constraints of a University course. We held numerous peer-review sessions with a total of six experts to validate these four tasks with respect to their appropriateness in giving a project that met both pedagogical and study needs.

The requirements elicitation process and techniques followed are described in Kotonya and Sommerville (1998).

3.3.1. The pre-existing requirements document
The pre-existing requirements for the system were originally obtained from an external source. These requirements were used to architect the previous version of the system (Ferrari and Madhavji, 2008b). The final requirements from that project are what were used as a baseline requirements set for enhancement in the requirements project on which the study was conducted. Thus, the study project essentially involved one iteration of an evolutionary cycle of the system's requirements.

However, these requirements were re-validated by several experts for acceptability in the enhancement project (i.e., the four requirements tasks described earlier). There were approximately 80 requirements in the set, and supporting use cases and sequence diagrams for ten of the key functions of the system. The document structure followed the guidelines from Kotonya and Sommerville (1998).

We list here a few example requirements in natural language to give their flavour:

The system must complete a transaction in less than 3 s. The transaction is conducted by the bank's employee on behalf of the user. For other services, like Internet banking, this time could be different because of external factors like the user's connection.
A customer shall be able to deposit money using ATM into the indicated account by cheque or cash.
A customer shall be provided access to Internet banking services based on valid bank account number, user defined password, and access permissions set out for the bank customer.

3.3.2. The architectural document
The architectural document given to each of the RE teams resulted from the described previous study (Ferrari and Madhavji, 2008b). That study involved a set of 16 software architecting teams in an academic setting, each of which worked to create a software architecture, using the ADD method (Bass et al., 2003).

Fig. 1. A meta-model for RE decisions.8

The participants in that study created their architectures based on the
requirements mentioned in Section 3.3.1. That study also involved
identifying one particular architectural document as being of the
highest quality based on an instrument designed for this purpose
(Ferrari and Madhavji, 2008b).

The architecture in question was documented in a 161-page
document and included information on: quality attribute scenarios,
tactics employed, module decomposition views, user/layer views,
class views, component and connector views, deployment views,
interface specification, work assignment view, sequence diagrams,
state charts, and architectural rationale.

3.4. Data collection

In order to gather appropriate data to answer the two research
questions, Q1 and Q2 (see Section 3.1), we first designed a meta-
model for requirements decisions. Also, to simplify data collection
and organization, we developed a software tool based on this meta-
model. Furthermore, we had specific measures in place to ensure
that quality data would be obtained from this study. These issues
are described below in more detail.

3.4.1. The decision meta-model
The decision meta-model specifies the types of entities and

relationships involved in the myriad of decisions underlying the
requirements process. This meta-model, therefore, can guide data
gathering. Since research on requirements decisions is limited,
there was no established meta-model available which fitted the
specific investigative needs of this study. Instead, a combination of
elements from two different sources was used: Ramesh and Jarke’s
rationale submodel (Ramesh and Jarke, 2001) and Wang and Mad-
havji’s traceability meta-model (Wang et al., 2005). The integrated
meta-model is illustrated in Fig. 1 and uses UML notation to depict
the elements and links.

The meta-model captures the key notions of decisions, assump-
tions, requirements and solution approaches. It links various
elements to the system through decisions. The input to the model
is the change driver element. Change driver is left intentionally abstract since it subsumes many possible drivers of change including (but not limited to) shifting business goals and needs, new contractual requirements, changes in the system's environment, and end-user change requests. In our study, the change drivers were the four project tasks given to the teams (see Section 3.3).

8 Some terms to note: Requirements decision—denotes a chosen subset of high-level requirements (or solution strategies) amongst a set of alternatives in order to achieve a goal; Issue—an important topic or problem for debate or discussion relating to the acceptance/rejection of a solution approach; System—computing system of interest; Rationale—why a requirement is needed with respect to the goals it realizes; Argument—statement supporting or refuting the solution approach; Domain knowledge—the valid knowledge used to refer to an area of human endeavour (in this case the Banking domain); Assumption—a statement which is considered true regarding any aspect of system development.

One of the primary attributes that differentiates our meta-model from the earlier ones (Ramesh and Jarke, 2001; Wang et al., 2005) is that, in our model, requirements decisions relate only indirectly to requirements, issues and assumptions, through solution approaches. That is, in the ensuing instance-level model (or enactment of the model), each solution approach (i.e., a strategy to meet high-level requirements) involves its own set of issues (e.g., cost implications, constraints, actions, etc.), requirements and assumptions (see Fig. 2). These are only instantiated if the solution approach is accepted through a decision (and hence the “indirect” relationship).

Fig. 2. A sample decision tree from the meta-model.

For example, a decision concerning the reduction of operating costs in the telephone banking system might involve two solution approaches: reducing the number of human operators and/or reducing the available functionality of the system. Both solution approaches are feasible, and the RE team must choose (based on the associated issues) whether or not to implement4 either of the approaches. Note that it is possible to choose both or neither. Once a decision is made, rationale can be given describing why a particular decision was made (e.g., why a particular solution approach should be implemented over another solution approach).

Specific issues can apply to many solution approaches. Each requirement and assumption is associated to a single solution approach, which can then be traced to one or more decisions. Each requirement has its own rationale, underlying assumptions, relationships to other requirements, importance and other project-related attributes such as cost estimate and tasks. However, these are not elaborated in the meta-model for simplicity. It is around this model5 that the data-collection tool (see Section 3.4.2) was designed.

4 Note: Though the RE team is not expected to do downstream development work (design, coding, testing, etc.), it is evident that their decision here is carving an implementation path through the “solution approach” they would choose, thereby denoting a problem–solution space relationship.
5 Prior to the start of the study, peer review with RE experts was used to validate the model's quality.

For each of the elements that help to make up the meta-model,
relevant information was captured by the tool. Each element had
a unique set of attributes that were captured. The attribute that is
of particular interest here is the role that the architecture played
in requirements decision-making. The role of the architecture is
denoted by whether it acted as an effect (constrained, enabled,
influenced, or none) on the requirements decision, and the aspect of
the architecture that had the effect. The system and domain knowl-
edge elements are not directly implemented in the tool, but are
meant to provide a context for the rest of the elements and how
they fit with the overall system.
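To make the meta-model concrete, here is a minimal sketch in Python (the study's own tool was written in VB6); the class and field names are our reading of Fig. 1 and the surrounding text rather than the published meta-model's exact attributes, and the example values are invented for illustration.

```python
# Minimal sketch of the decision meta-model's core entities as described in
# the text; field names and the example values are assumptions, not the
# exact attributes of the published meta-model or the VB6 tool.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class ArchEffect(Enum):
    ENABLED = "enabled"
    CONSTRAINED = "constrained"
    INFLUENCED = "influenced"
    NEUTRAL = "neutral"

@dataclass
class SolutionApproach:
    description: str
    # Issues, requirements and assumptions hang off the solution approach,
    # not the decision, mirroring the "indirect" relationship in the model.
    issues: List[str] = field(default_factory=list)
    requirements: List[str] = field(default_factory=list)
    assumptions: List[str] = field(default_factory=list)
    accepted: bool = False

@dataclass
class Decision:
    change_driver: str                 # e.g. one of the four project tasks
    question: str
    approaches: List[SolutionApproach] = field(default_factory=list)
    rationale: Optional[str] = None
    effect: ArchEffect = ArchEffect.NEUTRAL
    architectural_aspect: Optional[str] = None   # e.g. "existing hardware"

# Instance-level example, loosely following the telephone-banking case in the
# text; the effect, aspect and rationale shown here are invented.
d = Decision(
    change_driver="Reduce the operational cost of the telephone banking system",
    question="How should operating costs be reduced?",
    approaches=[
        SolutionApproach("Reduce the number of human operators", accepted=True),
        SolutionApproach("Reduce the available functionality of the system"),
    ],
    rationale="Operator reduction meets the cost goal without cutting services.",
    effect=ArchEffect.INFLUENCED,
    architectural_aspect="architectural patterns",
)
print(d.effect.value, d.architectural_aspect)
```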

3.4.2. Data-gathering tool
The data-gathering tool could best be described as a decision-

centric requirements engineering tool. The subjects logged each
decision they made into the tool. Each decision had a series of
potential solution approaches associated with it, all of which were
also logged (see Fig. 3 for an example screenshot). Underlying the
decisions captured, and the way the tool operated is the deci-
sion “meta-model” described earlier (see Fig. 1). The tool was
implemented in Visual Basic 6 (VB6). It had the dual purpose of sup-
porting the subjects’ work and of recording decision data relevant
to this study.

Because of this semi-automated tool, data quality could be
ensured in several ways that a manual tool (such as forms that
subjects must fill out) could not. For example, the subjects could be
required to fill in essential fields at the right time such as a require-
ment’s rationale when a requirement is logged so that there’s no
danger that they might be left blank or, worse, filled in at a later
time when the knowledge is no longer fresh. Other fields (e.g., the
time of modification) could be generated automatically, thus, alle-
viating the subjects’ workload while guaranteeing correctness of
the data.
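The data-quality idea can be illustrated with a short sketch; this is not the VB6 tool, and the field names are assumptions made here for illustration only.

```python
# Illustrative sketch of the data-quality mechanism described above (not the
# VB6 implementation): required fields are enforced at logging time and the
# modification timestamp is generated automatically.
from datetime import datetime, timezone

def log_requirement(store: list, req_id: str, text: str, rationale: str) -> dict:
    """Append a requirement record; the rationale cannot be left blank."""
    if not rationale.strip():
        raise ValueError("rationale is required when a requirement is logged")
    record = {
        "id": req_id,
        "text": text,
        "rationale": rationale,
        "modified_at": datetime.now(timezone.utc).isoformat(),  # auto-generated
    }
    store.append(record)
    return record

if __name__ == "__main__":
    records = []
    log_requirement(records, "R101",
                    "Wireless banking shall support balance enquiries.",
                    "Requested in project task 2 (wireless banking).")
    print(records[0]["modified_at"])
```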

3.4.3. Data collection
The data-collection phase of this study took place over a span

of 2 months. To help ensure the quality of the data, each team
was given 1 h a week to meet with a system “stakeholder” played
by the course’s teaching assistant. During the meetings the sub-
jects had the opportunity to ask questions about the company’s
needs regarding the new system. Their work to date was reviewed
prior to, and discussed at, these meetings to ensure that the sub-
jects properly understood how to use the tool for logging data.
Additionally, e-mail communication was used to answer questions
regarding tool usage.

3.5. Threats to validity

Based on Johnson and Christensan (2004), three types of threats
that might apply to the type of study conducted here were iden-
tified: external, construct, and conclusion validity. Because we are
not attempting to demonstrate causality between variables, threats
to internal validity are not a concern.

3.5.1. Threats to external validity
External validity refers to the degree to which the results

of a study can be generalized across a population (Johnson
and Christensan, 2004). Threats to external validity occur when
researchers draw incorrect conclusions about the population based
on the sample data (Creswell, 2003).

Population validity is the ability to generalize the study results

from the sample to the population. Because exploratory studies on
students have become so prevalent, there is much work done to
explore the population validity of students. Specifically regarding
SE related student-based studies in academic settings, important
results have been found in several cases, e.g., in requirements triage (Runeson, 2003), code inspection (Carver et al., 2002), and in lead-time impact assessment (Host et al., 2000). We do acknowledge the threat in generalizing to experienced requirements engineers and architects; however, there is no evidence suggesting that the results could not be generalizable to, at the very least, novice requirements engineers and architects in industry. Regardless, exploratory studies such as this are an important first step towards eventually solidifying a body of knowledge and providing the groundwork for future studies in wider contexts.

Fig. 3. A sample screen shot from the decisions data-gathering tool. The left-side pane lists the decisions that have been logged in the system. The highlighted decision is the one being currently worked on. The right-side of the screen is split between two sets of windows: the left-side is where architectural constraint information is logged, and the right-side is where architecture acting as an enabler information is logged.

3.5.2. Threats to construct validity
Construct validity refers to the extent to which a measurement corresponds to theoretical concepts (constructs) concerning the phenomenon under study. In this study, the constructs (e.g., decisions) were operationalized through the decision meta-model (see Fig. 1) and the tool that was built on this model. We held numerous peer-review sessions with a total of six experts to validate the meta-model and tool with respect to the theoretical constructs we wanted to investigate (see Section 3.4.1). Also, at no stage in the research process did we come across any instance of data or relationship that questioned the validity of the meta-model or the tool's capability in capturing data pertaining to the meta-model. We are

thus confident in the effectiveness of these artefacts for collecting
data pertaining to the study’s constructs.

3.5.3. Threats to conclusion validity
Conclusion validity is the degree to which conclusions we make

based on our findings are reasonable (Trochim, 2006). There are two
accepted principles for improving conclusion validity (Trochim,
2006) that applied to our study: ensuring reliability of data mea-
surements and proper implementation of study processes. For
reliability of data measurements, we utilized a data-collection tool
and weekly meetings to ensure that the tool was used correctly (see Sec-
tion 3.4.3). Proper implementation was ensured by having a single
researcher, who was involved in the study design, perform the various
research tasks. Additionally, we discuss the conclusions in the last
section of the paper, and there we demonstrate that all our con-
clusions are rooted in the results, thereby maintaining conclusion
validity.

4. Results

This section discusses the findings of the study. We describe first
the manner in which the architecture affects requirements deci-

sions (Q1). Then, we describe the quantitative findings related to the specific architectural aspects that affected the requirements decisions (Q2).

4.1. How an architecture affects requirements decision-making (Q1)

The six project teams recorded a total of 117 requirements decisions, all of which related to the four requirements tasks assigned (described in Section 3.3). A significant portion of these decisions was affected in some way by the architecture. We describe the types of effects found in our study and their characteristics.

4.1.1. Types of architectural effects
We identified four types of architectural effects on requirements decisions from our data, shown in Table 1 (leftmost column): enabled, constrained, influenced, and neutral. An architectural effect is of type enabled if it makes a solution approach (more) feasible because of the current architectural configuration. Conversely, an architectural effect is of type constrained if it makes a solution approach less (or in-) feasible. An influenced effect is where the architectural effect altered a requirements decision without affecting the feasibility of its solution approaches. Finally, the neutral type of effect is one where there is no noticeable architectural effect of any kind.

Table 1. Characteristics of architectural impact on requirements decisions. Rows (type of effect): Enabled; Constrained; Influenced; Neutral. Columns (architectural aspects): Existing hardware; NF characteristics (same sub-system); NF characteristics (different sub-system); Reusable modules; Architectural patterns; Modifiability; Structural feature; Decision already made; Communication; Number of decisions affected. Total # of decisions: 117. Total # of “effect-counts”: 122. Note that a given decision can be affected by more than one architectural aspect and therefore the number of effect-counts may not equal the number of affected decisions.

Example 1. Enabled

Decision: Implement a back up system for the Interac banking system.
Solution approach 1 (rejected): Introduce new web server which will be used as a backup for Interac transactions.
Solution approach 2 (accepted): Use Internet sub-system web server as a backup for Interac transactions.
Architectural enabler: The web server for Internet already exists. Queue will allow us to hold over 500 transactions and deal with all the requested transactions. Overall system requirement 1.19 requires that the system should not fail in case of overload.

In this example the decision to use the existing Internet banking web server as a backup for the Interac sub-system was made easier because the team in question knew that existing performance and reliability requirements were sufficient to accommodate the extra workload. This is an example of being enabled by “non-functional characteristics from a different sub-system” (an aspect of the architecture) than the one being worked upon.

Example 2. Constrained

Decision: Establish communications protocol(s) that will be used for the wireless banking system.
Solution approach 1 (accepted): Communication protocol should be GPRS (general packet radio service).
Solution approach 2 (rejected): Communication protocol shall be UMTS (universal mobile telecommunications system).
Architectural constraint: Architecture document clearly stated a preference for GPRS; otherwise we would have chosen UMTS.

Here, the example is of a decision being constrained because the decision had already been made in the architectural document.

Example 3. Influenced

Decision: Deploy the Interac system.
Solution approach 1 (accepted): Develop the Interac system based on the conceptual model of system based on the current implementation of the ATM sub-system.
Architectural influence: The architecture document defines functionality for the ATM system. The Interac system can be loosely based upon this conceptual model since the ATM system has been successfully implemented and maintained. Therefore, the presence of the ATM sub-system and how it was implemented influences the solution approach for the Interac system.

This is an example where the decision was not constrained or enabled; nothing about the ATM sub-system makes any of the proposed solution approaches more or less feasible. However, for the sake of consistency, the requirement engineers chose to model the new Interac system after the ATM system. This decision is an example of a decision being influenced by architectural patterns.

Example 4. Neutral

Decision: Determine support for different languages in the mobile banking application.
Solution approach 1 (rejected): English will be the only language supported.
Solution approach 2 (accepted): English language as the default language of the system, with other languages to be downloaded and installed on request.
Solution approach 3 (rejected): Provide support for many languages together with the application.
Architecture effect (none): The mobile banking application has not been developed, and therefore the technical challenges associated with implementing language support on a wide-variety of mobile devices are considered outside the scope of the current overall system architecture.

Example 4 demonstrates a decision that was unrelated to the existing architecture. In this situation, the mobile banking application has not yet been implemented so the requirement engineers can consider various solution approaches for language support without considering the current architecture.

Note that the described effects are “technical” in nature. That is, our focus is on the “architectural basis” for deciding whether a requirement decision is enabled, constrained, influenced or neutral. In a given software project, there are other factors that also need to be considered in prioritising requirements and in release planning, e.g., implementation cost, revenue potential, and resource requirements. Irrespective of these factors, it is invaluable to know at elicitation-time what the architectural effects are on the decisions being made. Thus, for example, with revenue potential being equal among two competitive decisions, an enabled decision would be more favourable than a constrained one.

4.1.2. Architectural impact characteristics
Of the 117 requirements decisions mentioned in Table 1, 69 of the decisions were affected by the architecture. A decision could be affected by more than one architectural effect (for example, the choice of upgrading a database could be enabled by the current hardware configuration, but also be constrained by poor modifiability in the system components that would need to interact with the database). With reference to Table 1, in our study there were 5 such cases, so we had a total of 122 “effect-counts”.6 The 69 affected decisions account for 74 of the 122 effect-counts (61%); this is a substantial proportion of effect-counts in which the architecture had some effect. There is, more or less, an even split between those “enabled” and “constrained”, which outnumber the category of “influenced” by a factor of 5. Equally important is to note that 48 (41%) of the requirements decisions were not affected by the architecture (i.e., type neutral). Also, all instances of architectural effects on requirements decision-making in our study fit into the defined types of effects.

6 The effect-count includes some decisions in more than one category of effects, thus the summation does not tally, or the % is more than 100.

In previous literature (Shekaran, 1994b), only the “constraint” effect-type was identified. In Section 4.1.1, we identify additional types of effects. Also, in this section, we give quantitative character-

istics of the various effect-types. However, it should be noted that
different application systems are expected to have different quanti-
tative values because these values depend on factors specific to the
development of individual products or systems. Still, it is a subject
for future studies as to whether there are approximate quantita-
tive ranges for different effect-types across different applications
and application-domains.
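
To illustrate how such effect-counts can be tallied in practice, the short Python sketch below counts hypothetical decision records, where each decision carries zero or more effect tags and untagged decisions are counted as neutral; the data and field names are invented for illustration and are not taken from the study.

# Illustrative tally of architectural effect-counts over requirements decisions.
# A decision may carry more than one effect tag, so effect-counts can exceed
# the number of affected decisions (hypothetical data, not the study's).
from collections import Counter

decisions = [
    {"id": "D1", "effects": ["enabled"]},
    {"id": "D2", "effects": ["constrained", "enabled"]},  # double-counted in effect-counts
    {"id": "D3", "effects": []},                           # neutral: no architectural effect
    {"id": "D4", "effects": ["influenced"]},
]

effect_counts = Counter()
affected = 0
for decision in decisions:
    if decision["effects"]:
        affected += 1
        effect_counts.update(decision["effects"])
    else:
        effect_counts["neutral"] += 1

total_decisions = len(decisions)
total_effect_counts = sum(effect_counts.values())
print(f"{affected} of {total_decisions} decisions affected "
      f"({100 * affected / total_decisions:.0f}%)")
for effect, count in effect_counts.items():
    print(f"{effect}: {count} of {total_effect_counts} effect-counts "
          f"({100 * count / total_effect_counts:.0f}%)")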

4.2. Architectural aspects affecting requirements decisions (Q2)

The types and quantitative characterization (see Table 1) of architectural effects on requirements decision-making (Q1) are complemented by findings on the different aspects of the architecture that had an impact on requirements decisions (Q2).

Table 1 shows, on the top, 9 architectural aspects that were
found to affect requirements decisions in the project. These aspects
are:

1. Existing hardware: Decisions that were affected by the existing
hardware in the system.

2. Non-functional characteristics (from the same sub-system): Deci-
sions that were affected by non-functional characteristics of the
same sub-system as the one with which the decision was con-
cerned.

3. Non-functional characteristics (from a different sub-system): Deci-
sions that were affected by non-functional characteristics from
a different sub-system than the one with which the decision was
concerned.

4. Reusability of modules: Decisions that were affected by the pos-
sibility of reusing existing modules.

5. Architectural patterns: Decisions that were affected by the choice
of architectural patterns already implemented.

6. Modifiability: Decisions affected by existing features that were
known to be easily modifiable.

7. Structural features: Decisions that were affected by structural
features of the existing SA.

8. Decisions already made: Decisions that were affected when it was
realized that the decision in question had already been made in
the existing architecture.

9. Communications: Decisions that were affected by the existing
choice of communications protocols.
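
Table 1 essentially cross-tabulates these nine aspects against the effect types. A minimal sketch of that kind of cross-tabulation, in Python and with invented observations, might look as follows; it is only meant to make the structure of the table concrete, not to reproduce the study's data.

# Hypothetical cross-tabulation of architectural aspects against effect types,
# mirroring the structure of Table 1 (observations invented for illustration).
from collections import defaultdict

# Each observation: (architectural aspect, type of effect it had on a decision)
observations = [
    ("Existing hardware", "enabled"),
    ("NF characteristics (different sub-system)", "enabled"),
    ("NF characteristics (different sub-system)", "constrained"),
    ("Architectural patterns", "influenced"),
    ("Modifiability", "enabled"),
]

table = defaultdict(lambda: defaultdict(int))
for aspect, effect in observations:
    table[effect][aspect] += 1

for effect, row in table.items():
    print(f"{effect} (total effect-counts: {sum(row.values())})")
    for aspect, count in sorted(row.items()):
        print(f"  {aspect}: {count}")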

Below, we analyse architectural aspects against effect-types and
against the project groups.

4.2.1. Architectural aspects across effect-types
Table 1 depicts the role of the architectural aspects (top row) in

relation to the type of effects (leftmost column) on the total set of
“effect-counts” (122) recorded by the project teams.

Though the category influenced occurred less frequently than enabled and constrained, it is still noteworthy. In our study, influenced usually denoted that a solution approach used in another part of the system was being used to solve the problem at hand. For example, an architectural pattern might be chosen because it has been implemented successfully elsewhere in the system.

While this may suggest a movement towards a more homogeneous architecture, an aspect acting as an influence on future RE decisions may be less foreseeable (by a software architect) than those acting as types enabled and constrained. In particular, whereas enabled and constrained are related to creating requirements which are consistent with the established architecture and previously made decisions, influenced involves implementing previous (or similar) decisions in a new context (i.e., a different part of the system than was originally intended). The risk associated with this, however, is not clear. Thus, if an aspect is known to be of type influenced, the architect should be aware that design decisions involving that


b
et

w
ee

n
ar

ch
it

ec
tu
ra
l

as
p

ec
ts

an
d

p
ro

je
ct

te
am

s.
A
rc
h
it
ec
tu
ra
l
as
p
ec
ts
D
ec
is
io
n
s
E
x
is
ti
n
g
h
ar
d
w
ar
e
N
F
ch
ar
ac
te
ri
st
ic
s
(s
am
e
su
b
-s
y
st
em
)
N
F
ch
ar
ac
te
ri
st
ic
s
(d
if
fe
re
n
t
su
b
-s
y
st
em
)
R
eu
sa
b
le
m
o
d
u
le
s
A
rc
h
it
ec
tu
ra
l
p
at
te
rn
s
M
o
d
ifi
ab
il
it
y
St
ru
ct
u
ra
l
fe
at
u
re
D
ec
is
io
n
al
re
ad
y
m
ad
e
C
o
m
m
u
n
ic
at
io
n

#
A

ff
ec

te
d

#
T

o
ta

l
%

A
ff

ec
te

d
0
0
0
0
0
2
0
0
2
4

1
2

3
3
1
0
0
0

0
1

1
0

1
4

1
2
3
3
0
0

3
1

2
0

0
0

0
6

1
6

3
8

2
3

1
0
1
4
2
3

3
4

3
2

3
8

8
4

0
1
7
4
0
1
0
3
0
1

6
2

4
6

7
0

5
0

0
0
0
0
0
2

7
1

5
4

7

3
9

2
0

6
6

6
4

6
9

6
9

1
1

7
5

9
(1

1
7

)
3

8
1

7
5

5
5

3
5

8
ed

(6
9

)
4

1
3

2
9

9
9

9
6

9
1

3
J.A. Miller et al. / The Journal of Sys

aspect may have ramifications in other parts of the system that may not be obvious. Care should therefore be taken when architecting these aspects.

4.2.2. Architectural aspects across project groups

Table 2 shows the number of requirements decisions that were affected by each architectural aspect and for each of the six project teams. The table shows that the architectural aspect “NF characteristics of a different sub-system” affected the most decisions (20; or 17% of 117 decisions; or 29% of 69 affected decisions).

[Table 2. The relationship between architectural aspects and project teams. For each of Teams 1–6 and for each architectural aspect (existing hardware; NF characteristics of the same and of a different sub-system; reusable modules; architectural patterns; modifiability; structural features; decisions already made; communications), the table lists the number of decisions affected, the totals, and the percentages out of all 117 decisions and of the 69 affected decisions; individual cell values omitted.]

Besides this, all the remaining architectural aspects affected between 4% and 13% of the affected decisions (see last row in Table 2). Also, we see that in Table 1, the aspect “NF characteristics (different sub-system)” has the greatest % of “enabled” requirements decisions (16 of 36, or 44%).

We do see some discrepancies, however. While “NF characteristics (different sub-systems)” was the most active architectural aspect (see Table 2), the instances of affected requirements decisions came from groups 3, 4 and 5. One explanation for this phenomenon could be that this particular aspect depended on how much effort the subjects put into understanding sub-systems that were non-local to those in their focus of attention. Indeed, while acquiring an understanding of the other sub-systems in the architecture did actually affect the decision-making of groups 3, 4 and 5, it is possible that the other groups simply did not focus their attention on seemingly unrelated sections of the architectural documentation. We do not have data for this analysis, and so future empirical studies could help explain this phenomenon.

Despite this variance between teams, we include all data points since this is a multiple-case study. However, including this data results in 59% of requirements decisions being affected by architectural aspects (as seen in Table 2, last column, 3rd row from the bottom), while their exclusion would result in 52% of the decisions being affected, so there is not much difference. Thus, we will simply state that the architectural aspects listed affected approximately 50% of the requirements decisions.

5. Implications

There are a number of implications for SA and RE of the findings from our study.

5.1. Planning and risk management

The analysis and categorisation, during the RE process, of architectural effects on RE decisions (see Section 4.1.1) could help architects to separate the more easily implementable, enabled, requirements from the more difficult to implement (or compromised), constrained, requirements. This separation of concerns could be useful from the point of view of project planning (e.g., time-to-implement, resource allocation, requirements prioritisation and scheduling), risk management (e.g., implementability) (Boehm, 1988), and product evolution (e.g., new feature planning). For example, one group in the dataset elicited high-level requirements to reduce the cost of telephone operators in telephone banking by introducing an automatic speech recognition system. These requirements were “enabled” in two principal ways: one, by readily available COTS systems/components from the marketplace and two, by the modifiability of the current implementation of the telephone sub-system. The same group elicited high-level requirements for the mobile banking application, specifically that the existing infrastructure (i.e., servers and their throughput) could be used to handle the mobile banking application transaction load. However, these requirements were assessed as “constrained” because of the existing performance demands from the other major


types of access to the system (e.g., Internet, teller, etc.). So, for planning purposes, the management had to decide: should I upgrade the SA in order to implement the requirements for the mobile application, which has a potentially high positive impact from the customer’s point of view? Or, should I implement instead the requirements for the automated phone system, where these are less desirable from the customer’s point of view but less time-consuming to implement, and hence can lead to releasing the system faster and thus start saving money by removing the human telephone operators?

5.2. RE and SA technology

Similarly, this separation of concerns of architectural effects could help researchers and tool developers to enrich the requirements elicitation and analysis tools (e.g., DOORS, Requisite Pro, i* (Liu and Yu, 2001), etc.) which, in turn, could enrich SA tools (e.g., ArchE (Diaz-Pace et al., 2008), Software Architect, etc.) in making judicious choices of architectural tactics and patterns to satisfy quality requirements. Currently, RE and SA tools do not consider the presence of an existing system when performing further RE and SA work, and therefore do not facilitate the presentation or analysis of information describing the RE–SA interaction effects (such as the information in Table 1). Integrating this information, and subsequent analysis support, into RE–SA tools could then enable users to make decisions based on information that is currently left implicit.

Likewise, this separation of concerns can help in implementing automatic, dialogue-triggering mechanisms in RE-to-SA workflow processes (Georgakopoulos et al., 1995), especially for the “constraint” category of requirements. That is, the RE and SA agents can be notified automatically to resolve the tradeoffs between implementing a constrained decision (at the expense of customer dissatisfaction) and implementing an unconstrained decision (at the expense of architectural modifications).
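
One possible realization of this dialogue-triggering idea is a simple rule that flags any decision recorded as “constrained” for joint RE–SA review. The Python sketch below is schematic only, with invented record fields, and does not describe any existing RE or SA tool.

# Schematic trigger: notify RE and SA agents when a requirements decision is
# recorded as "constrained" by the existing architecture (illustrative only;
# the notify() call stands in for an email or workflow step in a real tool).
def notify(agents, message):
    for agent in agents:
        print(f"[to {agent}] {message}")

def record_decision(decision_id, effect_type, constraining_aspect=None):
    if effect_type == "constrained":
        notify(
            ["requirements engineer", "software architect"],
            f"Decision {decision_id} is constrained by {constraining_aspect}; "
            "please resolve the trade-off (compromise the requirement vs. "
            "modify the architecture).",
        )

record_decision("D42", "constrained",
                constraining_aspect="performance demands of the teller sub-system")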

5.3. Architectural evolution

Historical trends of aggregate quantitative data (as in Table 1) can aid in SA management and in opportunistic or restrained RE practice. For example, if the trend shows that too many RE decisions are constrained by the specific aspects of the legacy SA (e.g., 8, 5 or 6 in the “constrained” row in Table 1) then this might call for: (i) examination of SA practices and developing checklists to ensure that architects are not inadvertently restricting potential future business goals; (ii) restructuring7 the SA to align it with business goals; or (iii) restraining the RE process (from attempting to integrate unconstrained requirements into the constraining parts of the SA) until such time that the architecture has been adequately restructured. Conversely, trends of too many enabled decisions (e.g., 16 in the “enabled” row for “NF characteristics (different sub-system)” in Table 1) could possibly indicate that the enabling aspects of the SA are, at least, technologically supportive of the new ventures and can unleash RE to be more opportunistic. This type of analysis and questioning is not a part of architecting methods (e.g., ADD (Bass et al., 2003), GRL (Liu and Yu, 2001), and CBSP (Egyed et al., 2001)) or architecture evolution approaches (e.g., ArchWare (Morrison et al., 2007), ESDM (Shen and Madhavji, 2006), FIESTA (Waignier et al., 2007)), and, doing so, could allow for improved architectural evolution support.

7 SA restructuring can include such tasks as: capability analysis (of the SA as to whether it can cope with stakeholder scenarios), tactics and pattern choices, technology assessment, deployment strategies, and others.
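
A trend check of the kind described in Section 5.3 could be as simple as monitoring the share of constrained decisions per release and flagging the architecture for review when that share stays high. The Python sketch below is a schematic illustration; the release data and thresholds are invented.

# Schematic trend check over releases: flag possible SA restructuring when the
# share of constrained requirements decisions stays high (data are invented).
release_history = {
    "Release 1": {"constrained": 4, "total": 20},
    "Release 2": {"constrained": 9, "total": 21},
    "Release 3": {"constrained": 11, "total": 19},
}

THRESHOLD = 0.40        # assumed tolerable share of constrained decisions per release
MIN_RELEASES_OVER = 2   # how many releases must exceed it before flagging

over = [name for name, counts in release_history.items()
        if counts["constrained"] / counts["total"] > THRESHOLD]

if len(over) >= MIN_RELEASES_OVER:
    print("Constrained-decision share exceeded", THRESHOLD, "in:", ", ".join(over))
    print("Consider examining SA practices, restructuring the SA, or restraining RE.")
else:
    print("No restructuring trigger from the constrained-decision trend.")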


5.4. Tighter SA–RE integration

With over 50% of the RE decisions being affected by an SA (see
Table 2), and many of these (29% or 20/69) originating from the
aspect “NF characteristics of a different sub-system”, this is strong
empirical evidence in favour of integrating software architect-
ing and RE processes more tightly (Nuseibeh, 2001). Specifically,
the SA agents could work with the RE agents during require-
ments elicitation, negotiation and feasibility analysis in order to
provide critical insight on the technical feasibility of the elicited
requirements in terms of them being constrained or enabled from
a different sub-system as opposed to the sub-system they are
working on.

Therefore, a hypothesis emerges (see Section 6) that, in order
to reduce the amount of backtracking and requirements-rework
(and also reduce the associated project costs), it is important that
the architects provide “live” feedback to the RE agents on these
potential system-wide “constraints” and “enablers”.

However, due to resource constraints in RE–SA processes of
a software project (for example, in some projects it may not be
possible for requirements engineers to have extensive interaction
with the architects), at the very least requirements engineers could
analyse different sub-systems than the one they are working on
to possibly discover more local requirements decisions that could
be enabled. If this is so, requirements engineers could be trained,
and appropriate tools developed, specifically for this circumspec-
tive analysis in order to yield more enabled solutions for better
service and satisfaction to the end-user. As mentioned in Section 1
of this paper, the current industry practice does not align with this
recommendation.

5.5. RE-to-SA feed-forward process

Iterative development approaches (such as RUP (Kruchten,
2001) and spiral (Boehm, 1988)) tend to promote that significant
chunks of requirements are validated and prioritised preceding
the development effort. While this may be quite appropriate in
many situations, there is room to be agile in some situations across
RE-architecting processes by introducing “feed-forward” processes
from RE to SA. In particular, requirements engineers can pack-
age critical information and deliver this to the architects prior to
the delivery of the validated new requirements. For example, in
our case study projects the requirements engineers could have
packaged information about the four architectural categories of
high impact (see Section 4.1.2: existing hardware, NF standards
(same sub-system), NF characteristics (different sub-system), and
architectural patterns), the specific requirement decisions that are
affected, and how they were affected (e.g., constrained, enabled
or influenced). This package of information, if made available to
the architects “ahead of time”, could facilitate groundwork for spe-
cific architectural enhancements, and change, while the rest of
new requirements are still being elicited in the RE process. We
note that agile practices (Larman, 2003) do not explicitly promote
such feed-forward processes from user stories to system develop-
ment.
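
As a rough illustration of what such a feed-forward package could contain, the Python sketch below groups draft requirement decisions by the architectural aspect and effect type recorded for them; the structure and field names are assumptions made for illustration, not part of the study’s tooling.

# Illustrative "feed-forward" package from RE to SA: draft decisions grouped by
# the architectural aspect and effect type recorded for them (hypothetical data).
from collections import defaultdict

draft_decisions = [
    {"id": "R12", "aspect": "Existing hardware", "effect": "constrained"},
    {"id": "R15", "aspect": "NF characteristics (different sub-system)", "effect": "enabled"},
    {"id": "R16", "aspect": "Architectural patterns", "effect": "influenced"},
]

def build_feed_forward_package(decisions):
    package = defaultdict(list)
    for decision in decisions:
        package[(decision["aspect"], decision["effect"])].append(decision["id"])
    return dict(package)

for (aspect, effect), ids in build_feed_forward_package(draft_decisions).items():
    print(f"{aspect} / {effect}: {', '.join(ids)}")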

5.6. Increased middleware

The neutral type of effect accounts for a significant number of cases (approx. 40%, see Table 1). Neutral cases actually mean that the
developers will likely have to “wire in” the design and code for

a new requirement into the system much more deeply than in
the “enabled” cases where, for example, the groundwork would
already have been prepared in the existing architecture for the new
requirements to be implemented. The deeper the “wiring in”, the higher the software costs in general and the more arduous the development.


Thus, some of the “wiring-in” work could possibly be reduced in the future by an increased “middleware” strategy in the architectural design.

5.7. Analysis

So, as we see above, there are quite a few implications of determining architectural effects on requirements decisions: on early software development practices, methods and tools. The identified implications are threads for further empirical work to ground them in development processes.

6. Future empirical work

One purpose of an exploratory study is to lay a foundation for possible future work on the theme of the research so as to build an appropriate body of knowledge (IEEE SWEBOK, 2004). In a sense, the exploratory study is conducted in a “bottom-up” manner, where the research question acts as a guide to collecting a wide range of data about the research topic, and the findings are discovered from the exploratory analysis of this data. In an effort to lay such a foundation, it is important to identify any emergent hypotheses or investigative questions from this research. From such hypotheses, it would then be possible to conduct, in a “top-down” manner, quantitative studies that focus on specific research issues. The main purpose of conducting a “top-down” study is to statistically test the hypothesis to lend quantitative support to the topic being investigated.

From the results of our study and their implications, below we describe the following four emergent hypotheses that could be tested in future studies and how they could be tested:

Hypothesis 1. If the architects provide “live” feedback to the RE agents on potential system-wide constraints and enablers, then the amount of requirements-rework will be reduced.

See Section 5.4 for a more detailed discussion of the background of this hypothesis. To test this hypothesis, we would need to measure the amount of requirements rework between two different groups of software engineers. This measurement could include the amount of requirements-rework needed to be done, and also the extent of the rework (i.e., effort and time). One of these study groups would have requirements engineers and architects who are working together in an integrated manner to develop the requirements and architecture; the other type of group would not have the requirements engineers and architects working as closely integrated. For this hypothesis, the independent variable would be the requirements and architecting process used, and the dependent variable is the time and effort expended performing requirements-rework.

Hypothesis 2. Non-functional (NF) characteristics of a non-local sub-system significantly affect (enable or constrain) requirements for the local sub-system being worked on.

In Table 1, we see that NF characteristics of a different sub-system than the one being worked upon affected requirements more than any other aspect. This could have potentially important implications on RE–SA technology as discussed in Section 5. Despite this importance, prior to investigating new technologies, there is a need to replicate this study in different domains and contexts in order to determine generalizability.

To test this hypothesis, therefore, two types of RE–SA groups are needed for the study: one that is given the entire architecture including information regarding the NF characteristics of all the sub-systems, whereas the other group does not receive this NF information. Both groups would elicit requirements for a single sub-system and, as in this study, architectural aspect analysis is


performed and the number of impacted requirements is logged and
statistically compared. The independent variable would then be the
presence/absence of NF characteristic information of non-local sub-
systems, and the dependent variable would be the reported number
of impacted requirements.
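
For example, under the stated design the comparison could be made with an independent-samples t-test on the per-team counts of impacted requirements. The sketch below uses Python with SciPy and entirely made-up counts, purely to illustrate the analysis step; the actual choice of statistical test would depend on the data collected.

# Illustrative comparison for Hypothesis 2: number of impacted requirements
# reported by teams with vs. without NF information about non-local
# sub-systems (all counts are invented for illustration).
from scipy import stats

with_nf_info = [12, 15, 11, 14, 13]     # impacted requirements per team (hypothetical)
without_nf_info = [7, 9, 6, 8, 10]

result = stats.ttest_ind(with_nf_info, without_nf_info)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No statistically significant difference detected.")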

Hypothesis 3. If the history of interaction effects between SA
and RE is used effectively, then the time/effort spent performing
evolutionary work in requirements and architecture processes will
decrease.

As discussed in Section 5, maintaining and using the history of
information presented in Table 1 could be useful for evolutionary
work in the requirements and architecting processes. This hypothe-
sis aims at providing scientific evidence as to whether or not having
such information is useful and, if so, to what extent.

A controlled experiment involving two study groups could
be used to test this hypothesis. Development teams expected to
enhance a system (both requirements and architecture) would
be used. One type of study group would be given the histori-
cal interaction effect information from the past revisions of the
system; whereas, the other group would not receive this informa-
tion. Process data such as effort and time would be gathered and
then analysed to determine any statistically significant differences
between the two types of groups. The independent variable is the
historical information, and the dependent variable is the time and
effort spent in performing an evolutionary phase in an RE and SA
project.

Hypothesis 4. Architectural communication protocols used in the
current system have a significant effect on new requirements.

In Table 1 in Section 4.2, communication protocols used in
the architecture have an effect on new requirements. Despite the
finding that the effect is mostly constrained, this is more likely a function of our product circumstances; thus we generalize this hypothesis to all the types of effects. Establishing
further evidence of this claim can lead to improved RE and SA
technology where this issue is more explicitly considered in those
processes.

To test this hypothesis, a study with two types of RE groups
enhancing the requirements for an existing system is needed.
One type of group will be given an existing system architecture
with fully realized communication protocols. The other type of
group would be given an existing architecture, however, the com-
munication protocols would be undetermined. The two types of
groups would provide data on the architectural aspects affecting
the requirements they are eliciting, and in the end the number
of requirements affected by communication protocols would be
statistically compared to determine evidence to support or refute
the hypothesis. The independent variable is the realization of com-
munication protocols, and the dependent variable would be the
reported number of affected requirements.

7. Conclusions

The role of an existing software architecture (SA) in require-
ments engineering (RE) was recognised as important over a decade
ago (Shekaran, 1994a). However, to our knowledge, this issue
has not been scientifically explored. This paper describes an
exploratory study on this question. This study involved six RE teams
eliciting requirements to enhance an existing system, and collect-
ing and analyzing data from their in-project decisions that they

made. Collection of data was facilitated by a tool that allowed the
teams to not only do their requirements work but also capture
study-specific data. This tool was based on a requirements deci-
sion meta-model (see Fig. 1) that was designed and validated for
use in this study.


From the findings of the study, we conclude that:

There exist at least four types of architectural effects on RE deci-
sions (see Section 4.1.1): as an enabler (30%), as a constraint
(25%), as an influence (6%), and as neutral (39%). This means
that approximately 60% of the RE decisions were affected (or
approximately 40% were not affected) by the SA. These character-
istics add significant new knowledge to the literature (Shekaran,
1994b) where the existence of the “constraint” effect was sus-
pected but the different types of effects and their extent were not
known.
Also, different aspects of the SA can have different degrees of
effects on RE decisions (see Section 4.1.2). From our study, there
were nine different aspects of which “non-functional character-
istics (of sub-systems other than the one the analyst is working
on for eliciting new requirements)” had the most impact on the
affected RE decisions: approximately 29%.

There are several implications of the findings on: planning and risk management; RE and SA technology; architecture evolution; SA and RE processes; and Middleware. These are discussed in Section 5.

Apart from the general need to replicate empirical studies, several notable suggestions for future empirical work would be to conduct studies based on the following four emergent hypotheses: (1) architects providing “live” feedback to RE agents on system-wide constraints and enablers will reduce the amount of requirements-rework, (2) non-functional characteristics of a non-local sub-system significantly affect requirements for the local sub-system being worked on, (3) time/effort spent performing evolutionary work in requirements and architecting processes will decrease if the history of interaction effects between SA and RE is used effectively, and (4) architectural communication protocols used in a current system have a significant effect on new requirements.

Since ours was only one exploratory study in a particular context, it would be a mistake to generalize these results verbatim to other contexts (Zave, 1997). However, this does not diminish the importance of the findings described in this paper. Instead, we encourage the readers to view this study as an important first step for establishing grounded theory for future studies in this area.

Acknowledgement

This work was, in part, supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

References

Baldwin, C.Y., Clark, K.B., 2000. Design Rules, vol. 1: The Power of Modularity. The MIT Press.

Bass, L., Clements, P., Kazman, R., 2003. Software Architecture in Practice. Addison-Wesley.

Boehm, B., 1988. A spiral model of software development and enhancement. IEEE Computer 21 (5), 61–72.

Boehm, B., Basili, V., 2001. Software defect reduction top 10 list. IEEE Computer 34 (January (1)), 135–137.

Bruin, H.d., Vliet, H.V., 2003. Quality-driven software architecture composition. Journal of Systems and Software 66 (3), 269–284.

Carver, J., Shull, F., Basili, V., 2002. Observational studies to accelerate process experience in classroom studies: an evaluation. In: Proc. 2003 Int. Symp. on Emp. Software Engineering (ISESE ‘03), Rome, Italy, December, pp. 72–79.

Creswell, J.W., 2003. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Sage Publications, Thousand Oaks, CA.

Cui, X., Sun, Y., Mei, H., 2008. Towards automated solution synthesis and rationale capture in decision-centric architecture design. In: Proc. Seventh Working IEEE/IFIP Conference on Software Architecture, February, pp. 221–230.

Diaz-Pace, A., Hyunwoo, K., Len, B., Philip, B., Felix, B., 2008. Integrating quality attribute reasoning frameworks in the ArchE design assistant. In: Proc. QoSA’08 4th International Conference on the Quality of Software Architecture, University of Karlsruhe (TH), Germany, October 14–17.


Egyed, A., Grunbacher, P., Medvidovic, N., 2001. Refinement and evolution issues
in bridging requirements and architecture—the CBSP approach. In: Proc. First
International Workshop from Software Requirements to Architectures (STRAW
‘01), Toronto, Canada, June.

El Emam, K., Madhavji, N.H., 1995. Measuring the success of requirements engi-
neering processes. In: Proc. 2nd IEEE Int. Symp., RE, York, England, March, pp.
204–211.

Farenhorst, R., Lago, P., Vliet, H.V., 2007. EAGLE: effective tool support for shar-
ing architectural knowledge. International Journal of Cooperative Information
Systems 16 (3–4), 413–437.

Ferrari, Madhavji, 2008a. Architecting-problems rooted in requirements. Informa-
tion and Software Technology 50 (January (1–2)), 53–66.

Ferrari, Madhavji, 2008b. Software architecting without requirements knowledge
and experience: what are the repercussions? Journal of Systems and Software
81 (September (9)), 1470–1490.

Garlan, D., 1994. The role of software architecture in requirements engineering. In:
Proc. First Int. Conf. on Requirements Engineering, April, p. 240.

Georgakopoulos, D., Hornick, M., Sheth, A., 1995. An overview of workflow manage-
ment: from process modeling to workflow automation infrastructure. Journal
of Distributed and Parallel Databases 3 (April (2)), 119–153.

Hofmeister, C., Nord, R., Soni, D., 2005. Global analysis: moving from software
requirements specification to structural views of the software architecture. IEEE
Proceedings Software 152 (August (4)), 187–197.

Host, M., Regnell, B., Wohlin, C., 2000. Using students as subjects—a comparative
study of students and professionals in lead-time impact assessment. Empirical
Software Engineering, 201–214.

IEEE SWEBOK, 2004. Guide to the Software Engineering Body of Knowledge: 2004
Version. IEEE and IEEE Computer Society Project, http://www.swebok.org/.

Jackson, M., 1994. The role of architecture in requirements engineering. In: Proc.
First Int. Conf. on Requirements Engineering, April, p. 241.

Johnson, R.B., Christensan, L., 2004. Educational Research: Quantitative, Qualitative
and Mixed Approaches, 2nd ed. Allyn & Bacon.

Kazman, R., Klein, M., Clements, P., 2000. ATAM: method for architecture evaluation.
Technical Report, Software Engineering Institute, Carnegie Melon University,
CMU/SEI-2000-TR-004 ESC-TR-2000-004.

Keuler, T., Muthig, D., Uchida, T., 2008. Efficient quality impact analyses for iterative
architecture construction. In: Proc. Seventh Working IEEE/IFIP Conference on
Software Architecture (WICSA 2008), pp. 19–28.

Kotonya, G., Sommerville, I., 1998. Requirements Engineering. John Wiley & Sons,
Ltd.

Kozaczynski, W., 2002. Requirements, architectures and risks. In: Proc. IEEE Joint
International Conference on Requirements Engineering, Essen, Germany, pp.
6–7.

Kruchten, P., 2001. The Rational Unified Process: An Introduction, 2nd ed. Addison-
Wesley, Boston.

LaMantia, M.J., Cai, Y., MacCormack, A., Rusnak, J., 2008. Analyzing the evolution of
large-scale software systems using design structure matrices and design rule
theory: two exploratory cases. In: Proc. Seventh Working IEEE/IFIP Conference
on Software Architecture (WICSA 2008), pp. 83–92.

Larman, C., 2003. Agile and Iterative Development: A Manager’s Guide. Addison-
Wesley Professional.

Liu, L., Yu, E., 2001. From requirements to architectural design—using goals and
scenarios. In: Proc. 2nd Int. Workshop from Soft. Reqts. to Arch. (STRAW ‘01),
Toronto, Canada, June.

Miller, J., Ferrari, R., Madhavji, N.H., 2008. Architectural effects on requirements
decisions: an exploratory study. In: Proc. 7th Working IEEE/IFIP Confer-
ence on Software Architecture (WICSA ‘08), Vancouver, Canada, pp. 231–
240.

Miller, J., Ferrari, R., Madhavji, N.H., 2009. Characteristics of new requirements
in the presence or absence of an existing system architecture. In: Proc.
17th IEEE Conference on Requirements Engineering (RE ‘09), Atlanta, USA,
August.

Morrison, R., Balasubramaniam, D., Oquendo, F., Warboys, B., Greenwood, R.M.,
2007. FIESTA: a generic framework for integrating new functionalities into soft-
ware architectures. In: Proc. First European Conference on Software Architecture
(ECSA 2007), LNCS 4758, pp. 2–10.

Nuseibeh, B., 2001. Weaving together requirements and architectures. IEEE Com-
puters 34 (March (3)), 115–117.

Nuseibeh, B., Easterbrook, S., 2000. Requirements engineering: a roadmap.
In: Proc. Conf. on the Future of Software Engineering, ACM Press,
pp. 35–46.

Ramesh, B., Jarke, M., 2001. Toward reference models for requirements traceability.
IEEE Transactions on Software Engineering 2 (January (1)), 58–93.

Rapanotti, L., Hall, G., Jackson, M., Nuseibeh, B., 2004. Architecture-driven prob-
lem decomposition. In: Proc. 12th IEEE International Requirements Engineering
Conference (RE 2004), Kyoto, Japan, pp. 80–89.

Runeson, P., 2003. Using students as experiment subjects—an analysis on
graduate and freshman student data. In: EASE’03—Proc. 7th Int. Conf.
on Empirical Assessment & Evaluation in Software Engineering, April,
pp. 95–102.

Schwanke, R., 2005. GEAR: a good enough architectural requirements process. In:
Proc. 5th Working IEEE/IFIP Conference on Software Architecture (WICSA 05),
Pittsburgh, USA, pp. 57–66.

Shekaran, C., 1994a. Panel overview: the role of software architecture in require-
ments engineering. In: Proc. First Int. Conf. on Requirements Engineering, April,
p. 239.


Shekaran, C., 1994b. The role of software architecture in requirements engineering. In: Proc. First Int. Conf. on Requirements Engineering, April, p. 245.

Shaw, M., 2003. Writing good software engineering research papers: minitutorial. In: Proc. 25th International Conference on Software Engineering (ICSE 2003), Portland, USA, Tutorial Session, pp. 726–736.

Shen, Y., Madhavji, N.H., 2006. ESDM—a method for developing evolutionary scenarios for analysing the impact of historical changes on architectural elements. In: Proc. 22nd IEEE International Conference on Software Maintenance (ICSM’06), pp. 45–54.

Stoll, P., Wall, A., Norstrom, C., 2008. Guiding architectural decisions with the influencing factors method. In: Proc. Seventh Working IEEE/IFIP Conference on Software Architecture, February, pp. 179–188.

Software Requirements to Architectures Workshop (STRAW), 2001. Proc. International Conference on Software Engineering (ICSE) Workshop, Toronto, Canada, June.

Software Requirements to Architectures Workshop (STRAW), 2003. Proc. International Conference on Software Engineering (ICSE) Workshop, Portland, USA, May.

Tichy, W.F., Lukowicz, Prechelt, L., Ernst, A., 1995. Experimental evaluation in computer science: a quantitative study. Journal of Systems and Software (January), 1–18.

Trochim, W., 2006. Research Methods Knowledge Base. Available at http://www.socialresearchmethods.net/kb/design.php (last accessed January 2009).

Vogt, P., 1993. Dictionary of Statistics and Methodology: A Nontechnical Guide for the Social Sciences. Sage Publications, California, USA.

Waignier, G., Le Meur, A.F., Duchien, L., 2007. FIESTA: a generic framework for integrating new functionalities into software architectures. In: Proc. First European Conference on Software Architecture (ECSA 2007), LNCS 4758, pp. 76–91.

Wang, Z., Sherdil, K., Madhavji, N., 2005. ACCA: an architecture-centric concern analysis method. In: Proc. IEEE Working Int. Conference on Software Architecture (WICSA), Pittsburgh, USA, November, pp. 99–108.

Wieringa, R.J., Heerkens, J., 2006. The methodological soundness of requirements engineering papers: a conceptual framework and two case studies. Requirements Engineering Journal 11, 295–307.

Zave, P., 1997. Classification of research efforts in requirements engineering. ACM Computing Surveys 29 (4), 315–321.


Nazim H. Madhavji is a Professor in the Department of Computer Science at the
University of Western Ontario, Canada. He obtained his Ph.D. from the Univer-
sity of Manchester, England, in 1980. His research interests include: software
requirements; software architectures; evolution of software; software quality and
measurements; defect tracking and analysis; congruence between software prod-
ucts and processes; and empirical studies.

He has led a number of research projects in software engineering, involving
corporations such as IBM Canada, DMR Group, CAE Electronics, Transport Canada,
and CRIM, and was a Principal Investigator in several multi-university projects. He
is the chief architect and editor of the 27-chapter book “Software Evolution and
Feedback: Theory and Practice” with Juan F. Ramil and Dewayne Perry, John Wiley,
2006. He is an Editor (with Khaled El Emam) of the book: “Elements of Software
Process Assessment and Improvement”, IEEE Computer Society Press, 1999. He is
on the Editorial Boards of several scientific journals. He is a consultant to several
organisations in the field of software and is a consultant to several universities
internationally in the areas of Software Engineering research, pedagogy, and faculty
development.

Remo Ferrari is a doctoral candidate at the University of Western Ontario, London,
Ontario, Canada. His research interest is in Software Engineering, specifically in the
areas of Software Architecture, Requirements and Project Management. In particu-
lar, his work has investigated these areas through an empirical viewpoint, examining
such issues as the technical effects an architecture has on new requirements, and
the impact of human agents’ background experience on software architecting. His
primary research goal is to help in advancing the underlying scientific knowledge
and theory with respect to these and related issues. This goal is being pursued in
two phases: to first conduct “laboratory” or exploratory studies to generate pre-
liminary results and then, based on these findings, to conduct industrial studies.
Such empirical work is intended to form underlying grounded theory on which pro-
cesses, methods, tools, etc. can be developed. In addition to research, he teaches
both Software Engineering and Computer Science courses.

James Miller completed his Master of Science degree at the University of Western

Ontario, London, Ontario, Canada in the area of Software Engineering. His spe-
cific research interests are in the areas of Software Architecture and Requirements
Engineering, in particular, his investigations in these areas are from an empiri-
cal perspective, typically exploratory studies to generate preliminary results and
grounded theory on which further, more confirmatory work can be based. He is
currently serving as an officer in the Canadian Air Forces.



J. Software Engineering & Applications, 2010, 3, 827-838

doi:10.4236/jsea.2010.39096 Published Online September 2010 (http://www.SciRP.org/journal/jsea)


What’s Wrong with Requirements Specification?
An Analysis of the Fundamental Failings of
Conventional Thinking about Software
Requirements, and Some Suggestions for

Getting it Right

Tom Gilb

Result Planning Limited, Norway and UK.
Email: Tom@Gilb.com

ABSTRACT

We know many of our IT projects fail and disappoint. The poor state of requirements methods and practice is frequently
stated as a factor for IT project failure. In this paper, I discuss what I believe is the fundamental cause: we think like
programmers, not engineers and managers. We do not concentrate on value delivery, but instead on functions, on
use-cases and on code delivery. Further, management is not taking its responsibility to make things better. In this paper,
ten practical key principles are proposed, which aim to improve the quality of requirements specification.

Keywords: Requirements, Value Delivery, Requirements Definition, Requirements Methods

1. Introduction

We know many of our IT projects fail and disappoint.
We know bad ‘requirements’, that is requirements that
are ambiguous or are not really needed, are often a factor.
However in my opinion, the real problem is one that al-
most no one has openly discussed or dealt with. Certainly,
it fails to be addressed by many widely known and
widely taught methods. So what is this problem? In a
nutshell: it is that we think like programmers, and not as
engineers and managers. In other words, we do not con-
centrate on value delivery, but instead on functions, on
use cases and on code delivery. And no one is attempting
to prevent this: IT project management and senior man-
agement are not taking their responsibility to make things
better.

2. Ten Key Principles for Successful
Requirements

In this paper, my ten key principles for improving the
approach to requirements are outlined. These principles
are not new, and they could be said to be simply com-
monsense. However, many IT projects still continue to
fail to grasp their significance, and so it is worth restating

them. These key principles are summarized in Figure 1.
Let’s now examine these principles in more detail and
provide some examples.

Note, unless otherwise specified, further details on all
aspects of Planguage can be found in [1].

2.1. Understand the Top Level Critical
Objectives

I see the ‘worst requirement sin of all’ in almost all pro-
jects we look at, and this applies internationally. Time
and again, the high-level requirements (the ones that
funded the project), are vaguely stated, and ignored by
the project team. Such requirements frequently look like
the example given in Figure 2.

The requirements in Figure 2 have been slightly ed-
ited to retain anonymity. They are for a real project that
ran for eight years and cost over 100 million US dollars.
The project failed to deliver any of these requirements.
However, the main problem is that these are not top-level
requirements: they fail to explain in sufficient detail what
the business is trying to achieve. There are additional
problems as well that I’ll discuss further later in this pa-
per (such as lack of quantification, mixing optional de-
signs into the requirements, and insufficient background


Ten Key Principles for Successful Requirements

1 Understand the top level critical objectives
2 Look towards value delivery: systems thinking, not just software
3 Define a ‘requirement’ as a ‘stakeholder-valued end state’
4 Think stakeholders: not just users and customers!
5 Quantify requirements as a basis for software engineering
6 Don’t mix ends and means
7 Focus on the required system quality, not just its functionality
8 Ensure there is ‘rich specification’: requirement specifications need far more information than the requirement itself!
9 Carry out specification quality control (SQC)
10 Recognize that requirements change: use feedback and update requirements as necessary

Figure 1. Ten key principles for successful requirements.

Example of Initial Top Level Objectives

1 Central to the corporation’s business strategy is to be the world’s premier integrated service provider
2 Will provide a much more efficient user experience
3 Dramatically scale back the time frequently needed after the last data is acquired to time align, depth correct, splice, merge, recompute and/or do whatever else is needed to generate the desired products
4 Make the system much easier to understand and use than has been the case with the previous system
5 A primary goal is to provide a much more productive system development environment than was previously the case
6 Will provide a richer set of functionality for supporting next generation logging tools and applications
7 Robustness is an essential system requirement
8 Major improvements in data quality over current practices

Figure 2. Example of initial top level objectives.

description).

Management at the CEO, CTO and CIO level did not
take the trouble to clarify these critical objectives. In fact,
the CIO told me that the CEO actively rejected the idea
of clarification! So management lost control of the pro-
ject at the very beginning.

Further, none of the technical ‘experts’ reacted to the
situation. They happily spent $100 million on all the
many suggested architecture solutions that were mixed in
with the objectives.

It actually took less than an hour to rewrite one of
these objectives so that it was clear, measurable, and
quantified. So in one day’s work the project could have
clarified the objectives, and avoided 8 years of wasted
time and effort.

1) The top ten critical requirements for any project can
be put on a single page.

2) A good first draft of the top ten critical require-
ments for any project can be made in a day’s work, as-

suming access to key management.

2.2. Look towards Value Delivery: Systems
Thinking, not Just a Focus on Software

The whole point of a project is delivering realized value,
also known as benefits, to the stakeholders: it is not the
defined functionality, and not the user stories that count.
Value can be defined as the benefit we think we get from
something [1]. See Figure 3. Notice the subtle distinc-
tion between initially perceived value (‘I think that
would be useful’), and realized value: effective and fac-
tual value (‘this was in practice more valuable than we
thought it would be, because …’).

The issue is that conventional requirements thinking is not closely enough coupled with ‘value’. IT
business analysts frequently fail to gather the information
supporting a more precise understanding and/or the cal-
culation of value. Moreover, the business people when
stating their requirements frequently fail to justify them


Figure 3. Value can be delivered gradually to stakeholders. Different stakeholders will perceive different value.

using value.

The danger if requirements are not closely tied to
value is that:

1) We risk failure to deliver the value expected, even if
‘requirements’ are satisfied

2) We risk failing to think about all the things that are necessary prerequisites to actually delivering full value to real stakeholders on time: we need systems thinking – not just programming.

How can we articulate and document notions of value
in a requirement specification? See the Planguage exam-
ple for Intuitiveness, a component quality of Usability, in
Figure 4.

For brevity, a detailed explanation cannot be given here. Hopefully, the Planguage specification is
reasonably understandable without detailed explanation.
For example, the Goal statement (80%) specifies which
market (USA) and users (Seniors) it is intended for,
which set of tasks are valued (the ‘Photo Tasks Set’), and
when it would be valuable to get it delivered (2012). This
‘qualifier’ information in all the statements, helps docu-
ment where, who, what, and when the quality level ap-
plies. The additional Value parameter specifies the per-
ceived value of achieving 100% of the requirement. Of
course, more could be said about value and its specifica-
tion, this is merely a ‘wake-up call’ that explicit value
needs to be captured within requirements. It is better than
the more common specifications of the Usability re-
quirement that we often see, such as: “2.4. The product
will be more user-friendly, using Windows”.
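
To make the structure of such a specification concrete, here is a minimal Python sketch of how a quantified requirement with qualified Goal and Value levels might be represented in a tool. The field names follow the Planguage terms used above (Ambition, Scale, Meter, Goal, Value), but the representation itself is an assumption made for illustration, not an official Planguage schema.

# Minimal illustrative model of a quantified requirement with qualified levels,
# loosely following the Planguage terms used in Figure 4 (not an official schema).
from dataclasses import dataclass, field

@dataclass
class Level:
    kind: str                  # e.g. "Goal", "Fail", "Tolerable", "Value"
    amount: float              # target % on the Scale, or money for a "Value" level
    qualifiers: dict = field(default_factory=dict)  # e.g. Market, User, Task, When
    source: str = ""           # evidence, e.g. "Draft Marketing Plan"

@dataclass
class QuantifiedRequirement:
    name: str
    ambition: str
    scale: str                 # formal definition of the scale of measure
    meter: str                 # the measuring process
    levels: list = field(default_factory=list)

intuitiveness = QuantifiedRequirement(
    name="Usability.Intuitiveness",
    ambition="Any potential user can immediately discover and correctly use all functions.",
    scale="% chance that a defined [User] can complete the defined [Tasks] with no external help.",
    meter="Consumer Reports tests all tasks for all defined user types.",
    levels=[
        Level("Goal", 80.0,
              {"Market": "USA", "User": "Seniors", "Task": "Photo Tasks Set", "When": 2012},
              "Draft Marketing Plan"),
        Level("Value", 2_000_000,
              {"Market": "USA", "User": "Seniors", "Task": "Photo Tasks Set", "Time Period": 2012},
              "Marketing estimate (USD)"),
    ],
)

print(intuitiveness.name, "has", len(intuitiveness.levels), "specified levels")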

So who is going to make these value statements in re-
quirements specifications? I don’t expect developers to
care much about value statements in requirements. Their

job is to deliver the requirement levels that someone else
has determined are valued. Deciding what sets of re-
quirements are valuable is a Product Owner (Scrum) or
Marketing Management function. Certainly only the IT-
related value should be determined by the IT staff.

2.3. Define a ‘Requirement’ as a
‘Stakeholder-Valued End State’

Do we all have a shared notion of what a ‘requirement’ is?
I am afraid this is another of our problems. Everybody has
an opinion, and most of the opinions about the meaning
of the concept ‘requirement’ are at variance with most
other opinions. I believe that few of the popular defini-
tions are correct or useful. Below I provide you with my
latest ‘opinion’ about the best definition of ‘requirement’,
but note it is a ‘work in progress’ and possibly not my
final definition. Perhaps some of you can help improve
this definition even further.

To emphasize ‘the point’ of IT systems engineering, I
have decided to define a requirement as a “stakeholder-
valued end state”. You possibly will not accept, or use
this definition yet, but this is the definition that I shall
use in this paper, and I will argue the case for it. In addi-
tion, I have also identified, and defined a large number of
requirement concepts [1]. A sample of these concepts is
given in Figure 5.

Further, note that I make a distinction amongst:
1) A requirement (a stakeholder-valued end state)
2) A requirement specification
3) An implemented requirement
4) A design in partial, or full service, of implementing

a requirement.


Usability.Intuitiveness:
Type: Marketing Product Requirement.
Stakeholders: {Marketing Director, Support Manager, Training Center}.
Impacts: {Product Sales, Support Costs, Training Effort, Documentation Design}.
Supports: Corporate Quality Policy 2.3.
Ambition: Any potential user, any age, can immediately discover and correctly use all functions of the product, without training, help from friends, or external documentation.
Scale: % chance that a defined [User] can successfully complete the defined [Tasks], with no external help.
Meter: Consumer Reports tests all tasks for all defined user types, and gives public report.
————— Analysis —————
Trend [Market = Asia, User = {Teenager, Early Adopters}, Product = Main Competitor, Projection = 2013]: 95% ± 3% <- Market Analysis.
Past [Market = USA, User = Seniors, Product = Old Version, Task = Photo Tasks Set, When = 2010]: 70% ± 10% <- Our Labs Measures.
Record [Market = Finland, User = {Android Mobile Phone, Teenagers}, Task = Phone + SMS Task Set, Record Set = January 2010]: 98% ± 1% <- Secret Report.
————— Our Product Plans —————
Goal [Market = USA, User = Seniors, Product = New Version, Task = Photo Tasks Set, When = 2012]: 80% ± 10% <- Draft Marketing Plan.
Value [Market = USA, User = Seniors, Product = New Version, Task = Photo Tasks Set, Time Period = 2012]: 2 M USD.
Tolerable [Market = Asia, User = {Teenager, Early Adopters}, Product = Our New Version, Deadline = 2013]: 97% ± 3% <- Marketing Director Speech.
Fail [Market = Finland, User = {Android Mobile Phone, Teenagers}, Task = Phone + SMS Task Set, Product Release 9.0]: Less Than 95%.
Value [Market = Finland, User = {Android Mobile Phone, Teenagers}, Task = Phone + SMS Task Set, Time Period = 2013]: 30K USD.

Figure 4. A practical made-up Planguage example, designed to display ways of making the value of a requirement clear.

Figure 5. Example of Planguage requirements concepts.


These distinctions will be described in more detail

later in this paper.

2.4. Think Stakeholders: Not Just Users and
Customers!

Too many requirements specifications limit their scope to
being too narrowly focused on user or customer needs.
The broader area of stakeholder needs and values should
be considered, where a ‘stakeholder’ is anyone or any-
thing that has an interest in the system [1]. It is not just
the end-users and customers that must be considered: IT
development, IT maintenance, senior management, gov-
ernment, and other stakeholders matter as well.

2.5. Quantify Requirements as a Basis for
Software Engineering

Some systems developers call themselves ‘software engineers’; they might even have a degree in the subject, or
in ‘computer science’, but they do not seem to practice
any real engineering as described by engineering profes-
sors, like Koen [2]. Instead these developers all too often
produce requirements specifications consisting merely of
words. No numbers, just nice sounding words; good
enough to fool managers into spending millions for
nothing (for example, “high usability”).

Engineering is a practical bag of tricks. My dad was a
real engineer (with over 100 patents to his name!), and I
don’t remember him using just words. He seemed forever
to be working with slide rules and back-of-the-envelope
calculations. Whatever he did, he could you tell why it
was numerically superior to somebody else’s product. He
argued with numbers and measures.

My life changed professionally, when, in my twenties,
I read the following words of Lord Kelvin: “In physical
science the first essential step in the direction of learning
any subject is to find principles of numerical reckoning
and practicable methods for measuring some quality
connected with it. I often say that when you can measure
what you are speaking about, and express it in numbers,
you know something about it; but when you cannot
measure it, when you cannot express it in numbers, your
knowledge is of a meagre and unsatisfactory kind; it may
be the beginning of knowledge, but you have scarcely in
your thoughts advanced to the state of Science, whatever
the matter may be” [3]. Alternatively, more simply, also
credited to Lord Kelvin: “If you can not measure it, you
can not improve it.”

The most frequent and critical reasons for software projects are to improve systems qualitatively compared to their predecessors (which may or may not be automated
logic). However, we seem to almost totally avoid the

practice of quantifying these qualities, in order to make
them clearly understood, and also to lay the basis for
measuring and tracking our progress in improvement
towards meeting our quality level requirements.

This art of quantification of any quality requirement
should be taught as a fundamental to university students
of software and management disciplines (as it is in other
sciences and engineering). One problem seems to be that
the teachers of software disciplines do not appreciate that
quality has numeric dimensions and so cannot teach it.
Note the problem is not that managers and software peo-
ple cannot and do not quantify at all. They do. It is the lack
of ‘quantification of the qualitative’ – the lack of numeric
quality requirements – that is the specific problem.

Perhaps we need an agreed definition of ‘quality’ and
‘qualitative’ before we proceed, since the common inter-
pretation is too narrow, and not well agreed. Most soft-
ware developers when they say ‘quality’ are only think-
ing of bugs (logical defects) and little else. Managers
speaking of the same software do not have a broader
perspective. They speak and write often of qualities, but
do not usually refer to the broader set of ‘-ilities’ as
qualities, unless pressed to do so. They may speak of
improvements, even benefits instead.

I believe that the concept of ‘quality’ is simplest ex-
plained as ‘how well something functions’. I prefer to
specify that it is necessarily a ‘scalar’ attribute, since
there are degrees of ‘how well’. In addition to quality,
there are other requirement-related concepts, such as
workload capacity (how much performance), cost (how
much resource), function (what we do), and design (how
we might do function well, at a given cost) [4,1]. Some
of these concepts are scalar and some, binary. See Fig-
ures 6 and 7 for some examples of quality concepts and
how quality can be related to the function, resources and
design concepts.

My simple belief is that absolutely all qualities that we
value in software (and associated systems) can be ex-
pressed quantitatively. I have yet to see an exception. Of
course most of you do not know that, or believe it. One
simple way to explore this is to search the internet. For
example: “Intuitiveness scale measure” turns up 3 million
hits, including this excellent study [5] by Yang et al.

Several major corporations have top-level policy to
quantify all quality requirements (sometimes suggested
by me, sometimes just because they are good engineers).
They include IBM, HP, Ericsson and Intel [4,1].

The key idea for quantification is to define, or reuse a
definition of, a scale of measure. For an example (given
earlier with more detail), see Figure 8.

To give some explanation of the key quantification
features in Figure 8:


Figure 6. A way of visualizing qualities in relation to function and cost. Qualities and costs are scalar variables, so we can
define scales of measure in order to discuss them numerically. The arrows on the scale arrows represent interesting points,
such as the requirement levels. The requirement is not ‘security’ as such, but a defined, and testable degree of security [1].

Figure 7. A graphical way of understanding performance attributes (which include all qualities) in relation to function, de-
sign and resources. Design ideas cost some resources, and design ideas deliver performance for given functions. Source [1].

1) Ambition is a high-level summary of the require-
ment: one that is easy to agree to and understand
roughly. The Scale and Goal following it MUST corre-
late to this Ambition statement.

2) Scale is the formal definition of our chosen scale of
measure. The parameters [User] and [Task] allow us to
generalize here, while becoming more specific in detail
below (see earlier example). They also encourage and
permit the reuse of the Scale, as a sort of ‘pattern’.

3) Meter is a defined measuring process. There can be

more than one for different occasions. Notice the Kelvin
quotation above, how he twice in the same sentence dis-
tinguishes carefully between numeric definition (Scale),
and measurement process or instrument (Meter). Many
people, I hope you are not one, think they are the same
thing; they are not: km/hour is not a speedometer, and a
volt is not a voltmeter.

4) Goal is one of many possible requirement levels
(see earlier detail for some others; Fail, Tolerable,
Stretch, Wish are other requirement levels). We are de-
fining a stakeholder-valued future state (state = 80% ±
10%).

One stakeholder is ‘USA Seniors’. The future is 2012.
The requirement level type, Goal is defined as a very
high priority, budgeted promise of delivery. It is of
higher priority than a Stretch or Wish level. Note other
priorities may conflict and prevent this particular re-
quirement from being delivered in practice.

If you know the conventional state of requirements
methods, then you will now, from this example alone,
begin to appreciate the difference that I am proposing.
Especially for quality requirements. I know you can
quantify time, costs, speed, response time, burn rate, and
bug density–but there is more!
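To make the Scale/Meter/Goal structure above concrete, here is a minimal sketch (not from the paper; the class and field names are invented) of holding such a quantified requirement as data and checking a measured value, obtained via the Meter, against the Goal level. The values come from the Intuitiveness example in Figure 8.

from dataclasses import dataclass, field

@dataclass
class QuantifiedRequirement:
    # Illustrative Planguage-style quality requirement (hypothetical structure).
    name: str
    ambition: str
    scale: str                # formal definition of the scale of measure
    meter: str                # the measuring process (distinct from the Scale)
    goal_level: float         # required level on the Scale
    tolerance: float = 0.0    # the "± 10%" part of the Figure 8 Goal
    qualifiers: dict = field(default_factory=dict)

    def goal_met(self, measured: float) -> bool:
        # A measurement taken with the Meter is compared against the Goal.
        return measured >= self.goal_level - self.tolerance

intuitiveness = QuantifiedRequirement(
    name="Usability.Intuitiveness",
    ambition="Any potential user can immediately discover and correctly use all functions",
    scale="% chance that defined [User] can complete defined [Tasks] with no external help",
    meter="Consumer-reports style test of all tasks for all defined user types",
    goal_level=80.0,
    tolerance=10.0,
    qualifiers={"Market": "USA", "User": "Seniors", "Task": "Photo Tasks Set", "When": 2012},
)

print(intuitiveness.goal_met(74.0))  # True: within the 80% - 10% tolerated band
print(intuitiveness.goal_met(65.0))  # False: below the tolerated level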

Here is another example of quantification. It is the ini-
tial stage of the rewrite of Robustness from the Figure 2
example. First we determined that Robustness is complex
and composed of many different attributes, such as Test-
ability. See Figure 9.

And see Figure 10, which quantitatively defines one
of the attributes of Robustness, Testability.

Note this example shows the notion of there being dif-
ferent levels of requirements. Principle 1 also has rele-
vance here as it is concerned with top-level objectives
(requirements). The different levels that can be identified
include: corporate requirements, the top-level critical few
project or product requirements, system requirements and
software requirements. We need to clearly document the

Usability. Intuitiveness:

Type: Marketing Product Quality Requirement.

Ambition: Any potential user, any age, can immediately discover and correctly use all functions of the product, without
training, help from friends, or external documentation.

Scale: % chance that defined [User] can successfully complete defined [Tasks] with no external help.

Meter: Consumer reports tests all tasks for all defined user types, and gives public report.

Goal [Market = USA, User = Seniors, Product = New Version, Task = Photo Tasks Set, When = 2012]: 80% ± 10% <- Draft Marketing Plan.

Figure 8. A simple example of quantifying a quality requirement, ‘Intuitiveness’.

Robustness:

Type: Complex Product Quality Requirement.

Includes: {Software Downtime, Restore Speed, Testability, Fault Prevention Capability, Fault Isolation Capability, Fault Analysis Capability, Hardware Debugging Capability}.

Figure 9. Definition of a complex quality requirement, Robustness.

Testability:

Type: Software Quality Requirement.

Version: Oct 20, 2006.

Status: Draft.

Stakeholder: {Operator, Tester}.

Ambition: Rapid duration automatic testing of with extreme operator setup and initiation.

Scale: The duration of a defined [Volume] of testing or a defined [Type of Testing] by a defined [Skill Level] of system operator under defined [Operating Conditions].

Goal [All Customer Use, Volume = 1,000,000 data items, Type of Testing = WireXXXX vs. DXX, Skill Level = First Time Novice, Operating Conditions = Field]: < 10 minutes.

Design: Tool simulators, reverse cracking tool, generation of simulated telemetry frames entirely in software, application specific sophistication for drilling – recorded mode simulation by playing back the dump file, application test harness console <- 6.2.1 HFS.

Figure 10. Quantitative definition of testability, an attribute of Robustness.


level and the interactions amongst these requirements.

An additional notion is that of ‘sets of requirements’.
Any given stakeholder is likely to have a set of require-
ments rather than just an isolated single requirement. In
fact, achieving value could depend on meeting an entire
set of requirements.

2.6. Don’t Mix Ends and Means

“Perfection of means and confusion of ends seem to
characterize our age.” Albert Einstein. 1879-1955

The problem of confusing ends and means is clearly an
old one, and deeply rooted. We specify a solution, design
and/or architecture, instead of what we really value–our
real requirement [6]. There are explanatory reasons for
this – for example solutions are more concrete, and what
we want (qualities) are more abstract for us (because we
have not yet learned to make them measurable and con-
crete).

The problems occur when we do confuse them: if we
do specify the means, and not our true ends. As the say-
ing goes: “Be careful what you ask for, you might just
get it” (unknown source). The problems include:

1) You might not get what you really want
2) The solution you have specified might cost too much or have bad side effects, even if you do get what you want
3) There may be much better solutions you don’t know about yet.

So how do we find the ‘right requirement’, the ‘real
requirement’ [6] that is being ‘masked’ by the solution?
Assume that there probably is a better formulation, which
is a more accurate expression of our real values and
needs. Search for it by asking ‘Why?’ Why do I want X?
Because I really want Y, and assume I will get it
through X. But then why do I want Y? Because I really
want Z, and assume Y is the best way to get it. Con-
tinue the process until it seems reasonable to stop. This is
a slight variation on the ‘5 Whys’ technique [7], which is

normally used to identify root causes of problems (rather
than high level objectives).

Assume that our stakeholders will usually state their
values in terms of some perceived means to get what
they really value. Help them to identify (The 5 Whys?)
and to acknowledge what they really want, and make that
the ‘official’ requirement. Don’t insult them by telling
them that they don’t know what they want. But explain
that you will help them more-certainly get what they
more deeply want, with better and cheaper solutions,
perhaps new technology, if they will go through the ‘5
Whys?’ process with you. See Figure 11.

Note that this separation of designs from the require-
ments does not mean that you ignore the solutions/de-
signs/architecture when doing software engineering. It is
just that you must separate your requirements, including
any mandatory means, from any optional means.

2.7. Focus on the Required System Quality, Not
Just its Functionality

Far too much attention is paid to what the system must
do (function) and far too little attention is given to how
well it should do it (qualities)–in spite of the fact that
quality improvements tend to be the major drivers for
new projects. See Table 1, which is from the Confirmit
case study [8]. Here focusing on the quality requirements,
rather than the functions, achieved a great deal!

2.8. Ensure there is ‘Rich Specification’:
Requirement Specifications need Far More
Information than the Requirement itself

Far too much emphasis is often placed on the require-
ment itself; and far too little concurrent information is
gathered about its background, for example: who wants
this requirement and why? What benefits do they per-
ceive from this requirement? I think the requirement it-
self might be less than 10% of a complete requirement
specification that includes the background information.

I believe that background specification is absolutely

Why do you require a ‘password’? For Security!

What kind of security do you want? Against stolen information

What level of strength of security against stolen information are you willing to pay for? At least a 99% chance that hackers cannot break in within 1 hour of trying! Whatever that level costs up to €1 million.

So that is your real requirement? Yep.

Can we make that the official requirement, and leave the security design to both our security experts, and leave it to proof by measurement to decide what is really the right design? Of course!

The aim being that whatever technology we choose, it gets you the 99%?

Sure, thanks for helping me articulate that!

Figure 11. Example of the requirement, not the design feature, being the real requirement.


Table 1. Extract from the Confirmit case study [8].

Description of requirement/work task | Past | Status
Usability. Productivity: Time for the system to generate a survey | 7200 sec | 15 sec
Usability. Productivity: Time to set up a typical market research report | 65 min | 20 min
Usability. Productivity: Time to grant a set of end-users access to a report set and distribute report login information | 80 min | 5 min
Usability. Intuitiveness: The time in minutes it takes a medium-experienced programmer to define a complete and correct data transfer definition with Confirmit Web Services, without any user documentation or any other aid | 15 min | 5 min
Performance. Runtime. Concurrency: Maximum number of simultaneous respondents executing a survey with a click rate of 20 sec and a response time < 500 ms, given a defined [Survey Complexity] and a defined [Server Configuration, Typical] | 250 users | 6000 users
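As illustrative arithmetic only, the improvement factors implied by the Past and Status columns of Table 1 can be computed directly; this is roughly what "achieved a great deal" amounts to numerically.

# Illustrative arithmetic only: improvement factors implied by Table 1 above.
rows = {
    "Generate a survey (sec)": (7200, 15),
    "Set up a market research report (min)": (65, 20),
    "Grant end-user access and distribute logins (min)": (80, 5),
    "Define a data transfer definition, no documentation (min)": (15, 5),
    "Maximum simultaneous respondents (users)": (250, 6000),
}

for task, (past, status) in rows.items():
    # For the timing rows smaller is better; for concurrency larger is better.
    factor = past / status if past > status else status / past
    print(f"{task}: {past} -> {status} (about {factor:.0f}x better)")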

mandatory: it should be a corporate standard to specify a
great deal of this related information, and ensure it is
intimately and immediately tied into the requirement
specification itself.

Such background information is the part of a specifi-
cation that is useful related information, but is neither
central (core) to the implementation nor commentary.
The central information includes: Scale, Meter,
Goal, Definition and Constraint. Commentary is any de-
tail that probably will not have any economic, quality or
effort consequences if it is incorrect, for example, notes
and comments.

Background specification includes: benchmarks {Past,
Record, Trend}, Owner, Version, Stakeholders, Gist
(brief description), Ambition, Impacts, and Supports. The
rationale for background information is as follows:

1) To help judge value of the requirement
2) To help prioritize the requirement
3) To help understand risks with the requirement
4) To help present the requirement in more or less detail for various audiences and different purposes
5) To give us help when updating a requirement
6) To synchronize the relationships between different but related levels of the requirements
7) To assist in quality control of the requirements
8) To improve the clarity of the requirement.
See Figure 12 for an example, which illustrates the help given by background information regarding risks.

Reliability:

Type: Performance Quality.

Owner: Quality Director. Author: John Engineer.

Stakeholders: {Users, Shops, Repair Centers}.

Scale: Mean Time Between Failure.

Goal [Users]: 20,000 hours < - Customer Survey, 2004.

Rationale: Anything less would be uncompetitive.

Assumption: Our main competitor does not improve more than 10%.

Issues: New competitors might appear.

Risks: The technology costs to reach this level might be excessive.

Design Suggestion: Triple redundant software and database system.

Goal [Shops]: 30,000 hours < - Quality Director.

Rationale: Customer contract specification.

Assumption: This is technically possible today.

Issues: The necessary technology might cause undesired schedule delays.

Risks: The customer might merge with a competitor chain and leave us to foot the costs for the component parts that they might no longer require.

Design Suggestion: Simplification and reuse of known components.

Figure 12. A requirement specification can be embellished with many background specifications that will help us to under-
stand risks associated with one or more elements of the requirement specification [9].
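A minimal sketch (field names invented, not the paper's notation) of keeping the core specification and the background information together in one requirement object, following the Reliability example in Figure 12.

from dataclasses import dataclass, field

@dataclass
class Background:
    # Background information: useful, but neither core nor commentary (illustrative).
    owner: str = ""
    author: str = ""
    stakeholders: list = field(default_factory=list)
    rationale: str = ""
    assumptions: list = field(default_factory=list)
    issues: list = field(default_factory=list)
    risks: list = field(default_factory=list)
    design_suggestions: list = field(default_factory=list)

@dataclass
class Requirement:
    # Core specification: what is binding for implementation (illustrative).
    name: str
    scale: str
    goal: str
    background: Background = field(default_factory=Background)

reliability = Requirement(
    name="Reliability",
    scale="Mean Time Between Failure",
    goal="Goal [Users]: 20,000 hours <- Customer Survey, 2004",
    background=Background(
        owner="Quality Director",
        author="John Engineer",
        stakeholders=["Users", "Shops", "Repair Centers"],
        rationale="Anything less would be uncompetitive.",
        assumptions=["Our main competitor does not improve more than 10%."],
        issues=["New competitors might appear."],
        risks=["The technology costs to reach this level might be excessive."],
        design_suggestions=["Triple redundant software and database system."],
    ),
)

print(reliability.name, "-", reliability.goal)
print("Risks:", reliability.background.risks)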


Let me emphasize that I do not believe that this back-
ground information is sufficient if it is scattered around
in different documents and meeting notes. I believe it
needs to be directly integrated into a single master, reusable
requirement specification object for each requirement.

Otherwise it will not be available when it is needed, and
will not be updated, or shown to be inconsistent with
emerging improvements in the requirement specification.
See Figure 13 for a requirement template for function
specification [1], which hints at the richness possible

TEMPLATE FOR FUNCTION SPECIFICATION

Tag: <...tion>.
Type: <{Function Specification, Function (Target) Requirement, Function Constraint}>.

=================================== Basic Information ===================================

Version: <...>.
Status: <{Draft, SQC Exited, Approved, Rejected}>.
Quality Level: <...>.
Owner: <...>.
Stakeholders: <...>.
Gist: <...>.
Description: <... detailed. Remember to include definitions of any local terms>.

===================================== Relationships =====================================

Supra-functions: <... even more illuminating. Note: an alternative way of expressing supra-function is to use Is Part Of>.
Sub-functions: <... alternative ways of expressing sub-functions are Includes and Consists Of>.
Is Impacted By: <... actual function is NOT modified by the design idea, but its presence in the system is, or can be, altered in some way. This is an Impact Estimation table relationship>.
Linked To: <... the above specified hierarchical function relations and IE-related links. Note: an alternative way to express such a relationship is to use Supports or Is Supported By, as appropriate>.

====================================== Measurement ====================================

Test: <...>.

================================ Priority and Risk Management =============================

Rationale: <Justify the existence of this function. Why is this function necessary?>.
Value [...]: <...livering the requirement>.
Assumptions: <... problems if they were not true, or later became invalid>.
Dependencies: <... which this function itself is dependent on in any significant way>.
Risks: <...ments and expected results>.
Priority: <... before. Give any relevant reasons>.
Issues: <...>.

====================================== Specific Budgets ==================================

Financial Budget: <...tion>.

Figure 13. A template for function specification [1].


for background information.

2.9. Carry out Specification Quality Control
(SQC)

There is far too little quality control of requirements,
against relevant standards for requirements. All require-
ments specifications ought to pass their quality control
checks before they are released for use by the next proc-
esses. Initial quality control of requirements specification,
where there has been no previous use of specification
quality control (SQC) (also known as Inspection), using
three simple quality-checking rules (‘unambiguous to
readers’, ‘testable’ and ‘no optional designs present’),
typically identifies 80 to 200+ words per 300 words of
requirement text as ambiguous or unclear to intended
readers [10]!
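As a rough illustration only (the real SQC rules are far richer), a crude first pass at the ‘testable’ rule could simply flag unquantified ‘nice sounding words’ and report a defect density per 300 words of requirement text; the word list and the example sentence below are invented.

import re

# Invented, deliberately small list of words that usually signal an unquantified quality.
SUSPECT_WORDS = {"high", "fast", "easy", "user-friendly", "robust", "flexible",
                 "efficient", "intuitive", "scalable", "secure", "reliable"}

def crude_sqc_check(requirement_text: str) -> dict:
    # Very crude stand-in for one of the three quality-checking rules.
    words = re.findall(r"[A-Za-z-]+", requirement_text.lower())
    flagged = [w for w in words if w in SUSPECT_WORDS]
    per_300 = len(flagged) * 300 / max(len(words), 1)
    return {"words": len(words), "flagged": flagged, "flagged_per_300_words": round(per_300, 1)}

print(crude_sqc_check("The system shall provide high usability and a fast, user-friendly interface."))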

2.10. Recognise That Requirements Change: Use
Feedback and Update Requirements as
Necessary

Requirements must be developed based on on-going
feedback from stakeholders, as to their real value.
Stakeholders can give feedback about their perception of
value, based on realities. The whole process is a ‘Plan
Do Study Act’ cyclical learning process involving many
complex factors, including factors from outside the sys-
tem, such as politics, law, international differences, eco-
nomics, and technology change.

The requirements must be evolved based on realistic
experience. Attempts to fix them in advance of this ex-
perience flow are probably wasted energy: for example,
if they are committed to in contracts and fixed specifica-
tions.

3. Who or What will Change Things?

Everybody talks about requirements, but few people
seem to be making progress to enhance the quality of
their specifications and improve support for software
engineering. I am pessimistic. Yes, there are internation-
ally competitive businesses, like HP and Intel that have
long since improved their practices because of their
competitive nature and necessity. But they are very dif-
ferent from the majority of organizations building soft-
ware. The vast majority of IT systems development
teams we encounter are not highly motivated to learn or
practice first class requirements (or anything else!). Nei-
ther the managers nor the developers seem strongly mo-
tivated to improve. The reason is that they get by with,
and get well paid for, failed projects.

The universities certainly do not train IT/computer sci-
ence students well in requirements, and the business
schools also certainly do not train managers about such
matters [11]. The fashion now seems to be to learn over-
simplified methods, and/or methods prescribed by some
certification or standardization body. Interest in learning
provably more-effective methods is left to the enlight-
ened and ambitious few – as usual. So it is only the
elite few organizations and individuals who do in fact
realize the competitive edge they get with better practices
[8,12]. Maybe this is simply the way the world is: first
class and real masters of the art are rare. Sloppy ‘mud-
dling through’ is the norm. Failure is inevitable or, per-
haps, denied. Perhaps insurance companies and lawmak-
ers might demand better practices, but I fear that even
that would be corrupted in practice, if history is any
guide (think of CMMI and the various organizations at
Level 5).

Excuse my pessimism! I am sitting here writing with
the BP Gulf Oil Leak Disaster in mind. The BP CEO
Hayward just got his reward today of £11 million in pen-
sion rights for managing the oil spill and 11 deaths. In
2007, he said his main job was “to focus ‘laser like’ on
safety and reliability” [13]. Now how would you define,
measure and track those requirements?

Welcome if you want to be exceptional! I’d be happy
to help!

4. Summary

Current typical requirements specification practice is
woefully inadequate for today’s critical and complex
systems. There seems to be wide agreement about that. I
have personally seen several real projects where the ex-
ecutives involved allowed over $100 million to be
wasted on software projects, rather than ever changing
their corporate practices. $100 million here and there,
corporate money, is not big money to these guys!

We know what to do to improve requirements specifi-
cation, if we want to, and some corporations have done
so, some projects have done so, some developers have
done so, some professors have done so: but when are the
other 99.99% of requirements stakeholders going to
wake up and specify requirements to a decent standard?
If there are some executives, governments, professors
and/or consultancies who want to try to improve their
project requirements, then I suggest starting by seeing how
your current requirements specifications measure up to
the ten key principles in this paper.

5. Acknowledgements

Thanks to Lindsey Brodie for editing this paper.


REFERENCES

[1] T. Gilb, “Competitive Engineering: A Handbook for Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage,” Elsevier Butterworth-Heinemann, Boston, 2005.

[2] B. V. Koen, “Discussion of the Method: Conducting the Engineer’s Approach to Problem Solving,” Oxford University Press, Oxford, 2003.

[3] L. Kelvin, “Electrical Units of Measurement,” a Lecture Given on 3 May 1883, Published in the Book “Popular Lectures and Addresses, Volume 1,” 1891.

[4] T. Gilb, “Principles of Software Engineering Management,” Addison-Wesley, Boston, 1988.

[5] Z. Yang, S. Cai, Z. Zhou and N. Zhou, “Development and Validation of an Instrument to Measure User Perceived Service Quality of Information Presenting Web Portals,” Information & Management, Vol. 42, No. 4, 2005, pp. 575-589.

[6] T. Gilb, “Real Requirements.” http://www.gilb.com/tiki-download_file.php?fileId=28

[7] T. Ohno, “Toyota Production System: Beyond Large-Scale Production,” Productivity Press, New York, 1988.

[8] T. Johansen and T. Gilb, “From Waterfall to Evolutionary Development (Evo): How we Created Faster, More User-Friendly, More Productive Software Products for a Multi-National Market,” Proceedings of INCOSE, Rochester, 2005. http://www.gilb.com/tiki-download_file.php?fileId=32

[9] T. Gilb, “Rich Requirement Specs: The Use of Planguage to Clarify Requirements.” http://www.gilb.com/tiki-download_file.php?fileId=44

[10] T. Gilb, “Agile Specification Quality Control,” Testing Experience, March 2009. www.testingexperience.com/testingexperience01_08

[11] K. Hopper and W. Hopper, “The Puritan Gift,” I. B. Taurus and Co. Ltd., London, 2007.

[12] “Top Level Objectives: A Slide Collection of Case Studies.” http://www.gilb.com/tiki-download_file.php?fileId=180

[13] “Profile: BP’s Tony Hayward,” BBC News US and Canada, 27 July 2010. http://www.bbc.co.uk/news/world-us-canada-10754710


The Journal of Systems and Software 84 (2011) 328–339

User requirements modeling and analysis of software-intensive systems

Michel dos Santos Soares (Universidade Federal de Uberlândia, P.O. Box 593, 38400-902 Uberlândia, Brazil), Jos Vrancken and Alexander Verbraeck (Delft University of Technology, P.O. Box 5015, NL 2600 GA, Delft, The Netherlands)

Article history: Received 20 July 2010; received in revised form 7 October 2010; accepted 14 October 2010; available online 26 October 2010.

Keywords: Requirements; UML; SysML; Software-intensive systems

Abstract

The increasing complexity of software systems makes Requirements Engineering activities both more important and more difficult. This article is about user requirements development, mainly the activities of documenting and analyzing user requirements for software-intensive systems. These are modeling activities that are useful for further Requirements Engineering activities. Current techniques for requirements modeling present a number of problems and limitations. Based on these shortcomings, a list of requirements for requirements modeling languages is proposed. The proposal of this article is to show how some extensions to SysML diagrams and tables can fulfill most of these requirements. The approach is illustrated by a list of user requirements for a Road Traffic Management System.

© 2010 Elsevier Inc. All rights reserved.

1. Introduction

Software-intensive systems (Wirsing et al., 2008; Tiako, 2008; Hinchey et al., 2008) are large, complex systems in which software is an essential component, interacting with other software, systems, devices, actuators, sensors and with people. Being an essential component, software influences the design, construction, deployment, and evolution of the system as a whole (ANSI/IEEE, 2000). These systems are in widespread use and their impact on society is increasing. Developments in engineering software-intensive systems have a large influence on the gains in productivity and prosperity that society has seen in recent years (Dedrick et al., 2003). Their complexity is increased due to the large number of elements and reliability factors. Thus, they must be decomposed into several smaller components in order to manage complexity and facilitate their implementation and verification. In addition, there is a need to increase the level of abstraction, hiding whenever possible unnecessary complexity, by the intense use of models. Examples of software-intensive systems can be found in many sectors, such as manufacturing plants, transportation, military, telecommunication and health care.

More specifically, the type of software-intensive systems that are investigated in this article are the Distributed Real-Time Systems. The term Real-Time System usually refers to systems with explicit timing constraints (Gomaa, 2000; Laplante, 2004). Dijkstra (2002) recognized that some applications are concurrent in nature.


In concurrent problems, there is no way of predicting which sys-
tem component will provide the next input, which increases design
complexity. Moreover, system components, such as sensors and
actuators, are often geographically distributed in a network and
need to communicate according to specific timing constraints
described in requirements documents.

Requirements for software are a collection of needs expressed
by stakeholders respecting some constraints under which the soft-
ware must operate (Pressman, 2009; Robertson and Robertson,
2006). Requirements can be classified in many ways. The first
classification used in this article is related to the level of detail
(the second classification is presented in Section 7.1). In this case,
the two classes of requirements are user requirements and sys-
tem requirements (Sommerville, 2010). User requirements are
high-level abstract requirements based on end users’ and other
stakeholders’ viewpoint. They are usually written using natural lan-
guage, occasionally with the help of domain specific models such
as mathematical equations, or even informal models not related to
any method or language (Luisa et al., 2004). The fundamental pur-
pose of user requirements specification is to document the needs
and constraints gathered in order to later develop software based
on those requirements.

Systems requirements are derived from user requirements but
with a detailed description of what the system should do, and
are usually modeled using formal or semi-formal methods and

languages. This proposed classification allows the representation
of different views for different stakeholders. This is good Soft-
ware Engineering practice, as requirements should be written from
different viewpoints because different stakeholders use them for various purposes.



The process by which requirements for systems and software products are gathered, analyzed, documented and managed throughout the development life cycle is called Requirements Engineering (Sommerville, 2010). Requirements Engineering is a very influential phase in the life cycle. According to the SWEBOK (Abran et al., 2004), it concerns Software Design, Software Testing, Software Maintenance, Software Configuration Management, Software Engineering Management, Software Engineering Process, and Software Quality Knowledge Areas. Requirements Engineering is generally considered in the literature as the most critical phase within the development of software (Juristo et al., 2002; Komi-Sirviö et al., 2003; Damian et al., 2004; Minor and Armarego, 2005). Dealing with ever-changing requirements is considered the real problem of Software Engineering (Berry, 2004). Already in 1973, Boehm suggested that errors in requirements could be up to 100 times more expensive to fix than errors introduced during implementation (Boehm, 1973). According to Brooks (1987), knowing what to build, which includes requirements elicitation and technical specification, is the most difficult phase in the design of software. Lutz (1993) showed that 60% of errors in critical systems were the results of requirements errors. Studies conducted by the Standish Group (TSG, 2003) and other researchers (van Genuchten, 1991; Hofmann et al., 2001) found that the main factors for problems with software projects (cost overruns, delays, user dissatisfaction) are related to requirements issues, such as lack of user input, incomplete requirements specifications, uncontrolled requirements changing, and unclear objectives. In an empirical study with 12 companies (Hall et al., 2002), it was discovered that, out of a total of 268 development problems cited, 48% (128) were requirements problems.

Requirements Engineering can be divided into two main groups of activities (Parviainen et al., 2004): (i) requirements development, including activities such as eliciting, documenting, analyzing, and validating requirements, and (ii) requirements management, including activities related to maintenance, such as tracing and change management of requirements. This article is about user requirements development, mainly the activities of documenting and analyzing user requirements for software-intensive systems. These are modeling activities that are useful for further Requirements Engineering activities. The assumption in this article is that improving requirements modeling may have a strong impact on the quality of later requirements activities, such as requirements tracing, and in the design phase.

1.1. Research question

The main research question to be answered in this article is given as follows:

How to improve user requirements modeling and analysis for software-intensive systems?

This question is mainly answered through the early introduction of graphical models, which are used to document and analyze requirements. The identification and graphical representation of requirements relationships facilitate that traces are made. This helps in uncovering the impact that changes in requirements have in the system design. Requirements are important to determine the architecture. When designing the architecture, at least part of the functional requirements should be known. In addition, the non-functional requirements that the architecture has to conform with should be made explicit.

1.2. Article outline

Initially, a subset of a list of user requirements for a Road Traffic Management System (RTMS) is presented, using natural language, to be further modeled and analyzed (Section 2). Current


techniques for requirements modeling are presented in Section
3. A number of problems and limitations related to these tech-
niques are discussed in the same section. These shortcomings led
to a list of requirements for requirements modeling languages in
Section 4 and the proposed approach in Section 5 to fulfill the miss-
ing characteristics of the list. From the conclusion of Section 4,
the starting point for requirements modeling languages is to use
SysML diagrams and tables, which are presented in detail in Sec-
tion 6. Then, SysML’s constructions are extended in Section 7 and
proposed to model the initial list of user requirements (Section
8). The article ends with discussion (Section 9) and conclusions
(Section 10).

2. List of requirements for RTMS

The list of requirements given below is a subset from a document
which contains 79 atomic requirements for RTMS (AVV, 2006).
The document is a technical auditing work based on an extensive
literature study and interviews, in which the stakeholders were
identified. The requirements were gathered through interviews
with multiple stakeholders.

The stakeholders (and the related number of requirements)
were classified as: the Road Users (1), the Ministry of Transport,
Public Works and Water Management (2), the Traffic Managers
(10), the Traffic Management Center (8), the Task, Scenario and
Operator Manager (22), the Operators (4), the Designers of the
Operator’s Supporting Functions (15), and the Technical Quality
Managers (17). In this article the requirements of the Traffic Man-
ager were selected as an example to be modeled using SysML diagrams
and constructions in Section 8. The requirements are given as fol-
lows.

Traffic Manager:

• TM4—It is expected that software systems will be increasingly
more intelligent for managing the traffic-flow in a more effective
and efficient manner.

• TM5—To optimize traffic flow, it is expected that gradually,
region-wide traffic management methods will be introduced.

• TM6—The traffic management systems must have a conve-
nient access to region-wide, nation-wide, or even European-wide
parameters so that the traffic-flow can be managed optimally.

• TM7—It must be possible for the Traffic Managers/experts to
express (strategic) “task and scenario management frames”, con-
veniently.

• TM8—The system should effectively gather and interpret all kinds
of information for the purpose of conveniently assessing the per-
formance of the responsible companies/organizations that have
carried out the construction of the related traffic systems and/or
infrastructure.

• TM9—The system must support the Traffic Managers/experts so
that they can express various experimental simulations and ana-
lytical models.

• TM10—The system must enable the Traffic Managers/experts to
access various kinds of statistical data.

• TM11—The system must enable the Traffic Managers/experts to
access different kinds of data for transient cases such as incidents.

• TM12—The system must provide means for expressing a wide
range of tasks and scenarios.

• TM13—The traffic management will gradually evolve from object
management towards task and scenario management.
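Purely as an illustration (the field names are invented and are not the article's notation), atomic user requirements such as the ones above can be recorded as structured entries from the start, which is the raw material later mapped onto SysML Requirements and tables.

# Illustrative only: two of the Traffic Manager requirements as structured records.
user_requirements = [
    {"id": "TM10", "stakeholder": "Traffic Manager",
     "text": "The system must enable the Traffic Managers/experts to access "
             "various kinds of statistical data."},
    {"id": "TM12", "stakeholder": "Traffic Manager",
     "text": "The system must provide means for expressing a wide range of "
             "tasks and scenarios."},
]

# A trivial query over the records, e.g. everything stated by one stakeholder:
for req in user_requirements:
    if req["stakeholder"] == "Traffic Manager":
        print(req["id"], "-", req["text"])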

3. Requirements modeling approaches

There are several approaches to modeling requirements. Basi-
cally, these approaches can be classified as graphics-based, purely


Table 1
List of requirements properties and representation techniques.

List of requirements NL SNL XP UC RD T

(M) Graphical modeling © © © � � ©
(M) Human readable � � � � � �
(M) Independent towards methodology � � © � � �
(M) Relationship between requirements © © © � � �
(M) Relationship requirements/design © © © © � �
(M) Requirements risks © © � © © ©
(M) Identify types of requirements � � � © © ©
(M) Priority between requirements © � � © © ©
(M) Non-functional requirements � � � © � �
(M) Grouping related requirements � © © � © ©
(M) Consistency © © © � � �
(M) Modifiable � � � � � �
(M) Ranking requirements by stability © © � © © ©
(S) Solve ambiguity © � © © � �
(S) Well-defined semantics © � © � � �
(S) Machine readable © © © � � �
(S) Correctness � � � � � �
(S) Completeness � � � � � �
(S) Verifiable © � © � � �
(S) Traceable � � � � � �
(S) Type of relationship requirements © © © � � �


textual, or a combination of both. Some are generic while others are part of a specific methodology.

The most common approach is to write user requirements using natural language. The advantage is that natural language is the main means of communication between stakeholders. However, problems such as imprecision, misunderstandings, ambiguity and inconsistency are common when natural language is used (Kamsties, 2005).

With the purpose of giving more structure to requirements documents, structured natural language is used (Cooper and Ito, 2002). Nevertheless, structured natural language is neither formal nor graphical, and can be too much oriented to algorithms and specific programming languages. Other collateral effects are that structured specifications may limit too early the programmers' freedom, and are mostly tailored towards procedural languages, being less suitable for some modern languages and paradigms.

User Stories have been used as part of the eXtreme Programming (XP) (Beck, 1999) agile methodology. They can be written by the customer using non-technical terminology, in the format of sentences using natural language. Although XP offers some advantages in the Requirements Engineering process in general, such as user involvement and defined formats for user requirements and tasks, requirements are still loosely related, not graphically specified, and oriented to a specific methodology.

A well-known diagram used for requirements modeling are the Use Cases. Even before UML emerged as the main Software Engineering modeling language, Use Cases were already a common practice for graphically representing functional requirements in other methodologies, such as Object-Oriented Software Engineering (OOSE) (Jacobson, 1992). Use Cases have some disadvantages and problems (Simons, 1999). They are applied mainly to model functional requirements and are not very helpful for other types of requirements, such as non-functional ones (Soares and Vrancken, 2007). Use Case diagrams lack well-defined semantics, which may lead to differences in interpretation by stakeholders. For instance, the include and extend relationships are considered similar, or even the inverse of each other (Jacobson, 2004). In addition, Use Cases may be misused, when too much detail is added, which may incorrectly transform the diagrams into flowcharts or make them difficult to comprehend.

Two SysML diagrams are distinguished as useful mainly for Requirements Engineering activities: the SysML Requirements diagram and the SysML Use Case diagram (OMG, 2008). One interesting feature of the SysML Requirements diagram is the possibility of modeling other types of requirements besides the functional ones, such as non-functional requirements. The SysML Use Case diagram is derived from the UML Use Case diagram without important modifications. In addition to these diagrams, SysML Tables can be used to represent requirements in a tabular format. Tabular representations are often used in SysML but are not considered part of the diagram taxonomy (OMG, 2008). Detailed explanation about SysML diagrams and tables for Requirements Engineering are given in Section 6.

A comparison of the aforementioned requirements modeling approaches is given in the next section. The objective is to identify shortcomings of these approaches, which is used as the starting point of the proposed approach for a solution, in Section 5.

4. Desirable requirements specification properties for software-intensive systems

A list of desirable requirements for requirements modeling languages, together with a mapping of common languages and techniques, is given in Table 1. This non-exhaustive list of requirements for requirements modeling languages is based on literature

review presented in the introduction, on the modeling languages
briefly presented in Section 3, and on specific texts about require-
ments (IEEE, 1998; Beck, 1999; Luisa et al., 2004; Robertson and
Robertson, 2006). This list uses “(M) Must have” and “(S) Should
have” for each entry of the table, according to the MoSCoW labels
(Page et al., 2003) (Must, Should, Could, Want/Won’t have).

The characteristics proposed in IEEE (1998, Section 4.3) (correct,
unambiguous, complete, consistent, ranked for importance, ranked
for stability, verifiable, modifiable, and traceable) are related to a
good Software Requirements Specification (SRS) document. In this
article, these characteristics were used in the context of require-
ments modeling languages and techniques.

The reason for each entry of the list is given as follows.

4.1. Must have requirements

The modeling languages must provide graphical means to
express requirements. Common graphical models may facilitate the
communication of models to stakeholders. Models must be human
readable, as the multiple stakeholders involved have to understand
the models. In this case a balance is necessary, as the more machine
readable requirements are, the less human readable they become.
In addition, as multiple stakeholders and designers with different
backgrounds are involved, the modeling languages should be as
methodology independent as possible.

It is well-known by Software Engineering researchers and prac-
titioners that requirements are related to each other (Robertson and
Robertson, 2006). These interactions affect various software devel-
opment activities, such as release planning, change management
and reuse. A study has shown that the majority of requirements are
related to or influence other requirements (Carlshamre et al., 2001).
Due to this fact, it is almost impossible to plan systems releases only
based on the highest priority requirements, without considering
which requirements are related to each other.

From a project management point of view, one important
characteristic of a requirement is its priority. Prioritizing require-
ments is an important activity in Requirements Engineering (Davis,
2003). The purpose is to give an indication of the order in which
requirements should be addressed. Another important property of

a requirement from the project management point of view is to
identify its risk. For instance, a manager may be interested in identi-
fying the impact for a project if a specific requirement is not fulfilled.
Risk management is basically the activity concerned with trying to


detect risks in a project in advance and prevent problems with specific plans.

Despite their importance, non-functional requirements are usually not properly addressed in requirements modeling languages. For instance, UML Use Case diagrams are strong in modeling functional requirements. The various types of requirements must be identified in order to provide better knowledge of requirements for the stakeholders.

From the software design point of view, grouping requirements in the early phases of software development helps in identifying subsystems, components, and relationships between them. As a matter of fact, grouping requirements has a positive effect when designing the software architecture.

According to IEEE (1998), a SRS should be consistent and modifiable. In this article, these two properties of a SRS are considered of great significance, and are grouped with “Must have” requirements. The reason is that inconsistency between documents (Boehm, 1973; Pressman, 2009) and difficulty of changing requirements (Berry, 2004) are major causes for future problems during software development. A SRS is consistent if it agrees with other documents of the project, such as project management plans and system design models. Thus, the modeling language must be able to highlight conflicting requirements and non-conformances between requirements and design. A SRS is modifiable if its structure and style are such that any changes to the requirements can be made easily, completely, and consistently while retaining the structure and style. For instance, requirements must be expressed individually, rather than intermixed with other requirements. Thus, the modeling language must be able to describe requirements in a well-structured way.

Finally, as changing requirements is a source of problems, knowing how stable a requirement is, i.e., how ready it is for further design phases, is essential.

4.2. Should have requirements

Ambiguity should be solved, as ambiguity in requirements is a major cause of misunderstandings between stakeholders and designers. Thus, modeling languages should provide well-defined semantics, which increases machine readability.

According to IEEE (1998), an SRS is correct if every requirement stated is one that the software shall meet. The user can determine if the SRS correctly reflects his/her actual needs. Thus, the modeling languages should facilitate the user in this activity.

According to IEEE (1998), an SRS is complete if all significant requirements of every type are included. Thus, the modeling languages should be able to specify all types of requirements.

According to IEEE (1998), an SRS is verifiable if there is a cost-effective process with which a person or machine can check that the software meets the requirement. In general, any ambiguous requirement is not verifiable. Thus, the modeling language should provide non-ambiguous constructions in order to facilitate that the designer can create non-ambiguous requirements models.

According to IEEE (1998), an SRS is traceable if the origin of each of its requirements is clear, and if it facilitates the referencing of each requirement in future development. Thus, the modeling language should provide means to trace the requirement through design phases. In addition, the type of these relationships should be explicit.

4.3. Resulting table

Table 1 maps the list of requirements for modeling languages discussed in this section with the modeling languages discussed in Section 3. In the table, NL stands for natural language, SNL stands for structured natural language, XP stands for the XP User Stories, UC stands for both SysML and UML Use Cases, RD stands for SysML


Requirements diagram, and T stands for SysML Tables. We classi-
fied the entries as fully supported (�), half supported (�), or not
supported (©) (or not easily supported, or poorly supported).

From the table, it is clear that “Must have” requirements, such as
“Priority between requirements”, “Requirements risks”, “Identify
types of requirements”, and “Ranking requirements by stability”
are partially addressed or not addressed at all by most of the stud-
ied requirements modeling languages. The next section presents
the approach followed in order to try to fulfill all the given require-
ments.

Another conclusion from the table is that some “Must have”
requirements and the majority of “Should have” requirements are
fulfilled or at least partially fulfilled by a combination of the SysML
Requirements diagram and SysML Tables. Thus, a possible start-
ing point to address all requirements is to extend these SysML
constructions.

5. Proposed approach

With the explicit choice to use SysML, the proposal starts with
detailing SysML capacities for Requirements Engineering (Section
6). In Section 7.1, a classification for each atomic requirement is
proposed, avoiding the confusion of which type of requirement
is written in the user requirements document. The basic SysML
Requirements diagram is extended with new requirements prop-
erties such as priority. Individual requirements modeled by the
SysML Requirements diagram may be combined depending on their
semantics. This can be useful for the early discovery of subsys-
tems, in project management activities such as release planning,
and to propose the system architecture (Section 7). User require-
ments are also represented in a tabular format, which may facilitate
requirements tracing during the system life cycle. This is impor-
tant to know what happens when related requirements change
or are deleted, which improves traceability. Finally, Use Case dia-
grams are used to represent the actors involved and the scenarios
to be implemented (Section 8). Then, Use Cases are related to SysML
Requirements using one of the proposed relationships.

Although the idea in this article is to use graphical models
already in the early phases of system development, natural lan-
guage is still considered important. Despite its problems, there are
also advantages, as natural languages are the primary communica-
tion medium between people.

After being structured and graphically represented (Fig. 1) using
SysML Tables, SysML Requirements and SysML Use Case diagrams,
user requirements are detailed into system requirements, being
specified using other models, such as other UML/SysML diagrams
or using formal methods.

The SysML constructions (diagrams and tables) for modeling
user requirements are explained in detail in the following section.

6. Modeling user requirements using SysML

The SysML Requirements diagram helps in better organizing
requirements, and also shows explicitly the various kinds of rela-
tionships between different requirements. Another advantage of
using this diagram is to standardize the way of specifying require-
ments through a defined semantics. The SysML Requirements
constructs are intended to provide a bridge between traditional
requirements management specifications and the other SysML

models. When combined with UML for software design, the
requirements constructs provided by SysML can also fill the gap
between user requirements specification, normally written in nat-
ural language, and Use Case diagrams, used as initial specification
of system requirements (Soares and Vrancken, 2008a).


Fig. 1. Approach for modeling user requirements with SysML.

A SysML Requirement can also appear on other diagrams to show its relationship to design. With the SysML Requirements diagram, visualization techniques are applied from the early phases of system development. The SysML Requirements diagram is a stereotype of the UML class diagram, as shown in Fig. 2.

Fig. 2. Basic SysML Requirements diagram: the Requirement stereotype extends UML4SysML::Class and adds the properties Text: String and Id: String.

6.1. Relationships between requirements with SysML

Implementing all requirements in a single system release may be unattractive because of the high cost involved, lack of sufficient staff and time, and even client and market pressures. These difficulties make prioritization a fundamental activity during the Requirements Engineering process. Prioritizing requirements is giving an indication of the order in which requirements should be considered for implementation. However, it is not always possible to plan a system release based only on the set of more important requirements due to requirements relationships. A better knowledge of requirements relationships may be useful to make more feasible release plans, to reuse requirements and to drive system design and implementation.

The SysML Requirements diagram allows several ways to represent requirements relationships. These include relationships for defining requirements hierarchy, deriving requirements, satisfying


requirements, verifying requirements and refining requirements.
The relationships can improve the specification of systems, as they
can be used to model requirements. The relationships: hierar-
chy, derive, master/slave, satisfy, verify, refine and trace are briefly
explained as follows.

In large, complex systems, it is common to have a hierarchy of
requirements, and their organization into various levels helps in
dealing with system complexity. For instance, high-level business
requirements may be gradually decomposed into more detailed
software requirements, forming a hierarchy. Discovering the hier-
archy of requirements is an important design step in Requirements
Engineering. SysML allows splitting complex requirements into
more simple ones, as a hierarchy of requirements related to each
other (represented by the symbol ⊕). The advantage is that the
complexity of systems is treated from the early beginning of devel-
opment, by decomposing complex requirements.

The concept of hierarchy also permits the reuse of require-
ments. In this case, a common requirement can be shared by other
requirements. The hierarchy is built based on master and slave
requirements. The slave is a requirement whose text property is
a read-only copy of the text property of a master requirement.
The master/slave relationship is indicated by the use of the copy
keyword.

The derive relationship relates a derived requirement to its
source requirement. During Requirements Engineering activities,
new requirements are created from previous ones. Normally,
the derived requirement is under a source requirement in the
hierarchy. In a requirements diagram, the derive relationship is
represented by the keyword deriveReqt.

The satisfy relationship describes how a model satisfies one
or more requirements. It represents a dependency relationship
between a requirement and a model element, such as other SysML
diagrams, that represents that requirement. This relationship is
represented by the keyword satisfy. One example is to associate
a requirement to a SysML Block diagram.

The verify relationship defines how a test case can verify
a requirement. This includes standard verification methods for
inspection, analysis, demonstration or test. For example, given a
requirement, the steps necessary for its verification can be summa-

rized by a state-machine diagram. The keyword verify represents
this relationship.

The refine relationship provides a capability to reduce ambigu-
ity in a requirement by relating a SysML Requirement to another


model element. This relationship is typically used to refine a text-based requirement with a model. For example, how a Use Case can represent a requirement in a SysML Requirements diagram. The relationship is represented in the diagram by the keyword refine. The refinement is distinguished from a derive relationship in that a refine relationship can exist between a requirement and any other model element, whereas a derive relationship is only between requirements.

The trace relationship provides a general purpose relationship between a requirement and any other model element. Its semantics has no real constraints and is not as well-defined as the other relationships. For instance, a generic trace dependency can be used to emphasize that a pair of requirements are related in a different way not defined by other SysML relationships.

6.2. SysML Requirements table

Requirements traceability is an important quality factor in a system's design. Basically, requirements traceability helps in identifying the origin, destination, and links between requirements and models created during system development.

Identifying and maintaining traces between requirements are considered important activities during Requirements Engineering (Gotel and Finkelstein, 1994; Sahraoui, 2005). The activity of requirements tracing is very useful, for example, to identify how requirements are affected by changes. For instance, in later development phases a requirement may be removed, and the related requirements may also be deleted or reallocated. Another case is when a requirement has changed and the stakeholders need to know how this change will affect other requirements. Traceability also helps to ensure that all requirements are fulfilled by the system and subsystem components. When requirements are not completely traced to the specific design elements, there is a tendency to lose focus as to the specific responsibility of each design model. This can lead to costly changes late in the life cycle and can also lead to incorrect or missing functionality in the delivered system. As a matter of fact, important decisions on requirements and the corresponding models are better justified when traceability is given proper attention (Ramesh and Jarke, 2001). One way to manage requirements traceability in SysML is by using requirements tables.

SysML allows the representation of requirements, their properties and relationships in a tabular format. One proposed table shows the hierarchical tree of requirements from a master one. The fields proposed for Table 2 are the requirement's Id, name and type. There is a table for each requirement that has child requirements related by the hierarchy relationship.

Table 2
A SysML hierarchy requirements table (columns: Id, Name, Type).
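To illustrate the kind of traceability query that such tables support, the sketch below stores requirement relationships as simple records and asks which elements may be affected when one requirement changes. The relationship keywords are the SysML ones discussed above and the requirement Ids come from the case study in Section 8, but the data structure and the function are invented for this illustration.

from collections import namedtuple

# (source, target, kind) triples using the SysML relationship keywords.
Link = namedtuple("Link", "source target kind")

links = [
    Link("TM8", "TM9", "deriveReqt"),
    Link("TM13", "TM12", "deriveReqt"),
    Link("TM7", "TM5", "trace"),
    Link("TM7", "TM6", "trace"),
]

def affected_by(requirement_id, links):
    """Elements directly related to a requirement, i.e. candidates to
    re-check when that requirement is changed or deleted."""
    hits = set()
    for link in links:
        if link.source == requirement_id:
            hits.add(link.target)
        if link.target == requirement_id:
            hits.add(link.source)
    return hits

print(affected_by("TM9", links))   # {'TM8'}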

6.3. SysML Use Case diagram

The Use Case diagram shows system functionalities that are performed through the interaction of the system with its actors. The idea is to represent what the system will perform, not how. The diagrams are composed of actors, Use Cases and their relationships. Actors may correspond to users, other systems or any external entity to the system.

The SysML Use Case diagram is derived without important extensions from the UML Use Case diagram. The main difference is the wider focus, as the idea is to model complex systems that involve not only software, but also other systems, personnel, and hardware.

The detailed sequence of events in a use case can be represented
in different manners. It is common to describe the sequence of
events in structured language based on a pre-defined pattern, or by
using Activity diagrams (Almendros-Jimenez and Iribarne, 2005),
Sequence diagrams (Almendros-Jimenez and Iribarne, 2007), or
Petri nets (Soares and Vrancken, 2008b). Within SysML, a Use Case
may also be related to a SysML Requirements diagram. Which of
these techniques to use depends on the intended reader and the
development phase.

One important limitation of Use Case diagrams is that their focus is on specifying only functional requirements. Non-functional requirements, such as performance, and external requirements, such as interfaces, which are fundamental in software-intensive systems, are not well represented by Use Case diagrams.

7. Extensions to SysML Requirements diagram and tables

SysML is a highly customizable and extensible modeling lan-
guage (OMG, 2008). Organizations that develop systems for several
different domains may create a profile for each domain. Profiles
may specialize language semantics, provide new graphical icons
and domain-specific model libraries. When creating profiles, it is
not allowed to change language semantics; normally profiles may
only specialize and extend semantics and notations.

The basic SysML Requirements diagram is extended in this sec-
tion. The purpose is to try to address the identified shortcomings
presented in Table 1. The first extension is performed by cre-
ating stereotypes of stereotypes, in which case they are named
sub-stereotypes (Section 7.2). Sub-stereotypes are similar to class
inheritance in UML: they inherit any properties of their super-
stereotypes, and add their own. These stereotypes are used to
express the different types of user requirements proposed in Sec-
tion 7.1. The second extension is to add properties besides the two
default ones (Id and Text) (Section 7.3). The third extension is about
grouping related requirements (Section 7.4). The last extension is
to extend the SysML Table to provide requirements in a tabular
format (Section 7.5).

7.1. User requirements classification

A common classification proposed for requirements in the
literature is based on the level of abstraction, in which require-
ments are classified as functional or non-functional (Robertson
and Robertson, 2006). Functional requirements describe the ser-
vices that the system should provide, including the behavior of the
system in particular situations. Non-functional requirements are
related to emergent system properties such as safety, reliability
and response time. These properties cannot be attributed to a single
system component. Rather, they emerge as a result of integrating
system components. Non-functional requirements are also consid-
ered quality requirements, and are fundamental to determine the
success of a system.

A table of contents of a requirements specification with the
following requirements items: external interfaces, functions, per-
formance, logical database, design constraints, and software system
attributes, is suggested in IEEE (1998). For the sake of simplicity, and as some of the items can be considered non-functional requirements (performance, design constraints and software system attributes) or functional requirements (logical database), the second classification used in this article (after user vs. system requirements) is as follows (Soares and Vrancken, 2008a):

Functional: describes what the system should do, how the system should react to particular inputs, and how the system should behave in particular situations (the functionalities).

Non-functional: are related to emergent system properties, such as reliability and performance. These requirements do not have simple yes/no satisfaction criteria. Instead, it must be determined whether a non-functional requirement has been satisfied.

External: a detailed description of all inputs into and outputs from the software system, such as system, user, hardware, software and communication interfaces. It is an important classification to decompose the system into subsystems, helping in the identification of system architecture.

Fig. 3. Extension to SysML Requirements diagram using the proposed user requirements classifications.

7.2. Types of requirements

Stereotypes are the main mechanism used to create profiles and extensions to the SysML metamodel. A stereotype extends a metaclass or another stereotype. Well-known examples of stereotypes for the UML metamodel are the classes control, entity and boundary, each one with its own graphical icon. When used in a class diagram, these stereotypes improve semantics for the diagram readers.

According to the classification proposed in Section 7.1, three requirements stereotypes are proposed: functional, non-functional and external interface (Fig. 3). The non-functional and external interface requirements have the property “type” that may have several tagged values. Examples of possible values are Performance, Security and Efficiency for non-functional requirements, and User, Hardware, Software and Communication for external interface requirements.
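By analogy, the sub-stereotypes can be pictured as subclasses that inherit the default Id and Text properties and add a type tagged value. The Python sketch below is only that analogy, not a SysML profile definition, and the sample requirement texts other than TM5 are invented.

from dataclasses import dataclass

@dataclass
class Requirement:                      # plays the role of the requirement stereotype
    id: str
    text: str

@dataclass
class FunctionalRequirement(Requirement):
    pass                                # no extra tagged values

@dataclass
class NonFunctionalRequirement(Requirement):
    type: str = "Performance"           # e.g. Performance, Security, Efficiency

@dataclass
class ExternalInterfaceRequirement(Requirement):
    type: str = "User"                  # e.g. User, Hardware, Software, Communication

reqs = [
    FunctionalRequirement("TM5", "Region-wide traffic management"),
    NonFunctionalRequirement("NF1", "React to detector input quickly", "Performance"),
    ExternalInterfaceRequirement("EX1", "Exchange data with roadside units", "Communication"),
]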

7.3. Additional properties

Properties add information to elements of the model, and are normally associated to tagged values encoded as strings. Tagged values add extra semantics to a model element. Constraints may also be used as semantic restrictions applied to elements. One example of a constraint is the association of the “xor constraint” specifying a restriction (exclusive or).

The Id and Text properties are default to the requirements diagram. As an addition, the following properties are proposed: Risk, Source, Priority, Responsible, Version/Date, and Relationship. These additional properties are not mandatory and may appear in any order. The requirements engineer may use all of them, some or just the original properties. The following paragraphs suggest a number of tagged values to be attached to each property, and also an explanation of each new property.

A risk is an uncertain event or condition that, if it occurs, has a positive or negative effect on a project's objectives (PMI, 2008). The Risk property is related to the requirement's risks. There are at least two important values to be added that concern risks: the probability of the risk becoming real and the effects of its occurrence. The proposed extension attaches a tuple R = {P, I} to each requirement, in which P indicates the probability and I the impact of the effects of that risk occurring. The suggested values for P are: very low, low, moderate, high and very high. The suggested values for I are: insignificant, tolerable, serious, very serious or catastrophic. Numeric values can also be assigned, but may lead to confusion. The combination of both values can be used as input to strategies to manage project risks.
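A minimal sketch of the Risk property as a (P, I) tuple with the value scales suggested above; the risk_label helper and its thresholds are invented here to show how labels such as colors could be derived, and are not part of the proposal.

P_SCALE = ["very low", "low", "moderate", "high", "very high"]
I_SCALE = ["insignificant", "tolerable", "serious", "very serious", "catastrophic"]

def risk(probability, impact):
    """Build the R = {P, I} tuple, validating against the suggested scales."""
    assert probability in P_SCALE and impact in I_SCALE
    return (probability, impact)

def risk_label(r):
    """Hypothetical coloring rule for contingency planning."""
    p, i = r
    score = P_SCALE.index(p) + I_SCALE.index(i)
    return "red" if score >= 6 else "yellow" if score >= 3 else "green"

some_risk = risk("high", "serious")   # attached to one requirement
print(risk_label(some_risk))          # yellow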

If the requirement is derived from another requirement, it is
useful to know its source. The source property describes where the
derived requirement originated. This information is important to
trace requirements during system life cycle development.

One approach to prevent future problems, such as delays, is to
create tables considering risk, probabilities of occurrence, priority
and impact. Then, labels can be given to each requirement, as for
instance, colors, which visually help managers to know more about
the requirements. Better contingency plans can be created and
also special attention given to critical requirements (for instance,
improved testing, inspections or the use of formal methods and
tools).

According to the PMBOK (PMI, 2008), knowing which require-
ments have high priority is useful for risk analysis and during
system development. Prioritizing requirements gives an indication of the order in which requirements should be addressed.
A review of requirements prioritization techniques can be found
in (Greer, 2005). Some recommendations on how to prioritize
requirements (or triage) can be found in (Davis, 2003). A well-
performed prioritization provides better system release planning,
based on balancing importance vs. effort. Ranking assignment is the
simplest prioritization technique (Greer, 2005). Basically, it consists
of dividing requirements into groups, giving to each requirement
a label, such as (critical, standard, optional) or the MoSCoW labels.
The number of groups may vary, but within a group, all require-
ments have the same priority.
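Ranking assignment, as described above, amounts to partitioning requirements into ordered groups that share a priority label. The sketch below does this with a dictionary; the particular requirement-to-label assignments are invented for the example.

# Ranking assignment: every requirement within a group has the same priority.
PRIORITY_GROUPS = ["critical", "standard", "optional"]   # or the MoSCoW labels

priorities = {
    "TM5": "critical",
    "TM6": "critical",
    "TM9": "standard",
    "TM12": "optional",
}

def by_priority(priorities):
    """Group requirement Ids by their priority label, in ranking order."""
    groups = {label: [] for label in PRIORITY_GROUPS}
    for rid, label in priorities.items():
        groups[label].append(rid)
    return groups

print(by_priority(priorities))
# {'critical': ['TM5', 'TM6'], 'standard': ['TM9'], 'optional': ['TM12']}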

At least the main stakeholder directly responsible for the
requirement should be known. In case there is more than one
responsible stakeholder, the choices are to write all of them, or
just write the most important. This information is represented in
the responsible property.

The requirements version is useful to show if the require-
ment was changed. This property is fundamental, as uncontrolled
changes are a source of problems in Requirements Engineer-
ing. In addition to the version, the date of creation/change is
added.

In order to improve the activity of tracing requirements to
design models, a property that relates the specific requirement to
models of the design is added. Identifying and maintaining traces
between requirements and design are considered important activ-
ities in Requirements Engineering (Sahraoui, 2005).

The resulting SysML Requirement with the proposed extensions
is depicted in Fig. 4.
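Gathering the proposed additions, a requirement record then carries the two default properties plus the six optional ones. The dataclass below is a plain-data sketch of that shape, assuming string-valued tags; the field types and the example values are illustrative and are not taken from the SysML profile itself.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ExtendedRequirement:
    # default SysML properties
    id: str
    text: str
    # proposed additional properties (all optional)
    risk: Optional[Tuple[str, str]] = None     # (probability, impact)
    source: Optional[str] = None               # requirement it was derived from
    priority: Optional[str] = None             # e.g. critical / standard / optional
    responsible: Optional[str] = None          # main stakeholder
    version_date: Optional[str] = None         # e.g. "1.2 / 2010-06-20"
    relationship: List[str] = field(default_factory=list)  # related design models

example = ExtendedRequirement(
    id="TM8",
    text="Gather and interpret traffic information",
    risk=("high", "serious"),
    source="TM9",
    priority="critical",
    responsible="Traffic Manager",
)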

7.4. Grouping requirements

By modeling requirements with SysML, system complexity
is addressed from the early system design activities. Managing
decomposition is a crucial task in order to deal with complexity.
Requirements may be decomposed into atomic requirements, and
may later even be related in the sense that together they are capable of delivering a whole feature, i.e., they are responsible for a
well-defined subsystem.

SysML Requirements may be part of other SysML Requirements, as a hierarchy (Soares and Vrancken, 2008a). Related SysML Requirements can be grouped into a single SysML Requirements sub-package (similar to the UML Package diagram, which combines several class diagrams), creating categories of requirements (Fig. 5).

Fig. 4. Extension to the SysML Requirements diagram.

Fig. 5. Grouping requirements.
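By analogy, grouping related requirements into a sub-package can be sketched as a named container of requirement Ids; the first package name is taken from the case study in Section 8, but the grouping itself is invented for this illustration.

packages = {
    "Region-wide traffic management": ["TM5", "TM6"],
    "Simulation and data access": ["TM9", "TM10", "TM11"],
}

def package_of(requirement_id, packages):
    """Find which category (sub-package) a requirement belongs to, if any."""
    for name, members in packages.items():
        if requirement_id in members:
            return name
    return None

print(package_of("TM6", packages))   # Region-wide traffic management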

7.5. Extension to the SysML Table

Table 3 shows an example of requirements data expressed in a tabular format. The proposed table shows the requirement Id, the name of the requirement, to which requirement it is related (if any), the type of relationship and the requirement type. This allows an agile way to identify, prioritize and trace requirements. As a matter of fact, whenever a requirement is changed or deleted, the SysML Requirements Relationship Tables (SRRT) are useful to show that this can affect other requirements.

Table 3
SysML Requirements relationship table (columns: Id, Name, RelatesTo, RelatesHow, Type).
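As an illustration of the tabular format, the sketch below flattens a set of requirement relationships into rows with the five proposed columns. The row data mirrors Table 7 from the case study in Section 8; the rendering helper itself is invented.

# Rows of a SysML Requirements Relationship Table (SRRT):
# Id, Name, RelatesTo, RelatesHow, Type.
rows = [
    ("TM7", "Task/scenario frames", "{TM5, TM6}", "trace", "Functional"),
    ("TM8", "Gather/interpret info.", "TM9", "deriveReqt", "External"),
    ("TM13", "Object task/scenario", "TM12", "deriveReqt", "Functional"),
]

def render_srrt(rows):
    """Render the SRRT as plain text, one relationship per line."""
    header = ("Id", "Name", "RelatesTo", "RelatesHow", "Type")
    widths = [max(len(str(r[i])) for r in [header, *rows]) for i in range(5)]
    lines = ["  ".join(str(c).ljust(w) for c, w in zip(r, widths))
             for r in [header, *rows]]
    return "\n".join(lines)

print(render_srrt(rows))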

8. Case study: RTMS user requirements modeling with SysML

In this section, a modeling approach is applied to model the list of user requirements presented in Section 2.

From the Software Engineering point of view, the development and maintenance of RTMS is a challenge due to many factors. The most important ones are mentioned as follows. First, requirements are frequently changed. The major reason is that the area of road network control is still largely uncharted territory (Vrancken and Soares, 2010). Thus, algorithms, techniques and methods are frequently being developed by traffic engineers. In addition, policies for transportation are often being changed as well (Eurostat, 2006).

8.1. SysML Requirements diagrams

The associated SysML Requirements diagram for the list of user requirements is depicted in Fig. 6. For the sake of simplicity, not all properties are included.

Fig. 6. SysML Requirements diagram for Traffic Management Stakeholders.

8.2. SysML Requirements tables

Tables 4–6 show SysML Requirements tables expressing hierarchy for requirements TM4, TM7, and TM9.

Table 4
Hierarchy requirements table—TM4.

Id    Name                            Type
TM5   Region-wide traffic management  Functional
TM6   Traffic flow managed optimally  Functional

Table 5
Hierarchy requirements table—TM7.

Id    Name                          Type
TM9   Simulation analytical models  Functional
TM12  Wide range tasks scenarios    Functional

Table 6
Hierarchy requirements table—TM9.

Id    Name                     Type
TM10  Access statistical data  Functional
TM11  Access transient data    Functional

The other proposed type of table (SRRT), relating requirements and their relationships for each SysML Requirements diagram, is presented in Table 7.

Table 7
SysML Requirements relationship table for TM.

Id    Name                    RelatesTo   RelatesHow  Type
TM7   Task/scenario frames    {TM5, TM6}  trace       Functional
TM8   Gather/interpret info.  TM9         deriveReqt  External
TM13  Object task/scenario    TM12        deriveReqt  Functional

8.3. SysML Use Case diagrams

The associated Use Case diagram concerning the Traffic Manager is depicted in Fig. 7.

Fig. 7. Use Case diagram for Traffic Manager.

8.4. Relationship between Use Cases and SysML Requirements diagram

The SysML refine relationship can be used to relate requirements to other SysML models (Soares and Vrancken, 2008a). For example, the requirements sub-package representing requirements TM5 and TM6 can be associated by the refine relationship to the Use Case Manage region-wide traffic flow, which means that the requirements are represented by the Use Case. Fig. 8 shows this example. Later, this Use Case can be detailed by including other Use Cases and relationships, or even by using other SysML diagrams, such as the Sequence diagram. As a result, one knows which Sequence diagram models a specific SysML Requirement, which narrows the gap between requirements modeling and software design.

Fig. 8. Example of the refine relationship.

9. Discussion

The list of desirable properties of requirements specification, shown in Table 1, is used again, this time to evaluate the approach proposed in this article (see Table 8). In the list, SRDE stands for the extended version of the SysML Requirements diagram, and SRRT for an extended version of SysML Tables used for Requirements Engineering activities.

Table 8
List of requirements properties and representation techniques.
[For each property below, the original table marks whether it is addressed by SRDE and by SRRT.]

(M) Graphical modeling
(M) Human readable
(M) Independent towards methodology
(M) Relationship between requirements
(M) Relationship requirements/design
(M) Requirements risks
(M) Identify types of requirements
(M) Priority between requirements
(M) Non-functional requirements
(M) Grouping related requirements
(M) Consistency
(M) Modifiable
(M) Ranking requirements by stability
(S) Solve ambiguity
(S) Well-defined semantics
(S) Machine readable
(S) Correctness
(S) Completeness
(S) Verifiable
(S) Traceable
(S) Type of relationship requirements

From the list, it is clear that the proposed user classification in Section 7.1 and the extensions in Section 7 fulfill almost all the properties identified in Table 1. The partially fulfilled properties, “Well-defined semantics” and “Solve ambiguity”, are not fully fulfilled even when the extended SysML Requirements diagram and SysML Tables are used. These properties are solvable by increasing the formality of the modeling language, i.e., by using formal methods. However, when using formal methods, other properties, such as “human readable”, may be lost.

In IEEE (1998), a list of characteristics that are expected for a software requirements document is given. To finalize the evaluation, how each of these characteristics is addressed by the approach presented in this article is briefly presented as follows:

Correctness: According to IEEE (1998), no technique can ensure correctness. However, the SysML Requirements diagram provides the possibility of relating requirements to other design models, making it easier for the user to determine whether the SRS correctly reflects the actual needs.

Unambiguity: Ambiguity can be solved with the use of formal methods (Hinchey et al., 2008). The issue is that natural language is ambiguous, but unavoidable in the early phases of Requirements Engineering.

Completeness: The proposed types of requirements are well described by the extensions proposed for the SysML Requirements diagram and tables. Thus, all types of requirements can be modeled.

Consistency: Conflicts between requirements can be discovered by explicitly describing their relationships, and the type of each relationship. In addition, by grouping related user requirements, conflicts within a group of requirements and between groups can be discovered.

Ranked by importance: Typically, not all requirements are equally important. The approach presented in this article fulfills this characteristic by adding two properties to the basic SysML Requirements diagram: Risk and Priority.

Ranked by stability: Stability can be expressed in terms of the number of expected/performed changes to any requirement. This is addressed in this article by controlling the version and date of a requirement, through the additional property Version/Date.

Verifiable: As ambiguity is not solved with the application of SysML, this characteristic is not fully present. However, the advantage of using SysML is the possibility of relating SysML Requirements to formal design models that can be formally verified.

Modifiable: The requirements document is modifiable if its structure and style are such that any changes to requirements can be made completely and consistently, while retaining the structure and style. Expressing each requirement separately is highly desirable. This characteristic is addressed in this article by modeling requirements using a well-defined SysML Requirements diagram, and by organizing the relationships between requirements.

Traceable: A requirement is traceable if its origin is clear and if it is possible to refer to it in future development. The solution proposed in this article is to create SysML Tables expressing the relationships between requirements and other design models.

The approach proposed in this article uses two SysML diagrams and SysML Tables. This is necessary because multiple aspects of user requirements modeling are covered, which is useful as multiple stakeholders are involved. Thus, the SysML Use Case diagram provides a system-level view of functional requirements and actors, delimiting the system scope. Requirements relationships and properties are graphically represented using the SysML Requirements diagram, and SysML Tables give a tabular format for requirements.

As UML in general and Use Cases in particular do not support goal-oriented modeling (Moody et al., 2010), future research will focus on comparing the approach proposed in this article with other techniques based on goal-oriented modeling, such as i* (Yu, 1997) and KAOS (Dardenne et al., 1993). Both techniques support graphical modeling of requirements. The KAOS graphical notation is less complex and easier to use than i* and focuses more on the late requirements phase (Quartel et al., 2009).

Goal-oriented modeling has been enthusiastically embraced by the Requirements Engineering research community but has so far had negligible impact on practice (Moody et al., 2010). As SysML is a UML profile, and UML is currently the de facto modeling language for software-intensive systems, SysML already has an advantage at least in terms of potential use.

UML and SysML present additional advantages over the graphical notation i*. Unlike i*, which lacks explicit design rationale for its graphical conventions, SysML is well-defined. SysML is conformant to an official metametamodel, the MOF (OMG, 2006), while i* semantic constructs and grammatical rules are defined using natural language (Moody et al., 2010), which leads to problems of inconsistency, ambiguity, and incompleteness.

10. Conclusions

It is essential to have properly structured and controlled requirements specifications that are consistent and understandable by stakeholders. This is addressed in this article by presenting an approach to model and analyze a list of user requirements using the SysML Requirements diagram, the SysML Table, and the SysML Use Case diagram.

As usual in system development, changes in requirements are likely to happen, and using the SysML Requirements diagram is useful for developers to manage these changes. For instance, when a stakeholder asks for a change in one specific requirement, using the many relationship types that describe traceability between models helps to uncover possible impacts in other models. The relationships are also useful to aid in requirements prioritization in order to decide which requirements should be included in a certain system release. Another advantage of using the SysML Requirements diagram is to standardize the way of specifying requirements through a defined semantics. As a direct consequence, SysML allows the representation of requirements as model elements.

In this article, a classification of user requirements is proposed. Then, the SysML Requirements diagram is introduced and the requirements relationships are detailed. SysML Tables are useful to represent decomposition in a tabular form and to improve traceability, which is an important quality factor when designing software-intensive systems. The SysML Requirements diagram is extended with new stereotypes including the proposed classification, which distinguish requirements as functional, non-functional or external. Some properties not presented in the original SysML Requirements diagram are added in order to represent important requirements characteristics. These properties were chosen based on an extensive literature review.

Finally, requirements are important to determine the architecture. For instance, external requirements help in delimiting the system context in relation with its environment. When designing the architecture, at least part of the functional requirements should be known. In addition, the non-functional requirements that the architecture has to conform with, such as portability, performance, and other quality attributes (security, modifiability), should be made explicit. Domain architecture and software architecture are topics for future research.

References

Abran, A., Bourque, P., Dupuis, R., Moore, J.W., Tripp, L.L. (Eds.), 2004. Guide to the Software Engineering Body of Knowledge—SWEBOK, 2004 version. IEEE Press, Piscataway, NJ, USA.

Almendros-Jimenez, J.M., Iribarne, L., 2005. Describing Use Cases with Activity diagrams. In: Proceedings of the Metainformatics Symposium, pp. 141–159.

Almendros-Jimenez, J.M., Iribarne, L., 2007. Describing Use-Case relationships with sequence diagrams. Computer Journal 50, 116–128.

ANSI/IEEE, 2000. ANSI/IEEE Std 1471 Recommended Practice for Architectural Description of Software-Intensive Systems.

AVV, 2006. Auditing on RTMS. Private document made by Adviesdienst Verkeer en Vervoer.

Beck, K., 1999. Extreme Programming Explained: Embrace Change. Addison–Wesley Professional, Boston, MA, USA.

Berry, D.M., 2004. The inevitable pain of software development: why there is no silver bullet. In: Radical Innovations of Software and Systems Engineering in the Future, Lecture Notes in Computer Science, pp. 50–74.

Boehm, B.W., 1973. Software and its impact: a quantitative assessment. Datamation 19, 48–59.

Brooks, F.P., 1987. No silver bullet: essence and accidents of software engineering. Computer 20, 10–19.

Carlshamre, P., Sandahl, K., Lindvall, M., Regnell, B., Natt och Dag, J., 2001. An Industrial Survey of Requirements Interdependencies in Software Product Release Planning.

Cooper, K., Ito, M., 2002. Formalizing a structured natural language requirements
specification notation. In: Proceedings of the International Council on Systems
Engineering Symposium, Las Vegas, NV, USA, pp. 1–8.

Damian, D., Zowghi, D., Vaidyanathasamy, L., Pal, Y., 2004. An industrial case study
of immediate benefits of requirements engineering process improvement at the
Australian Center for Unisys Software. Empirical Software Engineering 9, 45–75.

Dardenne, A., van Lamsweerde, A., Fickas, S., 1993. Goal-directed requirements
acquisition. Science of Computer Programming 20, 3–50.

Davis, A.M., 2003. The art of requirements triage. Computer 36, 42–49.
Dedrick, J., Gurbaxani, V., Kraemer, K.L., 2003. Information technology and economic performance: a critical review of the empirical evidence. ACM Computing Surveys 35, 1–28.

Dijkstra, E.W., 2002. Cooperating Sequential Processes. Springer-Verlag, New York,
NY, USA, pp. 65–138.

Eurostat, 2006. Keep Europe Moving—Sustainable Mobility for our Continent. Mid-term Review of the European Commission's 2001 Transport White Paper, Technical Report. European Commission—Directorate General Energy and Transport. Last accessed on the 22nd of June, 2010.

van Genuchten, M., 1991. Why is software late? an empirical study of reasons for
delay in software development. IEEE Transactions on Software Engineering 17,
582–590.

Gomaa, H., 2000. Designing Concurrent Distributed and Real-Time Applications with
UML. Addison–Wesley, Boston, MA, USA.

Gotel, O.C.Z., Finkelstein, C.W., 1994. An analysis of the requirements traceability
problem. In: International Conference on Requirements Engineering.

Greer, D., 2005. Requirements Engineering for Sociotechnical Systems. IdeaGroup,
London, UK.

Hall, T., Beecham, S., Rainer, A., 2002. Requirements problems in twelve companies:
an empirical analysis. IEE Proceedings for Software 149, 153–160.

Hinchey, M., Jackson, M., Cousot, P., Cook, B., Bowen, J.P., Margaria, T., 2008. Software
engineering and formal methods. Communications of the ACM 51, 54–59.

Hofmann, H.F., Lehner, F., 2001. Requirements engineering as a success factor in software projects. IEEE Software 18, 58–66.

IEEE, 1998. IEEE Recommended Practice for Software Requirements Specifications,
Technical Report.

Jacobson, I., 1992. Object-Oriented Software Engineering: A Use Case Driven
Approach. Addison–Wesley Professional, Reading, MA, USA.

Jacobson, I., 2004. Use cases—yesterday, today, and tomorrow. Software and System
Modeling 3, 210–220.

Juristo, N., Moreno, A.M., Silva, A., 2002. Is the European industry moving toward solving requirements engineering problems? IEEE Software 19, 70–77.

Kamsties, E., 2005. Understanding ambiguity in requirements engineering. In:
Aurum, A., Wohlin, C. (Eds.), Engineering and Managing Software Requirements.
Springer-Verlag, Berlin, Germany, pp. 245–266.

Komi-Sirviö, S., Tihinen, M., 2003. Great challenges and opportunities of distributed
software development—an industrial survey. In: Proceedings of the Fifteenth
International Conference on Software Engineering & Knowledge Engineering
(SEKE’2003), pp. 489–496.

Laplante, P.A., 2004. Real-Time Systems Design and Analysis, 3rd ed. John Wiley & Sons, Hoboken, NJ.

Luisa, M., Mariangela, F., Pierluigi, I., 2004. Market research for requirements analysis
using linguistic tools. Requirements Engineering 9, 40–56.

Lutz, R.R., 1993. Analyzing software requirements errors in safety-critical embedded
systems. In: Proceedings of the IEEE International Symposium on Requirements
Engineering, pp. 126–133.

Minor, O., Armarego, J., 2005. Requirements engineering: a close look at industry needs and model curricula. Australian Journal of Information Systems 13, 192–208.

Moody, D.L., Heymans, P., Matulevičius, R., 2010. Visual syntax does matter: improving the cognitive effectiveness of the i* visual notation. Requirements Engineering 15, 141–175.

OMG, 2006. Meta-Object Facility (MOF) Core Specification—Version 2.0.
OMG, 2008. Systems Modeling Language (SysML)—Version 1.1.
Page, V., Dixon, M., Bielkowicz, P., 2003. Object-Oriented Graceful Evolution Monitors. In: Lecture Notes in Computer Science, vol. 2817. Springer, pp. 46–59.

Parviainen, P., Tihinen, M., Lormans, M., van Solingen, R., 2004. Requirements Engi-
neering: Dealing with the Complexity of Sociotechnical Systems Development.
IdeaGroup Inc, pp. 1–20.

PMI, 2008. A Guide to the Project Management Body of Knowledge, 4th ed. PMI,
Pennsylvania, USA.

Pressman, R.S., 2009. Software Engineering: A Practitioner’s Approach. McGraw-Hill,
Inc., New York, NY, USA.

Quartel, D., Engelsman, W., Jonkers, H., Van Sinderen, M., 2009. A goal-oriented
requirements modelling language for enterprise architecture. In: EDOC’09: Pro-
ceedings of the 13th IEEE International Conference on Enterprise Distributed
Object Computing. IEEE Press, Piscataway, NJ, USA, pp. 1–11.

Ramesh, B., Jarke, M., 2001. Toward reference models for requirements traceability.
IEEE Transactions on Software Engineering 27, 58–93.

Robertson, S., Robertson, J., 2006. Mastering the Requirements Process, 2nd ed. Addison–Wesley Professional, New York, NY, USA.

Sahraoui, A.E.K., 2005. Requirements traceability issues: generic model, method-
ology and formal basis. International Journal of Information Technology and
Decision Making 4, 59–80.

Simons, A.J.H., 1999. Use cases considered harmful. In: TOOLS 99: Proceedings of the
Technology of Object-Oriented Languages and Systems, pp. 194–203.

Soares, M.S., Vrancken, J., 2007. Requirements specification and modeling through SysML. In: Proceedings of the 2007 IEEE International Conference on Systems, Man and Cybernetics, pp. 1735–1740.

Soares, M.S., Vrancken, J., 2008a. Model-driven user requirements specification using SysML. Journal of Software 3, 57–68.

Soares, M.S., Vrancken, J., 2008b. Responsive traffic signals designed with time Petri nets. In: Proceedings of the 2008 IEEE International Conference on Systems, Man and Cybernetics, pp. 1942–1947.

Sommerville, I., 2010. Software Engineering, 9th ed. Addison–Wesley, Essex, UK.

The Standish Group, 2003. CHAOS Chronicles v3.0, Technical Report. The Standish Group. Last accessed on the 20th of June, 2010.

Tiako, P.F., 2008. Designing Software-Intensive Systems: Methods and Principles, 1st ed. IGI Global, Hershey, NY, USA.

Vrancken, J., Soares, M.S., 2010. Intelligent Road Network Control. Springer, The Netherlands, pp. 311–325.

Wirsing, M., Banâtre, J.P., Hölzl, M.M., Rauschmayer, A. (Eds.), 2008. Software-Intensive Systems and New Computing Paradigms—Challenges and Visions, vol. 5380 of Lecture Notes in Computer Science. Springer.

Yu, E.S., 1997. Towards modelling and reasoning support for early-phase requirements engineering. In: Proceedings of the International Symposium on Requirements Engineering, Annapolis, MD, pp. 226–235.

Michel dos Santos Soares obtained a BSc degree in Computer Science from the Federal University of São Carlos, Brazil, in 2000, a MSc degree in Computer Science from the Federal University of Uberlândia, Brazil, in 2004, and a PhD in Software Engineering from the Delft University of Technology, in Delft, The Netherlands. Since 2010 he has been an Assistant Professor at the Federal University of Uberlândia, Brazil. His research interests include Modeling and Analysis of Software-Intensive Systems, Software Architecture, Requirements Engineering, and Software Quality.

Jos Vrancken obtained a master's degree in Mathematics from the University of Utrecht, in 1982, and a PhD degree in Computer Science from the University of Amsterdam in 1991. From 1991, he was employed by the Dutch government as a systems architect in the fields of road traffic and water management. Since 2003, he has been an Assistant Professor at Delft University of Technology in the use of IT in the design, maintenance and exploitation of infrastructures, with emphasis on the road and data communications infrastructures.

Alexander Verbraeck is a Full Professor of Systems and Simulation at the Fac-
ulty of Technology, Policy and Management of Delft University of Technology, The
Netherlands. In addition, he is part-time Research Professor in Supply Chain Man-
agement at the R.H. Smith School of Business of the University of Maryland in the
USA. He has a BS in Mathematics, and an MSc and PhD in Computer Science, all from
Delft University of Technology. His research focuses on discrete-event simulation,
serious gaming and training, logistics and transportation, and project management.
He is a member of the Center for Project Management at TU Delft, and he is heavily
involved in project management research and training for industry. As a co-director of TU Delft's serious gaming institute, he researches novel applications of interactive
simulation and virtual worlds. Many of his multi-year research projects have been
funded by national programs and by the European Union. His applied research in
simulation and gaming has been funded by many organizations. He presented over
100 refereed papers at conferences, wrote close to 20 book chapters, and published
his work in a number of international journals.

