Help: essay

 

  1. List up to three ideas discussed that were new to you.
  2. Identify anything that was unclear in the paper or that you didn’t understand (regarding modularization, not the application used in the example).
  3. List any ideas presented that you disagree with (and why).
  4. In one sentence, state the main point made by the authors in this article.


Getting What You Measure

DOI: 10.1145/2209249.2209266
Article development led by queue.acm.org

Four common pitfalls in using software metrics for project management.

By Eric Bouwers, Joost Visser, and Arie van Deursen

Are software metrics helpful tools or a waste of time?

For every developer who treasures these mathematical abstractions of software systems, there is a developer who thinks software metrics are invented just to keep project managers busy. Software metrics can be very powerful tools that help you achieve your goals, but it is important to use them correctly: they also have the power to demotivate project teams and steer development in the wrong direction.

For the past 11 years, the Software Improvement Group has advised hundreds of organizations concerning software development and risk management on the basis of software metrics. We have used software metrics in more than 200 investigations in which we examined a single snapshot of a system. Additionally, we use software metrics to track the ongoing development effort of more than 400 systems. While executing these projects, we have learned some pitfalls to avoid when using software metrics in a project management setting. This article addresses the four most important of these:

- Metric in a bubble;
- Treating the metric;
- One-track metric; and
- Metrics galore.

Knowing about these pitfalls will help you recognize them and, hopefully, avoid them, which ultimately leads to making your project successful. As a software engineer, your knowledge of these pitfalls helps you understand why project managers want to use software metrics and helps you assist the managers when they are applying metrics in an inefficient manner. As an outside consultant, you need to take the pitfalls into account when presenting advice and proposing actions. Finally, if you are doing research in the area of software metrics, knowing these pitfalls will help place your new metric in the right context when presenting it to practitioners. Before diving into the pitfalls, let’s look at why software metrics can be considered a useful tool.

Software Metrics Steer People
“You get what you measure.” This phrase definitely applies to software project teams. No matter what you define as a metric, as soon as it is used to evaluate a team, the value of the metric moves toward the desired value. Thus, to reach a particular goal, you can continuously measure properties of the desired goal and plot these measurements in a place visible to the team. Ideally, the desired goal is plotted alongside the current measurement to indicate the distance to the goal.

Imagine a project in which the runtime performance of a particular use case is of critical importance. In this case it helps to create a test in which the execution time of the use case is measured daily. By plotting this daily data point against the desired value, and making sure the team sees this measurement, it becomes clear to everyone whether the desired target is being met or whether the development actions of yesterday are leading the team away from the goal.
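
To make this concrete, here is a minimal sketch of such a daily check in Java; it is not from the article, and the method name runCriticalUseCase and the 10-second budget are invented placeholders for whatever scenario and target apply:

import java.time.Duration;
import java.time.Instant;

// Minimal sketch: time a critical use case once and compare the
// result against the target, producing one data point per run.
public class UseCasePerformanceCheck {

    // Illustrative target, echoing the article's example goal of
    // "all pages should load within 10 seconds."
    private static final Duration TARGET = Duration.ofSeconds(10);

    public static void main(String[] args) {
        Instant start = Instant.now();
        runCriticalUseCase(); // the scenario under measurement
        Duration elapsed = Duration.between(start, Instant.now());

        // A daily job can append this line to a log and plot the
        // trend next to the target, as the article suggests.
        System.out.printf("elapsed=%dms target=%dms onTrack=%b%n",
                elapsed.toMillis(), TARGET.toMillis(),
                elapsed.compareTo(TARGET) <= 0);
    }

    private static void runCriticalUseCase() {
        // Placeholder for the real use case being exercised.
    }
}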



Even though it might seem simple, this technique can be applied incorrectly in a number of subtle ways. For example, imagine a situation in which customers are unhappy because they report problems in a product that are not solved in a timely manner. To improve customer satisfaction, the project team tracks the average resolution time for issues in a release, following the reasoning that a lower average resolution time results in higher customer satisfaction.
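
The metric itself is easy to compute. As a sketch (the Issue record is a hypothetical stand-in for whatever the issue tracker provides; nothing here is prescribed by the article):

import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Average time between an issue being reported and being resolved.
// Requires Java 16+ for records.
record Issue(Instant reported, Instant resolved) {}

class ResolutionTimeMetric {
    static Duration averageResolutionTime(List<Issue> resolvedIssues) {
        long avgMillis = (long) resolvedIssues.stream()
                .mapToLong(i -> Duration.between(i.reported(), i.resolved()).toMillis())
                .average()
                .orElse(0); // empty release: report zero
        return Duration.ofMillis(avgMillis);
    }
}

As the next paragraph argues, driving this number down does not by itself make customers happier.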

Unfortunately, reality is not so simple. To start, solving issues faster might lead to unwanted side effects—for example, a quick fix now could result in longer fix times later because of incurred technical debt. Second, solving an issue within days does not help the customer if these fixes are released only once a year. Finally, customers are undoubtedly more satisfied when no fix is required at all—that is, issues do not end up in the product in the first place.

Thus, using a metric allows you to steer toward a goal, which can be either a high-level business proposition (“the costs of maintaining this system should not exceed $100,000 per year”) or more technically oriented (“all pages should load within 10 seconds”). Unfortunately, using metrics can also prevent you from reaching the desired goal, depending on the pitfalls encountered. In the remainder of this article, we discuss some of the pitfalls we frequently encountered and explain how they can be recognized and avoided.

What Does the Metric Mean?
Software metrics can be measured on different views of a software system. This article focuses on metrics calculated on a particular version of the code base of a system, but the pitfalls also apply to metrics calculated on other views.

Assuming the code base contains only the code of the current project, software product metrics establish a ground truth. Calculating only the metrics is not enough, however. Two more actions are needed to interpret the value of the metric: adding context and establishing the relationship with the goal.

To illustrate these points, we use the LOC (lines of code) metric to provide details about the current size of a project. Even though there are multiple definitions of what constitutes a line of code, such a metric can be used to reason about whether the examined code base is complete or contains extraneous code such as copied-in libraries. To do this, however, the metric should be placed in context, bringing us to our first pitfall.

[Figure 1. The lines of code of a software system from January 2010 to July 2011.]

[Figure 2. Measuring lines of code in two different ways.]

[Figure 3. Measuring the number of files used.]

Metric in a bubble. Using a metric without proper interpretation. Recognized by not being able to explain what a given value of a metric means. Can be solved by placing the metric inside a context with respect to a goal.

The usefulness of a single data point of a metric is limited. Knowing that a system is 100,000 LOC is meaningless by itself, since the number alone does not explain if the system is large or small. To be useful, the value of the metric should, for example, be compared against data points taken from the history of the project or from a benchmark of other projects. In the first scenario, you can discover trends that should be explained by external events. For example, the graph in Figure 1 shows the LOC of a software system from January 2010 to July 2011.

The first question that comes to mind here is: “Why did the size of the system drop so much in July 2010?” If the answer to this question is, “We removed a lot of open source code we copied in earlier,” then there is no problem (other than the inclusion of this code in the first place). If the answer is, “We accidentally deleted part of our code base,” then it might be wise to introduce a different process of source-code version management. In this case the answer is that an action was scheduled to drastically reduce the amount of configuration needed; given the amount of code that was removed, this action was apparently successful.

Note that one of the benefits of placing metrics in context is that it allows you to focus on the important part of the graph. Questions regarding what happened at a certain point in time or why the value significantly deviates from other systems become more important than the specific details about how the metric is measured. Often people, either on purpose or by accident, try to steer a discussion toward “How is this metric measured?” instead of “What do these data points tell me?” In most cases the exact construction of a metric is not important for the conclusion drawn from the data. For example, consider the three plots shown in Figures 2 and 3 representing different ways of computing the volume of a system. Figure 2 shows the lines of code counted as every line containing at least one character that is not a comment or white space (blue) and lines of code counted as all newline characters (orange). Figure 3 shows the number of files used.
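
As a rough sketch of the two counting rules behind Figure 2 (the article gives no implementation, and real comment handling is language-specific; the whole-line // heuristic below is an invented simplification):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Two LOC definitions applied to the same file. Both are crude, but
// as the trend lines show, either works for tracking volume as long
// as it is applied consistently.
class LocCounter {

    // Definition 1: lines containing at least one character that is
    // not white space and not part of a (whole-line) comment.
    static long codeLines(Path file) throws IOException {
        return Files.readAllLines(file).stream()
                .map(String::strip)
                .filter(s -> !s.isEmpty() && !s.startsWith("//"))
                .count();
    }

    // Definition 2: simply count every line (all newline characters).
    static long physicalLines(Path file) throws IOException {
        return Files.readAllLines(file).size();
    }
}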

The trend lines indicate that, even though the scale differs, these volume metrics all show the same events. This means that each of these metrics is a good candidate to compare the volume of a system against other systems. As long as the volume of the other systems is measured in the same manner, the conclusions drawn from the data will be very similar.

The different trend lines bring up a second question: “Why does the volume decrease after a period in which the volume increased?” The answer can be found in the normal way in which alterations are made to this particular system. When the volume of the system increases, an action is scheduled to determine whether new abstractions are possible, which is usually the case. This type of refactoring can significantly decrease the size of the code base, which results in lower maintenance effort and easier ways to add functionality to the system. Thus, the goal here is to reduce maintenance effort by (among other things) keeping the size of the code base relatively small.

In the ideal situation a direct relationship exists between a desired goal (such as reduced maintenance effort) and a metric (such as a small code base). In some cases this relationship is based on informal reasoning (for example, when the code base of a system is small, it is easier to analyze what the system does); in other cases scientific research has shown that the relationship exists. What is important here is that you determine both the nature of the relationship between the metric and the goal (direct/indirect) and the strength of this relationship (informal reasoning/empirically validated).


Thus, a metric in isolation will not help you reach your goal. On the other hand, assigning too much meaning to a metric leads to a different pitfall.

Treating the metric. Making alterations just to improve the value of a metric. Recognized when changes made to the software are purely cosmetic. Can be solved by determining the root cause of the value of a metric.

The most common pitfall is making changes to a system just to improve the value of a metric, instead of trying to reach a particular goal. At this point, the value of the metric has become a goal in itself, instead of a means of reaching a larger goal. This situation leads to refactorings that simply “please the metric,” which is a waste of precious resources. You know this has happened when, for example, one developer explains to another developer that a refactoring needs to be done because “the duplication percentage is too high,” instead of explaining that multiple copies of a piece of code can cause problems for maintaining the code later on. It is never a problem that the value of a metric is too high or too low: the fact that this value is not in line with your goal should be the reason to perform a refactoring.

Consider a project in which the number of parameters for methods is high compared with a benchmark. When a method has a relatively large number of parameters (for example, more than seven), it can indicate that this method is implementing different functionalities. Splitting the method into smaller methods would make it easier to understand each function separately.

A second problem that could be surfacing through this metric is the lack of a grouping of related data objects. For example, consider a method that takes as parameters a Date object called startDate and another called endDate. The names suggest that these two parameters together form a DatePeriod object in which startDate will need to be before endDate. When multiple methods take these two parameters as input, introducing such a DatePeriod object to make this explicit in the model could be beneficial, reducing both future maintenance effort and the number of parameters being passed to methods.
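
A minimal sketch of that refactoring, assuming the Date, startDate, and endDate names from the example (the validation and the Report class are illustrative additions):

import java.util.Date;

// Group the two related parameters into an explicit concept. The
// invariant (start before end) now lives in one place instead of
// being implicit at every call site.
class DatePeriod {
    private final Date startDate;
    private final Date endDate;

    DatePeriod(Date startDate, Date endDate) {
        if (!startDate.before(endDate)) {
            throw new IllegalArgumentException("startDate must be before endDate");
        }
        this.startDate = startDate;
        this.endDate = endDate;
    }

    Date startDate() { return startDate; }
    Date endDate() { return endDate; }
}

class Report {
    // Before: void generate(Date startDate, Date endDate)
    // After: one parameter carrying both values and the invariant.
    void generate(DatePeriod period) {
        // ... use period.startDate() and period.endDate() ...
    }
}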

Sometimes, however, parameters are, for example, moved to the fields of the surrounding class or replaced by a map in which a (String, Object) pair represents the different parameters. Although both strategies reduce the number of parameters inside methods, it is clear that if the goal is to improve readability and reduce future maintenance effort, then these solutions are not helping. It could be that this type of refactoring is done because the developers simply do not understand the goal and thus are treating the symptoms. There are also situations, however, in which these non-goal-oriented refactorings are done to game the system. In both situations it is important to make the developers aware of the underlying goals to ensure that effort is spent wisely.
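
For contrast, a sketch of the map-based “improvement” the article warns about (the method and key names are invented): the parameter count drops, but the signature no longer says what the method needs, and mistakes surface only at runtime:

import java.util.Date;
import java.util.Map;

class ReportGamed {
    // Treating the metric: fewer parameters on paper, yet callers
    // lose all compile-time checking of names and types.
    void generate(Map<String, Object> params) {
        Date start = (Date) params.get("startDate"); // typo-prone key
        Date end = (Date) params.get("endDate");     // unchecked cast
        // ... proceed as before, with less readable, less safe code ...
    }
}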

Thus, a metric should never be used as-is, but should be placed inside a context that enables a meaningful comparison. Additionally, the relationship between the metric and the desired property of your goal should be clear; this enables you to use the metric to schedule specific actions that will help reach your goal. Make sure the scheduled actions are targeted toward reaching the underlying goal instead of only improving the value of the metric.

How Many Metrics Do You Need?
Each metric provides a specific viewpoint of your system. Therefore, combining multiple metrics leads to a balanced overview of the current state of your system. The number of metrics to be used leads to two pitfalls; we start with using only a single metric.

One-track metric. Focusing on only a single metric. Recognized by seeing only one (or just a few) metrics on display. Can be solved by adding metrics relevant to the goal.

Using only a single software metric to measure whether you are on track toward your goal reduces that goal to a single dimension (that is, the metric that is currently being measured). A goal is never one dimensional, however. Software projects experience constant trade-offs between delivering desired functionality and nonfunctional requirements such as security, performance, scalability, and maintainability. Therefore, multiple metrics are necessary to ensure that your goal, including specified trade-offs, is reached. For example, a small code base might be easier to analyze, but if this code base is made of highly complex code, then it can still be difficult to make changes.

In addition to providing a more balanced view of your goal, using multiple metrics also assists you in finding the root cause of a problem. A single metric usually shows only a single symptom, while a combination of metrics can help diagnose the actual disease within a project.

For example, in one project the equals and hashCode methods (those used to implement equality for objects in Java) were among the longest and most complex methods within the system. Additionally, a relatively large percentage of duplication occurred in these methods. Since they use all the fields of a class, the metrics indicated that multiple classes had a relatively large number of fields that were also duplicated. Based on this observation, we reasoned that the duplicated fields formed an object that was missing from the model. In this case we advised looking into the model of the system to determine whether extending the model with a new object would be beneficial.

In this example, examining the metrics in isolation would not have led to this conclusion, but by combining several unit-level metrics, we were able to detect a design flaw.
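
The shape of that flaw can be illustrated with an invented example: when the same group of fields recurs in several classes, every equals and hashCode repeats it, and extracting the missing object removes the duplication:

import java.util.Objects;

// Invented illustration: an address-like field group recurs across
// classes, so each equals/hashCode grows long and is duplicated.
class Customer {
    String name;
    String street, city, postalCode, country; // duplicated group

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Customer)) return false;
        Customer c = (Customer) o;
        return Objects.equals(name, c.name)
                && Objects.equals(street, c.street)
                && Objects.equals(city, c.city)
                && Objects.equals(postalCode, c.postalCode)
                && Objects.equals(country, c.country);
    }

    @Override public int hashCode() {
        return Objects.hash(name, street, city, postalCode, country);
    }
}

// Extracting the missing object defines equality for the group once
// and shortens equals/hashCode everywhere the group occurred.
class Address {
    String street, city, postalCode, country;
    // equals and hashCode for these four fields live here, once.
}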

Metrics galore. Focusing on too many metrics. Recognized when the team ignores all metrics. Can be solved by reducing the number of metrics used.

Although using a single metric oversimplifies the goal, using too many metrics makes it difficult (or even impossible) to reach your goal. Apart from making it hard to find the right balance among a large set of metrics, it is not motivating for a team to see that every change they make results in the decline of at least one metric. Additionally, when the value of a metric is far off the desired goal, then a team can start to think, “We will never get there, anyway,” and simply ignore the metrics altogether.

For example, there have been multiple projects that deployed a static-analysis tool without critically examining the default configuration.


When the tool in question contains, for example, a check that flags the use of a tab character instead of spaces, the first run of the tool can report an enormous number of violations for each check (running into the hundreds of thousands). Without proper interpretation of this number, it is easy to conclude that reaching zero violations cannot be done within any reasonable amount of time (even though some problems can easily be solved by a simple formatting action). Such an incorrect assessment sometimes results in the tool being considered useless by the team, which then decides to ignore the tool.

Fortunately, in other cases the team adapts the configuration to suit the specific situation by limiting the number of checks (for example, by removing checks that measure highly related properties, can be solved automatically, or are not related to the current goals) and instantiating proper default values. By using such a specific configuration, the tool reports a lower number of violations that can be fixed in a reasonable amount of time.

To ensure all violations are fixed eventually, the configuration can be extended to include other types of checks or more strict versions of checks. This will increase the total number of violations found, but when done correctly the number of reported violations does not demotivate the developers too much. This process can be repeated to extend the set of checks slowly toward all desired checks without overwhelming the developers with a large number of violations at once.
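
The gating logic behind such an incremental rollout is simple. As a sketch (the Violation record and check names are invented; real tools have their own configuration formats), only violations from the currently enabled checks are reported, and the set grows per iteration:

import java.util.List;
import java.util.Set;

// Requires Java 16+ for records and Stream.toList().
record Violation(String check, String file, int line) {}

class CheckGate {
    // Iteration 1 might enable only checks tied to current goals;
    // later iterations add more (or stricter) checks.
    private final Set<String> enabledChecks;

    CheckGate(Set<String> enabledChecks) {
        this.enabledChecks = enabledChecks;
    }

    List<Violation> report(List<Violation> allViolations) {
        return allViolations.stream()
                .filter(v -> enabledChecks.contains(v.check()))
                .toList();
    }
}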

Conclusion
Software metrics are useful tools for project managers and developers alike. To benefit from the full potential of metrics, keep the following recommendations in mind:

- Attach meaning to each metric by placing it in context and defining the relationship between the metric and your goal, while at the same time avoiding making the metric a goal in itself.
- Use multiple metrics to track different dimensions of your goal, but avoid demotivating a team by using too many metrics.

If you are already using metrics in your daily work, try to link them to specific goals. If you are not using any metrics at this time but would like to see their effects, we suggest you start small: define a small goal (methods should be simple to understand for new personnel); define a small set of metrics (for example, length and complexity of methods); define a target measurement (at least 90% of the code should be simple); and install a tool that can measure the metric. Communicate both the goal and the trend of the metric to your colleagues and experience the influence of metrics.

Eric Bouwers (e.bouwers@sig.eu) is a software engineer and technical consultant at the Software Improvement Group in Amsterdam, The Netherlands. He is a part-time Ph.D. student at Delft University of Technology. He is interested in how software metrics can assist in quantifying the architectural aspects of software quality.

Joost Visser (j.visser@sig.eu) is head of research at the Software Improvement Group in Amsterdam, The Netherlands, where he is responsible for innovation of tools and services, academic relations, and general research. He also holds a part-time position as professor of large-scale software systems at the Radboud University Nijmegen, The Netherlands.

Arie van Deursen (Arie.vanDeursen@tudelft.nl) is a full professor in software engineering at Delft University of Technology, The Netherlands, where he leads the Software Engineering Research Group. His research topics include software testing, software architecture, and collaborative software development.

© 2012 ACM 0001-0782/12/07 $15.00
