Discussion Participation Scoring Guide
Due Date: Weekly.
Percentage of Course Grade: 30%.
Discussion Participation Grading Rubric

Criterion: Applies relevant course concepts, theories, or materials correctly.
- Non-performance: Does not explain relevant course concepts, theories, or materials.
- Basic: Explains relevant course concepts, theories, or materials.
- Proficient: Applies relevant course concepts, theories, or materials correctly.
- Distinguished: Analyzes course concepts, theories, or materials correctly, using examples or supporting evidence.

Criterion: Collaborates with fellow learners, relating the discussion to relevant course concepts.
- Non-performance: Does not collaborate with fellow learners.
- Basic: Collaborates with fellow learners without relating the discussion to the relevant course concepts.
- Proficient: Collaborates with fellow learners, relating the discussion to relevant course concepts.
- Distinguished: Collaborates with fellow learners, relating the discussion to relevant course concepts and extending the dialogue.

Criterion: Applies relevant professional, personal, or other real-world experiences.
- Non-performance: Does not contribute professional, personal, or other real-world experiences.
- Basic: Contributes professional, personal, or other real-world experiences, but lacks relevance.
- Proficient: Applies relevant professional, personal, or other real-world experiences.
- Distinguished: Applies relevant professional, personal, or other real-world experiences to extend the dialogue.

Criterion: Supports position with applicable knowledge.
- Non-performance: Does not establish relevant position.
- Basic: Establishes relevant position.
- Proficient: Supports position with applicable knowledge.
- Distinguished: Validates position with applicable knowledge.
Participation Guidelines
Actively participate in discussions. To do this, you should create a substantive post for each of the discussion topics. Each post should demonstrate your achievement of the participation criteria. In addition, you should respond to the posts of at least two of your fellow learners for each discussion question, unless the discussion instructions state otherwise. These responses to other learners should also be substantive posts that contribute to the conversation by asking questions, respectfully debating positions, and presenting supporting information relevant to the topic. Also, respond to any follow-up questions the instructor directs to you in the discussion area.
To allow other learners time to respond, you are encouraged to post your initial responses in the discussion area by midweek. Comments on other learners' posts are due by Sunday at 11:59 p.m. (Central time zone).
Evaluation of building performance for strategic facilities management in healthcare
A case study of a public hospital in Australia
Yuhainis Talib, Priyadarsini Rajagopalan and Rebecca Jing Yang
Deakin University, Geelong, Australia
Received 8 June 2012; revised 26 September 2012 and 13 November 2012; accepted 18 November 2012
Facilities, Vol. 31 No. 13/14, 2013, pp. 681-701. DOI 10.1108/f-06-2012-0042
Abstract
Purpose – The purpose of this paper is to evaluate the main elements of building performance, namely building function, building impact and building quality, in order to promote strategic facilities management in healthcare organisations and improve core (health) business activities.
Design/methodology/approach – Based on currently available toolkits, a questionnaire was issued to healthcare users (staff) in a public hospital about their level of agreement in relation to these elements. Statistical analysis was conducted to regroup the elements. The regrouped elements and their inter-relationships were used to develop a framework for measuring building performance in healthcare buildings.
Findings – The analysis helped to clarify the understanding and agreement of users in an Australian healthcare organisation with regard to building performance. Based on the survey results, the items were regrouped into 11 new elements under three groups. These regrouped elements will be used to develop a reliable framework for measuring the performance of Australian healthcare buildings.
Originality/value – Currently there is no building performance toolkit available for Australian healthcare organisations. The framework developed in this paper will provide healthcare organisations with a reliable performance tool for their buildings, and this will promote strategic facilities management.
Keywords Facilities management, Building performance, Healthcare building, Hospitals, Australia
Paper type Research paper
Introduction
The study of building performance efficiency is regarded as an effective way to identify the essentials for strategic facilities management (Shohet, 2006; Atkin and Brooks, 2009; Noor and Pitt, 2009). These authors consider performance measurement a central and necessary tool in evaluating the efficiency of facilities management. A number of studies (Wauters, 2005; Boussabaine and Kirkham, 2006; Shohet, 2006) identified the importance of the current status of building performance in supporting core activities. However, many critical problems of facilities management service delivery remain unidentified because benchmarking is not undertaken periodically. Amaratunga and Baldry (2002) urged that performance measurement be meticulous in a way that is practical to facilities management (FM) operations and techniques, otherwise it becomes superficial.
Building performance can be evaluated in terms of three components, namely building functionality, building impact and build quality. Building functionality deals with how well the building serves its primary purposes and the extent to which it facilitates the activities of the people using it. Functionality of the built environment can be achieved by integrating people, place, process and technology (International Facilities Management Association, 1986). Building impact is the extent to which the building creates a sense of place and contributes positively to the lives of those who use it (BREEAM, 2010). Build quality is the fitness of the building for its designated performance. Building quality can be understood in terms of the "economics of the design quality" or the "economics of the conformance quality" (Kazaz and Birgonul, 2005).
To identify the essentials of building performance, a toolkit called the Achieving Excellence Design Evaluation Toolkit (AEDET) was developed by the United Kingdom National Health Service (UK NHS). This toolkit is used for a comprehensive evaluation of the design of healthcare environments. It is one of the Building Research Establishment Environmental Assessment Method (BREEAM) toolkits developed by the Department of Health, United Kingdom. BREEAM replaced the NHS Environment Assessment Tool (NEAT), which was considered to be below the credible standard for the NHS as a public sector body.
A review of the literature on healthcare buildings suggests that there are numerous building performance assessment types, such as space (Alalouch and Aspinall, 2007), social networks (Boyer et al., 2010), and performance of facilities management (Draper and Hill, 1996; Featherstone and Baldry, 2000; Okoroh et al., 2002; Edum-Fotwe et al., 2003; Okoroh et al., 2003; Duckett and Ward, 2008). A few studies have been conducted on post occupancy evaluation (POE), covering service delivery (Brackertz and Kenley, 2002), integrated FM (Shohet and Lavy, 2004), building maintenance (Lavy and Shohet, 2007b), data employment (Boussabaine and Kirkham, 2006), computer-aided maintenance (Madritsch, 2009), the strategic decision-making process (Lavy and Shohet, 2007a) and climate change (Carthey et al., 2009).
However, no comprehensive approach has been developed to explore the efficiency of healthcare building performance in regard to quality, impact and function. This is crucial, as facilities management effectiveness cannot be achieved if the building is not performing well. Understanding and periodic assessment of building performance can promote strategic facilities management in healthcare organizations and improve core (health) business activities. In Australia, there is no authority in charge of periodically reviewing public hospital building performance, and no standard benchmark has been developed to evaluate building performance. This paper evaluates the elements affecting building function, building impact and building quality. Based on currently available toolkits, a questionnaire was issued to healthcare staff about their level of agreement in relation to the three main elements of building performance, namely building function, building impact and building quality.
Literature review
Building performance evaluation
Good buildings should be adaptive, durable, energy efficient and habitable (Douglas, 1996). Therefore, many researchers have proposed various measurement methods to evaluate the performance of buildings and related factors. It is crucial to examine the similarities and differences in the performance of buildings with regard to functionality, impact and quality, based on the different types of occupants within the organization. Douglas (1996) also concludes that building performance should be considered a potential "success factor" by facilities managers. In addition, objectives should be periodically reviewed hand in hand with performance measurement (PM) in order to ensure validity. Amaratunga and Baldry (2002) advised that long-term PM is needed for effectiveness. Explicit statements of performance requirements and effective performance management can support the changing needs of the building (Robathan, 1996). PM is crucial in commercial buildings such as healthcare facilities, as they have the highest rates of change in terms of design compared to institutional and residential buildings (Douglas, 1996), and good performance is critical to their success in meeting the requirements of their designated use. Chan et al. (2001) report that performance measurement in healthcare is subject to sensitive users' requirements and to the ability of the engineering systems to perform, due to the dynamic and complex nature of the different healthcare areas supported. A valid performance measurement would enhance the facilities management operation. Lavy and Shohet (2007a) developed a model called the Integrated Healthcare Facility Management Model (IHFMM) for measuring the effectiveness and efficiency of performance and the operations of facilities.
In order to ensure excellent facilities, performance measurement is a vital tool; yet an integrated performance measurement system suitable for different types of organisations has yet to be developed (Zairi and Sinclair, 1995), as has a tool for balancing multiple measures across multiple levels (Hlavacka et al., 2001). Moreover, periodic review by experts may be crucial to ensure the validity and continuation of PM. Such reviews are particularly important when organizations merge, enlarge or undergo extension or renovation works to their buildings.
Performance measurement in healthcare
The review of the literature suggests that building performance benchmarking is crucial to the successful implementation of strategic facilities management in healthcare. Seven key performance indicators (KPIs) for Australian hospitals were found to be business oriented, with none covering physical performance (Pullen (2000), as cited in Shohet, 2003). It seems that few investigations of building performance preferences in healthcare buildings have been undertaken. However, KPIs provide real-time maintenance and predictable physical performance (Shohet, 2003) with a good impact on higher-level organizational objectives (Pati et al., 2010). Efficiency in maintenance will also enable strategic-level decision making in FM (Lavy and Shohet, 2009) and space adaptability (Harvey Jr and Pati, 2008).
In a study about KPIs, Shohet (2006) evaluated 11 KPIs covering the development, organization and management of healthcare building maintenance efficiency parameters. The KPIs are useful in providing a guideline for a strategic facility plan for healthcare facilities. Ornstein et al. (2009) evaluated the performance of physical accessibility and fire safety using post occupancy evaluation (POE). It was found that healthcare facilities have certain characteristics that are intrinsic to their use, and careful planning is therefore required if the Pre-Design Evaluation (PDE) and the POE are to be applied correctly.
Carthey et al. (2009) explored the impact of climate-related extreme weather events on healthcare infrastructure in New South Wales. The performance of the healthcare infrastructure with regard to climate change was measured using the Risk and Opportunity Management framework (ROMS), providing a potential indication of future needs in the event of further climate change. This research found that there is a crucial need to integrate disaster planning and management strategies in order to equip healthcare services in Australia to face extreme weather events. Periodic evaluation of building performance elements such as building quality, functionality and impact may be beneficial and would lead to significant savings on maintenance costs related to climate change.
May and Clark (2009) studied patient experiences in the NHS (UK) in promoting the Agenda for Change. The study involved maintenance, engineering, building, gardening and general office estates management and found that, overall, estates staff did consider their job/service to be important to the patient experience. However, senior staff appeared to have more confidence in their contribution to the patient experience.
Building performance measurement in facilities management
FM divisions must implement rigorous and disciplined measurement (Rogers, 2004). However, unclear objectives arise when external benchmarking does not fit internal organizational processes (Tucker and Pitt, 2009). FM is not only about reducing the running or maintenance costs of buildings, but about the business as a whole. Tucker and Pitt (2009) believed that the scarcity of a holistic approach to FM in strategic customer performance measurement may cause ineffective application of performance management. However, with effective utilisation of all corporate resources, the FM function has emerged as an important corporate discipline (Edum-Fotwe et al., 2003). Goyal and Pitt (2007) suggest that innovation in FM is an enabler, adding value to the organization. FM has streamlined a core-focused approach to integrated service management. Integrated FM tends to produce its own innovative solutions in line with changes in business organizations. On the other hand, Tucker and Pitt (2009) perceived PM tools not only to apply to FM but also to establish strategic business processes that should be embedded into organizational business culture. However, these researchers commented that the accessibility of FM benchmarks within any organization remains scarce.
Lavy and Shohet (2007a) developed the Integrated Healthcare Facility Management Model (IHFMM) to integrate the tactical and strategic decision-making processes from a life-cycle perspective. The authors claimed that the integration between the different parameters of the facility had not yet been thoroughly researched, and as such their study focused on the identification of the principal variables affecting the performance and maintenance of facilities throughout their service life. Lavy and Shohet (2007b) also found that the annual maintenance expenditure in built facilities depends significantly on the type of environment the facility is located in, on the occupancy of the facility and on its actual service life. That study concluded that a standard allocation of resources for maintenance is not adequate unless it is based on the age of the building, its occupancy and its environment.
The literature shows that there is no "building performance" toolkit available for Australian healthcare (Department of Health and Ageing, 2011). The only tool available specifically for healthcare buildings is the AEDET toolkit from BREEAM. This building performance measurement tool, developed in the UK, takes into consideration the complexity of healthcare design, which is difficult to measure and evaluate, and is used to assist in measuring and managing the design of healthcare facilities with regard to building performance elements such as functionality, impact and build quality. The toolkit enables the user to evaluate a design by posing a series of clear, non-technical statements encompassing the three key areas of impact, build quality and functionality (Department of Health, 2005).
However, this toolkit may not be applicable to Australian healthcare services due to the different climatic conditions and the adaptive reuse of buildings as healthcare facilities. This study hypothesizes that the existing toolkit from the UK NHS is not suitable for building performance evaluation in public healthcare buildings in Australia; hence the existing toolkit will have to be adapted to suit Australian healthcare buildings and FM operation.
Research methods
To fill the gaps mentioned above, this study employed the AEDET tool for performance evaluation. The objective was to evaluate healthcare users' level of agreement with three dimensions, namely functionality, impact and quality. A public hospital under one of the healthcare organisations in Victoria, Australia was selected as a case study. This organization had never undertaken performance measurement, particularly of the built environment.
The organization studied is one of the large regional health providers in Victoria, serving over 500,000 people through the efforts of over 6,000 staff across 21 sites. In addition to serving the needs of the permanent population, the organization also provides care to visitors to the region, which in peak seasons can expand the population by over 70 percent. A total of 1,016 beds and health services cover the full spectrum from emergency and acute care to mental health, primary care, community services, aged care and sub-acute/rehabilitation. Figure 1 shows the public healthcare organisation chart, which distinguishes the core and non-core divisions.
As a pilot study, a questionnaire was designed based on the AEDET toolkit. The questionnaire was tested within the non-core and core divisions, comprising 11 departments in the public hospital and involving 11 executive directors (100 percent response rate), to check its applicability with respect to the nature of healthcare in Australia. With the use of Figure 1, selected staff from top management participated in the pilot study. Table I shows the main elements and sub-elements of the questionnaire. The staff were asked to rate the degree of relevance and level of importance of each of the building performance criteria. The findings from this survey assist in developing new categories of building performance that can be used for benchmarking. The descriptive analysis in Table I shows that the toolkit is well understood and relevant, as the mean relevance and acceptance scores of all items are well above 3.00. Subsequently, the main questionnaire survey was sent to 225 staff at various levels of the core and non-core services. The number of responses ranged from 166 to 192, equivalent to a 74-85 percent response rate (Tables III-V).
Data analysis
Table II describes the three statistical methods used. The research adopts the procedures used in Yeung et al. (2007) and Yang et al. (2009).
Ranking of building performance elements
The analysis of the survey response data produced the means for the three main elements of building performance, namely functionality, impact and quality. The rankings and Kendall's coefficients of concordance for building functionality, building impact and built quality are shown in Tables III, IV and V respectively.
Figure 1. Public healthcare organisation (core and non-core division)
Table I. AEDET sample on the healthcare organisation (pilot survey)

Main elements (58 items in total) and sub-elements, with number of items (n), relevance mean (SD) and acceptance mean (SD):

Building Functionality (20 items)
- Space (n = 6): relevance 3.17 (SD 0.934); acceptance 4.23 (SD 0.522)
- Use (n = 7): relevance 4.15 (SD 0.405); acceptance 4.18 (SD 0.522)
- Access (n = 7): relevance 4.11 (SD 0.467); acceptance 3.22 (SD 0.688)

Building Impact (22 items)
- Character and innovation (n = 5): relevance 3.16 (SD 0.674); acceptance 3.31 (SD 1.136)
- Form and material (n = 5): relevance 3.22 (SD 1.079); acceptance 3.89 (SD 0.647)
- Staff environment (n = 8): relevance 4.56 (SD 0.522); acceptance 3.45 (SD 1.036)
- Urban and social integration (n = 4): relevance 3.36 (SD 0.894); acceptance 3.56 (SD 1.027)

Build Quality (16 items)
- Performance (n = 4): relevance 4.35 (SD 0.522); acceptance 4.12 (SD 0.522)
- Engineering (n = 5): relevance 4.08 (SD 0.522); acceptance 4.77 (SD 0.688)
- Construction (n = 7): relevance 4.69 (SD 0.647); acceptance 4.61 (SD 0.505)
Table II. Statistical descriptive analysis methods

Method: Mean value
Purpose: To identify the level of agreement in terms of building functionality, impact and quality
Outcome: Ranking of building performance elements

Method: Kendall's coefficient
Purpose: To obtain the general agreement level on the ranking
Outcome: Achieving level of significance

Method: Factor analysis
Purpose: To acquire a more suitable grouping with regard to healthcare buildings
Outcome: Revise and regroup based on the level of agreement in building performance; this framework is to be used at a later stage to further evaluate the current state of healthcare building performance
The analysis using mean values found that building functionality items ranged from 2.12 to 3.27 and building impact items from 2.31 to 3.42. Built quality items ranged from 2.75 to 3.27, which indicates that more than half of the overall items are performing well. The highest-ranked items were "Access1" for building functionality, "UnSI1" (urban and social integration) for building impact and "Eng4" (engineering) for built quality. For building functionality this was "There is good access from and within the building to another building" with a mean of 3.27; for building impact it was "The height, volume and skyline of the building relate well to the surrounding environment" with a mean of 3.42; and for built quality it was "There are emergency backup systems that are designed to minimise disruption" with a mean of 3.27. From Table III, the six highest-ranked elements all concern access to the building. For building impact (Table IV), the highest-ranked elements include all of the items under "urban and social integration". This is due to the vast range of healthcare users, which often includes the general public. The lowest-scored building impact item overall was "FnM3" (2.31), concerning heat absorption in summer and heat retention in winter, while among the environment items the lowest was "Env7" (2.36), covering facilities for staff. Table V shows that more than half the items (ten out of 16) are performing well, spanning all sub-topics, namely construction, engineering and performance; the items scoring less than three are quite technical and involve the building's future issues, although these items scored close to 3.
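The mean-and-rank computation behind Tables III-V is straightforward to reproduce. The following sketch is illustrative only: the item codes are taken from the tables, but the response values are hypothetical stand-ins, since the raw survey data are not published.

```python
# Illustrative sketch of the Tables III-V ranking step: compute each item's
# mean Likert score (1 = strongly disagree, 5 = strongly agree) and rank the
# items in descending order of the mean. The response data are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "Access1": [4, 3, 3, 4, 2, 4],
    "Space2":  [3, 3, 4, 2, 3, 3],
    "Use5":    [2, 1, 3, 2, 2, 3],
})

means = responses.mean().sort_values(ascending=False)
ranks = means.rank(ascending=False, method="first").astype(int)

print(pd.DataFrame({"Mean": means.round(2), "Rank": ranks}))
```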
In order to examine whether the respondents ranked the building performance items in a similar order, Kendall's coefficient of concordance was calculated. Respondents' rankings of the level of performance are identical if the coefficient of concordance is equal to 1, and totally different if it is equal to 0 (Yeung et al., 2007). The Kendall's coefficients of concordance for the rankings in Tables III, IV and V are 0.111, 0.101 and 0.060 respectively, all statistically significant at the 1 percent level. This suggests that there was general agreement among respondents on ranking the performance; that is, the respondents shared similar views about the relative importance of these performance items.
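Kendall's W is not provided directly by most standard statistics libraries, but it follows from the item rank sums. A minimal sketch under stated assumptions (hypothetical rank data, no tie correction, large-sample chi-square approximation for significance):

```python
# Minimal sketch of Kendall's coefficient of concordance W.
import numpy as np
from scipy import stats

def kendalls_w(ranks: np.ndarray) -> float:
    """ranks: (m respondents, n items); each row is one respondent's
    ranking 1..n. Returns W in [0, 1] (1 = identical rankings)."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

rng = np.random.default_rng(1)
# 170 hypothetical respondents each ranking 20 items
ranks = np.array([rng.permutation(20) + 1 for _ in range(170)])
w = kendalls_w(ranks)
chi2 = 170 * (20 - 1) * w          # approx. chi-square with n - 1 df
p = stats.chi2.sf(chi2, 20 - 1)
print(f"W = {w:.3f}, chi2 = {chi2:.1f}, p = {p:.4f}")
```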
Factor analysis of the building performance elements
Factor analysis is a technique for exploring data and collating them into smaller, relevant groups. "It is a data exploration technique, it is then up to the writer to judge" (Pallant, 2011). According to Li et al. (2005), by identifying a small number of grouping factors using factor analysis, relationships among sets of many inter-related variables can be represented.
Table III. Ranking of the building functionality items

Rank 1. Access1 (mean 3.27): There is good access from and within the building to another building
Rank 2. Access3 (mean 3.26): The approach and access for ambulances is appropriately provided
Rank 3. Access4 (mean 3.15): Goods and waste disposal vehicle circulation is good and segregated from public and staff access where appropriate
Rank 4. Space2 (mean 3.11): The ratio of usable area against total area is acceptable
Rank 5. Access7 (mean 3.07): The fire planning strategy allows for ready access and egress
Rank 6. Access6 (mean 3.04): Outdoor spaces are provided with appropriate and safe lighting indicating paths, ramps and steps
Rank 7. Access5 (mean 2.98): Pedestrian access routes are obvious, pleasant and suitable for wheelchair users and people with other disabilities
Rank 8. Space4 (mean 2.91): Any necessary isolation and segregation of spaces is achieved
Rank 9. Space1 (mean 2.90): Design of the building achieves appropriate space standards
Rank 10. Use1 (mean 2.88): Overall this building meets staff requirements
Rank 11. Space3 (mean 2.83): The circulation distances travelled are minimised by the layout
Rank 12. Space5 (mean 2.81): The space makes appropriate provision for gender segregation
Rank 13. Use2 (mean 2.80): The design facilitates the care model of the hospital
Rank 14. Use4 (mean 2.75): Work flows and logistics are arranged optimally
Rank 15. Use3 (mean 2.67): Overall the building is capable of handling the projected throughput
Rank 16. Use7 (mean 2.67): The layout facilitates both security and supervision
Rank 17. Use6 (mean 2.59): Where possible, spaces are standardised and flexible in use patterns
Rank 18. Space6 (mean 2.35): There is adequate storage space
Rank 19. Use5 (mean 2.25): The building is sufficiently adaptable to respond to change and to enable expansion
Rank 20. Access2 (mean 2.12): There is adequate access space for visitors and staff cars with appropriate provision for disabled people

Notes: Kendall's coefficient of concordance = 0.111; level of significance: 0.000. Mean scores: 1 = strongly disagree and 5 = strongly agree
To determine whether a data set is suitable for factor analysis, sample size and the strength of the relationships among the factors are two important considerations, with the minimum number of samples suggested as 150 (Pallant, 2011). In this study, the number of respondents ranged from 166 to 192. These numbers vary because of the online nature of the survey, where respondents could choose not to answer certain questions. A correlation matrix was then formed to measure the strength of the relationships among the factors. Most values in the correlation matrix are larger than 0.3.
Table IV. Ranking of the building impact items

Rank 1. UnSI1 (mean 3.42): The height, volume and skyline of the building relate well to the surrounding environment
Rank 2. Env8 (mean 3.34): Basic facilities (for example pantry, toilet, etc.) are within the reach of the staff
Rank 3. UnSI2 (mean 3.32): The building contributes positively to its locality
Rank 4. Env2 (mean 3.22): There are good views inside and to the outside of the building
Rank 5. UnSI3 (mean 3.19): The hard and soft landscape around/within the building contribute positively to the locality
Rank 6. UnSI4 (mean 3.10): The building looks pleasingly designed to neighbours and passers-by
Rank 7. FnM5 (mean 2.96): The external colour and textures seem appropriate and attractive
Rank 8. Env6 (mean 2.92): The interior of the building is attractive in appearance
Rank 9. FnM1 (mean 2.92): The building has a human scale and looks welcoming
Rank 10. CnI5 (mean 2.87): The layout of the building is likely to influence future designs
Rank 11. FnM4 (mean 2.86): The external materials and detailing appear to be of high quality
Rank 12. Env3 (mean 2.86): The signage is well understood and easy to follow
Rank 13. CnI4 (mean 2.82): The layout of the building appropriately expresses the values of the health care
Rank 14. Env5 (mean 2.80): The building is clearly understandable
Rank 15. CnI2 (mean 2.79): The layout of the building is interesting and portrays an image of the hospital
Rank 16. CnI3 (mean 2.78): The layout of the building represents a caring and reassuring image
Rank 17. CnI1 (mean 2.77): The layout of the building is well-designed for the clinical activities
Rank 18. FnM2 (mean 2.67): The design takes advantage of available sunlight and provides shelter from prevailing winds
Rank 19. Env1 (mean 2.66): The building allows for appropriate levels of privacy of the staff
Rank 20. Env4 (mean 2.66): There are high levels of both comfort and control of comfort
Rank 21. Env7 (mean 2.36): There are good facilities for staff, including convenient places to work and relax without being on demand
Rank 22. FnM3 (mean 2.31): The building allows minimum absorption of heat in summer and retains maximum heat in winter

Notes: Kendall's coefficient of concordance = 0.101; level of significance: 0.000. Mean scores: 1 = strongly disagree and 5 = strongly agree
Bartlett's test of sphericity is used to test the significance (p ≤ 0.05). The Kaiser-Meyer-Olkin (KMO) index shown in Table VI is 0.909 for building functionality, 0.921 for building impact and 0.891 for building quality. A KMO above 0.6 is recommended by Field (2009) as a test of whether the data are suitable for factor analysis. All three elements, i.e. building functionality, building impact and built quality, passed both tests (Table VI).
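Both suitability checks can be reproduced from the raw item responses. The sketch below is illustrative, not the authors' SPSS procedure: it computes Bartlett's statistic from the determinant of the correlation matrix and the overall KMO index from the partial correlations, using a hypothetical respondent-by-item matrix in place of the unpublished survey data.

```python
# Sketch of the two factorability checks: Bartlett's test of sphericity
# and the overall Kaiser-Meyer-Olkin (KMO) index. Data are hypothetical.
import numpy as np
from scipy import stats

def bartlett_sphericity(data: np.ndarray):
    """Chi-square, df and p-value of Bartlett's test on (n, p) data."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi_sq = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(corr))
    df = p * (p - 1) // 2
    return chi_sq, df, stats.chi2.sf(chi_sq, df)

def kmo(data: np.ndarray) -> float:
    """Overall KMO measure of sampling adequacy."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    # Partial correlations from the inverse of the correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    r2, u2 = (corr ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + u2)

rng = np.random.default_rng(0)
data = rng.normal(size=(180, 20))   # hypothetical 180 respondents x 20 items
chi_sq, df, p_value = bartlett_sphericity(data)
print(f"Bartlett chi2={chi_sq:.1f}, df={df}, p={p_value:.3f}; KMO={kmo(data):.3f}")
```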
Table V. Ranking of the built quality items

Rank 1. Eng4 (mean 3.27): There are emergency backup systems that are designed to minimise disruption
Rank 2. Const5 (mean 3.20): The construction is robust
Rank 3. Const1 (mean 3.19): If phased planning and construction are necessary the various stages are well organized
Rank 4. Eng5 (mean 3.17): During construction disruption to essential services is minimized
Rank 5. Const3 (mean 3.16): The impact of the building process on continuing healthcare provision is minimized
Rank 6. Const2 (mean 3.15): Temporary construction work is minimized
Rank 7. Perf3 (mean 3.13): The building has appropriately durable finishes (internal finishes)
Rank 8. Perf2 (mean 3.12): The building is easy to clean
Rank 9. Const4 (mean 3.11): The building can be readily maintained
Rank 10. Perf1 (mean 3.09): The building is easy to operate
Rank 11. Perf4 (mean 2.98): The building will weather and age well (external finishes)
Rank 12. Const6 (mean 2.96): The construction allows easy access to engineering systems for maintenance, replacement and expansion
Rank 13. Const7 (mean 2.95): The construction exploits any benefits from standardisation and prefabrication where relevant
Rank 14. Eng2 (mean 2.89): The engineering systems exploit any benefits from standardization and prefabrication where relevant
Rank 15. Eng1 (mean 2.78): The engineering systems are well designed, flexible and efficient in use
Rank 16. Eng3 (mean 2.75): The engineering systems are energy efficient

Notes: Kendall's coefficient of concordance = 0.060; level of significance: 0.000. Mean scores: 1 = strongly disagree and 5 = strongly agree
Table VI. Bartlett's test and KMO for Building Functionality, Building Impact and Building Quality

Building Functionality: Bartlett's test of sphericity approx. chi-square 1,737.237, df 190, sig. 0.000; KMO measure of sampling adequacy 0.909
Building Impact: Bartlett's test of sphericity approx. chi-square 2,386.090, df 231, sig. 0.000; KMO measure of sampling adequacy 0.921
Building Quality: Bartlett's test of sphericity approx. chi-square 1,441.502, df 120, sig. 0.000; KMO measure of sampling adequacy 0.891
In total, 11 regrouped components were produced based on varimax rotation of the principal components (Tables VII-IX). These grouped factors, each with an eigenvalue greater than 1.000, explain 55.97 percent of the variance for building functionality (Table VII), 63.98 percent of the variance for building impact (Table VIII) and 68.91 percent of the variance for built quality (Table IX). After the rotated component matrix was computed, each of the building performance items belonged to only one of the groupings, with most factor loadings exceeding 0.50 (Li et al., 2005; Aksorn and Hadikusumo, 2008; Norušis, 2010).
Table VII. Result of factor analysis for Building Functionality

Component 1: Functionality design (8 items); eigenvalue 8.020; 40.102% of variance
- Space2 (0.825), Space1 (0.773), Space4 (0.743), Use1 (0.654), Use2 (0.622), Use3 (0.611), Space3 (0.606), Space5 (0.568)

Component 2: Functionality utility (6 items); eigenvalue 2.011; 10.055% of variance
- Use5 (0.724), Use7 (0.637), Use4 (0.573), Access2 (0.573), Use6 (0.537), Space6 (0.508)

Component 3: Functionality access (6 items); eigenvalue 1.163; 5.813% of variance
- Access5 (0.744), Access4 (0.732), Access6 (0.722), Access3 (0.656), Access7 (0.570), Access1 (0.395)

Notes: Components were named based on the characteristics of the building performance (building functionality) items in each group; the meaning of the items is given in the list of building functionality items in Table III
The factor loadings of the items mostly exceed 0.50, except for the following items: Access1 in building functionality (0.395) and Env6 (0.419), Env2 (0.358) and Env5 (0.482) in building impact. None of the items in built quality is less than 0.50. The 58 items from the three elements are regrouped into 11 sub-elements, namely: Functionality design, Functionality utility, Functionality access, Impact outlook, Impact core activities, Impact facility, Impact future design, Quality building, Quality engineering, Quality performance and Quality energy. These names were given based on the characteristics of the items in each group (Tables III, IV and V).
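The extraction-and-rotation step reported in Tables VII-IX (principal components of the correlation matrix with eigenvalues above 1, varimax rotation, each item assigned to one component) can be sketched generically as follows. This is an illustrative implementation, not the authors' SPSS procedure, and the respondent-by-item matrix X is hypothetical.

```python
# Sketch of the regrouping step: principal components of the correlation
# matrix (Kaiser criterion, eigenvalue > 1), varimax rotation, and assignment
# of each item to the component with its largest absolute loading.
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Classic varimax rotation of a (p items, k factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(loadings.T @ (
            rotated ** 3 - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))))
        rotation = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):   # stop when the criterion stops improving
            break
        d = d_new
    return loadings @ rotation

def regroup(X):
    """X: (n respondents, p items). Returns rotated loadings, percentage of
    variance per kept component, and each item's component index."""
    corr = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0                              # Kaiser criterion
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    rotated = varimax(loadings)
    explained = 100.0 * eigvals[keep] / eigvals.sum()
    assignment = np.abs(rotated).argmax(axis=1)       # one group per item
    return rotated, explained, assignment

rng = np.random.default_rng(2)
X = rng.normal(size=(180, 16))                        # hypothetical data
rotated, explained, assignment = regroup(X)
print("variance explained per kept component:", explained.round(2))
print("item-to-component assignment:", assignment)
```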
1. Building functionality (component 1): functionality design. This component, which accounted for 40.10 percent (Table VII) of the total variance in building functionality, attracted relatively more agreement than the other two components. It indicates that healthcare users in the public hospital studied consider space design to be significant. To enhance space design, standard space, usable area, circulation provision, and isolation and segregation have to be assessed. In terms of use, requirements, the care model and the handling of the overall design have to be assessed too. The performance of the space design is important to achieving building functionality. These components are illustrated by Space1, Space2, Space3, Space4, Space5, Use1, Use2 and Use3. Various space design patterns of healthcare have been studied by Alalouch and Aspinall (2007): the Nightingale, corridor or continental, duplex or Nuffield, racetrack or double corridor, cruciform and radial ward patterns. The results show that there was a high level of agreement on the spatial attributes of the layout.
Table VIII. Result of factor analysis for Building Impact

Component 1: Impact outlook; eigenvalue 9.909; 45.041% of variance
- UnSI4 (0.827), UnSI3 (0.811), UnSI2 (0.810), UnSI1 (0.783), FnM5 (0.586), FnM1 (0.547)

Component 2: Impact core activities; eigenvalue 1.884; 8.565% of variance
- CnI4 (0.785), CnI3 (0.777), CnI1 (0.742), CnI2 (0.702), Env1 (0.562)

Component 3: Impact facility; eigenvalue 1.246; 5.662% of variance
- Env4 (0.661), Env8 (0.661), Env7 (0.609), Env3 (0.586), Env2 (0.358), Env6 (0.419)

Component 4: Impact future design; eigenvalue 1.037; 4.712% of variance
- FnM3 (0.649), CnI5 (0.627), FnM4 (0.611), FnM2 (0.569), Env5 (0.482)

Notes: Components were named based on the characteristics of the building performance (building impact) items in each group; the meaning of the items is given in the list of building impact items in Table IV
2. Building functionality (component 2): functionality utility. This component ranked second out of the three components (Table VII); the six items of this component relate to use, access and space. Use contributes four items: change and expansion (Use5), layout (Use7), flow (Use4) and pattern (Use6). For access and space, the items are access provision (Access2) and storage (Space6) respectively. The performance of functionality utility can be achieved if these items are given high consideration. Alalouch and Aspinall (2007) concluded that hospital design types have a systematic relationship to the spatial properties of the layout, showing that higher integration can reduce privacy through the effect of visible co-presence. The items in functionality utility are important in terms of the levels of integration and privacy.
3. Building functionality (component 3): functionality access. This component ranked last among the three components (Table VII). The items here are all about access. The sub-criteria cover access within the building (Access1), emergency access (Access3), disposal and segregated circulation (Access4), pedestrian access (Access5), outdoor spaces (Access6) and fire planning (Access7). The item "Access1", which scored below 0.50, was not removed from this group because it has the highest ranking amongst the building functionality items (Table III).
Table IX. Result of factor analysis for Built Quality

Component 1: Quality building (4 items); eigenvalue 7.106; 44.416% of variance
- Const2 (0.838), Const1 (0.790), Const3 (0.761), Eng5 (0.680)

Component 2: Quality engineering (4 items); eigenvalue 1.583; 9.895% of variance
- Const6 (0.813), Const5 (0.675), Const7 (0.670), Const4 (0.629)

Component 3: Quality performance (4 items); eigenvalue 1.192; 7.449% of variance
- Perf1 (0.820), Perf2 (0.811), Perf3 (0.709), Perf4 (0.531)

Component 4: Quality energy (4 items); eigenvalue 1.144; 7.150% of variance
- Eng1 (0.726), Eng2 (0.722), Eng4 (0.684), Eng3 (0.654)

Notes: Components were named based on the characteristics of the building performance (built quality) items in each group; the meaning of the items is given in the list of built quality items in Table V
This item is also related to the other items in the same group. These components are crucial to building functionality. Physical accessibility was one of the important features in an earlier study by Ornstein et al. (2009).
4. Building impact (component 1): impact outlook. This component, which accounted for 45.04 percent (Table VIII) of the total variance in building impact, was significantly more important than the other three components. After the factor analysis regrouping, this component was named impact outlook. These items are assessed to ensure performance in the general outlook of the healthcare building: height (UnSI1), locality (UnSI2), landscape (UnSI3), pleasing design (UnSI4), human scale (FnM1) and colour (FnM5). Codinhoto et al. (2009) mentioned that fabric and ambient space are characteristics which determine the envelope that sets the boundaries of the physical space, i.e. gardens and green spaces.
5. Building impact (component 2): impact core activities. This component ranked second out of the four components (Table VIII). The five regrouped items are clinical activities (CnI1), image of the hospital (CnI2), caring image (CnI3), values of health care (CnI4) and level of privacy (Env1). The performance of these items may enhance the core operational activities. Room occupancy and privacy leading to better well-being appear to be among the most investigated areas (Codinhoto et al., 2009).
6. Building impact (component 3): impact facility. Impact facility is the third component of building impact. The facility is one of the important components that enhance performance. The items in this component are all from the environment sub-element: signage (Env3), comfort (Env4), views (Env2), interior (Env6), convenient places (Env7) and basic facilities (Env8). Assessing these items can ensure performance that has a good impact on the facility. Codinhoto et al. (2009) recommend better physical facilities, attention to healthcare staff needs, a trustful atmosphere and better basic facilities to improve the therapeutic service quality. For this sub-element, two items, namely Env2 (views inside and outside) and Env6 (interior appearance), scored less than 0.50, showing a weak connection with "impact facility".
7. Building impact (component 4): impact future design. This component ranked last among the four components (Table VIII). It consists of five items from Environment (Env5), Form and Material (FnM2, FnM3 and FnM4) and Character and Innovation (CnI5). These items were regrouped and named "impact future design". They are important to performance in relation to future design and comprise layout influence (CnI5), sunlight (FnM2), heat (FnM3), materials (FnM4) and understandability (Env5). Ayas et al. (2008) recommend innovative design to cope with future needs, such as lighting, healthcare settings such as colour, and quality space (views), which bring important factors such as security and safety.
8. Built quality (component 1): quality building. This component, which accounted for 44.42 percent (Table IX) of the total variance in built quality, was significantly more important than the other three components. It shows that healthcare users in a public hospital in Australia consider "quality building" one of the most significant categories. To enhance the quality of a building, disruption to essential services during construction (Eng5), well-organised phased construction (Const1), minimal temporary construction work (Const2) and minimal impact of the building process on continuing healthcare provision (Const3) have to be taken care of. Proper planning to avoid disruption during renovation or the erection of a new building is important in healthcare construction. It is crucial to take proper care during construction and renovation in order to minimize the infection risk from exposure to infectious agents (Carter and Barr, 1997). The items listed under built quality can promote lean construction, which can lead to a reduced risk of infection caused by construction hazards.
9. Built quality (component 2): quality engineering. This component ranked second out of the four components (Table IX). The items in this component are from the construction sub-element. The four quality engineering items concern ready maintenance (Const4), robust construction (Const5), easy access to engineering systems (Const6) and standardisation and prefabrication (Const7). The performance of quality engineering can be achieved if these items are given high consideration and are well assessed. Design with good quality and suitable attributes can promote a positive psychological feeling towards healthcare (Ayas et al., 2008).
10. Built quality (component 3): quality performance. Quality performance is the third component of built quality. Performance quality is one of the most important components for enhancing performance as a whole. The items in this component are all from the performance sub-element. It consists of four items: ease of operation (Perf1), ease of cleaning (Perf2), durable internal finishes (Perf3) and weathering and ageing (Perf4). Assessing these items can encourage quality performance in healthcare, especially in critical areas such as wards and operating theatres. Stable environmental conditions avoid disturbance to the healing process of the patients (Codinhoto et al., 2009).
11. Built quality (component 4): quality energy. This component ranked last among the four components (Table IX). The items are all related to engineering: system design (Eng1), standardisation and prefabrication (Eng2), energy efficiency (Eng3) and emergency backup (Eng4). These components are crucial to built quality. Ornstein et al. (2009) found that a pleasant setting with good lighting in and around healthcare buildings would inspire health and well-being.
Validation of the building performance elements
Testing Cronbach's coefficient alpha
To test the reliability of the scales, Cronbach's coefficient alpha was used to examine the internal consistency of the scales under building performance. Alpha values greater than 0.7 are regarded as sufficient (Pallant, 2011). The Cronbach's coefficient alpha values in this survey were in the range of 0.918 to 0.962. This provides evidence that all the factors have high internal consistency and are reliable.
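Cronbach's alpha follows directly from the item variances and the variance of the summed scale. A minimal illustrative sketch (the data are hypothetical; the 0.7 threshold follows Pallant, 2011):

```python
# Minimal sketch of Cronbach's coefficient alpha for one scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n respondents, k items) matrix of Likert responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical correlated 1-5 responses: a shared component plus item noise
rng = np.random.default_rng(3)
base = rng.normal(size=(180, 1))
items = np.clip(np.round(3 + base + 0.5 * rng.normal(size=(180, 8))), 1, 5)
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.3f}, sufficient: {alpha > 0.7}")  # threshold per Pallant (2011)
```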
Testing for content validity
The questions asked in this research explore agreement levels with regard to impact, build quality and functionality. These items are based on the NHS toolkit, the AEDET questionnaire survey, which consists of clear, non-technical statements encompassing the three key elements. This toolkit has been revised several times and has been used to determine building performance efficiency in healthcare in the United Kingdom. The survey assessment is monitored by BREEAM for standard best practice. Such methods do not exist in Australia. For the purpose of this study, the AEDET toolkit is used as an assessment tool to evaluate the building performance of a public healthcare building in Australia. From the results of the two tests, all the factors were found to have high internal consistency, the whole questionnaire has valid content, and the building performance regroupings in this study are reliable and valid. The results from the factor analysis and regrouping will be used to further clarify the importance of building performance in specific case studies.
Discussion of survey results
The results indicate that for building functionality, "Access1" ranked first among the 20 items (Table III). This means that good access is a top priority in building performance with regard to the functionality of a healthcare building. However, according to the results of the factor analysis, this item had the lowest factor loading of the 20 items listed. As discussed in the literature review, this finding is in line with several researchers' statements regarding good access (e.g. Ornstein et al., 2009). For this reason, "Access1" is retained. For the building impact element, "UnSI1" scored the highest ranking of 3.42. This item corresponds to the volume of the building. Healthcare buildings normally require a special ceiling height due to the special needs of vertical healthcare equipment, which may adversely affect the construction cost. After the factor analysis, this item was regrouped into the element called "impact outlook". For built quality, the highest ranking is for "Eng4" (3.27). This item falls under the group named "quality energy" and relates to emergency backup. Emergency backup is part of energy efficiency and will enhance healthcare operations in order to ensure smooth organizational management. Energy management can also become part of an overall quality control strategy, as part of cleaner production and asset management. If it is integrated into the company's core programs, it is more likely to become part of the organizational culture (Davies, 2009a). Financial savings through energy efficiency lead to improved patient outcomes, better staff health and reduced staff turnover (Mellon, 2009), while Davies (2009b) and Kluske (2009) highlighted suitable basic materials that bring energy savings.
Building performance has been shown to be important for healthcare buildings (Reiling, 2006; Alalouch and Aspinall, 2007; Duckett and Ward, 2008; Codinhoto et al., 2009; Sapountzis et al., 2009). With a focus on building design and its elements, every aspect (all 58 items) of building functionality, building impact and built quality has been assessed in pursuit of comprehensive excellence in building performance. The 11 regrouped factors will form a new set of questionnaires in the second phase of the study. The developed framework considers 11 sub-sets covering three dimensions of the built environment: built quality, building impact and building functionality. Although more comprehensive approaches to linking FM to strategic facilities management or other appropriate outputs are needed, the findings from this analysis will be valuable for enhancing building performance and for setting out the building criteria that need further attention. Indeed, lack of assessment of building performance is one of the key issues in strategic management (Douglas, 1996).
The framework in Figure 2 was developed as a result of this analysis. It represents the 11 regrouped factors, which contribute to excellence in building performance. Added value can be achieved if two of the three elements are integrated, and the integration of all three elements can lead to excellence in building performance as a whole. These three elements are interrelated but need to be assessed independently before being linked together to achieve holistic improvement in building performance for healthcare buildings. The new survey will be handed to all staff in the organisation, comprising one public hospital, six community health centres and one aged care centre. The framework will help the respondents to understand the scope of the questions related to their buildings. This will ease the evaluation at the later stage and help further in analysing the relationship with strategic management and facilities management service delivery.
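For illustration, the structure of the Figure 2 framework can be captured as a simple mapping from the three building performance elements to the 11 regrouped sub-categories named above. This encoding is ours, not part of the authors' toolkit, and uses the component names as reported in Tables VII-IX.

```python
# Illustrative encoding of the Figure 2 framework: three elements and the
# 11 regrouped sub-categories (names as reported in Tables VII-IX).
FRAMEWORK = {
    "Building Functionality": [
        "Functionality design", "Functionality utility", "Functionality access",
    ],
    "Building Impact": [
        "Impact outlook", "Impact core activities",
        "Impact facility", "Impact future design",
    ],
    "Built Quality": [
        "Quality building", "Quality engineering",
        "Quality performance", "Quality energy",
    ],
}
assert sum(len(groups) for groups in FRAMEWORK.values()) == 11
```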
The results of the current study confirmed the hypothesis that the existing toolkit has to be adapted to suit Australian public healthcare; hence a new conceptual framework has to be developed.
Figure 2. A framework for excellence in building performance for healthcare building
Conclusions
The importance of building performance in facilities management has been recognised by many scholars and professionals. With a focus on healthcare buildings, this study aims to fill the gap explored in the literature review of this paper. It is crucial to explore the current state of building performance and to identify significant items in relation to building performance. This study explored the importance of building performance in terms of function, impact and quality. Further, the ranking of building performance items and their underlying relationships were explored.
The process of managing facilities is challenging due to the complexity of healthcare buildings. The main contribution of this study is an ordered and grouped set of building performance indicators identified through a survey in one of the hospitals under a public healthcare organisation. Amongst the 11 regrouped factors, five remain unchanged but were reduced to smaller sets of items: "functionality access" (the third building functionality component), "impact facility" (the third building impact component), and, for built quality, the second, third and fourth components, namely "quality engineering", "quality performance" and "quality energy" respectively. The 11 sub-elements were regrouped into more relevant and significant categories. These findings will be used as an assessment tool to evaluate the performance of other buildings under the same health organisation and thus help identify areas of improvement. The findings may also be applied to other public healthcare organisations in Australia. Since the results of the survey are from a specific site, the respondents may have had different understandings of the statements in the survey. This may have affected the scoring of the building performance. Therefore, the findings should be further validated by observation of the same site. Finally, it is hoped that the findings from this research will provide stakeholders and healthcare facilities with insights into the approaches available for measuring the performance of healthcare buildings, as well as identify valid benchmarking that can be applied on a wider scale.
References
Aksorn, T. and Hadikusumo, B.H.W. (2008), “Critical success factors influencing safety program
performance in Thai construction projects”, Safety Science, Vol. 46 No. 4, pp. 709-727.
Alalouch, C. and Aspinall, P. (2007), “Spatial attributes of hospital multi-bed wards and
preferences for privacy”, Facilities, Vol. 25 Nos 9/10, pp. 345-362.
Amaratunga, D. and Baldry, D. (2002), “Moving from performance measurement to performance
management”, Facilities, Vol. 20 Nos 5/6, pp. 217-223.
Atkin, B. and Brooks, A. (2009), Total Facilities Management, 3rd ed., Wiley-Blackwell,
Chichester.
Ayas, E., Eklund, J. and Ishihara, S. (2008), “Affective design of waiting areas in primary
healthcare”, The TQM Journal, Vol. 20 No. 4, pp. 389-408.
Boussabaine, A.H. and Kirkham, R.J. (2006), “Whole life cycle performance measurement
re-engineering for the UK National Health Service estate”, Facilities, Vol. 24 Nos 9/10,
pp. 324-342.
Boyer, D.L., Belzeaux, D.R., Maurel, M.O., Baumstarck, D.K. and Samuelian, D.J.-C. (2010),
“A social network analysis of health care professional relationships in a French hospital”,
International Journal of Health Care Quality Assurance, Vol. 23 No. 5, pp. 460-469.
Brackertz, N. and Kenley, R. (2002), “A service delivery approach to measuring facility
performance in local government”, Facilities, Vol. 20 Nos 3/4, pp. 127-135.
BREEAM (2010), Building Research Establishment Environmental Assessment Method,
BREEAM, Watford, UK, available at: www.breeam.org/
Carter, C.D. and Barr, B.A. (1997), “Infection control issues in construction and renovation”,
Infection Control and Hospital Epidemiology, Vol. 18 No. 8, pp. 587-596.
Carthey, J., Chandra, V. and Loosemore, M. (2009), “Adapting Australian health facilities to cope
with climate-related extreme weather events”, Journal of Facilities Management, Vol. 7
No. 1, pp. 36-51.
Chan, K.T., Lee, R.H.K. and Burnett, J. (2001), “Maintenance performance: a case study of
hospitality engineering systems”, Facilities, Vol. 19 Nos 13/14, pp. 494-504.
Codinhoto, R., Tzortzopoulos, P., Kagioglou, M., Aouad, G. and Cooper, R. (2009), “The impacts of
the built environment on health outcomes”, Facilities, Vol. 27 Nos 3/4, pp. 138-151.
Davies, H. (2009a), “Energy management – it’s not rocket science”, Facility Management,
December/January, pp. 56-57.
Davies, H. (2009b), “Just how ‘green’ is your building?”, Facility Management, August/September,
pp. 36-37.
Department of Health (2005), Publications Policy and Guidance, Department of Health, UK,
available at: www.dh.gov.uk/en/Publicationsandstatistics/Publications/Publications
PolicyAndGuidance/DH_082089
Department of Health and Ageing (2011), Publications, Statistics and Resources, Department of
Health and Ageing, Canberra, available at: www.health.gov.au/
Douglas, J. (1996), "Building performance and its relevance to facilities management", Facilities, Vol. 14 Nos 3/4, pp. 23-32.
Draper, M. and Hill, S. (1996), “Feasibility of national benchmarking of patient satisfaction with
Australian hospitals”, Journal for Quality in Health Care, Vol. 8 No. 5, pp. 457-466.
Duckett, S. and Ward, M. (2008), “Developing ‘robust performance benchmarks’ for the next
Australian Health Care Agreement: the need for a new framework”, Australia and New
Zealand Health Policy, Vol. 5 No. 1, pp. 1-8.
Edum-Fotwe, F.T., Egbu, C. and Gibb, A.G.F. (2003), “Designing facilities management needs
into infrastructure projects; case from a major hospital”, Journal of Performance of
Constructed Facilities, Vol. 17 No. 1, pp. 43-50.
Eley, J. (2001), “How do post-occupancy evaluation and the facilities manager meet?”, Building
Research & Information, Vol. 29 No. 2, pp. 164-167.
Featherstone, P. and Baldry, D. (2000), “The value of the facilities management function in the
UK NHS community health-care sector”, Journal of Management in Medicine, Vol. 14
Nos 5/6, pp. 326-338.
Field, A.P. (2009), Discovering Statistics Using SPSS (and Sex and Drugs and Rock 'n' Roll), 3rd ed., Introducing Statistical Methods, Sage, Los Angeles, CA and London.
Goyal, S. and Pitt, M. (2007), “Determining the role of innovation management in facilities
management”, Facilities, Vol. 25 Nos 1/2, pp. 48-60.
Harvey, T.E. Jr and Pati, D. (2008), “Functional flexibility”, Health Facilities Management, Vol. 21
No. 2, pp. 29-34.
Hlavacka, S., Bacharova, L., Rusnakova, V. and Wagner, R. (2001), “Performance implications of
Porter’s generic strategies in Slovak hospitals”, Journal of Management in Medicine,
Vol. 15 No. 1, pp. 44-66.
International Facilities Management Association (1986), Official Statement on Facility
Management, IFMA, Houston, TX.
Kazaz, A. and Birgonul, M.T. (2005), “The evidence of poor quality in high rise and medium rise
housing units: a case study of mass housing projects in Turkey”, Building and
Environment, Vol. 40 No. 11, pp. 1548-1556.
Kluske, R. (2009), “Towards zero carbon healthcare”, Facility Management, August/September,
pp. 52-54.
Lavy, S. and Shohet, I.M. (2007a), “A strategic integrated healthcare facility management model”,
International Journal of Strategic Property Management, Vol. 11 No. 3, pp. 125-142.
Lavy, S. and Shohet, I.M. (2007b), “On the effect of service life conditions on the maintenance
costs of healthcare facilities”, Construction Management and Economics, Vol. 25 No. 10,
pp. 1087-1098.
Lavy, S. and Shohet, I.M. (2009), “Integrated healthcare facilities maintenance management
model: case studies”, Facilities, Vol. 27 Nos 3/4, pp. 107-119.
Li, B., Akintoye, A., Edwards, P.J. and Hardcastle, C. (2005), “Critical success factors for PPP/PFI
projects in the UK construction industry”, Construction Management and Economics,
Vol. 23 No. 5, pp. 459-471.
Madritsch, T. (2009), “Best practice benchmarking in order to analyze operating costs in the
health care sector”, Journal of Facilities Management, Vol. 7 No. 1, pp. 61-73.
May, D. and Clark, L. (2009), “Achieving patient-focused maintenance services/systems”, Journal
of Facilities Management, Vol. 7 No. 2, pp. 128-141.
Mellon, R. (2009), “Hospitals now eligible for Green Star ratings”, Facility Management,
August/September, pp. 50-51.
Noor, M.N.M. and Pitt, M. (2009), “A critical review on innovation in facilities management
service delivery”, Facilities, Vol. 27 Nos 5/6, pp. 211-228.
Norušis, M.J. (2010), PASW Statistics 18 Advanced Statistical Procedures Companion, Prentice
Hall, Upper Saddle River, NJ.
Okoroh, M.I., Gombera, P.P. and Ilozor, B.D. (2002), “Managing FM (support services): business
risks in the healthcare sector”, Facilities, Vol. 20 Nos 1/2, pp. 41-51.
Okoroh, M.I., Jones, C.M. and Ilozor, B.D. (2003), “Adding value to constructed facilities: facilities
management hospitality case study”, Journal of Performance of Constructed Facilities,
Vol. 17 No. 1, pp. 24-33.
Ornstein, S.W., Ono, R., Lopes, P.A., França, A.J.G.L., Kawakita, C.Y., Machado, M.D., Robles,
L.V.L., Tamashiro, S.H. and Fernandes, P.R. (2009), “Performance evaluation of a
psychiatric facility in São Paulo, Brasil”, Facilities, Vol. 27 Nos 3/4, pp. 152-167.
Pallant, J. (2011), SPSS Survival Manual: A Step by Step Guide to Data Analysis Using SPSS,
4th ed., Allen & Unwin, Crows Nest, NSW.
Pati, D., Park, C.-S. and Augenbroe, G. (2010), “Facility maintenance performance perspective to
target strategic organizational objectives”, Journal of Performance of Constructed
Facilities, Vol. 24 No. 2, pp. 180-187.
Reiling, J. (2006), “Safe design of healthcare facilities”, Qual Saf Health Care, Vol. 15 No. 1,
pp. i34-i40.
Robathan, P. (1996), “Intelligent building performance”, in Alexander, K. (Ed.), Facilities
Management: Theory and Practice, E & FN Spon, London and New York, NY.
Rogers, P.A. (2004), “Performance matters: how the high performance business unit leverages
facilities management effectiveness”, Journal of Facilities Management, Vol. 2 No. 4,
pp. 371-381.
Sapountzis, S., Yates, K., Kagioglou, M. and Aouad, G. (2009), “Realising benefits in primary
healthcare infrastructures”, Facilities, Vol. 27 Nos 3/4, pp. 74-87.
Shohet, I.M. (2003), “Building evaluation methodology for setting maintenance priorities in
hospital buildings”, Construction Management and Economics, Vol. 21 No. 7, pp. 681-692.
Shohet, I.M. (2006), “Key performance indicators for strategic healthcare facilities maintenance”,
Journal of Construction Engineering and Management, Vol. 132 No. 4, pp. 345-352.
Shohet, I.M. (2003), “Key performance indicators for maintenance of health-care facilities”,
Facilities, Vol. 21 Nos 1/2, pp. 5-12.
Shohet, I.M. and Lavy, S. (2004), “Healthcare facilities management: state of the art review”,
Facilities, Vol. 22 Nos 7/8, pp. 210-220.
Tucker, M. and Pitt, M. (2009), “Customer performance measurement in facilities management:
a strategic approach”, International Journal of Productivity and Performance Management,
Vol. 58 No. 5, pp. 407-422.
Wauters, B. (2005), “The added value of facilities management: benchmarking work processes”,
Facilities, Vol. 23 Nos 3/4, pp. 142-151.
Yang, J., Shen, G.Q., Ho, M., Drew, D.S. and Chan, A.P.C. (2009), “Exploring critical success
factors for stakeholder management in construction projects”, Journal of Civil Engineering
and Management, Vol. 15 No. 4, pp. 337-348.
Yeung, J.F.Y., Chan, A.P.C., Chan, D.W.M. and Li, L. (2007), “Development of a partnering
performance index (PPI) for construction projects in Hong Kong: a Delphi study”,
Construction Management and Economics, Vol. 25 No. 12, pp. 1219-1237.
Zairi, M. and Sinclair, D. (1995), “Business process re-engineering and process management:
a survey of current practice and future trends in integrated management”, Business
Process Management Journal, Vol. 1 No. 1, pp. 8-30.
Corresponding author
Yuhainis Talib can be contacted at: yab@deakin.edu.au
To purchase reprints of this article please e-mail: reprints@emeraldinsight.com
Or visit our web site for further details: www.emeraldinsight.com/reprints
Evaluation of
building
performance
701
Reproduced with permission of the copyright owner. Further reproduction prohibited without
permission.
The current issue and full text archive of this journal is available at
www.emeraldinsight.com/1477-7266.htm
Framing quality improvement tools and techniques in healthcare
The case of Improvement Leaders' Guides
Ross Millar
Health Services Management Centre, University of Birmingham, Birmingham, UK
Abstract
Purpose – The purpose of this paper is to present a study of how quality improvement tools and
techniques are framed within healthcare settings.
Design/methodology/approach – The paper employs an interpretive approach to understand how
quality improvement tools and techniques are mobilised and legitimated. It does so using a case study
of the NHS Modernisation Agency Improvement Leaders’ Guides in England.
Findings – Improvement Leaders’ Guides were framed within a service improvement approach
encouraging the use of quality improvement tools and techniques within healthcare settings. Their use
formed part of enacting tools and techniques across different contexts. Whilst this enactment was
believed to support the mobilisation of tools and techniques, the experience also illustrated the
challenges in distributing such approaches.
Originality/value – The paper provides an important contribution in furthering our understanding
of framing the “social act” of quality improvement. Given the ongoing emphasis on quality
improvement in health systems and the persistent challenges involved, it also provides important
information for healthcare leaders globally in seeking to develop, implement or modify similar tools
and distribute leadership within health and social care settings.
Keywords Quality improvement, Organizational development, Organizational change, Health services,
Qualitative techniques, Leadership
Paper type Research paper
Introduction
Healthcare systems have turned to a variety of “improvement strategies” aimed at
promoting, enabling and encouraging change to happen (Walshe, 2003). Quality
improvement has been one such effort to achieve better patient outcomes, better system
performance and better professional development (Batalden and Davidoff, 2007, p. 2).
Rather than effort alone, it is based on the improvement of systems and processes
(Berwick, 1996; Institute of Medicine, 2001) through a variety of tools and techniques. Dale
and McQuater (1998) suggest these tools and techniques provide a means and a starting
point for analysing problems, identifying and diagnosing gaps in performance and
measuring whether implemented change is producing desired improvements. They
include flow diagrams to understand processes; run charts and control charts to
understand variation and measurement within these processes; and learning cycles (or
“Plan Do Study Act” cycles) to carry out small tests of change that lead to improvements
(Batalden and Davidoff, 2007; Langley et al., 1996; Plsek, 1990; Dale and McQuater, 1998).
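To make the flavour of these measurement tools concrete, a run or control chart reduces to a simple calculation over a time-ordered measure. The sketch below is purely illustrative and forms no part of the paper; the waiting-time figures are invented, and the three-sigma limits are one common convention among several:

```python
# Illustrative sketch only (the paper contains no code): the arithmetic behind
# a basic Shewhart-style control chart used to "understand variation" in a
# time-ordered process measure. All data values here are invented.
from statistics import mean, stdev

weekly_wait = [34, 41, 38, 36, 45, 39, 42, 37, 40, 44]  # hypothetical minutes

centre = mean(weekly_wait)            # centre line of the chart
sigma = stdev(weekly_wait)
upper = centre + 3 * sigma            # one common convention: 3-sigma limits
lower = centre - 3 * sigma

for week, value in enumerate(weekly_wait, start=1):
    flag = "<- possible special-cause variation" if not (lower <= value <= upper) else ""
    print(f"week {week:2d}: {value} min {flag}")

print(f"centre = {centre:.1f}, limits = ({lower:.1f}, {upper:.1f})")
```

Plotted week by week against such limits, a chart of this kind separates routine variation from points worth investigating, which is the judgement these tools are meant to support.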
The author would like to thank both reviewers for their extremely helpful and constructive
comments in developing the paper.
Journal of Health Organization and Management, Vol. 27 No. 2, 2013, pp. 209-224. © Emerald Group Publishing Limited, 1477-7266. DOI 10.1108/14777261311321789
A variety of formative and summative research has analysed the effects of quality
improvement interventions. These include total quality management (Joss and Kogan,
1995), continuous quality improvement (Shortell et al., 1998), business process
reengineering (McNulty and Ferlie, 2002), clinical microsystems (Williams et al., 2009)
and Lean thinking (Waring and Bishop, 2010). Across these varying initiatives and
organisational contexts, what tends to unite this research is that despite “pockets of
improvement” showing benefits to patient care and resource utilisation, quality
improvement initiatives tend to be limited by their construction as a “bolted on”
managerial intervention and by a general lack of interest or compliance from
healthcare professional staff.
Based on this “patchy” evidence base, what we have seen more recently are calls for
new approaches that study the contextual and contingent features of quality
improvement interventions (Bate et al., 2008; Berwick, 2008; Walshe, 2007; Greenhalgh
et al., 2004; Ovretveit and Gustafson, 2002). This call was captured by Batalden et al.
(2011) who suggested that understanding quality improvement required a change in
thinking with greater concentration on the “social act”. In contrast with “biological
wizardry” and “technical fixes”, Batalden et al. (2011, p. 103) suggest improvement rests
on “mastering the complex realities that drive, and that inhibit, human performance,
professional behaviour and social change”. It included a greater understanding of
organisations as political systems (Langley and Dennis, 2011) and intergroup
relationships and dynamics (Bartunek, 2011). Epistemological issues related to
improvement also required greater consideration (Perla and Parry, 2011). Knowledge
for improvement required an acceptance of both “homogeneity” and “heterogeneity”
with greater attention to language, categories, methods and rules of inference
(Davidoff, 2011). At a practical level, it meant developing and appointing leaders
capable of using the sciences of improvement.
The purpose of the following paper is to analyse how a collection of quality
improvement tools and techniques called the Improvement Leaders’ Guides (ILGs)
were interpreted and framed within English healthcare settings. It builds on other
research by presenting a critical and theoretical understanding of how quality
improvement interventions interact with pre-existing healthcare practices (Waring and
Bishop, 2010; Joosten et al., 2009; Timmermans and Berg, 2003) and how tools and
techniques are characterised by “interpretative flexibility” in the sense that they are
imbued with social and cultural meaning (Waring and Bishop, 2010). Interpretive
flexibility expresses the idea that technological artefacts such as improvement tools
and techniques are both constructed and interpreted (Doherty et al., 2006). They
represent “different things to different actors” (Law and Callon, 1992, p. 24) as various
social groupings associate different meanings to them. In doing so, the paper also
documents a significant development in the quality improvement agenda within the
UK and beyond – that being the work of the NHS Modernisation Agency (MA). It has
relevance to all quality improvement researchers and practitioners by raising
important questions about our understanding of quality improvement tools and
techniques and distributing leadership across healthcare settings.
Quality improvement in the English NHS
The healthcare system in England has introduced a variety of policy measures aiming
to reform its organisation and delivery. These overlapping strategies have aimed to
“modernise” infrastructure, improve efficiency, quality, and responsiveness to patients’
preferences (Stevens, 2004; Ham, 2009). As part of its policy goal to redesign healthcare
around the patient (DoH, 2000), the New Labour government (1997-2010) introduced a
number of quality improvement interventions to support continuous learning and
improvement of health services. These included NHS Collaborative programmes, the
NHS Modernisation Agency, the National Patient Safety Agency and the NHS Institute
for Innovation and Improvement.
What united these initiatives and institutions was the view that building capacity
and capability in relation to improving healthcare organisations required a greater
emphasis on quality improvement methods and principles. The approach formed part
of an international preoccupation with healthcare redesign techniques to improve
healthcare systems (Locock, 2003). Locock (2003) suggests healthcare redesign blended
the methods and principles of continuous quality improvement (CQI) and business
process reengineering (BPR) in “thinking through from scratch the best process to
achieve speedy and effective care from a patient perspective” (Locock, 2003, p. 54;
Locock, 2001). The approach emphasises the importance of continually reflecting upon,
measuring and changing work processes in an effort to improve workflow, reduce
waste and add value (Waring and Bishop, 2010).
From 2001 until 2005, the NHS Modernisation Agency (MA) was established to train
and support healthcare organisations in local service redesign and the spread of best
practice. It provided a range of improvement programmes and initiatives that promoted
whole systems approaches by “rethinking the way that services are organised” and
“taking out frustrating waits and delays in the patient journey”. A key feature of these
initiatives was the “horizontal spread” of reengineering and service redesign
techniques (Stevens, 2004, p. 39), particularly those advocated by the Institute for
Healthcare Improvement in the US. These included the “breakthrough” collaborative
method, the PDSA learning cycle and the “Model for Improvement” (Langley et al.,
1996). Alongside tools and techniques, the MA also promoted the role of leadership
within local improvement efforts by encouraging individuals with “good ideas,
entrepreneurial flair and expertise” to lead and inspire others (MA, 2002a, p. 15).
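The PDSA learning cycle mentioned above is, at bottom, an iterative loop of small tests of change. The following sketch is only an illustration under that reading and does not come from the MA materials; every name in it is a hypothetical stand-in:

```python
# Illustrative sketch only, not from the MA materials: the "Plan Do Study Act"
# cycle read as an iterative loop of small tests of change. All names are
# hypothetical stand-ins.

def pdsa(plan, do, study, max_cycles=5):
    """Repeat small tests of change until a test meets its aim."""
    learning = None
    for _ in range(max_cycles):
        change = plan(learning)    # Plan: decide the change and predict its effect
        result = do(change)        # Do: try the change on a small scale
        learning = study(result)   # Study: compare results with the prediction
        if learning["aim_met"]:    # Act: adopt the change if the aim is met...
            return change
    return None                    # ...otherwise adapt or abandon after max_cycles

# Toy usage: shrink a waiting time below 20 minutes by batching work differently.
adopted = pdsa(
    plan=lambda learning: {"batch_size": 1 if learning is None else learning["try_next"]},
    do=lambda change: {"wait_minutes": 30 - 5 * change["batch_size"]},
    study=lambda result: {"aim_met": result["wait_minutes"] <= 20, "try_next": 2},
)
print(adopted)  # {'batch_size': 2}
```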
One of the innovations produced by the MA in its attempt to blend systems thinking
and leadership development was the production of Improvement Leaders’ Guides
(ILGs). ILGs were developed following feedback from NHS Collaborative programmes
that more guidance was needed to support the application of tools and techniques at a
local level (Millar, 2009). They were produced to help teams understand “the basic
principles” of improvement and provide existing improvement leaders with support
when mapping and planning training and development programmes that used
improvement topics, tools and techniques (see Table I) (MA, 2002b, pp. 1-3). The
cumulative effect of this production was a “Body of Knowledge” covering the “harder”
side of systems and project management and the “softer” people side of improvement
in areas of personal and organisational development (Penny, 2003, p. 3).
What was particularly innovative about this collection of quality improvement tools
and techniques was their attempt to overcome the previous shortcomings of quality
improvement in healthcare settings. The experience of NHS Collaborative programmes
found that tools and techniques such as process mapping and capacity and demand
training did provide “key levers for change” as did the emphasis on multidisciplinary
working and networking (Robert et al., 2003, pp. 425-427). However, such tools and
methods were often aggregated into time-limited projects as “off the shelf”
programmatic methods rather than creating generative change or networked learning
communities (Bate et al., 2002, p. vii). Clinicians tended to be less convinced by the
value of the Plan-Do-Study-Act (PDSA) cycle approaches or the sustainability of
improvements made (Ham, 2003, pp. 2-3; Robert et al., 2003, p. 433). Where pockets of
improvement existed, these tended to rely on “highly committed and competent”
individuals.

Table I. National Health Service Modernisation Agency Improvement Leaders’ Guides

Improvement Leaders’ Guide to... | What the guide has to offer
Process mapping, analysis and redesign | Advice on setting aims and identifying measures to show how changes have made an improvement
Measurement for Improvement | Advice on how to measure the impact of the changes made and knowing when a change is an improvement
Matching capacity and demand | Advice on understanding “bottlenecks” in the system, eliminating queues and waiting lists
Involving patients and carers | Advice on how to involve patients in improvement programmes and projects
Managing the human dimensions of change | Advice on how to ensure the best possible outcome when working with different people
Sustainability and spread | Advice for sustaining and spreading good ideas
Setting up a collaborative programme | Advice on using a collaborative methodology to innovate and test new models of delivery
Working in systems | Advice on finding ways to develop long-term sustainable improvements
Building and nurturing an improvement culture | Advice on innovation, learning, team working, communication and trust
Working with groups | Advice on leading and facilitating an improvement group meeting
Redesigning roles | Advice on creating a motivated and skilled workforce that works together to provide high quality care
Such findings resonate with more recent research studying quality improvement
methods in the English Safer Patients Initiative (Health Foundation, 2011). This found
that staff experience of process measurement was very positive as real time
information helped people understand cause and effect and engender local ownership
of data for improvement. However, it also found that contexts lacked the appropriate
measurement systems to define and implement the improvements made. The dominant
paradigm centred on data for performance management rather than measurement for
improvement (Health Foundation, 2011). Staff engagement also proved to be an issue
as medical staff generally did not feel as engaged in the work.
The production of ILGs formed part of an approach to encourage greater spread and
sustainability of improvement tools and techniques (see MA, 2004; Matrix RHA, 2003a,
2003b). They are underpinned by the view that although the production of “mass
media” can create awareness for improvement, the method for diffusing innovation is
more likely to be through interpersonal influence, social networks and horizontal peer
influence (Greenhalgh et al., 2004; Gollop et al., 2004; Fraser, 2002; Jones, 2005). To
nurture organisational and professional cultures in relation to quality improvement
requires a combination of macro framing and micro individualising of quality through
team building and learning (Bate et al., 2008, p. 33; Shortell et al., 1998).
Also connected to ILGs is a more de-individualised concept of leadership as
something that can be “distributed” between different layers within organisations. The
role of local leaders is to enable, facilitate and support these different learning
communities and networks by engaging in a collaborative approach with local
“activists” in order to nurture a critical mass of support and facilitate a “movement
mentality”. Leaders do so by paying greater attention to aligning and framing words
and language to capture people’s attention and invest emotional energy (Bate et al.,
2004, p. 65; Bate and Robert, 2002). If successful, spontaneous collaboration occurs as
previous “followers” take on and enact leadership roles (Currie and Lockett, 2011; NHS
Institute for Innovation and Improvement, 2007; Bate et al., 2008).
Empirical evidence about the application of these ideas and theories about
improving healthcare is relatively underdeveloped. Some notable evidence does come
from Mowles et al. (2010) who studied the application of methods to support
complexity thinking within the NHS. This found that complexity thinking did not
translate easily in contexts characterised by “a tradition of linear cause and effect”.
However, staff using such methods pointed to improved skills and some observable
improvements in service provision. A literature review of distributed leadership in the
public sector by Currie and Lockett (2011) suggested that approaches emphasising
teamwork and collaboration resonated with health and social care contexts where
change and improvement may be required. That said, this review also suggested that
the complexity of professional and policy institutions may render attempts to enact
such distributed leadership difficult as the approach remained largely abstracted from
the professional and policy constraints upon leadership influence in public service
settings (Currie et al., 2009).
ILGs can be seen as part of a shift from quality improvement built on “rational
planned” change approaches associated with TQM and BPR towards a view of leading
change implicitly focused on meaning making as the central medium and target for
changing mindsets and consciousness (Marshak and Grant, 2008, pp. 10-11; Van de
Ven et al., 1999; Fitzgerald et al., 1999). Empirical research focusing on the application
of quality improvement tools and techniques in this area is largely underdeveloped
with very little research about the work of the MA and the ILGs in particular. As a
result, any research that looks to understand how these tools and techniques and the
assumptions underpinning them interact with existing practices provides a new and
important contribution to the field, both theoretically and methodologically. As Marshak
and Grant (2008) suggest, new organisation development (OD) practices like ILGs draw
attention to the potential of an organisational discourse perspective where the central
focus is language and discursively mediated experience. The nature of the subject
matter requires an interpretive approach to understand how ILGs were framed within
organisational settings (e.g. Yanow and Schwartz-Shea, 2006).
Methodology
The concept of an interpretive framework or “frame” has been used by scholars across
a variety of disciplines (see Schön and Rein, 1994; Benford and Snow, 2000) but most
famously explored empirically by Goffman (1974). Goffman defines framing as the
“schemata of interpretation” that enable individuals “to locate, perceive, identify, and
label” occurrences within their life space and the world at large (Goffman, 1974, p. 21).
Benford and Snow (2000) suggest that frames perform an interpretive function by
simplifying and condensing aspects of the “world out there”, but in ways that are
“intended to mobilize potential adherents and constituents, to garner bystander
support, and to demobilize antagonists” (Snow and Benford, 1988, p. 198). Benford and
Snow (2000) suggest the result of this activity is “collective action frames” defined as
action-oriented sets of beliefs and meanings that inspire and legitimate the activities of
an organisation. Collective action frames begin by taking “meaning work” as problematic:
the struggle over the production of mobilizing and counter mobilizing ideas and
meanings. From this perspective, the study of ILGs does not merely view them as
carriers of quality improvement ideas and meanings. Rather the actors using them are
viewed as signifying agents actively engaged in the production and maintenance of
meaning for constituents, antagonists, and bystanders or observers (Snow and
Benford, 1988).
Our research interest was in identifying a purposive sample of actors (or “signifying
agents”) who were centrally involved in framing ILGs. This focused on actors and
networks where ILGs were “active” in the sense that they resonated and were
considered part of delivery. It did so by contacting designated service improvement
leads within each regional Health Authority in England (Strategic Health Authorities).
Prior research identified these roles as offering useful and insightful perspectives on the
ILGs, as they were established to encourage the use of quality improvement tools and
techniques and to draw on material from the Modernisation Agency.
A selection of these improvement leads responded to the research request and
agreed to participate in the study. Alongside these regional actors, the research sample
then “snowballed” from regional to local levels by making contact with local managers
and facilitators using ILGs. A total of 31 interviews were carried out with actors using
ILGs. These were split between 12 regional and 19 local actors. These roles included
service improvement managers and leads, workforce developers, specialty (e.g. cardiac)
network managers, and primary care development managers and leads. A
semi-structured interview guide was produced to cover a number of areas associated
with ILGs. Questions sought to encourage a conversation about the decision to use the
ILGs, how the content and production of ILGs were understood, how they were being
used, the experience of using them, and the facilitators and barriers associated with
using them. All interviews were face-to-face, tape-recorded, and lasted 45 minutes on
average.
Data analysis paid attention to what Benford and Snow (2000) describe as the “core
framing tasks” associated with problem identification and action mobilisation related
to ILGs. To operationalise this interest it focused on the discursive and narrative
processes that were generative of these frames. This analysis of the language and
stories associated with ILGs particularly looked at the narratives being formed. These
are loosely defined as a sequence of events, experiences, or actions making ILGs into a
meaningful whole (Czarniawska, 1998; Boje et al., 2004). Like others (e.g. Feldman et al.,
2004) we believed this “frame articulation” of narrative in connecting and aligning
events and experiences was important as its structure reveals what is significant to
people about various practices, ideas, places and symbols. Coding of the transcribed
interview data was both inductive and iterative, concentrating on passages of text that
illuminated this narrative, particularly decisions, use, experience, and reflections on the
facilitators and barriers associated with ILGs (Strauss and Corbin, 1990).
Such analysis allowed the theory to emerge from the data through rounds of analysis
and interim explanation building, rather than beginning with a pre-existing set of
theoretical propositions. Although we were familiar with the literature on quality
improvement, the research did not choose a theoretical model a priori but, instead, built
one from the data. As with Feldman et al. (2004), our insights were grounded in theory
without testing any predetermined set of hypotheses about what we would find.
Findings
ILGs were associated with a variety of frames that actors used to organize experience
and guide action. Our analysis identified three core framing tasks associated with
them. First, they were condensed and situated within a service improvement approach
that encouraged quality improvement tools and techniques within healthcare settings.
Second, they were mobilized to garner support in the enactment of tools and techniques
across different contexts. Third, they were problematised by actors as they reflected on
the struggle over the production of mobilizing and counter mobilizing ideas and
meanings.
Improvement Leaders’ Guides and “service improvement” activity
ILGs were framed by actors as part of the support and development of a “service
improvement” approach across organisational settings. The approach encouraged a
system based approach to changing healthcare processes that built on a variety of
quality improvement tools and techniques that included process mapping, matching
capacity and demand and the use of PDSA cycles. ILGs were used on the basis that they
provided an innovative product that “packaged” improvement tools and techniques in a
way that was accessible to all staff. They were an empowering resource to diffuse and
get people “switched on” to using tools and techniques within local contexts.
ILGs formed part of these service improvement efforts in different ways. They were
understood as a personal reference or resource for actors when working across
different organisational contexts. When “out in the field”, actors described
crosschecking against the ILGs to make sure their “message” was consistent. They
provided a reference when putting presentations together and a “backup” for situations
where people posed questions. For example, a cardiac network manager described how
they sought to “mirror” the content of ILGs as they were perceived as containing an
authoritative perspective on service improvement tools and methods. The following quote
describes how a service improvement manager used them as the “backbone” for
working with others:
Because everyone will take their own interpretation of the tools and techniques, I use the
guides as a backbone for what I’m telling other people, so they can go away and read them
and actually put some into practice . . . I use them to check I’ve got the right information, that
nothing has been missed or any glaring anomalies were present about a particular training
session topic, tool or technique (Service Improvement Manager 3).
ILGs were also used to support the delivery of service improvement training and
development programmes. At both regional and local levels ILGs provided “modules”
to structure training and development programmes. An example of this was a local
clinical microsystems programme in cardiac services that tailored training around
PDSA cycles, the measurement of improvement, workforce development, capacity and
demand, and creativity and innovation:
What’s good about them is they fit as a resource for tooling people up and empowering them
to work on an issue when they want. We are there to help facilitate and support . . . but we try
and deliver ILGs in a more productive and creative way to complement the training (Cardiac
Network Manager 1).
ILGs were also framed as a catalyst for collaborative service improvement efforts.
They supported the idea of building capacity and capability in providing people with
the ability to spread improvement knowledge and enable individuals and teams to
work with tools and techniques on the ground. The next quote from a service
improvement director is illustrative of this idea that ILGs could support and enable the
collaboration that it was intending to achieve:
One of the things we’re trying to do is to give these out to people already out there doing it,
where it would be up to them to build capacity and capability as they go back and put this
stuff into their organizations . . . we’re trying to spread that existing good practice down to
the local level. We want this kind of stuff becoming part of the day job so hopefully one day
we will do ourselves out of the job (Service Improvement Director 1).
Improvement Leaders’ Guides and the enactment of service improvement
ILGs provided an innovative product to support service improvement and spread
improvement tools and techniques. That said, what also emerged from actors’
interpretations of ILGs was an awareness of their limitations as mass media. They
believed that prior to the use and application of ILGs, further communication and
enactment about the tools and techniques was required to make sense of their content.
This is captured in the workforce developer perspective:
[ILGs] are a tool that allows quick, easily digested information to be imparted to people.
However, people will then need support because this all sounds like a good idea but what
does it mean in practice?. . . The role of the workforce developer is to support people in
developing ILG skills in the initial stages, by putting it into local context and demonstrating
how it could help you solve your problem. We direct them to the ILG specifics but it’s up to
them to go away and find out if it works, hopefully with the knock-on effect of them
getting others interested and inspiring them to go onto a project management or leadership
course. ILGs would become embedded in their knowledge and enthused to other people about
how useful they have been (Workforce Developer 1).
The application of tools and techniques meant bringing them into existence through
various interpretive schemes. This was particularly the case for those working at the
local level with organizations and teams. Actors referred to changing their
communication style to different individuals and personality types in marketing and
“selling” tools and techniques. For example, a practitioner described changing the
language of service improvement. She mentioned how, when working with clinicians
on process mapping, the terminology would change to “understanding things more
thoroughly” (Service Improvement Facilitator 2). A different example is presented next
from a head of hospital improvement:
Process mapping was about “analysing what’s going on, so let’s have a look at what’s
happening on a day to day basis? How is it done? How did that get from there to there?”. . .
Measurement for improvement was sometimes “where are we now”, PDSA’s would be called
something like “running a pilot” (Head of Improvement 1).
Changing the language of improvement tools and techniques also took the form of
simplifying or “demystifying” tools and techniques, as this cardiac network manager
illustrates:
It’s about people sitting down and saying why we have the problems we have, getting all the
right people to say this is what I do and respond “really? I didn’t know that”, writing it
down, agreeing on it and moving forward . . . basically what we are going to do is to get you to chat
about what you do and write it on a post it note and stick it on a piece of paper (Cardiac
Network Manager 1).
In addition to this change in language, the use of ILGs needed to have local relevance.
They required “live examples”, preferably examples participants had been involved in
themselves:
They have to be seen as relevant as not just a model in itself but something that makes sense
to situations in their own environment . . . getting people to use them won’t work if people
can’t see what’s in it for them (Assistant Director 2).
Translation of tools and techniques into everyday contexts was also helped by the
training and development environment in providing the space for learning to occur.
Furthermore, identifying opinion leaders with the potential to mobilise other
individuals, preferably at boardroom level, increased the chances of successful
adoption.
Improvement Leaders’ Guides and critical frames of reference
The sections presented previously show how ILGs were used and enacted by actors in
their quest to translate a service improvement approach into organisational settings. In
the following section we present alternative framings of ILGs that revealed important
boundaries and barriers to their application. Whilst actors supported a grass roots
approach to diffusing knowledge about tools and techniques, they were aware of limits
to their approach. Most notably, some suggested that use of ILGs was limited to those
already involved in service improvement and familiar with improvement tools and
techniques:
The problem with them is that they are attracting the converted. You know, the enthusiastic
ones attending courses or those people already making it happen (Cardiac Network
Manager 1).
What reinforced this deficit was the language associated with service improvement.
The way in which service improvement was framed was limited to “pockets of
interested people” (Service Improvement Manager 3) and a “service improvement
bubble”:
For someone reading these for the first time you would need a glossary for some of the
language . . . I doubt they were aimed at the ordinary frontline individuals expected to pick
these up and use them in a practical way. They are more aimed at us already involved in
modernisation (Service Improvement Facilitator 3).
Alongside these language difficulties, actors pointed to the wider implementation
issues in relation to ILGs. One notable problem with ILGs was the association with the
MA. Rather than being associated with bottom-up organization development, actors had
encountered alternative frames that connected the MA with a top-down approach built
on performance targets and performance measurement. An example of this was a
Service Improvement Lead who described how ILGs were seen as being associated
with a “specialist group” who were “parachuted into challenged organizations to roll
out tool kits around the access agenda”. Service improvement was also associated with
centralised performance targets:
[. . .] if it’s a government driven target it will probably get done and you’ll probably get
someone like me coming in to help and support people to get it done (Service Improvement
Manager 3).
Also connected to these top-down frames of reference was a view of “service
improvement” associated with modernisation in terms of “getting more for less” and
“efficiency savings” (Service Improvement Lead 1). Such initiatives were not met with
a developmental ethos but associated with job cuts and redundancies.
Reflecting on their experiences of delivering tools and techniques, actors described
organisational culture issues in relation to organising around tools and techniques.
They were often associated with a “programmatic” approach to change, with
innovations like ILGs seen as “a project to be completed rather than a state of being”
(Cardiac Network Manager 1). Methods such as process mapping and PDSA cycles
also proved difficult in contexts not conducive to continuous evaluation and
measurement required of these methods:
[. . .] people to pick out the big numbers in relation to Statistical Process Control, rather than
run chart measurements over time (Service Improvement Lead 2).
[. . .] you try and introduce something and it’s often met with “we’ll need so many people to do
that” or “we need x number of nurses” without thinking about where do you get those nurses
from” (Workforce Developer 1).
Alongside these organizational issues, actors highlighted a number of professional
issues in relation to the ILGs. Clinical groups in particular were singled out as a
problematic group as knowledge and understanding of systems and processes had
proven to be a “blind spot”. Interviewees recalled a number of instances where
communicating the service improvement approach was equated with “management”
activity, a distraction from getting on “with the real business of seeing patients”
(Assistant Director 2).
Discussion
The findings presented previously show how ILGs were framed in the delivery of
service improvement as carriers of ideas about improvement tools and techniques.
They also show how the actors using ILGs represented “signifying agents” who were
actively engaged in putting tools and techniques into practice (Snow and Benford,
1988).
The implication of these findings is that ILGs were supporting leaders and
teams to understand “the basic principles” of improvement. They had the potential to
enable, facilitate and support different service improvement learning communities and
networks (Bate and Robert, 2002) as part of the “interpretive support” for tools and
techniques (Lave and Wenger, 1991, p. 98). In addition, this evidence draws attention to
the limits of “mass media” and the importance of interpersonal influence and the
psychological and social dimensions of change. They attempted to move beyond
technical fixes and frame quality improvement as a “social act” (Batalden et al., 2011).
The examples related to “changing the language” were illustrative of the enactment of
tools and techniques “on the ground”. By tailoring different strategies using
appropriate styles, imagery and communication channels (Greenhalgh et al., 2004) this
enactment was also illustrative of an attempt to distribute leadership around quality
improvement tools and techniques (Currie and Lockett, 2011). Actors’ attempts to
mobilise “followers” to take on and enact leadership roles built on the assumption that
if ILGs were combined with their action mobilisation approaches, spontaneous
collaboration was more likely to occur.
However, the framing of ILGs also reflects the struggle associated with mobilizing
ideas and meanings associated with quality improvement. The reference to ILGs
operating within a “service improvement bubble” was illustrative of how tools and
techniques were associated with a particular managerial group, something akin to
what Ferlie et al. (2005) describe as a distinctive “paradigm” that limited the spread of
improvement efforts. The connection made between tools and techniques and
“management” activity was further illustration of the professional boundaries
associated with quality improvement tools and techniques (McNulty and Ferlie, 2002;
Ham et al., 2003). As with other research, it seems that clinical and operational staff did
not feel as engaged or convinced by improvement methods (Health Foundation, 2011).
Organisational boundaries provided further challenges. Enacting tools and
techniques as a “state of being” was in tension with existing assumptions that
characterised tools and techniques as “programmatic” approaches to change limited to
short-term projects and what Mowles et al. (2010) describe as the “linear cause and
effect” approach. Such findings resonate with evidence elsewhere (Health Foundation, 2011) that
healthcare contexts still lack the appropriate measurement systems for tools and
techniques to resonate. These findings show the ongoing challenge to overcome what
Batalden and Stoltz (1993) described as the “trad…