Customer Satisfaction Measurement
Literature Review
30 June 2014
This paper reviews the existing literature on customer satisfaction measurement and
provides the theoretical background for the development of a number of tools to help the
community services industry in Queensland measure customer satisfaction.
In the context of community service delivery, there is a range of issues to be considered
when designing and using satisfaction measurement tools. There is a growing expectation
in the literature that individuals and their families should be at the centre of service design,
delivery and review. Tools and processes for measuring satisfaction need to accommodate
individual needs and preferences around literacy, timing and form. There is also an
expectation that people who have contributed to such processes will receive information on
the broader outcomes from their feedback and ideas.
At a broader policy level, contestability and a move to self-directed and, in some cases,
self-managed funding mean that people who use services may be doing so under market or
market-like conditions. This requires that people shift from being consumers to “discerning
customers”, which will bring challenges and opportunities for both organisations and the
people who use their services.
The paper investigates the main reasons why measuring customer satisfaction is important.
Through the review of the literature it is shown that customer satisfaction measurement
provides a means to better understand the needs of social service customers and to
empower customers by creating customer-centred services. It is also argued that customer
satisfaction measurement provides a means of creating ongoing service improvement by
identifying areas of improvement. Lastly it is argued that customer satisfaction measurement
provides a performance management tool that can be used to generate data to meet
compliance and reporting requirements, provide customers with information about service
performance and provide evidence for future funding proposals.
The paper also discusses how customer satisfaction is measured by analysing the literature
on key drivers or determinants of satisfaction. This section of the report demonstrates the
importance of understanding satisfaction from the point of view of the customer. It argues
that the drivers or determinants of satisfaction will differ in different service contexts and
discusses the importance of including service customers in the design of customer
satisfaction surveys. Doing so ensures that customer satisfaction processes are able to
accurately reflect the needs and values of customers and can be effective in driving service
improvement.
The paper provides a starting point for social service organisations in developing more
rigorous customer satisfaction processes and will be augmented by the development of tools
that can be used to assist in the measurement of customer satisfaction. As part of the
development of these tools, QCOSS will be undertaking consultation with the social service
sector to ascertain current practice and capacity. This will include consultation with
customers to better understand how they can best be engaged in the development of
customer satisfaction processes.
This paper examines a range of existing national and international literature on the
development and use of customer satisfaction measurement. It suggests that customer
satisfaction measurement is potentially a useful mechanism for identifying and verifying
customer needs and preferences which can, in turn, inform product and service design and
improvement. It begins with an overview of why customer satisfaction is important as both
an engagement and service improvement tool, and where it can contribute to performance
management and meeting compliance requirements for organisations. It explores the
relative merits of a range of approaches and discusses a range of methodological issues
associated with the administration of customer satisfaction measurement processes.
This paper uses the term ‘customer’ to refer to the various populations and contexts in which
satisfaction is measured. In some instances people using public or social services may be
more appropriately referred to as ‘patients’, ‘consumers’, ‘users’, ‘citizens’ or ‘clients’
depending on the type of service being offered. Much work has been undertaken, for
example, to empower ‘clients’ by reconceptualising them as ‘consumers’ in the mental health
and disability service areas. We acknowledge that the use of the term ‘customer’ can be
problematic as it refers to a situation of empowered choice that does not necessarily reflect
the reality of people using many public services, some of which are not voluntary. We use
the term ‘customer’ for simplicity, acknowledging that it may be more appropriate to use more
specific terms when referring to specific groups or populations.
While there are many different models used within the literature to conceptualise customer
satisfaction measurement, at its most basic level customer satisfaction measurement
involves an assessment of the difference between a customer’s expectation of a product or
service and a customer’s experience of a product or service.
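This expectation-versus-experience comparison can be sketched as a simple per-dimension gap score, in the style of SERVQUAL-type difference scores. The following is a minimal illustrative sketch only; the dimension names and ratings are hypothetical and not taken from any instrument discussed in this paper.

```python
# Minimal sketch of the expectation-versus-experience gap model of
# satisfaction. Dimension names and ratings below are hypothetical
# examples, not taken from any particular survey instrument.

def gap_scores(expectations, experiences):
    """Return per-dimension gap scores (experience minus expectation).

    A negative score means the service fell short of expectations;
    a positive score means it exceeded them.
    """
    return {dim: experiences[dim] - expectations[dim] for dim in expectations}

# Ratings on a 1-7 scale, as commonly used in SERVQUAL-style surveys.
expectations = {"timeliness": 6, "staff attitude": 5, "information": 6}
experiences = {"timeliness": 4, "staff attitude": 6, "information": 6}

scores = gap_scores(expectations, experiences)
overall = sum(scores.values()) / len(scores)  # simple unweighted average gap
print(scores)
print(round(overall, 2))
```

In this sketch an overall negative average indicates that, taken across the dimensions measured, the service fell short of what the customer expected.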
Given that customer satisfaction measurement emerged in the fields of business and
marketing it has become well established as a tool within the commercial sector. In
competitive markets, customer satisfaction measurement is a key marketing tool used to
understand and drive business performance. In marketing, customer satisfaction is viewed
as the ultimate goal of any business because satisfied customers are more likely to become
repeat customers and to recommend a business to other potential customers.
While customer satisfaction measurement processes were developed originally for use in
competitive markets, they are increasingly being applied to public sector settings as a means
of monitoring performance and improving service quality. Customer satisfaction
measurement is being more commonly used in a range of public sector areas, including
transport, health and disability, to measure performance in a range of customer service
settings.
In this regard, processes to assess customer satisfaction are not just about gaining
information from customers, they can also be an effective tool to promote customer
empowerment6. Empowerment is particularly important for parents and children marginalised
The literature on customer satisfaction measurement emerging from the United Kingdom
Government, for example, views customer satisfaction measurement as a means of focusing
on the customer and the customer experience8. As such, the process of customer
satisfaction measurement can be viewed as a method for reflecting upon the needs of the
customer.
While service-based organisations interact intensively with customers on a daily basis, this
does not mean that information about customer needs and values is automatically absorbed
into the service operation and culture. Customer satisfaction
measurement provides a structured tool for actively engaging with customers; seeking out
information about how they view the services being offered to them; and enabling them to
have input into the delivery of these services. This includes the involvement of customers in
the process of designing the methods used and the questions asked to elicit information
from customers.
It is important to note that the measurement of customer satisfaction is not the same as
measuring overall service quality; rather, it is one distinct part of an integrated framework for
analysing service quality and efficacy. A broader quality improvement framework would likely
include methods and processes to measure unmet service demand, customer outcomes,
evaluation of external programs impacting on customers and support for continuous quality
improvement10.
While customer satisfaction measurement at its most basic level generally involves some
form of survey to elicit information about customer satisfaction, this is only one part of an
ongoing service improvement cycle. It should be seen as a means to an end, in which the
measurement of customer satisfaction forms one part of an ongoing process of ‘insight,
measurement and improvement’11.
According to this model, it is critical to conduct initial scoping and research before
undertaking satisfaction surveys in order to understand what is valuable to measure from the
perspective of the service and the customer. While it may be easier to develop a survey
based on staff knowledge of the program and the customer group, it is useful to gauge
customers’ own understanding and expectations in order to ascertain what they view as
most important.
Equally, it is critical to take steps to develop an action plan that guides the process of service
improvement ensuring the information gathered from customers is actually put to use. As a
cycle this process would be repeated to learn the impact that improvements have on
customer satisfaction and to continue the service improvement process over time.
Aside from providing a structured tool for engagement and information gathering and acting
as part of the process to promote service improvement, customer satisfaction measurement
is also a useful tool for performance management. It provides a method for collecting useful
data that can be used to meet contract reporting and accountability requirements, provide
customers with information about service performance, create opportunities to compare and
contrast performance and demonstrate effectiveness when tendering for new funding.
Customer satisfaction data is also commonly used as an accountability and compliance tool.
The collection of information about the level of satisfaction with a particular service is
commonly used as a performance indicator by government to demonstrate the performance
of funded activities. There are a number of examples of the data being used in this way.
In health care, one of the motivations for administering patient surveys in hospitals in
Australia was the need to meet accreditation guidelines under the Australian Council on
Healthcare Standards (ACHS). ACHS accreditation requires all public and private hospitals
to undertake patient experience and satisfaction surveys13. In health, performance data has
historically been used as an internal accountability and quality control tool but is increasingly
reported publicly to stimulate quality improvement and cost efficiency and empower
consumers with knowledge to navigate the health system14.
Compliance with quality standards has also driven the uptake of customer satisfaction
measurement in the human and social services. Human service organisations delivering
services to the community on behalf of the Queensland Government are required to
demonstrate service quality as a part of their contract arrangements. Customer satisfaction
surveys are one of the methods that can be used to demonstrate continuous improvement
under the Human Services Quality Framework (HSQF)15.
The data collected though customer satisfaction measurement can provide useful
information that can be used by customers to assess the quality of a service offering. This is
especially useful if benchmarking allows comparison between organisations offering similar
services.
The impetus for customer satisfaction measurement has been driven in part by moves to
create greater choice for consumers. In the United States the Hospital-Consumer
Assessment of Health Plans Survey (H-CAHPS) was initiated as a direct result of requests
from the Centers for Medicare and Medicaid Services, which saw patient surveys as a
means of encouraging greater accountability and choice for consumers22.
accountability and choice for consumers22. The development of standardised instruments to
Customer satisfaction measurement is a useful tool for eliciting information that can be used
in developing funding proposals. The measurement of customer satisfaction can
demonstrate to a potential funding body if a service is meeting the expectations of
customers. When used as part of a service improvement cycle, it demonstrates to potential
funding bodies the organisation’s commitment to ongoing service improvement.
There are many examples of the different ways that customer satisfaction measurement is
being applied in the public and community sectors, both nationally and internationally.
Describing these provides an opportunity to gain a clearer picture of the different contexts in
which satisfaction measurement has been applied; a better understanding of the different
ways that customer satisfaction can be collected and communicated; and more information
about the different reasons customer satisfaction measurement is pursued.
In the UK and Australia the Patient Opinion website provides patients of various health
services with an opportunity to post comments about their experience online and have these
comments sent to staff in the hope that this will result in positive changes to practice or
reinforce existing good practice. Patient Opinion is a subscription service which operates on
funds from participating health service providers. Initiatives, such as Patient Opinion, provide
a novel approach to collecting and communicating customer satisfaction and feedback using
web based technologies31.
Victoria developed its first mental health consumer and carer satisfaction surveys in 1996.
Following a review of national and international practice, stakeholder consultations and the
administration of a pilot study, a new survey was implemented in 2003-04. This survey
shifted from measuring satisfaction to measuring perspectives on service quality to better
reflect the quality of care provided and to more readily facilitate quality improvement.32
The Queensland Government has also conducted regular surveys of consumers (and
carers) of disability services funded by the Department of Communities, Child Safety and
Disability Services since 1999. The Department has collected information about consumer
and carer satisfaction with services delivered by non-government organisations as part of
requirements to monitor the quality and performance of funded services. Surveys were
initially conducted through the administration of a centralised survey and more recently by
aggregating the data collected from surveys administered by individual service providers.
One of the impetuses behind the measurement of consumer and carer satisfaction in this
context is to meet a range of annual reporting obligations, such as those found in the
Service Delivery Statement (SDS) Budget Paper 533.
Satisfaction surveys have also been used by community service organisations providing a
range of family support, drug and alcohol and aged care services. For example, Mercy
Community Services (MCS) in New South Wales has conducted regular customer
satisfaction surveys to ascertain the level of satisfaction with a range of programs. The
survey is administered to clients receiving alcohol and other drugs counselling, parenting
support or aged/disability support across the Newcastle, Lake Macquarie and the Lower
Hunter regions34.
A national survey was developed in 2003 to gather information about customer satisfaction
with emergency accommodation services provided through the Supported Accommodation
Assistance Program (SAAP). The survey was administered by an external organisation as
part of a broader evaluation process. It was developed on the basis of recommendations
made in a preliminary report, which highlighted a number of considerations for the
development of customer satisfaction processes and outlined the conditions under which
customer satisfaction measurement should occur37.
These examples also demonstrate the variety of ways that customer satisfaction
measurement is applied, from the more traditional use of surveys to the more novel use of
web based technologies. They also highlight that customer satisfaction measurement can
be administered by organisations internally or, where resources permit, outsourced to
dedicated organisations with the requisite expertise and skills.
A large part of the customer satisfaction literature is preoccupied with understanding the key
drivers or determinants of satisfaction in different service contexts. The following section
discusses the literature on the key drivers or determinants of satisfaction before moving on
to discuss the implications this preoccupation has for developing customer satisfaction
measurement processes.
One of the key considerations in the customer satisfaction literature is identifying aspects of
a service which are most important in determining a customer’s overall satisfaction. Because
customer satisfaction is defined by the questions used in the surveys38 it is important to
ensure that these reflect what customers think is most important. If not, it is likely the data
will not give an accurate indication of satisfaction. As Johnston (1997) has noted, ‘the
identification of the determinants of service quality is necessary to be able to specify,
measure, control and improve customer perceived service quality’39.
In health care, for example, one widely adopted model identifies eight dimensions of
patient-centred care:
• access to care
• respect for patient values, preferences and expressed needs
• coordination and integration of care
• information, communication and education
• physical comfort
• emotional support and alleviation of fear and anxiety
• involvement of family and friends
• transition and continuity.
This model has been widely adopted in a number of health and patient care surveys,
including the National Health Service patient survey in England42.
In the field of marketing, a number of attempts have been made to define the determinants
of satisfaction that can be applied across service types. One of the more well known of these
is the SERVQUAL/RATER instruments developed by Parasuraman, Zeithaml and Berry.
Initially, Parasuraman et al developed a list of 10 determinants of service quality as a result
of focus group studies with service providers and customers43. The 10 determinants of
service quality have since been refined to five key dimensions, which are used to structure
the process of understanding customer satisfaction and improving service quality. The five
key dimensions of the refined RATER instrument include44:
• reliability
• assurance
• tangibles
• empathy
• responsiveness.
It is argued that these determinants of service quality apply to a range of service types and
that, regardless of the service being studied, reliability is the most important dimension in
predicting overall customer satisfaction, followed by responsiveness, assurance and
empathy, with tangibles being of least concern to service customers45. From these
dimensions Parasuraman et al have developed a set of standardised survey questions,
which are used to determine if a customer is satisfied with a particular service.
Johnston (1995), by contrast, identified a more extensive list of 18 determinants of service
quality:
• attentiveness/helpfulness
• responsiveness
• care
• availability
• reliability
• integrity
• friendliness
• courtesy
• communication
• competence
• functionality
• commitment
• access
• flexibility
• aesthetics
• cleanliness/tidiness
• comfort
• security.
Given the debate about the applicability of generic instruments for assessing customer
satisfaction in different service settings, it should be noted that attempts have been made to
develop frameworks specifically for public services. According to research conducted in the
UK into customer satisfaction in the public sector, there are five key drivers of satisfaction
and dissatisfaction, which account for 67 per cent of variation in overall satisfaction48. These
include:
• delivery
• timeliness
• professionalism
• information
• staff attitude.
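Findings such as the UK research above, where a small set of drivers accounts for most of the variation in overall satisfaction, come from statistical driver analysis of survey data. A minimal sketch of the simplest form of such an analysis, ranking drivers by their correlation with overall satisfaction, might look like the following; all driver names and response data are invented for illustration.

```python
# Illustrative sketch of a simple "key driver" analysis: correlate each
# driver rating with overall satisfaction and rank drivers by the strength
# of the association. All response data below are invented.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical responses: ratings (1-5) for two drivers plus overall satisfaction.
responses = {
    "timeliness":     [5, 4, 2, 5, 3, 1],
    "staff attitude": [4, 4, 3, 5, 4, 2],
    "overall":        [5, 4, 2, 5, 4, 1],
}

overall = responses["overall"]
ranked = sorted(
    ((d, pearson(vals, overall)) for d, vals in responses.items() if d != "overall"),
    key=lambda pair: pair[1],
    reverse=True,
)
for driver, r in ranked:
    print(f"{driver}: r = {r:.2f}")
```

In practice such analyses are usually run as multiple regression over many respondents, which is how figures like "five drivers account for 67 per cent of variation" are derived; the sketch above shows only the underlying idea of ranking drivers by association with overall satisfaction.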
In Canada, the Institute for Citizen-Centred Service is the custodian of The Common
Measurement Tool (CMT), which is used by Canadian public service organisations to
measure customer satisfaction. A number of key drivers of service quality have been
identified through a national survey, which are said to account for the overwhelming majority
of variance in satisfaction amongst users of public services. While these have gone through
refinement over time, the key drivers of satisfaction have been relatively stable and currently
include49:
• timeliness
• ease of access
• outcome
Amongst this list of key drivers, timeliness was found to be the single most important driver
across all public services in Canada. As with the SERVQUAL/RATER instrument a set of
commonly worded questions have been developed which elicit information about satisfaction
relating to the key drivers of satisfaction. Because the CMT provides users with standard
questions that can be used by different organisations, it offers an opportunity to benchmark
performance against peers over time.
Other generic tools for examining customer satisfaction which are more specific to the social
services setting include the Client Satisfaction Questionnaire (CSQ-8), the Reid-Gundlach
Social Service Satisfaction Scale (R-GSSSS) and the Client Satisfaction Inventory (CSI).
CSQ-8 provides a standard set of eight questions originally developed to use in mental
health programs but is now applied in a variety of social service areas. R-GSSSS includes
34 items made up of three subscales and was designed for use in social work, however it
has had limited application and questionable validity. The CSI consists of 25 items, with a
short-form version (CSI-SF) of nine items, and is designed to be useful in a variety of
services by a range of clients50. The downside to these three instruments is that they must
be purchased, which may make their use financially restrictive.
Kapp and Vela (2004) developed the Parent Satisfaction with Foster Care Services Scale
(PSFCSS) to assess the satisfaction of parents who had a child removed from their care. As
part of this research the authors undertook psychometric testing to determine which of the
factors within the PSFCSS best predicted overall satisfaction. These include:
• the worker was working with them to get their child back
• the worker had clear expectations of them
• the worker prepared them for meetings
• the worker stood up for them in meetings
• the worker respected their cultural background
• the agency had realistic expectations of them
• willingness to recommend their agency to others
• willingness to recommend the worker to others.
Kapp and Vela claim that these predictors can be used to focus resources and craft quality
interventions that deliver greater levels of satisfaction for parents of children in foster care51.
Essex et al (1981) developed a satisfaction survey for consumers of mental health services,
which they argue could be applied to a range of mental health settings. The research
identified four factors which correlated with overall satisfaction.
The authors contend that it is important to isolate the key determinants of satisfaction as this
will result in shorter questionnaires that are less burdensome for survey respondents52.
A review of satisfaction surveys for people with disabilities conducted by the Productivity
Commission in 1998 found that satisfaction with services is multi-dimensional and best
measured using questionnaires that address different determinants. Nevertheless, the report
claimed that consumer satisfaction was likely to be primarily influenced by staff interaction
with a consumer.53 The importance of the client/therapist relationship to satisfaction has
also been found in satisfaction studies of mental health treatment54; in studies of family
preservation programs55; and research into the satisfaction of parents of children in foster
care56.
Research conducted into the development of satisfaction measurement processes for the
Supported Accommodation and Assistance Program (SAAP), for example, elicited a number
of key determinants of satisfaction. Through consultations and pilots it was found that four
main factors underpinned satisfaction and should be the focus of surveys with service users.
In another study, undertaken by Relationships Australia, research identified that the factors
impacting on overall satisfaction in the delivery of relationship counselling services differed
by gender. Research showed that women are more influenced by the nature of the
experience, whereas men are more influenced by the outcome57.
5.2. What does the literature on drivers or determinants of satisfaction tell us?
The literature presented above raises a number of key issues for the development of tools to
measure customer satisfaction. As can be seen by examining the lists of key drivers above,
despite the preoccupation with determining common sets of drivers or determinants of
satisfaction, there is clearly significant variation in the types of drivers impacting overall
satisfaction across different service settings.
There is a significant difference in the key drivers between private sector and public sector
services. Because public and private sector services differ, this can make customer
satisfaction models designed for the private sector less than optimal. Models aimed at
increasing consumption or maintaining loyalty are not necessarily relevant in a situation
where a customer has little choice, as is the case with many public sector goods and services. In
some instances, customer satisfaction may be optimal when the level of consumption of or
contact with public services is actually minimised58. Equally, there is a significant difference
between the various services delivered by publicly managed institutions and those delivered
by non-government organisations on behalf of government. In essence, what this shows is
that the number and the type of drivers of satisfaction relate to the particular service being
offered59.
While there have been some attempts to develop generic customer satisfaction survey
instruments specifically for the human services, these tools may have limited use for
organisations with limited resources to purchase the licences to use them. Furthermore,
there are obvious limitations in using generic tools, even if they have been developed
specifically for a social service context, given the wide variance in service settings within the
social services.
This has been noted by Hsieh (2012) with regard to the development of satisfaction survey
tools in the social services. Many of the studies of customer satisfaction in the social
services are not context specific, instead they use generic survey instruments, which are not
able to account for the unique nature of specific service settings. This, it is argued, leads to
the collection of overall satisfaction scores, which provide limited guidance on how to
improve services because they do not pinpoint the sources of satisfaction or
dissatisfaction.60
While the various dimensions noted above can provide a useful starting point for developing
surveys and questionnaires used to measure customer satisfaction, a level of adaptation is
required to ensure that customer satisfaction surveys are relevant for specific service
contexts. Essentially work must be undertaken to ascertain, from the point of view of service
customers, the dimensions of service delivery that most contribute to satisfaction.
Despite these constraints it is clear from the discussion above that the development of
processes to collect customer satisfaction data requires consideration of the specific context
and input from customers to ensure that methods used to collect satisfaction data are tuned
to the issues of importance for the specific customer group.
6. Methodological considerations
There are a number of tools that can be used to gauge customer satisfaction. These
range from informal conversations with customers during service activities and complaint
forms to formal written questionnaires, face-to-face and telephone interviews and focus
groups, amongst others. There are a number of issues to consider when choosing the most
appropriate methods for eliciting feedback about satisfaction from customers. These include
the timing of the administration of a survey, sampling bias, customer benefit, confidentiality,
customer expectation and experiences, social and cultural background, capacity to respond,
carer involvement and response bias.
These factors can affect either actual participation in customer satisfaction surveys or
influence the way that responses are given by participants. While this provides some
guidance to service operators in developing customer satisfaction processes and tools that
maximise participation and response quality, it is preferable to gain input from customers
about which satisfaction measurement methods work best for customers. Doing so can have
a positive impact on participation and ensure that responses more accurately reflect
customer sentiment.
6.1. Timing
The timing of the delivery of a customer satisfaction questionnaire or survey can influence
whether a customer chooses to provide feedback to an organisation.
Delivering a survey to customers at the point of contact can significantly reduce the
associated costs, as there is no need to pay for mail outs or to employ external consultants
to administer a survey on behalf of the organisation.
Some organisations choose to administer satisfaction surveys at the point of service delivery
at a set time to provide a sample of responses that can be used as a proxy for overall
satisfaction over a longer period. Anglicare Victoria, for example, administers a survey to all
people accessing services across a range of programs and program areas over a one-week
period in September. The survey is designed so that it can be administered at any point of a
service intervention, not just at the end. In doing so, Anglicare Victoria hoped to reduce the
burden on staff of administering the satisfaction survey65.
Unfortunately, there are drawbacks in administering surveys at the point of contact with the
customer. The delivery of a survey to customers at the point of contact may not be the most
customer friendly or appropriate method. This is particularly so if the customer has low
levels of literacy.
Administering a customer satisfaction survey at the point of contact may also be problematic
in certain situations or contexts. While it may be practical, for example, to ask customers to
participate in satisfaction surveys at the point of first contact, this may be inappropriate when
they have immediate crisis issues to deal with, such as when accessing accommodation
services. In such situations it is preferable not to attempt to gain satisfaction information or
consent at this time but to administer surveys at a later point in time using mail or telephone
survey techniques66.
6.2. Sampling bias
Another interconnected issue is the problem of sampling bias. Any kind of research, whether
it is measuring customer satisfaction or otherwise, can suffer from a bias in results because
the sample of survey respondents inadequately reflects the population being investigated67.
According to Harris and Poertner (1998), customer satisfaction data is plagued by low
response rates, which calls into question the representativeness of satisfaction results and
the ability of the results to be generalised to the rest of the population.
Baker (2007) has noted, with regard to child welfare clients, that sampling bias can occur as
a result of the timing of the administration of surveys. If a satisfaction survey is administered
on exit or using a point-of-time methodology this can over- or under-represent participants
based on the length of time in the system. This can be problematic because length of
participation may be correlated with satisfaction, such that customers who continue to be
engaged in a service are more likely to be satisfied with the quality of that service. This could
result in a bias towards higher levels of satisfaction in survey results, which may not reflect
the whole population68.
Sampling bias can also occur because of the method adopted to elicit responses from
customers. While mail and telephone surveys overcome the issue of sample bias from using
exit or point-of-time methods, because they can include customers that exit the service
prematurely, these can have response rates as low as 40 per cent for mail and 43 per cent
for telephone surveys69. This low response rate is problematic because evidence suggests
that those who choose not to respond may be more dissatisfied with the service they have
received than those who are willing to respond, biasing the outcome of any satisfaction
survey70,71. Both service providers and funders should be cognizant of the implications this
has for achieving a representative sample of customer satisfaction72.
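The mechanics of this non-response bias can be illustrated with a short calculation. The sketch below is not drawn from any of the studies cited; the population size, satisfaction split and group response rates are assumed figures, chosen only to show how differential non-response inflates measured satisfaction.

```python
# Illustrative sketch (not from the literature reviewed): how differential
# non-response can inflate measured satisfaction. All figures below are
# assumptions for the purpose of the example.

def measured_satisfaction(pop_satisfied, pop_dissatisfied,
                          resp_rate_satisfied, resp_rate_dissatisfied):
    """Share of *respondents* who are satisfied, given per-group response rates."""
    satisfied_responses = pop_satisfied * resp_rate_satisfied
    dissatisfied_responses = pop_dissatisfied * resp_rate_dissatisfied
    return satisfied_responses / (satisfied_responses + dissatisfied_responses)

# Hypothetical service population: 600 satisfied and 400 dissatisfied
# customers, i.e. true satisfaction of 60 per cent.
true_rate = 600 / 1000

# Assume dissatisfied customers are half as likely to return a mail survey.
observed = measured_satisfaction(600, 400, 0.40, 0.20)

print(f"true satisfaction:     {true_rate:.0%}")
print(f"observed satisfaction: {observed:.0%}")  # overstates the true rate
```

Under these assumed rates the survey would report 75 per cent satisfaction against a true rate of 60 per cent, which is the direction of bias the evidence above describes.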
Customers may choose not to participate in satisfaction surveys simply because it is not in
their interest. Participation requires time and effort, and often offers the respondent little
tangible reward.
6.4. Confidentiality
Confidentiality is a significant issue that can have important ramifications for participation.
Baker has argued that collecting feedback in sensitive areas such as child welfare, for
example, requires both confidentiality and efforts to convince survey participants that there
will be no way for their responses to be linked to them; otherwise they may be unwilling to
participate75.
Confidentiality concerns can make it difficult to gain the consent of some customers. It is
possible that customers willing to participate in satisfaction surveys may be more satisfied, in
part because those who are most dissatisfied are sceptical that their input will be kept
confidential or will indeed change anything76. This could skew the results of any
satisfaction survey and provide an overly positive assessment of the service.
As noted earlier, customer satisfaction is determined by measuring the gap between
expectations and perceptions of performance. This raises a number of important
considerations when undertaking customer satisfaction research.
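This gap model lends itself to a simple worked example. The sketch below follows the general logic of SERVQUAL-style gap scoring cited earlier in this paper; the service items, rating scale and scores are hypothetical and are not taken from the literature reviewed.

```python
# Minimal sketch of an expectation-versus-perception gap score, in the spirit
# of SERVQUAL-style instruments. Item names and ratings are hypothetical.

# Each item is rated twice on the same scale (here 1-7):
# what the customer expected, and what they perceived they received.
expectations = {"responsiveness": 6, "reliability": 7, "empathy": 5}
perceptions  = {"responsiveness": 4, "reliability": 6, "empathy": 5}

# Gap = perception - expectation; a negative gap signals dissatisfaction.
gaps = {item: perceptions[item] - expectations[item] for item in expectations}
overall_gap = sum(gaps.values()) / len(gaps)

for item, gap in gaps.items():
    print(f"{item:15s} gap = {gap:+d}")
print(f"overall gap = {overall_gap:+.2f}")
```

A per-item breakdown like this is what lets an organisation see not just that satisfaction is low, but on which dimension of the service expectations were not met.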
As Johnston (1995) has noted, a customer's overall feeling of satisfaction with a service
may be influenced by the customer's personal disposition on entering the system78,
and may therefore be largely beyond the influence of the service provider.
A service provider should be cognizant that it may not view the service in the same way the
customer does. This is important because the way the customer defines the service shapes
their expectations, their experience and, ultimately, their level of satisfaction. If a customer
needs something specific, for example, that the service simply cannot deliver, this will
ultimately lead to a lack of satisfaction. Equally, if a customer disagrees with the focus of an
intervention, for example child-centred versus family-focused, this may result in higher levels
of dissatisfaction79. It is important, therefore, that the 'customer defines the "service" in the
same way as the organisation'80.
With this in mind it is important to recognise what can be done to align expectations with
reality. While it is impossible to control customer expectations, an organisation can control
how it communicates about itself through staff and in promotional literature. This includes
communicating accurately the type and level of service provided by the service without
making any undue promises about what can be obtained or provided.
Another important consideration is the impact that a customer’s unique social or cultural
background has on the way that customer satisfaction is measured. These differences may
influence how satisfaction is measured, data is collected, results are interpreted and actions
are taken to improve service quality.
Both the age and socio-economic background of survey respondents can affect the way that
a person responds to survey questions. Research into patient satisfaction in health care has
shown, for example, that older patients are generally more satisfied with their hospital
experience than younger patients.82 Similarly patients from lower socio-economic
backgrounds have been shown to be more likely to be satisfied than wealthier patients.83
Another important consideration is the capacity of customers from culturally and linguistically
diverse (CALD) backgrounds to participate in satisfaction surveys and questionnaires that
are in English. It is beneficial to have customer satisfaction surveys administered in the
customer's preferred language.
Equally, though, consideration should be given to the literacy level and communication style
of the survey respondent. It may be inappropriate to administer a written questionnaire when
a survey respondent's literacy or numeracy levels are low85 or if oral communication is a
preferred means of communication. Consideration should also be given to the issue of
acquiescence bias, which can result in survey respondents only providing positive feedback.
As one report has noted, standard satisfaction surveys may not be useful when administered
to some CALD respondents because they may elicit positive feedback even though they
have had a negative experience of the service86. Each of these considerations will have
important implications for the choice of survey method, which will in turn impact on the cost
of collecting customer satisfaction data. Face-to-face interviews are also useful for
respondents with literacy and numeracy problems87.
Apart from language, there are other instances where the choice of survey methodology may
impact on the capacity of a customer to respond. Traditional survey methods using
numbered Likert scales may be inappropriate for people with different competency levels.
Survey methods need to be adapted to ensure that people with an intellectual disability, for
example, are able to participate in customer satisfaction processes. To ensure this, it is
important that survey processes and questions are developed based on feedback from
customers before they are administered. This can be done by holding focus groups and by
conducting pilots and trials to get advice and feedback from customers.
This might require the development of different approaches to asking questions, which are
better suited to the customer. As has been argued in a Productivity Commission report, it
may be better to measure satisfaction by using direct questions rather than satisfaction
ratings, for example 'Do you wish to move house?' or 'Do you wish to change your job?'88.
While satisfaction surveys administered in the disability area generally collect information
about carer satisfaction, it has been argued that it is not appropriate to use the responses of
family members and carers as a proxy or substitute for collecting satisfaction directly from
people with a disability89.
Similarly, in studies of family members and significant others who play a role in supporting
clients receiving mental health services, it was found that there are differences in the
satisfaction reported by clients and by those who support them90.
It has been well noted in the literature that satisfaction surveys tend to be biased towards
positive results. Surveys relying on self-reporting have a tendency to elicit positive
responses from survey respondents due to a reluctance to express negative opinions of
services or service providers. This is defined in the broader customer satisfaction literature
as social desirability or courtesy bias. Social desirability bias is particularly acute when
information is gathered at the site of the service or using face-to-face methods91.
In a study of customer satisfaction of a family planning clinic in Africa, the issue of courtesy
bias was overcome by designing a survey methodology that focused on areas for
improvement rather than levels of satisfaction. This was done by asking yes / no questions
about whether a service user was satisfied or unsatisfied with an aspect of the service and
then selecting those aspects where a high proportion of dissatisfied responses exceeded a
predetermined threshold for improvement92.
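The selection logic described in that study can be sketched as a short calculation. The aspect names, response counts and threshold below are hypothetical; only the method itself, flagging aspects whose proportion of dissatisfied responses exceeds a preset threshold, follows the approach described above.

```python
# Sketch of the improvement-focused approach described above: ask yes/no
# satisfaction questions per service aspect, then flag aspects where the
# proportion of dissatisfied responses exceeds a preset threshold.
# Aspect names, counts and the threshold are hypothetical.

responses = {
    # aspect: (satisfied_count, dissatisfied_count)
    "waiting time":   (55, 45),
    "staff courtesy": (92, 8),
    "privacy":        (70, 30),
}

THRESHOLD = 0.25  # flag aspects with more than 25 per cent dissatisfied

def flagged_for_improvement(responses, threshold):
    flagged = []
    for aspect, (sat, dissat) in responses.items():
        if dissat / (sat + dissat) > threshold:
            flagged.append(aspect)
    return flagged

print(flagged_for_improvement(responses, THRESHOLD))
# waiting time (45 per cent) and privacy (30 per cent) exceed the threshold
```

Framing the output as a shortlist of aspects to improve, rather than a headline satisfaction score, is precisely what makes this design less vulnerable to courtesy bias.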
Even something as simple as the wording used in a survey can have a significant impact
on the results of a customer satisfaction survey. A related issue is the tendency for
respondents to answer in the direction of the question. This acquiescence bias may skew
reported levels of satisfaction in the direction of the wording, such that negatively worded
questions induce negative responses and positively worded questions are more likely to
induce positive responses93.
The difficulty of getting honest responses from service users can also arise from a fear that
there may be repercussions if negative or critical responses are provided. Justice and McBee
(1978) have argued that people receiving treatment for mental illness may have a tendency
to express satisfaction with services for fear that these services may be withdrawn now or in
the future94. This is also the case in child safety, where clients may perceive extreme power
imbalances due to the removal of children, which make them reluctant to provide negative
feedback for fear of reprisal95. The same fear has been reported amongst clients receiving
crisis accommodation and support96. In each of these cases it is critical that
the people involved in collecting satisfaction surveys be viewed as impartial and create a
safe space for eliciting honest responses without fear of retribution97.
7. Conclusion
This paper has provided an overview and introduction to customer satisfaction measurement
through an examination of a range of literature, including academic peer reviewed studies
and organisational and project reports. It has shown that customer satisfaction
measurement, at its heart, involves assessment of the difference between a customer’s
expectation of a service and a customer’s experience of a service. But it has also shown that
customer satisfaction measurement can be a complex and involved process in which
organisations use customer satisfaction surveys as part of an ongoing service improvement
cycle.
1 HM Government 2007. Promoting Customer Satisfaction: Guidance on improving the customer experience in Public
Services. http://www.tns-bmrb.co.uk/uploads/files/iips-insight-customer-satisfaction-guidance.pdf
2 Buttle, F. 1996. ‘SERVQUAL: review, critique, research agenda’ in European Journal of Marketing. 30,1, pp. 8-32.
3 Clinton, A. and Wellington, T. A Theoretical Framework of Users’ Satisfaction/Dissatisfaction Theories and Models. 2nd
International Conference on Arts, Behavioral Sciences and Economics Issues Dec. 17-18, 2013 Pattaya (Thailand)
http://psrcentre.org/images/extraimages/1213003.pdf
4 Rapp, C. and Poertner, J. 1987. “Moving Clients Center Stage Through the Use of Client Outcomes” in Administration in
http://academy.extensiondlc.net/file.php/1/resources/LR-FamilyEngagement.pdf
7 Kapp, S. and Propp, J. ‘Client Satisfaction Methods: Inputs from Parents with Children in Foster Care’ in Child and
Services. http://www.tns-bmrb.co.uk/uploads/files/iips-insight-customer-satisfaction-guidance.pdf
9 HM Government 2007. How to measure customer satisfaction: A tool to improve the experience of customers.
http://www.ccas.min-financas.pt/documentacao/how-to-measure-customer-satisfaction
10 Australian Federation of Homelessness Organisations. 2003. Measurement of Client Satisfaction in the Supported
Services. http://www.tns-bmrb.co.uk/uploads/files/iips-insight-customer-satisfaction-guidance.pdf
12 King County 2007. Measuring Customer Satisfaction: Improving the experience of King County's customers
http://www.kingcounty.gov/~/media/CustomerService/files/1101CustomerSatisfactionGuide.ashx
13 Productivity Commission 2005. Review of patient satisfaction and experience surveys conducted for public hospitals in
Australia. A Research Paper for the Steering Committee for the Review of Government Service Provision
http://www.pc.gov.au/__data/assets/pdf_file/0016/62116/patientsatisfaction.pdf
14 Deeble Institute for Health Policy 2014. Can we improve the health system with performance reporting? Deeble Institute
http://www.communities.qld.gov.au/resources/funding/human-services-quality-framework/user-guide.pdf
16 Department of Communities 2011. Disability Service Users and Carers Satisfaction Survey 2011: Key Findings
http://www.communities.qld.gov.au/resources/disability/community-involvement/satisfaction-
survey/documents/consumer-satisfaction-survey-2011.pdf
17 Productivity Commission 2005. Review of patient satisfaction and experience surveys conducted for public hospitals in
Australia. A Research Paper for the Steering Committee for the Review of Government Service Provision
http://www.pc.gov.au/__data/assets/pdf_file/0016/62116/patientsatisfaction.pdf
18 LaSala, M. 1997. ‘Client Satisfaction: Consideration of correlates and response bias’ in Families in Society. 78, 1, pp. 54 -
64
19 Buttle, F. 1996. ‘SERVQUAL: review, critique, research agenda’ in European Journal of Marketing. 30, 1, pp. 8-32.
20 Department of Family and Community Services, Ageing, Disability and Home Care 2010. Measuring outcomes in
Services. http://www.tns-bmrb.co.uk/uploads/files/iips-insight-customer-satisfaction-guidance.pdf
22 Productivity Commission 2005. Review of patient satisfaction and experience surveys conducted for public hospitals in
Australia. A Research Paper for the Steering Committee for the Review of Government Service Provision
http://www.pc.gov.au/__data/assets/pdf_file/0016/62116/patientsatisfaction.pdf
23 McMurty, S. & Hudson, W. 2000. ‘The Client Satisfaction Inventory: Results of an Initial Validation Study’ in Research on
Services. http://www.tns-bmrb.co.uk/uploads/files/iips-insight-customer-satisfaction-guidance.pdf
25 HM Government 2007. Promoting Customer Satisfaction: Guidance on improving the customer experience in Public
Services. http://www.tns-bmrb.co.uk/uploads/files/iips-insight-customer-satisfaction-guidance.pdf
http://www.ccas.min-financas.pt/documentacao/how-to-measure-customer-satisfaction
28 Productivity Commission 2005. Review of patient satisfaction and experience surveys conducted for public hospitals in
Australia. A Research Paper for the Steering Committee for the Review of Government Service Provision
http://www.pc.gov.au/__data/assets/pdf_file/0016/62116/patientsatisfaction.pdf
29 Queensland Health 2013. Emergency Department Patient Experience Survey 2013.
http://www.health.qld.gov.au/psu/health-experience/docs/edpes-2013-report.pdf
30 Department of Health 2013. 2013 Statewide Emergency Department Patient Experience Survey Bulletin — 1st Edition
http://www.health.qld.gov.au/psu/health-experience/docs/edpes-bulletin1.pdf
31 Patient Opinion UK 2014. ‘About Patient Opinion’ https://www.patientopinion.org.uk/info/about Accessed online 18
June 2014; Patient Opinion Australia 2014. ‘About Patient Opinion’ https://www.patientopinion.org.au/info/about
Accessed online 18 June 2014
32 Victorian Government Department of Human Services 2005. Review of the 2003–04 Victorian surveys of consumer and
carer experience of public mental health services: Recommendations for future approaches.
http://www.health.vic.gov.au/mentalhealth/quality/consumer/review.pdf
33 Department of Communities, Child Safety and Disability Services 2014. ‘Measuring Satisfaction’
http://www.communities.qld.gov.au/disability/community-involvement/measuring-satisfaction
34 Mercy Community Services 2011. Mercy Community Services - Client Satisfaction Survey Summary Report 2011.
http://mercyservices.org.au/download/corporate%20documents/All%20MCS%20Client%20Satisfaction%20report%20su
mmary%202011.pdf
35 Anglicare Victoria 2013. “They do it with their heart” Satisfaction September 2012.
http://www.anglicarevic.org.au/index.php?action=filemanager&doc_form_name=download&folder_id=806&doc_id=13
629
36 BaptCare 2012. Family and Community Services Client Satisfaction Survey Disability Gateway Services: Summary Report –
Australia. A Research Paper for the Steering Committee for the Review of Government Service Provision
http://www.pc.gov.au/__data/assets/pdf_file/0016/62116/patientsatisfaction.pdf
41 Productivity Commission 2005. Review of patient satisfaction and experience surveys conducted for public hospitals in
Australia. A Research Paper for the Steering Committee for the Review of Government Service Provision
http://www.pc.gov.au/__data/assets/pdf_file/0016/62116/patientsatisfaction.pdf
42 Department of Family and Community Services, Ageing, Disability and Home Care 2010. Measuring outcomes in
http://www.ipsos.com/public-affairs/sites/www.ipsos.com.public-
affairs/files/documents/measuring_and_understanding_customer_satisfaction.pdf
45 Johnston, R. 1995. ‘The determinants of service quality: satisfiers and dissatisfiers’ in International Journal of Service
http://www.ipsos.com/public-affairs/sites/www.ipsos.com.public-
affairs/files/documents/measuring_and_understanding_customer_satisfaction.pdf
49 Institute for Citizen-Centred Service 2014. ‘The Common Measurement Tool’ http://www.iccs-isac.org/en/cmt/ Accessed
20 April 2014.
50 McMurty, S. & Hudson, W. 2000. ‘The Client Satisfaction Inventory: Results of an Initial Validation Study’ in Research on
http://www.pc.gov.au/gsp/reports/consultancy/?a=62345
54 Baronet, A-M. and Gerber, G. 1997. ‘Client Satisfaction in a Community Crises Center’ in Education and Program
http://familyservices.squarespace.com/storage/2012-conference/presentation-
slides/Predictors%20of%20client%20satisfaction.pdf
58 MORI Social Research Institute 2002. Public Service Reform: Measuring & Understanding customer satisfaction.
http://www.ipsos.com/public-affairs/sites/www.ipsos.com.public-
affairs/files/documents/measuring_and_understanding_customer_satisfaction.pdf
59 Buttle, F. 1996. ‘SERVQUAL: review, critique, research agenda’ in European Journal of Marketing. 30, 1, pp. 8-32.
60 Hsieh, C-M. 2012. ‘Incorporating Perceived Importance of Service Elements into Client Satisfaction Measures’ in Research
Methodology in Collaboration with Young Adults with Down Syndrome’ in Australian Social Work. 63, 1, pp. 35 – 50;
Heyer, K. 2007. ‘A disability lens on Sociological Research: reading Rights of Inclusion from a disability studies
perspective’ in Law and Social Inquiry. 32, 1, pp. 261 – 293.
63 Nind, M. 2011. ‘Participatory data analysis: a step too far?’ in Qualitative Research. 11, 4, pp. 349 – 363.
64 Australian Federation of Homelessness Organisations. 2003. Measurement of Client Satisfaction in the Supported
http://www.anglicarevic.org.au/index.php?action=filemanager&doc_form_name=download&folder_id=806&doc_id=13
629
66 Australian Federation of Homelessness Organisations. 2003. Measurement of Client Satisfaction in the Supported
70 LaSala, M. 1997. ‘Client Satisfaction: Consideration of correlates and response bias’ in Families in Society. 78, 1, pp. 54 –
64.
71 Baronet, A-M. and Gerber, G. 1997. ‘Client Satisfaction in a Community Crises Center’ in Education and Program
73 Anglicare Victoria 2012. “They do it with their heart” Satisfaction September 2012.
http://www.anglicarevic.org.au/index.php?action=filemanager&doc_form_name=download&folder_id=806&doc_id=13
629
74 Kapp, S. and Propp, J. ‘Client Satisfaction Methods: Inputs from Parents with Children in Foster Care’ in Child and
Services. http://www.tns-bmrb.co.uk/uploads/files/iips-insight-customer-satisfaction-guidance.pdf
82 Productivity Commission 2005. Review of patient satisfaction and experience surveys conducted for public hospitals in
Australia. A Research Paper for the Steering Committee for the Review of Government Service Provision
http://www.pc.gov.au/__data/assets/pdf_file/0016/62116/patientsatisfaction.pdf
83 Productivity Commission 2005. Review of patient satisfaction and experience surveys conducted for public hospitals in
Australia. A Research Paper for the Steering Committee for the Review of Government Service Provision
http://www.pc.gov.au/__data/assets/pdf_file/0016/62116/patientsatisfaction.pdf
84 Harzing, A-W 2006. ‘Response styles in cross-national survey research: a 26-country study’ in International Journal of
http://www.pc.gov.au/gsp/reports/consultancy/?a=62345
89 Productivity Commission 1998. Review of Approaches to Satisfaction Surveys of Clients of Disability Services
http://www.pc.gov.au/gsp/reports/consultancy/?a=62345
90 Baronet, A-M. and Gerber, G. 1997. ‘Client Satisfaction in a Community Crises Center’ in Education and Program