Manik Madhikermi (a), Sylvain Kubler (b,*), Jérémy Robert (b), Andrea Buda (a), Kary Främling (a)

(a) Aalto University, School of Science, P.O. Box 15400, FI-00076 Aalto, Finland
(b) University of Luxembourg, Interdisciplinary Centre for Security, Reliability & Trust
Abstract

Today's largest and fastest-growing companies' assets are no longer physical, but rather digital (software, algorithms...). This is all the more true in manufacturing, and particularly in the maintenance sector, where the quality of enterprise maintenance services is closely linked to the quality of maintenance data reporting procedures. If the quality of the reported data is too low, it can result in wrong decision-making and loss of money. Furthermore, various maintenance experts are involved in, and directly concerned about, the quality of enterprises' daily maintenance data reporting (e.g., maintenance planners, plant managers...), each one having specific needs and responsibilities. To address this Multi-Criteria Decision Making (MCDM) problem, and since data quality is hardly considered in existing expert maintenance systems, this paper develops a Maintenance Reporting Quality Assessment (MRQA) dashboard that enables any company stakeholder to easily – and in real-time – assess/rank company branch offices in terms of maintenance reporting quality. From a theoretical standpoint, AHP is used to integrate various data quality dimensions as well as expert preferences. A use case describes how the proposed MRQA dashboard is being used by a Finnish multinational equipment manufacturer to assess and enhance reporting practices in a specific branch office or a group of branch offices.

Keywords: Data Quality, Information Quality, Multi-Criteria Decision Making, Analytic Hierarchy Process, Decision Support Systems, Maintenance.
(Fragment of the literature table: MCDM applications per maintenance management level; citations are grouped by application area.)

Tools and companies selection: (Bertolini et al., 2004; García-Cascales et al., 2009; Ha et al., 2008; Triantaphyllou et al., 1997); (Durán, 2011); (Ha et al., 2008); (Gonçalves et al., 2014; Gomez et al., 2011); (Kuo et al., 2012; de et al., 2015a)

Cost estimation: (Chou, 2009); (Chen et al., 2005)

Tactical Level

Maintenance prioritization: (Farhan & Fwa, 2009; Moazami et al., 2011; Taghipour et al., 2011); (Ouma et al., 2015); (Wakchaure & Jha, 2011); (Liu et al., 2012; Cafiso et al., 2002; Hankach et al., 2011; Trojan & Morais, 2012b); (Monte & de Almeida-Filho, 2016; de et al., 2015b)

Resource planning: (Azadeh et al., 2013); (Cavalcante et al., 2010; Almeida et al., 2013); (Garmabaki et al., 2016; de Almeida, 2001; Liu & Frangopol, 2006)

Maintenance scheduling: (Coulter et al., 2006; Eslami et al., 2014); (Van et al., 2013); (Certa et al., 2013); (Almeida, A. T., 2012)

Operational Level
Believability (CB):
- Work Log Conflict (CB2): work description conflict among different form fields related to a same report. Indicator: I_var^CB2
- Technician Log Variation (CB3): technician log variation among a set of reports related to a same maintenance work order. Indicator: I_var^CB3

Completeness (CC):
- Asset Location reported (CC1): asset location (in the product) where maintenance was performed. Indicator: I_fill^CC1
- Description reported (CC2): description of the work to be done in a particular maintenance work. Indicator: I_fill^CC2
- Start & Finish Date reported (CC3): actual start and finish dates and times of the work completed. Indicator: I_fill^CC3
- Target Start Date reported (CC4): targeted start date of the maintenance work. Indicator: I_fill^CC4
- Target Finish Date reported (CC5): targeted finish date of the maintenance work. Indicator: I_fill^CC5
- DLC Code reported (CC6): actual location of the defect within the product (DLC standing for "Defect Location Code"). Indicator: I_fill^CC6
- Schedule Start Date reported (CC7): scheduled start date of the maintenance work. Indicator: I_fill^CC7
- Schedule Finish Date reported (CC8): scheduled finish date of the maintenance work. Indicator: I_fill^CC8

Timeliness (CT): average delay of reporting on an individual site. Indicator: I_avg^CT
Examples (Site 1 example / criterion / Site z example):
- CT – Average Delay of Reporting: report 1A was made 1 h after the task, while report 1D was made with a delay of 3 weeks / both reports zA and zD have been made with a delay inferior to 2 h.
- ...
- CC1 – Asset Location Reported: field "Asset Location" filled out in report 1A as well as in report 1D / field "Asset Location" filled out in report zA but not in report zD...
- CB2 – Work Log Conflict: high number of conflicts/variations in the set of reports available on Site 1 / conflicts/variations in the content of the reports rarely occur.
- CB1 – Length of Work Description: one word ("Done") is too short to properly describe the maintenance operation / the description seems to be long enough in reports zA & zD.
are no better or worse techniques, but some techniques are better suited to particular decision-making problems than others. For example, AHP only deals with linear preferences and not with contextual preferences, where the values of one or several criteria may affect the importance or utility of other criteria. In this study, AHP is used (and combined with TOPSIS) for two reasons: i) we only deal with linear preferences, and ii) AHP provides a powerful, impartial, logical, and easy-to-use grading system, thus reducing personal biases and allowing for the comparison of dissimilar alternatives. Those characteristics are probably the main reasons for its success. According to a recent survey on MCDM techniques (Mardani et al., 2015), AHP is the second most used technique (applied in 16% of the reviewed literature¹) after Hybrid MCDM (19.89%).

4. AHP-based MRQA framework

The hierarchical structure defined using AHP consists of four levels, as depicted in Figure 3, namely:

• Level 1: the overall goal of the study, i.e. to rank the different OEM company sites in terms of maintenance reporting quality;

• Levels 2 and 3: the set of data quality dimensions (criteria) and sub-criteria defined in Table 2;

• Level 4: the OEM company sites representing the alternatives.

Given this hierarchy, AHP performs the following computation steps to identify the final ranking of the alternatives with respect to the overall goal:
[Figure 3 (excerpt): AHP hierarchy. Level 3: CB1, CB2, CB3, CC1, CC2, ..., CC8, CT; Level 4: Site 1, Site 2, Site 3, Site 4, ..., Site 54.]
requires n(n−1)/2 pairwise comparisons, where n is the number of elements (diagonal elements being equal to "1" and the other elements being the reciprocals of the earlier comparisons);

2. Perform calculations to find the maximum eigenvalue, consistency index (CI), consistency ratio (CR), and normalized values;

3. If the computed eigenvalue, CI and CR are satisfactory, then the decision/ranking is made based on the normalized values.

These three stages are detailed in the following sections. To facilitate understanding, a scenario is considered in the following, whose parts are preceded by the symbol "➫".

4.1. Pairwise comparison based preference measurement

According to Blumenthal (1977), two types of judgment exist, namely:

i) "Comparative judgment, which is the identification of some relations between two stimuli both present to the observer";

ii) "Absolute judgment, which involves the relations between a single stimuli and some information held in short term memory about some former comparison stimuli, or about some previously experienced measurement scale using which the observer rates the single stimulus."

In a comparative/relative measurement, each alternative is compared with many other alternatives, which is why this is also referred to in the AHP literature as "pairwise comparisons as ratio measurement" (Mumpower et al., 2012). In an absolute measurement, each alternative is compared with an ideal alternative the expert knows of or can imagine, which is why this is referred to as "pairwise comparison based preference measurement". This section details the "pairwise comparison based preference measurement" approach, which is used at levels 2 and 3 of the AHP structure (cf. Figure 3), while section 4.2 details the "pairwise comparisons as ratios" approach, which is used at level 4.

Regarding pairwise comparison based preference measurement, decision makers have to evaluate the importance of one criterion (or sub-criterion) with respect to the others. To this end, OEM stakeholders perform pairwise comparisons among the identified criteria as formalized in Eq. 1, with m the number of criteria to be compared (e.g., at level 2: m = |{CB, CC, CT}| = 3). The expert evaluation is carried out based on the 1- to 9-point Saaty scale {1, 3, 5, 7, 9}, where w_ij = 1 means that Ci and Cj are of equal importance and w_ij = 9 means that Ci is strongly favored over Cj. Note that all variables used in this paper are summarized in Table 3.

          C1   ...  Cm
    C1  [ w11  ...  w1m ]
P = ..  [ ..   ..   ..  ]        (1)
    Cm  [ wm1  ...  wmm ]

The computation of the normalized eigenvector of P then makes it possible to turn qualitative data into crisp ratios. Although several approaches exist in the literature for normalized eigenvector computation, the Simple Additive Weighting (SAW) method (Hwang & Yoon, 1981) is used in this study, namely:

W_Ci = ( Σ_{j=1..m} w_ij ) / ( Σ_{k=1..m} Σ_{j=1..m} w_kj ),   with w_ji = 1 if i = j, and w_ji = 1/w_ij if i ≠ j        (2)

W_C = [W_C1, .., W_Ci, .., W_Cm]

P is characterized as consistent if, and only if, Eq. 3 is respected. However, it is not that simple to fulfill this prerequisite when dealing with real expert preferences, or when the number of criteria increases. Saaty (1980) proved that for a consistent reciprocal matrix, the largest eigenvalue is equal to the size of the comparison matrix (i.e., λmax = m) and, accordingly, introduced CI as the deviation or degree of consistency².

² RI corresponds to the consistency index of a pairwise matrix generated randomly.
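The SAW weight derivation of Eq. 2 is straightforward to implement. The sketch below is illustrative (not the authors' code); the input matrix reuses the level-3 Believability comparisons shown later in Eq. 6, whose published weights (0.61, 0.29, 0.10) it reproduces:

```python
def saw_weights(P):
    """Derive weights from a pairwise comparison matrix P using
    Simple Additive Weighting (Eq. 2): each weight is the row sum
    of P divided by the grand total of all matrix entries."""
    total = sum(sum(row) for row in P)
    return [sum(row) / total for row in P]

# Pairwise comparisons for C_B1, C_B2, C_B3 (values from Eq. 6).
P = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]

weights = saw_weights(P)
print([round(w, 2) for w in weights])  # → [0.61, 0.29, 0.1]
```

Note that the lower triangle holds the reciprocals of the upper triangle, as required for a reciprocal comparison matrix.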
Table 3: Variable definitions

- Cx: abbreviation for criterion x, with x = {1, 2, .., m}. Three criteria are defined at level 2 of the hierarchy structure, namely CB, CC, CT (cf. Table 2).
- Cxh: abbreviation for a sub-criterion of criterion x, with h = {1, 2, .., y}. In this study, h = {1..3} for x = B (i.e., three sub-criteria of CB), h = {1..8} for x = C (i.e., eight sub-criteria of CC) and h = ∅ for x = T (i.e., CT does not have sub-criteria).
- P: abbreviation for "Pairwise Comparison matrix", whether at level 2, 3 or 4 of the AHP structure.
- w_ij: crisp value of a pairwise comparison matrix located at row i, column j of P.
- Al: represents an alternative l in the AHP structure, with l = {1, 2, .., z}. In this study, l spans the set of OEM sites to be assessed/ranked in terms of maintenance reporting quality.
- W_Cx, W_Cxh: represents the eigenvalue of criterion Cx or sub-criterion Cxh (the eigenvector results from P's weight derivation process). In practice, it indicates the importance of one (sub-)criterion over the others.
- I_φ^Cxh(Al): represents a digital indicator used for computing pairwise comparisons as ratios (i.e., measurable elements). Three indicators are defined, namely φ = {fill, avg, var} (see Eqs. 10, 14 and 16).
- W_Cxh^Al: represents the eigenvalue of alternative Al with respect to sub-criterion Cxh. In practice, it indicates how good (or bad) the data quality is, regarding alternative/site l, with respect to Cxh.
- Rep_Total(Al): function returning the total number of reports available on Site l.
- Size_Delay(r, Al): function returning either (i) the reporting delay value or (ii) the description length value of a given report (denoted by r) available on Site l.
- Rep_filled(Al): function returning the number of reports (on Site l) for which a given "form field" has been filled out.
- Rep_var(Al): function returning the number of reports/work orders (on Site l) containing content variations/conflicts.
CI = (λmax − m) / (m − 1),   CR = CI / RI        (4)

➫ In this scenario, pairwise comparisons are filled out by the OEM's executive officer. Eq. 5 shows the officer's preference specifications regarding criteria at Level 2 of the AHP structure. The computed normalized eigenvector shows that the officer judges all criteria (at this level) of equal importance. Eq. 6 shows the pairwise comparisons carried out at Level 3 of the AHP structure, with regard to sub-criteria CBx | x = {1, 2, 3} (see Eq. 7 for the details of the W_CB1 computation). The eigenvector (cf. Eq. 6) emphasizes that the officer judges "Length of Work Description" (i.e., CB1) slightly more important than "Work Log Conflict" (i.e., CB2) for his/her own task, and highly more important than "Technician Log Variation" (i.e., CB3).

      CB  CC  CT
CB  [  1   1   1 ]        W_CB = 0.33
CC  [  1   1   1 ]   ➠   W_CC = 0.33        (5)
CT  [  1   1   1 ]        W_CT = 0.33
CR = 0

       CB1  CB2  CB3
CB1  [   1    3    5 ]        W_CB1 = 0.61
CB2  [ 1/3    1    3 ]   ➠   W_CB2 = 0.29        (6)
CB3  [ 1/5  1/3    1 ]        W_CB3 = 0.10
CR = 0.040

Similarly, the OEM officer carries out pairwise comparisons between sub-criteria CCx | x = {1, 2, .., 8}, as detailed in Eq. 8. According to the resulting eigenvector, CC1 is deemed the most important sub-criterion (W_CC1), followed by CC3 and CC2 respectively. Since CT does not have any sub-criterion, no pairwise comparison is required.

4.2. Pairwise comparisons as ratio measurement

As previously mentioned, pairwise comparisons as ratio measurement is used at level 4 of the AHP structure in order to compare alternatives with respect to each criterion, based upon measurable/supervised system parameters, e.g. how many times the field "DLC Code" (CC6) has been reported on Site l compared with the other sites (i.e., how many times such a form field has been left empty by maintenance operators). Eq. 9 gives insight into the pairwise comparisons as ratio matrix of the set of alternatives/sites Al with respect to the monitored system parameter Cxh. The ratio value is denoted by I_φ^Cxh(Al), as described in Table 3. The normalized eigenvector values of the pairwise comparisons as ratio matrix with respect to criterion Cxh are denoted by W_Cxh^Al in Eq. 9.
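The consistency check of Eq. 4 can be sketched as follows (illustrative Python, not the authors' implementation). λmax is estimated with Saaty's usual approximation, the mean of (Pw)_i / w_i, and the RI table holds Saaty's published random indices. The resulting CR for the Eq. 6 matrix comes out near, but not exactly at, the paper's 0.040, since CR depends on the weight-derivation method:

```python
def saw_weights(P):
    """SAW weight derivation (Eq. 2): row sum over grand total."""
    total = sum(sum(row) for row in P)
    return [sum(row) / total for row in P]

def consistency_ratio(P):
    """Compute CI and CR (Eq. 4). lambda_max is estimated with
    Saaty's approximation: the average of (P w)_i / w_i."""
    m = len(P)
    w = saw_weights(P)
    Pw = [sum(P[i][j] * w[j] for j in range(m)) for i in range(m)]
    lam_max = sum(Pw[i] / w[i] for i in range(m)) / m
    ci = (lam_max - m) / (m - 1)
    # Saaty's random indices RI for matrix sizes 1..8.
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90,
          5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}[m]
    return ci, (ci / ri if ri else 0.0)

P = [[1, 3, 5], [1 / 3, 1, 3], [1 / 5, 1 / 3, 1]]  # Eq. 6
ci, cr = consistency_ratio(P)
print(cr < 0.1)  # → True (matrix is acceptably consistent)
```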
       CC1  CC2  CC3  CC4  CC5  CC6  CC7  CC8
CC1  [   1    3    1    3    7    3    9    3 ]        W_CC1 = 0.266
CC2  [ 1/3    1  1/3    3    5    3    5    3 ]        W_CC2 = 0.167
CC3  [   1    3    1    3    5    3    5    3 ]        W_CC3 = 0.242
CC4  [ 1/3  1/3  1/3    1    3    3    5    1 ]   ➠   W_CC4 = 0.099        (8)
CC5  [ 1/7  1/5  1/5  1/3    1    1    3    3 ]        W_CC5 = 0.065
CC6  [ 1/3  1/3  1/3  1/3    1    1    5  1/3 ]        W_CC6 = 0.058
CC7  [ 1/9  1/5  1/5  1/5  1/3  1/5    1  1/5 ]        W_CC7 = 0.023
CC8  [ 1/3  1/3  1/3    1  1/3    3    5    1 ]        W_CC8 = 0.080
CR = 0.096
       A1                        A2                        ...  Az
A1  [  1                         I_φ^Cxh(A1)/I_φ^Cxh(A2)   ...  I_φ^Cxh(A1)/I_φ^Cxh(Az) ]        W_Cxh^A1
A2  [  I_φ^Cxh(A2)/I_φ^Cxh(A1)   1                         ...  I_φ^Cxh(A2)/I_φ^Cxh(Az) ]   ➠   W_Cxh^A2        (9)
..  [  ..                        ..                        ..   ..                      ]        ..
Az  [  I_φ^Cxh(Az)/I_φ^Cxh(A1)   I_φ^Cxh(Az)/I_φ^Cxh(A2)   ...  1                       ]        W_Cxh^Az

I_fill^CC6(A1) = 45/76 = 59%        (11)

I_fill^CC6(A2) = 39/88 = 44%        (12)

        A1     A2    ...  A54
A1   [   1   59/44   ...  0.15 ]        W_CC6^A1 = 0.187
A2   [ 44/59    1    ...  0.67 ]   ➠   W_CC6^A2 = 0.002        (13)
..   [  ..     ..    ..    ..  ]        ..
A54  [ 6.64   1.50   ...    1  ]        W_CC6^A54 = 3E-06
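Because every entry of the Eq. 9 matrix is an exact ratio I(Ai)/I(Aj), the matrix is perfectly consistent and its normalized principal eigenvector reduces to the indicator values normalized to sum to one. A minimal two-site sketch using the 59%/44% values of Eqs. 11 and 12 (illustrative only; Eq. 13 spans all 54 sites, so the weights there differ):

```python
def ratio_matrix(indicators):
    """Build the pairwise-comparisons-as-ratios matrix of Eq. 9:
    entry (i, j) is I(A_i) / I(A_j)."""
    return [[vi / vj for vj in indicators] for vi in indicators]

def site_weights(indicators):
    """For an exactly consistent ratio matrix, the normalized principal
    eigenvector is the indicator vector normalized to sum to 1."""
    total = sum(indicators)
    return [v / total for v in indicators]

fill = [0.59, 0.44]          # I_fill^CC6 for Sites 1 and 2 (Eqs. 11-12)
M = ratio_matrix(fill)
print(round(M[0][1], 2))     # → 1.34 (the 59/44 entry, as in Eq. 13)
print([round(w, 2) for w in site_weights(fill)])  # → [0.57, 0.43]
```

This shortcut only holds when the ratios come from one shared indicator; expert-elicited matrices (Eqs. 5, 6, 8) still need a weight-derivation step such as SAW.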
Three digital indicators I_φ^Cxh(Al) are defined (i.e., φ = {fill, avg, var}), which are described below. Table 2 highlights which indicator is used with regard to each criterion (see the column named "Type"):

• I_fill^Cxh(Al) (Filled Indicator – Eq. 10): used to calculate the proportion of reports where a given "field" was filled out on Site l; Rep_filled(Al) returns the number of reports that have been filled out, and Rep_Total(Al) returns the total number of reports available on Site l:

I_fill^Cxh(Al) = Rep_filled(Al) / Rep_Total(Al)        (10)

➫ Let us consider pairwise comparisons as ratio measurements between Sites 1 and 2, with respect to CC6. On Site 1, 76 maintenance reports have been carried out and 45 of them contain the DLC code (meaning that 59% of the available reports contain the requested information, see Eq. 11), while on Site 2 only 44% of the available reports contain this information (see Eq. 12). The resulting pairwise comparisons as ratio matrix with respect to CC6 is given in Eq. 13, in which the above-computed I_fill^CC6(A1) and I_fill^CC6(A2) are considered for the pairwise comparison as ratio measurement between Sites 1 and 2 (see row 1/column 2 of the matrix, and vice-versa). The resulting eigenvector indicates how good (or bad) the reporting quality – with respect to CC6 in this case – is regarding each site.

Pairwise comparisons as ratios can lead to computational issues when dividing I_φ^Cxh(Ai) by I_φ^Cxh(Aj), since the denominator may be null. For example, considering the above scenario, if I_fill^CC6(A2) = 0 (meaning that the DLC Code was never reported by any operator on Site 2), then I_fill^CC6(A1)/I_fill^CC6(A2) = 59/0, which prevents the division from being performed. To bypass this problem, a penalty score θ is assigned to the corresponding site (i.e., Aj) with respect to criterion Cxh, i.e. W_Cxh^Aj = θ. However, a more in-depth study must be conducted to identify what penalty score should be assigned, how it affects the overall results, and so on. This study is presented in Appendix B to avoid overloading the paper.

• I_avg^Cxh(Al) (Average Indicator – Eq. 14): used to calculate the average delay of maintenance reporting per site (i.e., regarding CT), or the average length of work descriptions per site (i.e., regarding CB1). Mathematically, I_avg^Cxh(Al) is computed based on Eq. 14, where Size_Delay(r, Al) is either (i) the reporting delay value or (ii) the description length value of a given report (denoted by r) available on Site l.

I_avg^Cxh(Al) = ( Σ_{r=1..Rep_Total(Al)} Size_Delay(r, Al) ) / Rep_Total(Al)        (14)
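The fill and average indicators of Eqs. 10 and 14 can be sketched over a toy list of report records (hypothetical data and field names; the variation indicator I_var of Eq. 16 is omitted because its definition falls outside this excerpt):

```python
# Toy report records for one site (hypothetical data and field names).
reports = [
    {"dlc_code": "A12", "delay_h": 1.0},
    {"dlc_code": None,  "delay_h": 3.5},
    {"dlc_code": "B07", "delay_h": 2.0},
    {"dlc_code": "C33", "delay_h": 0.5},
]

def i_fill(reports, field):
    """Eq. 10: proportion of reports where `field` was filled out
    (Rep_filled / Rep_Total)."""
    filled = sum(1 for r in reports if r.get(field) is not None)
    return filled / len(reports)

def i_avg(reports, field):
    """Eq. 14: average of Size_Delay over all reports of the site
    (e.g., reporting delay or description length)."""
    return sum(r[field] for r in reports) / len(reports)

print(i_fill(reports, "dlc_code"))  # → 0.75
print(i_avg(reports, "delay_h"))    # → 1.75
```

These per-site values are exactly what feeds the ratio matrix of Eq. 9.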
[Figure (excerpt): instantiation of the AHP hierarchy for the scenario. Level 1: "Reporting Quality Assessment and Ranking of OEM Sites" (1.00); Level 2: W_CB = W_CC = W_CT = 0.33 (see Eq. 5); Level 4: site weights per sub-criterion (see Eq. 13), e.g. Site 2: W_CC6^A2 = 0.002.]

[Figure (excerpt): reporting architecture. Maintenance operators on each site fill out work-order reports (Report ID, Asset Location, Description, Actual End-Date, Target Start Date, Scheduled Start Date, Scheduled End Date, task checklists, ...), which site managers send over the Internet to the head office's (OEM) reporting service, alongside the other sites.]
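Putting the pieces together, a site's global score can be obtained by propagating the weights down the hierarchy, i.e. summing W_Cx · W_Cxh · W_Cxh^Al over all (sub-)criteria. The sketch below uses made-up level-4 weights for two sites and omits the Completeness sub-tree for brevity; note also that the paper combines AHP with TOPSIS, so the actual ranking procedure may differ from this purely additive aggregation:

```python
# Level-2 dimension weights (Eq. 5: all judged equally important).
W_dim = {"CB": 0.33, "CC": 0.33, "CT": 0.33}

# Level-3 sub-criterion weights per dimension (CT has no sub-criteria;
# the CC sub-tree is omitted here for brevity).
W_sub = {
    "CB": {"CB1": 0.61, "CB2": 0.29, "CB3": 0.10},  # Eq. 6
    "CT": {"CT": 1.0},
}

# Level-4 site weights per sub-criterion (made-up, normalized values).
W_site = {
    "CB1": {"Site 1": 0.7, "Site 2": 0.3},
    "CB2": {"Site 1": 0.4, "Site 2": 0.6},
    "CB3": {"Site 1": 0.5, "Site 2": 0.5},
    "CT":  {"Site 1": 0.2, "Site 2": 0.8},
}

def global_score(site):
    """Weighted sum over the hierarchy: for each dimension x and each
    of its sub-criteria xh, add W_Cx * W_Cxh * W_Cxh^site."""
    score = 0.0
    for dim, w_dim in W_dim.items():
        for sub, w_sub in W_sub.get(dim, {}).items():
            score += w_dim * w_sub * W_site[sub][site]
    return score

ranking = sorted(["Site 1", "Site 2"], key=global_score, reverse=True)
print(ranking)  # → ['Site 2', 'Site 1']
```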
• adjust the assessment period, e.g., to assess/compare sites over the last few weeks, months or years (see the "Assessment period" dashboard element);

• modify his/her preferences related to the criteria importance, e.g. if he/she wants to give – at a specific point in time – further importance to one or more dimensions (e.g., Completeness over Believability) or sub-dimensions (e.g., CC4 over CC1). This is discussed further in this section, but the reader can already refer to Figure 10 for an overview of the dashboard functionality.

Two distinct scenarios are presented in sections 5.1 and 5.2 respectively. The first provides insight into the reporting quality assessment results when the criteria are considered roughly equivalent in importance, while the second highlights how the MRQA dashboard can be used for a specific purpose, namely to set up a cost-reduction action plan.

5.1. Scenario 1: Equivalence between criteria

At the operational level, the head officer wants to have an overview of the maintenance reporting quality regarding all sites, without prioritizing any quality dimension. To this end, the officer performs pairwise comparisons by specifying that all criteria are equal in importance (as carried out in Eq. 5), and similarly for the sub-criteria. Figure 8(a) gives insight – in the form of a histogram – into the quality assessment results, where the x-axis refers to the 54 sites and the y-axis to the quality score obtained after applying AHP. It can be observed that three sites stand out (having the highest quality scores), namely sites 46, 12 and 18 respectively.

The officer wants to further investigate the reasons behind the low quality scores of Sites 6 and 54 (i.e., the sites having the poorest quality). To this end, the officer selects these two sites in the "Disintegrated Quality View" dashboard element in Figure 7, thus gaining deeper insight into the site-level quality with respect to each level 2 quality dimension. It can be observed that Site 6 has a particularly poor ranking regarding both the "Timeliness" and "Completeness" dimensions, while Site 54 has a poor ranking regarding "Timeliness" and "Believability". The officer can even go a step further in the analysis in order to understand the reasons behind the low quality score of a site regarding one of these dimensions. For example, in the dashboard's screenshot (cf. "Disintegrated quality view (level 3)" in Figure 7), the officer has selected the "Completeness" dimension and can visualize the percentage of form fields (which have been turned into sub-criteria CC1 to CC8) that have or have not been reported with respect to the total number of reports on the selected sites (i.e., on Sites 6 and 54). It can be observed that maintenance operators on Site 6 report information less often than on Site 54, which is the main reason why Site 6 has a lower quality rank than Site 54, as pointed out above. Nonetheless, an interesting point in this graph is that Site 6 reports more CC6-related information (i.e., the "DLC Code"), namely 55% against 18% for Site 54.

In summary, this first scenario offered an overview of the different MRQA dashboard functionalities, and how the associated views can be used as decision
[Figure 7: Screenshots of the "Maintenance Work Order System" & the "MRQA dashboard". (a) Online form used/filled out by maintenance operators on each OEM site. Legend: form fields assessed in terms of "Believability": CB = {CB1, CB2, CB3}; in terms of "Completeness": CC = {CC1, CC2, ..., CC8}; in terms of "Timeliness": CT.]
[Figure 8: Relative quality scores per site (x-axis: site number, i.e. the set of alternatives 1-54; y-axis: relative quality score). (a) Scenario 1: equivalence between criteria; (b) Scenario 2: cost-reduction action plan.]
[Figure 9: Problem to be addressed in the maintenance reporting process to avoid financial losses. Timeline: failure, then repair start at the scheduled date, then repair work completion, then report and customer checking & validation; the start/end dates are entered in the maintenance work order tracking system (see Figure 7(a)), and the interval following the end date is assessed under Timeliness.]
support tools by various maintenance stakeholders. The second scenario, presented in the next section, puts further emphasis on how a site stakeholder can take advantage of the dashboard for improving a cost-reduction action plan.

5.2. Scenario 2: Cost-reduction action plan

Over the past few years, the OEM company has been facing a major problem in the maintenance reporting process at the tactical level, leading to substantial financial losses. The traditional maintenance process⁴ is depicted in Figure 9: a work order is created by the maintenance planner as soon as a problem/failure is reported by the customer. Once the spare parts are supplied on site, the site manager reports this into the system, and the maintainer can start repairing the defective equipment at the scheduled time. As emphasized in Figure 9, the maintainer has to specify – in the maintenance work order tracking system – the "Start date" and "End date" (both dates being taken into account in our AHP under the "Completeness" dimension, and particularly with CC3). Following the "Repair work completion" (i.e., End date), it is necessary to wait until the customer checks and validates the maintenance service, as depicted in Figure 9. This time interval actually corresponds to the "Timeliness" criterion. The interval is very critical for the OEM company because the company has to pay a penalty fee that depends on the time interval (obviously under the condition that the customer does not validate, or complains against the service, and obtains a favorable ruling).

Given the above-mentioned problem, the head officer wants to draw up an assessment of the overall situation (i.e., with regard to each site), and to shape a proper action plan for reducing financial losses. This plan consists first in identifying and managing the sites that have the poorest timeliness quality, since they carry the highest financial risk. To this end, the officer specifies that Timeliness (CT) is strongly more important than Believability (CB) and Completeness (CC) in order to bring to light the sites with the poorest Timeliness quality. Figure 10 shows how this can be specified through the MRQA dashboard (see red/dashed frame). The final ranking is generated and given in Figure 8(b), showing that Sites 18, 42, 12, 6 and 5 are respectively the branch offices that carry the highest risk of monetary losses.

Although the action plan to be set up by the OEM company to reduce such risks on the identified sites is out of the scope of this paper, it is worth noting that IoT messaging protocols will likely be implemented in the future to address part of the problem. At a more concrete level, such protocols will be used to generate "event-based" notifications to the customer as soon

⁴ Only the Scheduled Maintenance Process is described (not the Unscheduled process) to ease the understanding.
[Figure 10: Dashboard/User Interface to adjust the pairwise comparison based preference measurement (pairwise comparison at level 1 of the AHP structure by the OEM head officer).]

as the repair work is completed (i.e., when the "End date" is entered in the system), e.g. by explicitly stating that the customer has an n-day deadline to check and validate the maintenance work. From a practical viewpoint, the recent IoT standards published by The Open Group (Främling et al., 2014) will first be implemented on the riskiest sites.

6. Conclusions, implications, limitations and future research

6.1. Conclusions

Data, information and knowledge are the "new oil" of the digital era, and are at the heart of all business operations. It is therefore crucial for companies to implement the right infrastructure to monitor and improve the quality of data generated throughout the lifecycle of the company's assets (whether physical or virtual). Data quality has a significant impact on the overall incomes and expenditures of companies: poor data quality impacts the downstream processes and, reciprocally, high data quality fosters enhanced business activities and decision making. However, it remains challenging to assess information quality, as information is not as tangible as physical assets.

The literature review carried out in this paper brings to light the fact that current expert maintenance systems fail, or have no specific interest, to take into account data quality dimensions in the maintenance reporting quality assessment process. To fill this gap, this paper develops a Maintenance Reporting Quality Assessment (MRQA) dashboard that enables any company stakeholder to easily – and in real-time – assess/rank company branch offices in terms of maintenance reporting quality. In this respect, AHP is used to integrate various data quality dimensions as well as expert preferences. The paper presents two scenarios showing how the MRQA dashboard is being used by a Finnish multinational equipment manufacturer to assess and enhance maintenance reporting practices in one or more branch offices. This should contribute to enhancing other organization activities such as:

• after-sales services: the quality of maintenance reports makes it possible to assess the maintenance work, thus helping to reach higher-quality after-sales services;

• the design of future generations of products: processing and analyzing relevant maintenance reports helps to better understand how company assets behave throughout their lifecycle which, in turn, helps to enhance the design of future generations of products (Främling et al., 2013);

• predictive maintenance strategies: providing real-time and remote predictive maintenance is becoming a very promising area in the so-called IoT, whose objective is to provide systems with the capability to discover and process real-time
data and contexts so as to make pro-active decisions (e.g., to self-adapt the system before a possible failure) (Främling et al., 2014). Although real-time data is of the utmost importance in the predictive maintenance process, combining such data with historical maintenance reporting data (regarding a specific product item) has the potential to generate new knowledge and lead to more effective and product-centric decisions;

• government regulation compliance: in some domains, it is mandatory to comply with government regulations (e.g., in the automotive, avionics, or healthcare domains). In this respect, assessing the quality of maintenance reporting can prevent the company from having regulation non-compliance issues, e.g. by carefully following the data quality on each branch office and identifying quality issues regarding one or more dimensions as soon as possible.

6.2. Implications

This research presents three main theoretical implications. First, it contributes to the literature on maintenance management (MM) by proposing a thorough state-of-the-art review of the use of MCDM techniques at each MM level (Strategic, Tactical, Operational), which helps identify criteria at each of these levels (cf. Table 5). This list of criteria can be of potential value to future researchers working in MM. Second, this research contains an approach to identify relevant data quality dimensions (based on existing data quality frameworks), and to turn them into a hierarchical AHP structure. A theoretical framework is then proposed, enabling the assessment and ranking of different company branch offices in terms of maintenance reporting quality. To the best of our knowledge, and as evidenced through our state-of-the-art review, this is the first research work that addresses this specific goal.

Finally, the research also contributes three main managerial implications. First, it enables organization stakeholders to realize how important it is to monitor and assess maintenance reporting practices, as these can impact downstream but also upstream activities of the organization. Second, the proposed dashboard helps practitioners to quickly identify, based on their needs and preferences, how one or a group of sites behave (i.e., how good/bad they are) with respect to the quality criteria, presented in the form of a hierarchy that makes it easier for maintenance stakeholders to specify their needs.

6.3. Limitations

However, this research has several limitations. First, only a few concepts and relationships from Krogstie's framework were considered (see red/bold elements in Figure 1), which is due to the data that has been made available by the Finnish OEM company, as well as to their own expectations/needs. In future research work, the proposed AHP framework and underlying criteria should be extended to take into consideration the other concepts/relationships, such as Language Quality (e.g., for domain appropriateness, participant knowledge appropriateness...), Syntactic Quality, etc. Such an extension might require combining AHP with other tools and techniques, for semantic processing and matching purposes for example, or for handling uncertainty and vagueness in the expert judgments/preferences (e.g., using fuzzy logic).

Furthermore, although it is already a great achievement for the Finnish company to be able to identify how good/bad their branch offices are at reporting maintenance data, we would have liked to carry out a post-analysis to evaluate the benefits of the post-action plans carried out in the different branch offices. For example, we are aware that the company has developed on-site training programs, which have been customized according to the quality results related to each site (e.g., if a site fails in addressing one or more data quality dimensions, the training program is customized accordingly). However, such a post-analysis and its insightful implications cannot be provided because of contractual obligations and project constraints.

6.4. Future research

From a research perspective, we developed a framework that makes possible the ranking of maintenance sites based on their respective reporting quality. Currently, the proposed MRQA dashboard is limited to the visualization of each site's data quality score and associated rank. In future research work, we would like to extend this tool, and particularly to re-use the final site-level (AHP) data quality score as an input parameter of a more advanced framework (e.g., one that would integrate live sensor data from manufacturing equipment) that would make it possible to decide – in real-time – which predictive failure model for a machine is better
one or more dimensions. This is helpful to estab- suited. This is based on two working assumptions:
lish their strategic plans to improve current practices,
• a weak assumption: the higher the maintenance
which may result in savings of both money and time.
reporting quality score on-site (denoted by R(Al )
in this study), the higher the confidence of the
6.3. Limitations
failure prediction;
The theoretical implications discussed above rely
both on the Krogstie’s data quality framework to iden- • a strong assumption: the confidence of one or
tify key data quality dimensions, and AHP as MCDM more predictive models (e.g., binary logic model,
technique to structure these quality dimensions in the cox regression model, regression trees model. . . )
17
Table 5: Percentage of Criteria used in the Maintenance Management (MM) literature

| Strategic Level | % | Tactical Level | % | Operational Level | % |
| Cost | 22.7 | Cost | 21.9 | Cost | 25.8 |
| Resource Availability & Utilization | 10.1 | Environment./Operation. Condition | 14.1 | Resource Availability & Utilization | 19.4 |
| Added Value | 7.6 | Safety | 9.4 | Added Value | 4.8 |
| Safety | 7.6 | Resource Availability & Utilization | 9.4 | DownTime & Time to repair | 4.8 |
| Reliability | 7.6 | Risk/Severity | 9.4 | Quality | 4.8 |
| Environment./Operation. Condition | 5.0 | DownTime & Time to repair | 7.8 | Risk/Severity | 4.8 |
| Quality | 5.0 | Reliability | 6.3 | Organizational Process | 4.8 |
| Risk/Severity | 5.0 | Added Value | 3.1 | Failure Frequency | 3.2 |
| Failure Frequency | 4.2 | Knowledge | 3.1 | Knowledge | 3.2 |
| Feasibility (implementation) | 4.2 | Resources Age | 3.1 | Environment./Operation. Condition | 3.2 |
| Repairability | 3.4 | Failure Frequency | 1.6 | Reliability | 3.2 |
| DownTime & Time to repair | 3.4 | Detectability | 1.6 | Safety | 1.6 |
| Flexibility | 3.4 | Feasibility (implementation) | 1.6 | Repairability | 1.6 |
| Knowledge | 1.7 | Maintenance Frequency | 1.6 | Abilities & development | 1.6 |
| Geographical Location | 2.5 | Comfort | 1.6 | Collaboration with Stakeholders | 1.6 |
| Component Failed | 1.7 | Automation | 1.6 | Operational Time | 1.6 |
| Detectability | 0.8 | Laws and Regulation | 1.6 | Maintenance Impact | 1.6 |
| Difficulty and Challenges | 0.8 | Number of affected people | 1.6 | Performance | 1.6 |
| Support and Services | 0.8 | | | Customer Category | 1.6 |
| Management & Organization | 0.8 | | | Number of sorties flown | 1.6 |
| | | | | Machine Uses | 1.6 |
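Percentages of the kind reported in Table 5 can be tallied with a short script. The sketch below is purely illustrative: the paper list and the tagging scheme are hypothetical, and it assumes each reviewed paper has been labelled with its MM level and the criteria it uses (shares are computed over criterion occurrences, which is one plausible reading of how the table was built).

```python
from collections import Counter

# Hypothetical tagging of reviewed papers: each paper is labelled with the
# MM level it addresses and the criteria it uses (names as in Table 5).
papers = [
    {"level": "Strategic", "criteria": ["Cost", "Safety"]},
    {"level": "Strategic", "criteria": ["Cost", "Reliability"]},
    {"level": "Tactical",  "criteria": ["Cost"]},
]

def criteria_percentages(papers, level):
    """Share of each criterion's occurrences at one MM level."""
    counts = Counter(c for p in papers if p["level"] == level
                     for c in p["criteria"])
    total = sum(counts.values())
    return {c: round(100 * n / total, 1) for c, n in counts.items()}

print(criteria_percentages(papers, "Strategic"))
# -> {'Cost': 50.0, 'Safety': 25.0, 'Reliability': 25.0}
```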
might evolve according to the reporting quality score, whose evolution might even (potentially) differ from one model to another. To put it another way, we might assume that, according to the on-site maintenance reporting quality score (e.g., if R(A_l) < 60%), a binary logic model might provide more confident predictions than a regression trees model, or vice-versa.

The objective of future research will be to validate (or invalidate) these two working assumptions and, if validated, to propose a more advanced framework that is able to switch between two or more predictive models and react accordingly.

7. Acknowledgement

The research leading to this publication is supported by the Finnish Metals and Engineering Competence Cluster (FIMECC) S4Fleet program and the National Research Fund Luxembourg (grant 9095399).

Appendix A. Percentage of criteria considered in the Maintenance literature

Based on the summary matrix given in Table 1, reporting which MCDM techniques are commonly used at each MM level, we carried out an in-depth analysis to identify the most commonly used criteria at each of these levels, in order to see whether data quality is properly addressed in the maintenance literature, and particularly regarding maintenance reporting activities. The analysis outcome, regarding each MM level, is given in tabular form in Table 5, which highlights that data quality is hardly considered in the reviewed papers, noting that “Quality” refers, in most of the reviewed papers, to other quality aspects than Data Quality, except in (Van & Pintelon, 2014).

Appendix B. Penalty score selection

The methodology defined to tune the penalty score consists in studying whether the introduced penalty has a significant impact on the overall ranking. Let us consider, in Eq. B.1, the pairwise comparison ratio matrix introduced as an example in section 4.2, where it is now assumed that the form field(s) related to criterion C_C6 has/have been left empty in all reports carried out on Site 1 (i.e., in 100% of the reports). Consequently, I_fill^{C_C6}(A_1) = 0, as highlighted in the first row and column of the matrix in Eq. B.1. Our strategy is to assign a penalty score (denoted by θ) as eigenvalue to the corresponding site (i.e., to Site 1), as shown in Eq. B.1; the entries involving Site 1 (crossed out, since the corresponding ratios cannot be computed with I_fill^{C_C6}(A_1) = 0) are discarded, and Site 1 directly receives θ instead of a computed eigenvector component:

          A1    A2   ...   A54
   A1  [   1     0   ...    0   ]   θ
   A2  [   ∅     1   ...   0.67 ]   W^{A2}_{C_C6}
    ⋮  [   ⋮     ⋮    ⋱     ⋮   ]
   A54 [   ∅   1.50  ...    1   ]   W^{A54}_{C_C6}

      ➠

          A1    A2   ...   A54
   A1  [   —     —   ...    —   ]   θ
   A2  [   —     1   ...   0.67 ]   W^{A2}_{C_C6}
    ⋮  [   ⋮     ⋮    ⋱     ⋮   ]
   A54 [   —   1.50  ...    1   ]   W^{A54}_{C_C6}        (B.1)

In order to select the penalty score θ, we propose to carry out an analysis to determine whether the introduced score has a substantial impact on the overall
Figure A.11: Comparison process – based on the Jaccard similarity coefficients – set up for similarity measurements between distinct site rankings (i.e., considering various criteria preferences and penalty scores)
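The comparison process of Figure A.11 can be sketched in a few lines. All numbers below (fill indicators, penalty values, four sites instead of 54) are hypothetical, and the shortcut of normalizing the indicators directly stands in for the full AHP eigenvector computation (for a consistent ratio matrix a_ij = I_i/I_j, the principal eigenvector is the normalized indicator vector).

```python
# Hypothetical fill-rate indicators I_fill for one criterion on four sites;
# A1 has left the field empty in every report, so its indicator is 0.
i_fill = {"A1": 0.0, "A2": 0.20, "A3": 0.30, "A4": 0.50}

def weights(i_fill, theta):
    """Priority weights for one criterion. For a consistent ratio matrix
    (a_ij = I_i / I_j) the principal eigenvector is the normalized
    indicator vector; sites with I_fill = 0 receive the penalty theta."""
    total = sum(v for v in i_fill.values() if v > 0)
    return {s: (v / total if v > 0 else theta) for s, v in i_fill.items()}

def ranking(w):
    """Sites ordered from best to worst weight."""
    return sorted(w, key=w.get, reverse=True)

# Generate one ranking per candidate penalty score and compare them,
# as in the process of Figure A.11.
r0 = ranking(weights(i_fill, theta=0.0))   # theta = 0
rp = ranking(weights(i_fill, theta=-0.5))  # theta = -p
print(r0, rp)  # in this toy case both rankings place A1 last
```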
ranking. To this end, two distinct penalty scores are considered:

• θ = 0: the site is penalized compared with the other sites, since any site that does not get a penalty automatically has an eigenvalue greater than zero, or to be more precise 0 < W^{A_l}_{C_x} < 1;

• θ = −p | p ∈ ℝ⁺: the site is penalized compared to the other sites, whose effect (unlike θ = 0) is to bring down the overall ranking when aggregating all AHP dimensions/criteria.

To identify whether the penalty scores impact (in a substantial manner) the overall ranking, we propose – as depicted in Figure A.11 – to generate/compute the alternative ranking for each penalty score (considering a given set of criteria weights/preferences) and to compare whether the two rankings vary from each other. This process has been tested for six combinations of criteria weights, as emphasized in Figure A.11. The similarity measure between distinct rankings is based on the Jaccard similarity coefficient, whose principle is described in section Appendix B.1. Results and concluding remarks about the penalty score selection are presented in section Appendix B.2.

The Jaccard coefficient goes from 0 (no common list) to 1 (identical lists).

J(A, B) = |A ∩ B| / |A ∪ B| = |A ∩ B| / z        (B.2)

Let A, B and C be three distinct lists consisting of five sites {S1, .., S5}, where each site receives a final rank as presented in Figure B.12. In this example, two Jaccard similarity coefficients J(A, B) and J(A, C) are calculated. Both coefficients are equal because the intersections |A ∩ B| and |A ∩ C| have the same cardinality.

         A   B   C
   S1    1   1   7        J(A, B) = |A ∩ B| / z = |{1, 2, 3}| / |{1, 2, 3, 4, 5}| = 3/5
   S2    2   2   6
   S3    3   3   1        J(A, C) = |A ∩ C| / z = |{1, 2, 3}| / |{1, 2, 3, 4, 5}| = 3/5
   S4    4   6   2
   S5    5   7   3

Figure B.12: Computation of Jaccard similarity coefficients

In our study, sites are ordered according to their data quality score. It could be worthwhile to define a similarity coefficient that would take the rank into account. To this end, let us define Lq to be a sub-list of L, where Lq consists of the sites from rank 1 to q (q ≤ z). A progressive similarity coefficient Jq(A, B) can therefore be computed as in Eq. B.3.
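The two coefficients can be sketched in a few lines of Python. This is an illustrative reading of Eq. B.2 and Figure B.12 (dividing by the list size z, as the paper does) together with the idea behind the progressive coefficient of Eq. B.3, whose exact formula is not reproduced here; the data mirrors Figure B.12.

```python
def jaccard(a, b, z):
    """Jaccard coefficient of Eq. B.2, with the paper's convention of
    dividing by the list size z rather than by |A U B|."""
    return len(set(a) & set(b)) / z

# Figure B.12: each list summarized by the set of ranks its sites occupy.
ranks_A = {1, 2, 3, 4, 5}
ranks_B = {1, 2, 3, 6, 7}
ranks_C = {7, 6, 1, 2, 3}
print(jaccard(ranks_A, ranks_B, z=5))  # -> 0.6 (= 3/5, as in the figure)
print(jaccard(ranks_A, ranks_C, z=5))  # -> 0.6

# Progressive coefficient (sketch of the idea behind Jq in Eq. B.3):
# compare only the sub-lists Lq of sites ranked 1..q, so that two lists
# agreeing on their best-ranked sites score higher for small q.
def progressive_jaccard(sites_a, sites_b, q):
    return jaccard(sites_a[:q], sites_b[:q], z=q)

# Sites of lists A and C ordered by rank (in C, S3 is ranked first).
A = ["S1", "S2", "S3", "S4", "S5"]
C = ["S3", "S4", "S5", "S2", "S1"]
print(progressive_jaccard(A, C, q=3))  # -> 0.333... (only S3 in common)
```

Note how the plain coefficient cannot distinguish B from C with respect to A, while the progressive variant does: for q = 3, Jq(A, B) = 1 but Jq(A, C) = 1/3.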
de Almeida, A. T. (2001). Multicriteria decision making on maintenance: spares and contracts planning. European Journal of Operational Research, 129, 235–241.
de Almeida, A. T., Cavalcante, C. A. V., Alencar, M. H., Ferreira, R. J. P., de Almeida-Filho, A. T. and Garcez, T. V. (2015). Decision on Maintenance Outsourcing. Springer.
de Almeida, A. T., Cavalcante, C. A. V., Alencar, M. H., Ferreira, R. J. P., de Almeida-Filho, A. T. and Garcez, T. V. (2015). Decisions on Priority Assignment for Maintenance Planning. Springer.
de Almeida, A. T., Cavalcante, C. A. V., Alencar, M. H., Ferreira, R. J. P., de Almeida-Filho, A. T. and Garcez, T. V. (2015). Preventive Maintenance Decisions. Springer.
de un Caso, E. (2008). The Efficiency of Preventive Maintenance Planning and the Multicriteria Methods: A Case Study. Computación y Sistemas, 12, 208–215.
Dehghanian, P., Fotuhi-Firuzabad, M., Bagheri-Shouraki, S. and Kazemi, A. A. R. (2012). Critical component identification in reliability centered asset management of power distribution systems via fuzzy AHP. Systems Journal, 4, 593–602.
Gonçalves, C. D. F., Dias, J. A. M. and Cruz-Machado, V. A. (2014). Decision Methodology for Maintenance KPI Selection: Based on ELECTRE I. Springer.
Duffuaa, S. O., Ben-Daya, M., Al-Sultan, K. S. and Andijani, A. A. (2001). A generic conceptual simulation model for maintenance systems. Journal of Quality in Maintenance Engineering, 7, 207–219.
Durán, O. (2011). Computer-aided maintenance management systems selection based on a fuzzy AHP approach. Advances in Engineering Software, 42, 821–829.
e Costa, C. A. B., Carnero, M. C. and Oliveira, M. D. (2012). A multi-criteria model for auditing a Predictive Maintenance Programme. European Journal of Operational Research, 217, 381–393.
Emovon, I., Norman, R. A. and Murphy, A. J. (2010). Hybrid MCDM based methodology for selecting the optimum maintenance strategy for ship machinery systems. Journal of Intelligent Manufacturing, 1–13.
Eslami, S., Sajadi, S. M. and Kashan, A. H. (2014). Selecting a preventive maintenance scheduling method by using simulation and multi criteria decision making. International Journal of Logistics Systems and Management, 18, 250–269.
Fallah-Fini, S., Triantis, K., Rahmandad, H. and Jesus, M. (2015). Measuring dynamic efficiency of highway maintenance operations. Omega, 50, 18–28.
Fang, C.-C. and Huang, Y.-S. (2008). A Bayesian decision analysis in determining the optimal policy for pricing, production, and warranty of repairable products. Expert Systems with Applications, 35, 1858–1872.
Farhan, J. and Fwa, T. (2009). Pavement maintenance prioritization using analytic hierarchy process. Transportation Research Record: Journal of the Transportation Research Board, 12–24.
Ferdousmakan, M., Vasili, M., Vasili, M., Tang, S. H. and Lim, N. T. (2014). Selection of Appropriate Risk-based Maintenance Strategy by Using Fuzzy Analytical Hierarchy Process. In 4th European Seminar on Computing, Pilsen, Czech Republic (pp. 77).
Främling, K., Holmström, J., Loukkola, J., Nyman, J. and Kaustell, A. (2013). Sustainable PLM through Intelligent Products. Engineering Applications of Artificial Intelligence, 26, 789–799.
Fouladgar, M. M., Yazdani-Chamzini, A., Lashgari, A., Zavadskas, E. K. and Turskis, Z. (2012). Maintenance strategy selection using AHP and COPRAS under fuzzy environment. International Journal of Strategic Property Management, 16, 85–104.
Främling, K., Kubler, S. and Buda, A. (2014). Universal Messaging Standards for the IoT from a Lifecycle Management Perspective. IEEE Internet of Things Journal, 1, 319–327.
García-Cascales, M. S. and Lamata, M. T. (2009). Selection of a cleaning system for engine maintenance based on the analytic hierarchy process. Computers & Industrial Engineering, 56, 1442–1451.
Garmabaki, A. H. S., Ahmadi, A. and Ahmadi, M. (2016). Maintenance Optimization Using Multi-attribute Utility Theory. Springer.
Gomez, A. and Carnero, M. C. (2011). Selection of a Computerised Maintenance Management System: a case study in a regional health service. Production Planning and Control, 22, 426–436.
Goossens, A. J. M. and Basten, R. J. I. (2015). Exploring maintenance policy selection using the Analytic Hierarchy Process; an application for naval ships. Reliability Engineering & System Safety, 142, 31–41.
Ha, S. H. and Krishnan, R. (2008). A hybrid approach to supplier selection for the maintenance of a competitive supply chain. Expert Systems with Applications, 34, 1303–1311.
Hankach, P. and Lepert, P. (2011). Multicriteria Decision Analysis for Prioritizing Road Network Maintenance Interventions. International Journal of Pavements, 10.
Hjalmarsson, L. and Odeck, J. (1996). Efficiency of trucks in road construction and maintenance: an evaluation with data envelopment analysis. Computers & Operations Research, 23, 393–404.
Hosseini Firouz, M. and Ghadimi, N. (2015). Optimal preventive maintenance policy for electric power distribution systems based on the fuzzy AHP methods. Complexity, DOI 10.1002/cplx.21668.
Hwang, C. L. and Yoon, K. (1981). Multiple Attribute Decision-Making Methods and Applications. Springer Verlag, Berlin, Heidelberg, New York.
Ilangkumaran, M. and Kumanan, S. (2009). Selection of maintenance policy for textile industry using hybrid multi-criteria decision making approach. Journal of Manufacturing Technology Management, 20, 1009–1022.
Ilangkumaran, M. and Kumanan, S. (2012). Application of hybrid VIKOR model in selection of maintenance strategy. International Journal of Information Systems and Supply Chain Management (IJISSCM), 5, 59–81.
Jarke, M. and Vassiliou, Y. (1997). Data Warehouse Quality: A Review of the DWQ Project. In Proceedings of the Conference on Information Quality (IQ1997), Cambridge, MA (pp. 299–313).
Jeon, J., Kim, C. and Lee, H. (2011). Measuring efficiency of total productive maintenance (TPM): a three-stage data envelopment analysis (DEA) approach. Total Quality Management & Business Excellence, 22, 911–924.
Jones-Farmer, L. A., Ezell, J. D. and Hazen, B. T. (2014). Applying control chart methods to enhance data quality. Technometrics, 56, 29–41.
Kahn, B. K., Strong, D. M. and Wang, R. Y. (2002). Information quality benchmarks: product and service performance. Communications of the ACM, 45, 184–192.
Köksal, G., Batmaz, I. and Testik, M. C. (2011). A review of data mining applications for quality improvement in manufacturing industry. Expert Systems with Applications, 38, 13448–13467.
Krogstie, J., Lindland, O. I. and Sindre, G. (1995). Defining quality aspects for conceptual models. In Proceedings of the IFIP8.1 Working Conference on Information Systems Concepts: Towards a Consolidation of Views (ISCO), Marburg, Germany (pp. 216–231).
Kumar, G. and Maiti, J. (2012). Modeling risk based maintenance using fuzzy analytic network process. Expert Systems with Applications, 39, 9946–9954.
Kuo, T. C. and Wang, M. L. (2012). The optimisation of maintenance service levels to support the product service system. International Journal of Production Research, 50, 6691–6708.
Labib, A. W., O'Connor, R. F. and Williams, G. B. (1998). An effective maintenance system using the analytic hierarchy process. Integrated Manufacturing Systems, 9, 87–98.
Levrat, E., Iung, B. and Crespo Marquez, A. (2008). E-maintenance: review and conceptual framework. Production Planning & Control, 19, 408–429.
Li, C., Xu, M. and Guo, S. (2007). ELECTRE III based on ranking fuzzy numbers for deterministic and fuzzy maintenance strategy decision problems. IEEE.
Li, J., Tao, F., Cheng, Y. and Zhao, L. (2015). Big data in product lifecycle management. The International Journal of Advanced
Manufacturing Technology, 81, 667–684.
Liu, J. and Yu, D. (2004). Evaluation of plant maintenance based on data envelopment analysis. Journal of Quality in Maintenance Engineering, 10, 203–209.
Liu, M. and Frangopol, D. M. (2006). Decision support system for bridge network maintenance planning. Springer.
Liu, J., Lu, X. and Qu, C. (2012). A Priority Sorting Approach of Maintenance Task During Mission Based on ELECTRE TRI. Fire Control & Command Control, S1.
Mardani, A., Jusoh, A. and Zavadskas, E. K. (2015). Fuzzy multiple criteria decision-making techniques and applications – Two decades review from 1994 to 2014. Expert Systems with Applications, 42, 4126–4148.
Maurino, A. and Batini, C. (2009). Methodologies for data quality assessment and improvement. ACM Computing Surveys, 41, 1–52.
Moazami, D., Behbahani, H. and Muniandy, R. (2011). Pavement rehabilitation and maintenance prioritization of urban roads using fuzzy logic. Expert Systems with Applications, 38, 12869–12879.
Mobley, R. K. (2002). An introduction to predictive maintenance. Butterworth-Heinemann.
Monte, M. B. S. et al. (2015). A MCDM Model for Preventive Maintenance on Wells for Water Distribution. In IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2015 (pp. 268–272).
Monte, M. B. S. and de Almeida-Filho, A. T. (2016). A Multicriteria Approach Using MAUT to Assist the Maintenance of a Water Supply System Located in a Low-Income Community. Water Resources Management, 1–14.
Muchiri, P., Pintelon, L., Gelders, L. and Martin, H. (2011). Development of maintenance function performance measurement framework and indicators. International Journal of Production Economics, 131, 295–302.
Mumpower, J. L., Phillips, L. D., Renn, O. and Uppuluri, V. R. R. (2012). Expert Judgment and Expert Systems. Springer Science & Business Media.
Nyström, B. and Söderholm, P. (2010). Selection of maintenance actions using the analytic hierarchy process (AHP): decision-making in railway infrastructure. Structure and Infrastructure Engineering, 6, 467–479.
Ofner, M., Otto, B. and Österle, H. (2013). A Maturity Model for Enterprise Data Quality Management. Enterprise Modelling and Information Systems Architectures, 8, 4–24.
Ouma, Y. O., Opudo, J. and Nyambenya, S. (2015). Comparison of Fuzzy AHP and Fuzzy TOPSIS for Road Pavement Maintenance Prioritization: Methodological Exposition and Case Study. Advances in Civil Engineering, DOI 10.1155/2015/140189.
Ozbek, M. E., de la Garza, J. M. and Triantis, K. (2010). Data and modeling issues faced during the efficiency measurement of road maintenance using data envelopment analysis. Journal of Infrastructure Systems, 16, 21–30.
Ozbek, M. E., de la Garza, J. M. and Triantis, K. (2010). Efficiency measurement of bridge maintenance using data envelopment analysis. Journal of Infrastructure Systems, 16, 31–39.
Palma, J., de León Hijes, F. C. G., Martínez, M. C. and Cárceles, L. G. (2010). Scheduling of maintenance work: A constraint-based approach. Expert Systems with Applications, 37, 2963–2973.
Peck, M. W., Scheraga, C. A. and Boisjoly, R. P. (1998). Assessing the relative efficiency of aircraft maintenance technologies: an application of data envelopment analysis. Transportation Research Part A: Policy and Practice, 32, 261–269.
Pintelon, L. M. and Gelders, L. F. (1992). Maintenance management decision making. European Journal of Operational Research, 58, 301–317.
Pourjavad, E., Shirouyehzad, H. and Shahin, A. (2013). Selecting maintenance strategy in mining industry by analytic network process and TOPSIS. International Journal of Industrial and Systems Engineering, 15, 171–192.
Pramod, V. R., Sampath, K., Devadasan, S. R., Jagathy Raj, V. P. and Moorthy, G. D. (2007). Multicriteria decision making in maintenance quality function deployment through the analytical hierarchy process. International Journal of Industrial and Systems Engineering, 2, 454–478.
Roll, Y., Golany, B. and Seroussy, D. (1989). Measuring the efficiency of maintenance units in the Israeli Air Force. European Journal of Operational Research, 43, 136–142.
Rouse, P., Putterill, M. and Ryan, D. (2002). Integrated performance measurement design: insights from an application in aircraft maintenance. Management Accounting Research, 13, 229–248.
Saaty, T. L. (1980). The Analytic Hierarchy Process. New York: McGraw-Hill.
Sampaio, S. F. M., Dong, C. and Sampaio, P. (2015). DQ2S – A framework for data quality-aware information management. Expert Systems with Applications, 42, 8304–8326.
Shafiee, M. (2015). Maintenance strategy selection problem: an MCDM overview. Journal of Quality in Maintenance Engineering, 21, 378–402.
Shahin, A., Pourjavad, E. and Shirouyehzad, H. (2012). Selecting optimum maintenance strategy by analytic network process with a case study in the mining industry. International Journal of Productivity and Quality Management, 10, 464–483.
Sheikhalishahi, M. (2014). An integrated simulation-data envelopment analysis approach for maintenance activities planning. International Journal of Computer Integrated Manufacturing, 27, 858–868.
Shyjith, K., Ilangkumaran, M. and Kumanan, S. (2008). Multi-criteria decision-making approach to evaluate optimum maintenance strategy in textile industry. Journal of Quality in Maintenance Engineering, 14, 375–386.
Simpson, L. (1996). Do decision makers know what they prefer?: MAVT and ELECTRE II. Journal of the Operational Research Society, 47, 919–929.
Sophie, S.-Z., Thomas, A., Dominik, L., Ralf, H., Pierrick, B. and Javier, G.-S. (2014). Supreme sustainable predictive maintenance for manufacturing equipment. In European Congress & Expo on Maintenance and Asset Management (EuroMaintenance), Helsinki, Finland (pp. 1–6).
Sun, S. (2004). Assessing joint maintenance shops in the Taiwanese Army using data envelopment analysis. Journal of Operations Management, 22, 233–245.
Taghipour, S., Banjevic, D. and Jardine, A. K. S. (2011). Prioritization of medical equipment for maintenance decisions. Journal of the Operational Research Society, 62, 1666–1687.
Tan, P.-N., Steinbach, M. and Kumar, V. (2006). Introduction to data mining. Addison-Wesley Longman Publishing Co., Inc.
Tan, Z., Li, J., Wu, Z., Zheng, J. and He, W. (2011). An evaluation of maintenance strategy using risk based inspection. Safety Science, 49, 852–860.
Thor, J., Ding, S. and Kamaruddin, S. (2013). Comparison of multi criteria decision making methods from the maintenance alternative selection perspective. The International Journal of Engineering and Science, 2, 27–34.
Triantaphyllou, E., Kovalerchuk, B., Mann, L. and Knapp, G. M. (1997). Determining the most important criteria in maintenance decision making. Journal of Quality in Maintenance Engineering, 3, 16–28.
Trojan, F. and Morais, D. C. (2012). Using ELECTRE TRI to support maintenance of water distribution networks. Pesquisa Operacional, 32, 423–442.
Trojan, F. and Morais, D. C. (2012). Prioritising alternatives for maintenance of water distribution networks: a group decision approach. Water SA, 38, 555–564.
Umbrich, J., Neumaier, S. and Polleres, A. (2015). Quality assessment & evolution of Open Data portals. In 3rd International Conference on Future Internet of Things and Cloud (FiCloud), Roma, Italy (pp. 404–411).
Van den Bergh, J., De Bruecker, P., Beliën, J., De Boeck, L. and Demeulemeester, E. (2013). A three-stage approach for aircraft line maintenance personnel rostering using MIP, discrete event simulation and DEA. Expert Systems with Applications, 40, 2659–2668.
Van Horenbeek, A. and Pintelon, L. (2014). Development of a maintenance performance measurement framework using the analytic network process (ANP) for maintenance performance indicator selection. Omega, 42, 33–46.
Vujanović, D., Momčilović, V., Bojović, N. and Papić, V. (2012).
Evaluation of vehicle fleet maintenance management indicators
by application of DEMATEL and ANP. Expert Systems with Ap-
plications, 39, 10552–10563.
Waisberg, D. (September 2015). Data Analytics: A Matrix for Better Decision Making. https://www.thinkwithgoogle.com/articles/data-analysis-a-matrix-for-better-decision-making.html#utm_source=LinkedIn&utm_medium=social&utm_campaign=Think
Wakchaure, S. S. and Jha, K. N. (2011). Prioritization of bridges
for maintenance planning using data envelopment analysis. Con-
struction Management and Economics, 29, 957–968.
Wang, R. Y., and Strong, D. M. (1996). Beyond accuracy: What
data quality means to data consumers. Journal of management
information systems, 12, 5–33.
Wang, J., Fan, K. and Wang, W. (2010). Integration of fuzzy AHP
and FPP with TOPSIS methodology for aeroengine health as-
sessment. Expert Systems with Applications, 37, 8516–8526.
Wang, L., Chu, J. and Wu, J. (2007). Selection of optimum main-
tenance strategies based on a fuzzy analytic hierarchy process.
International Journal of Production Economics, 107, 151–163.
Wellsandt, S., Wuest, T., Hribernik, K., and Thoben, K.-D. (2015).
Information Quality in PLM: A Product Design Perspective.
In Advances in Production Management Systems: Innovative
Production Management Towards Sustainable Growth (AMPS),
Tokyo, Japan (pp. 515–523).
Xu, L. and He, W. and Li, S. (2010). Internet of Things in indus-
tries: A survey. IEEE Transactions on Industrial Informatics,
10, 2233–2243.
Zaim, S., Turkyilmaz, A., Acar, M. F., Al-Turki, U. and Demirel, O.
F. (2012). Maintenance strategy selection using AHP and ANP
algorithms: a case study. Journal of Quality in Maintenance En-
gineering, 18, 16–29.
Zhangqiong, W. and Guozheng, S. (1999). A Study of Appraisal
and Decision-making Support System for Maintenance Scheme
of Metal Structure of Crane. Journal of Wuhan University, 5,
007.