
Data Driven Framework for Degraded Pogo Pin Detection in Semiconductor Manufacturing

Theint Theint Aye, Feng Yang, Long Wang, Kee Khoon Lee
Institute of High Performance Computing, A*STAR
1 Fusionopolis Way, Singapore 138632
{ayett, yangf, wangle, leekk}@ihpc.a-star.edu.sg

Gary Xiang Li, Jinwen Hu, Manh Cuong Nguyen
Singapore Institute of Manufacturing Technology, A*STAR
71 Nanyang Drive, Singapore 638075
{xli, jwhu, nguyenmc}@SIMTech.a-star.edu.sg

Abstract—Integrated Circuit (IC) product test in the semiconductor manufacturing industry is commonly conducted through socket pogo pins that contact the IC products. The socket pogo pins degrade as products are repeatedly plugged into and pulled out of the socket. Degradation of the socket pogo pins greatly affects the accuracy of final test in semiconductor manufacturing, which in turn results in economic and reputation losses for manufacturers. How to rapidly and accurately detect degraded pogo pins remains an unresolved problem. In addition, the very large volume of data produced by a large number of tester machines brings further difficulty to the degradation detection of socket pogo pins. Focusing on these existing problems in semiconductor manufacturing, this paper proposes a data driven framework that adopts data mining techniques to tackle them. The framework transforms the test data generated by the manufacturing machines into a human readable format and then analyzes the data with data mining techniques, which empowers manufacturing engineers to automatically detect degraded socket pogo pins from the test data. Extensive experimental studies with real data were carried out, and the results show the great application prospect of the proposed framework.

Keywords- pogo pin, degradation detection, data-driven framework, data mining techniques, semiconductor manufacturing

I. INTRODUCTION

Powerful intelligent and smart devices for personal and industrial use, such as smart phones and tablets, are widespread and proliferating at an accelerated rate. These smart devices are connected by powerful back-end software intelligent systems, and the semiconductor manufacturing industry is a major enabler for these intelligent systems. However, increasing design complexity, such as multi-core processors, embedded memory caches and System-on-a-Chip, is making semiconductor manufacturing more difficult and leading to a rapid increase in the complexity of manufacturing processes.

The multiple manufacturing stations, requiring hundreds of process steps and the collection of an increasing number of related parameters, return a flood of multivariate data, the processing of which is becoming a great challenge in semiconductor test processes today. One of the major data resources in semiconductor manufacturing is the product testing process, where floods of data from each tested product are generated in real time. However, many manufacturing engineers will confirm that 90 percent of the test data they store are not utilized for purposes such as improving yield, throughput, efficiency and product quality, and how to expand the use of the collected and stored data is still something that many companies have been seeking to do [1].

In product testing in the semiconductor manufacturing industry, a socket is a critical element and it must survive the rigorous demands of a production floor. Different types of sockets provide the specialized electrical and mechanical parameters needed for specific stages of the product testing process. Within each socket, several hundred pogo pins are deployed, and they provide the direct connection between the testing machine and the products to collect testing data in over 90 percent of real applications. Therefore, the quality of this connection plays a crucial role in the whole product testing process. However, each pogo pin, made of solid hard alloy metal, has its own lifetime, as its quality degrades over time due to the repeated plugging-in and pulling-out of the tested products. Low quality or failure of pogo pins in the semiconductor test environment affects the quality of the collected signals and hence the accuracy of die (a semi-finished product) testing, which can then lead to an unacceptable level of false rejection in the testing process. All of the rejected dies need to be re-tested and, as a result, the testing incurs considerably higher cost in both money and time. In order to overcome these problems resulting from low quality pogo pins, testing engineers need to inspect, maintain and replace pogo pins before the test cycle starts. Traditionally, an engineer detects degraded pogo pins by carefully observing the tip of each pogo pin through a microscope. This approach is very time consuming, costly and inefficient in practice, where one semiconductor test machine generally contains 8 sockets, with each socket consisting of over 400 pogo pins. Thus, the manual detection of degraded pogo pins has become a challenge and a bottleneck in the semiconductor industry.

How to utilize the collected data from the pogo pins to determine their quality is the focus of this study. In this study, a data driven framework for degraded pogo pin detection is proposed. The proposed framework empowers manufacturing engineers through auto-detection of degraded pogo pins by applying data mining techniques such as linear regression and classification.
There are some challenges in this project, such as the unlimited number of test data files associated with one pogo pin and the lack of standardization in test data file size. Moreover, all manufacturing test data files are in a machine readable format (.std). Thus, the following steps need to be performed before the degraded pogo pins can be auto-detected: (i) transform the machine readable test data into a human readable file format such as comma separated values (.csv); (ii) analyze all transformed csv files to find degraded pogo pins by applying data mining techniques; and (iii) return the ranks of the pogo pins that have relatively higher probabilities of degradation.

The remainder of this paper is organized as follows. Section II describes the overview and complexity of semiconductor manufacturing test data, Section III presents the proposed framework architecture, the technologies' outcomes and discussions, and finally Section IV concludes the paper and discusses further study.

II. OVERVIEW AND COMPLEXITY OF SEMICONDUCTOR MANUFACTURING TEST DATA

Modern semiconductor manufacturing processes feature an increasing number of processing steps with an exponentially increasing complexity of the steps themselves, and they generate a flood of multivariate monitoring data. Typically, there are two main stages in semiconductor production: frontend (fabrication or "fab") and backend ("assembly and test"), as described by Shott [2]. During the frontend process (wafer fabrication), bare silicon wafers of up to 8 inches in diameter are converted into processed wafers by repeatedly adding layers of material to produce multiple copies of an integrated circuit or "die" on each wafer. During the backend process (assembly and test activities), the good dies are separated, sorted and put through a multi-step processing sequence to create the electrical connections necessary for the device to function. The completed products, commonly known as "dies", are packaged for use in an electronic system.

Data collected during manufacturing is available from a variety of sources [4]. In this study, the final test data is analyzed to obtain the degraded pogo pin information, as it includes thousands of highly correlated measurements taken on millions of dies.

In the manufacturing process, the final test data is generated in the standard test data format (STDF), a machine readable format. STDF is a proprietary file format for semiconductor test information originally developed by Teradyne, a leading supplier of Automatic Test Equipment used to test semiconductors, wireless products, data storage and so on [4]. In this section, the complexity of the STDF file, namely its various record types and a brief description of each record type, is explained. Besides, the fundamental knowledge needed to perform the data analysis, such as the STDF file size, how many dies are tested per day and each pogo pin's life span, is also presented.

(i) Complexity of STDF

The STDF is a set of logical record types, and those record types can be used as the underlying data abstraction whether the data resides in a data buffer or is being propagated in a network message [5]. There are several record types in one STDF file. Table I lists each STDF record name, its attributes and the frequency of the record per STDF file and per dice/part. Each die test record is recorded from PIR till PRR, and this loop is repeated until there are no more dies to be tested in one semiconductor machine. A brief description of each record type is as follows.

TABLE I: STDF RECORD NAMES AND FREQUENCY OF EACH RECORD

A File Attributes Record (FAR) is required as the first record of an STDF file, and a Master Information Record (MIR) contains all information related to a tested lot of dies, such as the test program name, lot id, tester id, date and time of job setup, test temperature and product family id. A Pin Information Record (PIR) acts as a marker to indicate where the testing of a particular part begins, and a Parametric Test Record (PTR) consists of the results of a tested dice/part against a specification for a measured electrical characteristic or quantity, such as voltage, current, low limit and high limit. Each PTR is associated with one pogo pin's information, such as pin name, channel name, site/socket number, test name and test number. A Part Results Record (PRR) contains the result information relating to each tested dice, such as the hardware bin and software bin numbers, dice/part information and elapsed time. A Part Count Record (PCR) appears near the end of the STDF file and contains the dice/part count totals for one or all test sites. The Master Results Record (MRR) is the last record in the STDF file and contains the date and time of the last tested dice/part.

According to the information in the various record types, identical tests are performed by each site, and one site corresponds to one socket on the semiconductor machine. Each socket contains an identical set of pins, and each pin is identified by its site/socket number, pin (logical) name and channel name. Fig. 1 shows an example of 4 sockets, each with a set of pogo pins.

Figure 1. Example of test sockets with sets of pogo pins

(ii) Fundamental Knowledge for Data Analysis

To perform semiconductor test data analysis, fundamental knowledge such as the test data file size per day, how many dies are tested per day and the life span of a pogo pin has to be known.

The life span of a pogo pin can be predicted based on its total usable number of contacts and how many times it is used per day. As an example, if the total usable number of contacts of a pogo pin is 150,000 and the pin is used 2,500 times per day, the life span of this particular pogo pin is 60 days. In order to detect a degraded pogo pin from the STDF test data files, 60 days of STDF test data files are accumulated. As a consequence, the accumulated STDF data file size will be a few hundred GBs. Thus, the data mining techniques utilized in this study need to be robust and to provide the ranks of the pogo pins with a higher probability of degradation.
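As a quick illustration of the life-span estimate above, the following minimal Python sketch uses the example figures from the text (150,000 total contacts, 2,500 uses per day) to compute the number of days of STDF data to accumulate:

# Minimal sketch: estimate how many days of STDF data to accumulate
# for one pogo pin, using the example figures from the text.
total_rated_contacts = 150_000   # total usable number of contacts of a pogo pin
contacts_per_day = 2_500         # how many times the pin is used per day

life_span_days = total_rated_contacts // contacts_per_day
print(f"Estimated pogo pin life span: {life_span_days} days")  # -> 60 days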
III. DATA DRIVEN FRAMEWORK ARCHITECTURE, TECHNOLOGIES' OUTCOMES AND DISCUSSIONS

In order to auto-detect the pogo pins with higher possibilities of degradation, a Data Driven Framework (DDF) is proposed. Fig. 2 illustrates the proposed framework architecture, comprising STDF data, Data Transformation and Preprocessing, Data Analysis algorithms, and Discussion based on the Data Analysis outcomes. The detailed explanation of each component is as follows.

Figure 2. Proposed data driven framework (DDF) for pogo pin analysis: (i) STDF data; (ii) Data Transformation and Preprocessing; (iii) Data Analysis (A. Correlation Analysis, B. Data Normalization (difference-based normalization), C. Data Modelling: i. linear regression, ii. classification); (iv) Results Discussion

(i) Standard Test Data Format (STDF)

As mentioned in Section II, an STDF file consists of various record types, and all record types carry information related to the tested dies/parts, the hardware configuration and the pogo pins.

(ii) Data Transformation and Preprocessing

During the data transformation and preprocessing stage, DDF not only transforms the machine readable file format (.std) into human readable comma separated value files (.csv) but also performs data preprocessing, such as capturing the useful information from the different record types. For example, the test start date and time, test program name and tester name come from the MIR, the test finished date and time from the MRR, the pogo pin information from the PTRs and the number of dies/parts from the PRRs.
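The paper does not include an implementation of this transformation step. Purely as an illustration, the sketch below assembles one csv row per tested die from the PIR-PTR...-PRR loop described in Section II. The input iterable of (record_type, fields) pairs, and field names such as site_num, pin_name, test_name and result, are assumptions standing in for a real binary STDF reader.

import csv

def stdf_to_csv(stdf_records, csv_path):
    """Sketch: write one csv row per tested die, one column per pogo pin test.

    `stdf_records` is assumed to be an iterable of (record_type, fields)
    pairs produced by a hypothetical binary STDF reader.
    """
    rows, columns, current = [], [], None
    for rec_type, fields in stdf_records:
        if rec_type == "PIR":                      # a new die/part starts here
            current = {}
        elif rec_type == "PTR" and current is not None:
            # column id built from the pin information carried by each PTR
            col = f"{fields['site_num']}_{fields['pin_name']}_{fields['test_name']}"
            current[col] = fields["result"]
            if col not in columns:
                columns.append(col)
        elif rec_type == "PRR" and current is not None:  # die/part finished
            rows.append(current)
            current = None

    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=columns, restval="")
        writer.writeheader()
        writer.writerows(rows)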
(iii) Data Analysis

In this stage, the csv files generated in the previous stage are directly utilized. Each row in a csv file represents all the testing readings of a particular dice with all pogo pin information, while each column contains the testing readings of all tested dies related to a specific pogo pin. Due to the different number of dies tested in each STDF file, the number of rows in each csv file is different. In the current study, we analyzed 18 test data files. Table II shows the statistics of the 18 csv files, including the minimum, maximum and mean number of rows, the total number of rows and the number of columns.

In order to analyze such complicated test data files, data correlation analysis is first carried out among the different columns of the csv files. The purpose of the correlation analysis is to reduce the data complexity by filtering out the highly correlated columns. Then data normalization and two methods of data modelling, (i) linear regression and (ii) classification, are performed. The detailed explanation of each Data Analysis step is as follows.

TABLE II: STATISTICS OF SEMICONDUCTOR TEST DATA

Min Rows   Max Rows   Mean Rows   Total Rows   Total Columns
   62        677         272        4904           1553

A. Correlation Analysis

Fig. 3 shows the heat map of the absolute Pearson's correlation among all the columns of the csv files before the highly correlated columns are filtered out. From this figure, it can be observed that a large number of columns are highly inter-correlated (the red area in Fig. 3).

Figure 3. Heat map of the absolute Pearson's correlation

Based on the inter-correlations of the columns, the filtering process is carried out as follows: (i) set a correlation threshold (0.9 in this study); (ii) group the columns by clustering those whose inter-correlations are greater than the threshold; and (iii) select a representative from each group and filter out the others. As a result of the filtering process, 389 columns are selected from the total of 1553 columns of the csv files. The filtering process greatly helps to reduce the computational complexity of the following analysis steps: Data Normalization and Data Modelling.
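A minimal pandas sketch of this filtering step is given below. The 0.9 threshold comes from the text; the paper does not specify the exact clustering procedure, so a simple greedy grouping is shown here as one possible realization.

import pandas as pd

def filter_correlated_columns(df: pd.DataFrame, threshold: float = 0.9) -> list:
    """Group columns whose absolute Pearson correlation exceeds `threshold`
    and keep one representative per group (greedy grouping; the paper does
    not specify the exact clustering used)."""
    corr = df.corr().abs()                 # absolute Pearson correlation matrix
    selected, assigned = [], set()
    for col in corr.columns:
        if col in assigned:
            continue
        # all columns highly correlated with `col` form one group
        group = corr.index[corr[col] > threshold].tolist()
        assigned.update(group)
        selected.append(col)               # `col` represents its group
    return selected

# Usage sketch: df = pd.read_csv("test_data.csv"); keep = filter_correlated_columns(df)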

B. Data Normalization

Due to the distinctions between the tested products (e.g. different types of dies), the readings recorded from the same pogo pin in a tester machine can be on different scales, which brings difficulty in analyzing the signals. An example of this distinction in scale is depicted in Fig. 4(a). The curve in Fig. 4(a) contains all readings in a specific column of a csv file. It can be observed that there is a segment (x-axis values between around 3100 and 4000) where the readings are much higher than the rest. This inconsistency in scale makes the traditional and popular trend detection algorithms (e.g. linear regression based trend detection) [6, 7] unsuitable.

In order to avoid this difficulty incurred by the scale inconsistency, normalization of the original signals is required. There are many traditional techniques for data normalization and standardization, such as z-score and min-max [8]. However, for time series whose future statistics are unforeseen (in the tester's signals, this is due to the unknown information about future tested products and their characteristics), those traditional data normalization and standardization techniques are not suitable. Alternatively, derived signals that share a common scale may be used.

In this study, the continuous difference signal s(t+1) - s(t) is used for the rest of the analysis, where s(t) is the original reading at time t. The assumption behind using the difference signal is that the readings from a pogo pin testing different types of dies have the same variation. In fact, this assumption has been verified to be true based on experimental results on the test data. Fig. 4(b) illustrates the difference of the original signal shown in Fig. 4(a), and it can be observed that all values of the difference signal lie on the same scale.

Figure 4. Example of (a) the original signal and (b) its difference signal for the test data
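As a minimal sketch of the difference-based normalization just described (the column name in the usage comment is illustrative):

import numpy as np

def difference_signal(readings):
    """Difference-based normalization described above: d(t) = s(t+1) - s(t),
    where s(t) is the original reading of one pogo pin column at time t."""
    s = np.asarray(readings, dtype=float)
    return np.diff(s)          # length n-1, readings brought onto a common scale

# Usage sketch on one csv column:
# d = difference_signal(df["P0_an2_OFFSET"].values)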
C. Data Modelling

In this section, two methods of data modelling are applied for the detection of degraded pogo pins: a regression-based method and a classification-based method. The description of these two methods and the corresponding experimental results are given as follows.

1) Regression-based method

This section presents how to detect the pogo pins with quality degradation based on their collected signals. A preliminary analysis has been carried out as follows. Pin degradation in a tester machine means that the contact quality between the pogo pin and the dice becomes worse with time due to, for example, oxidization on the pogo pin. A degraded pogo pin will then result in inaccuracy in the signals measured from it, which will finally affect the decision (accept or reject) made for the corresponding dice in the manufacturing test. The ultimate purpose of pogo pin degradation detection is to reduce, as much as possible, the decision errors made due to degradation of the pogo pins.

Even when the same kind of dies are tested with perfect contact to the pogo pin, the measured readings will still differ. This may be due to uncertainties existing in the system, for example, uncertainties in the sensors and measuring instruments, environmental noise, and the differences between individual dies. The difference signals are the overall reflection of those uncertainties (shown in Fig. 4(b)). As the degradation of a pogo pin increases, more uncertainty is added. This study therefore starts from the normalized difference signals introduced in section B.

The following assumption about the degradation of pogo pins can reasonably be made: the quality of a pogo pin will degrade gradually rather than in an abrupt manner, i.e., the measured signals will not change abruptly and sharply. There are many methodologies for trend analysis in this kind of signal, and linear regression is one of the most popular methods [10].

A simple description of the conventional linear regression is given below:

$y_i = \beta_0 + \beta_1 t_i + \varepsilon_i$    (1)

where $y_i$ is the response variable observed at time $t_i$, and $\varepsilon_i \sim N(0, \sigma_\varepsilon^2)$ are independent residual variables for $i = 1, 2, \ldots, n$. The parameter $\beta_1$ in Eq. (1) represents the rate of change of $y$ with respect to time, which is also used as the indicator for trend detection in time series. The usual way of computing $\beta_1$ is by the least squares estimator [9].

In this study, the linear regression was not applied directly to the difference signals but to their derived energy signals. A sliding window with a window length of 10 is first put onto the difference signal, and the averaged energy in that window is calculated. With a sliding step length of 1, an energy signal can be generated. The basic idea of using this kind of averaged energy is that energy is a common and generally good representation of the changes in a signal. In addition, the moving average is able to filter out bursts in the signals, which can be considered as noise. An example of the corresponding energy signal is shown in Fig. 5.

Figure 5. Example of the averaged energies of the test data

After generation of the energy signal, linear regression is applied to fit a linear model. The rate-of-change (i.e. slope) parameter $\beta_1$ is then used as the indicator of degradation: if there is no degradation, $\beta_1$ will be around 0. In other words, the greater $|\beta_1|$ is, the higher the probability that degradation has occurred. Fig. 6 illustrates (with a stem plot) the slope values of the linear regression models fitted on each of the 389 columns selected from the real data. It can be observed that some of the columns in the data have relatively steeper slopes (see for example the columns between 300 and 350 in the figure).

Figure 6. Slopes of the selected 389 columns of the test data

The columns of the data are then sorted by their absolute slope values in descending order. Table III shows the top 10 pogo pins that are considered to have relatively higher probabilities of degradation. However, how those pogo pins degrade over time will need further exploration and confirmation with help from domain experts.

TABLE III: TOP 10 POGO PINS BY THE LINEAR REGRESSION METHOD

Rank  Pin                  Rank  Pin
1     'P1_an12_OFFSET'     6     'P1_an11_OFFSET'
2     'P1_an11_GAIN'       7     'P1_an7_OFFSET'
3     'P0_an13_OFFSET'     8     'P1_an15_OFFSET'
4     'P1_an3_OFFSET'      9     'P0_an15_OFFSET'
5     'P1_an6_OFFSET'      10    'P0_an2_OFFSET'
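A minimal numpy sketch of the regression-based indicator follows, under the settings given above (window length 10, step length 1). It reuses the difference_signal idea from section B; np.polyfit is used here simply as a convenient least squares estimator for Eq. (1), and the ranking loop in the comment assumes a pandas DataFrame `df` with the 389 selected columns.

import numpy as np

def energy_signal(diff, window=10):
    """Averaged energy of the difference signal in a sliding window
    (window length 10, step length 1, as described above)."""
    d = np.asarray(diff, dtype=float)
    return np.array([np.mean(d[i:i + window] ** 2)
                     for i in range(len(d) - window + 1)])

def degradation_slope(diff):
    """Fit y = b0 + b1*t to the energy signal by least squares and
    return |b1|, the degradation indicator based on Eq. (1)."""
    e = energy_signal(diff)
    t = np.arange(len(e))
    b1, b0 = np.polyfit(t, e, 1)           # slope, intercept
    return abs(b1)

# Ranking sketch over the selected columns:
# scores = {c: degradation_slope(np.diff(df[c].values)) for c in selected_columns}
# top10 = sorted(scores, key=scores.get, reverse=True)[:10]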

2) Classification-based method

As mentioned above, a pogo pin has a trend of degradation over time, so the corresponding signals (time series) collected from a specific pin should differ between their beginning and ending parts. The classification-based method identifies this difference and hence discovers the potentially degraded pogo pins. Fig. 7 illustrates the trend of a pogo pin degrading from a normal status to a faulty status.

Figure 7. Trend of degradation of a pogo pin over time

To identify this difference with the classification-based method, the flowchart shown in Fig. 8 is proposed. Given the input time series, segmentation is first conducted to divide the time series into pieces, and feature extraction is applied to each piece to generate an instance. Instances from the beginning and ending parts of the input time series are then used to build a 2-class classifier, and the training accuracy is obtained. The higher the accuracy, the greater the difference between the beginning and ending parts. Hence, ranking can finally be carried out to obtain the top pogo pins that have higher possibilities of degradation.

Figure 8. Process flow of the classification-based method: time series, segmentation/feature extraction, instances, classification, accuracy, ranking output

For the segmentation/feature extraction in this study, a time series is consecutively segmented into pieces of length 10 points, and 11 popular time domain statistics are produced as the features [10]. Fig. 9 demonstrates the segmentation/feature extraction process, and the names and equations of the 11 features are shown in Table IV.

There are in total 4904 data points (i.e. 4904 dies are tested) in the test data. In order to apply the 2-class classifier, the first 500 and last 500 data points were selected as the beginning and ending parts in the experimental studies. In the feature space after feature extraction, there are 50 instances in both the beginning and ending parts. After calculating the classification accuracy for each column of the data, all columns are ranked, and the details of the top 10 columns are shown in Table V. Please note that all pin names are dummy names for reasons of confidentiality.

Figure 9. The segmentation/feature extraction process
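A sketch of this pipeline is given below, assuming segments of length 10 and the first/last 500 points as the beginning and ending parts (50 instances each), as stated above. The paper does not name the specific 2-class classifier, so an SVM from scikit-learn is used here purely as an assumption; the reduced feature set (mean, standard deviation, RMS) is used for brevity and can be replaced by the full 11 features of Table IV.

import numpy as np
from sklearn.svm import SVC

def segment_features(x, seg_len=10):
    """Cut a 1-D signal into consecutive pieces of `seg_len` points and
    return one feature vector per piece (mean, std, RMS here for brevity;
    the 11 time-domain features of Table IV can be substituted)."""
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]
    return np.array([[s.mean(), s.std(ddof=1), np.sqrt(np.mean(s ** 2))]
                     for s in segs])

def degradation_score(column, head=500, tail=500):
    """Training accuracy of a 2-class classifier separating the beginning
    and ending parts of one pogo pin column; higher accuracy suggests a
    larger change and hence a higher possibility of degradation."""
    x = np.asarray(column, dtype=float)
    X_begin = segment_features(x[:head])          # 50 instances for seg_len=10
    X_end = segment_features(x[-tail:])           # 50 instances
    X = np.vstack([X_begin, X_end])
    y = np.array([0] * len(X_begin) + [1] * len(X_end))
    clf = SVC(kernel="rbf").fit(X, y)             # classifier choice is an assumption
    return clf.score(X, y)                        # training accuracy

# Ranking sketch: scores = {c: degradation_score(df[c].values) for c in selected_columns}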

TABLE IV: DETAILS OF THE 11 TIME DOMAIN FEATURES

Average amplitude of vibration: $p_1 = \frac{1}{n}\sum_{i=1}^{n} x(i)$
Standard deviation: $p_2 = \left(\frac{\sum_{i=1}^{n}(x(i)-p_1)^2}{n-1}\right)^{1/2}$
Root-mean-square amplitude: $p_3 = \left(\frac{1}{n}\sum_{i=1}^{n} x(i)^2\right)^{1/2}$
Square of mean of rooted absolute amplitude: $p_4 = \left(\frac{1}{n}\sum_{i=1}^{n} \sqrt{|x(i)|}\right)^2$
Peak value: $p_5 = \max_i |x(i)|$
Skewness coefficient: $p_6 = \frac{\sum_{i=1}^{n}(x(i)-p_1)^3}{(n-1)p_2^3}$
Kurtosis coefficient: $p_7 = \frac{\sum_{i=1}^{n}(x(i)-p_1)^4}{(n-1)p_2^4}$
Peak factor: $p_8 = p_5 / p_3$
Margin factor: $p_9 = p_5 / p_4$
Waveform factor: $p_{10} = \frac{p_3}{\frac{1}{n}\sum_{i=1}^{n}|x(i)|}$
Impulse factor: $p_{11} = \frac{p_5}{\frac{1}{n}\sum_{i=1}^{n}|x(i)|}$
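For completeness, the following numpy sketch transcribes Table IV directly, with x taken as one segment of points; it can replace the reduced feature set used in the classification sketch above.

import numpy as np

def time_domain_features(x):
    """The 11 time-domain features of Table IV for one segment x(1..n)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    p1 = x.mean()                                     # average amplitude
    p2 = np.sqrt(((x - p1) ** 2).sum() / (n - 1))     # standard deviation
    p3 = np.sqrt((x ** 2).mean())                     # root-mean-square amplitude
    p4 = (np.sqrt(np.abs(x)).mean()) ** 2             # square of mean of rooted |x|
    p5 = np.abs(x).max()                              # peak value
    p6 = ((x - p1) ** 3).sum() / ((n - 1) * p2 ** 3)  # skewness coefficient
    p7 = ((x - p1) ** 4).sum() / ((n - 1) * p2 ** 4)  # kurtosis coefficient
    p8 = p5 / p3                                      # peak factor
    p9 = p5 / p4                                      # margin factor
    p10 = p3 / np.abs(x).mean()                       # waveform factor
    p11 = p5 / np.abs(x).mean()                       # impulse factor
    return np.array([p1, p2, p3, p4, p5, p6, p7, p8, p9, p10, p11])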
TABLE V: TOP 10 POGO PINS BY THE CLASSIFICATION-BASED METHOD

Rank  Pin Name             Rank  Pin Name
1     'P0_an3_OFFSET'      6     'P0_an6_OFFSET'
2     'Pl1'                7     'P0_an13_OFFSET'
3     'P0_an2_OFFSET'      8     'P0_an15'
4     'P0_an4_OFFSET'      9     'P0_an13_GAIN'
5     'P1_an7_OFFSET'      10    'V0'

D. Results Discussion

In the experimental study of degraded pogo pin detection, two different data modelling methods were applied: linear regression and classification. From the top 10 pogo pins shown in Table III and Table V, it is found that three pogo pins, i.e. 'P0_an2_OFFSET', 'P0_an13_OFFSET' and 'P1_an7_OFFSET', are common to both lists. For other numbers of top pogo pins and the corresponding numbers of common pins, please refer to Table VI.

TABLE VI: NUMBER OF TOP AND COMMON POGO PINS RANKED BY THE LINEAR REGRESSION AND CLASSIFICATION METHODS

Number of top pins        10   30   50   100
Number of common pins      3    8   13    34

Among the lists of top-ranked pogo pins (from the top 10 to the top 100) produced by the two data modelling methods, around 30% of the pogo pins appear on both lists, which indicates that there is a discrepancy between the two results. However, the common pogo pins observed by the two methods gain higher confidence that their contact quality has degraded over time. Further verification and checking by domain experts will be needed.
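The counts in Table VI can be reproduced from the two rankings with a small sketch like the following, where ranking_regression and ranking_classification are assumed to be the pin names ordered by the two methods (most suspicious first):

# Sketch of the Table VI comparison: count pins shared by the two rankings.
def common_pin_counts(ranking_regression, ranking_classification, tops=(10, 30, 50, 100)):
    return {k: len(set(ranking_regression[:k]) & set(ranking_classification[:k]))
            for k in tops}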

IV. CONCLUSIONS

Focusing on the pogo pin degradation issue in semiconductor manufacturing, this paper proposed a data driven framework for detecting degraded pogo pins. Two different methods, namely a regression-based method and a classification-based method, were proposed, tested on real data, and their results discussed. Common top-ranked pogo pins were identified by both methods, which indicates a high possibility of those pins' degradation. Further verification of the detected pogo pins with our industrial partner, regarding their degrees of degradation, will be carried out in future work.

REFERENCES

[1] T. Morrow, "Big Data Comes to Semiconductor Test," http://www.evaluationengineering.com/guest-commentaries/big-data-comes-to-semiconductor-test.php, 24 April 2013.
[2] B. B. Shott, "Treatment of Semiconductor Assembly and Test Activities as Manufacturing," http://www.irs.gov/Businesses/Treatment-of-Semiconductor-Assembly-and-Test-Activities-as-Manufacturing, 16 March 2006.
[3] E. L. Russell, "Massive Data Sets in Semiconductor Manufacturing," in Massive Data Sets: Proceedings of a Workshop, 1996, http://www.nap.edu/openbook.php.
[4] Teradyne, Leading Supplier of Automatic Test Equipment: http://www.teradyne.com.
[5] Standard Test Data Format (STDF) Specification, Engineering Technology & Industrial Distribution, http://etidweb.tamu.edu/cdrom0/image/stdf/spec.pdf.
[6] K. L. Gray, Comparison of Trend Detection Methods, PhD Thesis, University of Montana, USA, 2007.
[7] W. W. Melek, Z. Lu, A. Kapps, and W. D. Fraser, "Comparison of trend detection algorithms in the analysis of physiological time-series data," IEEE Transactions on Biomedical Engineering, vol. 52, no. 4, pp. 639-651, April 2005.
[8] A. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, no. 12, pp. 2270-2285, December 2005.
[9] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer Series in Statistics, 2001.
[10] Q. Wu, X. Yang, and Q. Zhou, "Pattern Recognition and Its Application in Fault Diagnosis of Electromechanical System," Journal of Information and Computational Science, vol. 9, no. 8, pp. 2221-2228, 2012.
