A Vision-Based Broken Strand Detection Method

for a Power-Line Maintenance Robot
Yifeng Song, Hongguang Wang, Member, IEEE, and Jianwei Zhang, Member, IEEE

Abstract—The broken strand of overhead ground wire (OGW), which is mainly caused by lightning strikes or the vibration of OGW, can lead to serious damage to the power grid system. Power-line maintenance work is generally carried out by specialized workers under extra-high voltage live-line conditions, which involves great risks and high labor intensity. In this paper, we present a broken strand detection method which can be practically applied by maintenance robots. This method is mainly implemented in three steps. First, we obtain the region of interest (ROI) from the image acquired by the robot. Second, a histogram of oriented gradients descriptor vector is calculated to obtain the image gradient feature in the ROI. In the third step, we apply a multiclassifier which consists of two support vector machines to classify the wires into normal wire, broken strand malfunction, and obstacles on OGW. Experimental results demonstrate the effectiveness of the proposed method.

Index Terms—Broken strand, power-line maintenance robot, visual detection.

Manuscript received July 23, 2013; revised March 07, 2014 and March 27, 2014; accepted May 28, 2014. Date of publication June 25, 2014; date of current version September 19, 2014. Paper no. TPWRD-00828-2013.
Y. Song is with the State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China, and also with the University of Chinese Academy of Sciences, Beijing 100039, China.
H. Wang is with the State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China.
J. Zhang is with the Department of Informatics, the Institute of Technical Aspects of Multimodal Systems, University of Hamburg, Hamburg 22527, Germany.
Digital Object Identifier 10.1109/TPWRD.2014.2328572
0885-8977 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.

I. INTRODUCTION

REGULAR power-line inspection and maintenance play an important role in the normal running of power grid systems. Traditionally, power-line inspection and maintenance tasks are performed by specialized workers. To ensure that the entire power grid system runs smoothly, the workers have to work under extra-high voltage live conditions which involve great risks [1], [2]. Power-line inspection and maintenance robots are thus strongly required in practice for the normal running of power grid systems.

Research on power-line inspection and maintenance robots began about two decades ago [3], [4]. So far, robotic technology has been successfully applied to power-line inspection and maintenance tasks (see [5], [6], and the references therein). References [7]–[9] introduced three different robots focusing on the power-line inspection task. In [10], a robot named LineROVer was developed for the deicing maintenance task. Reference [11] introduced a robot that could install and remove the aircraft warning spheres on OGW. The LineScout robot [12], [13], which was equipped with a variety of special tools, could perform not only the inspection task but also several maintenance tasks (e.g., tightening screws and temporarily repairing a broken strand).

The successful detection of obstacles and power-line malfunctions is an important prerequisite for power-line maintenance. Based on the sensory techniques used for detection, we categorize the relevant literature into two kinds, namely, detection using nonvision-based techniques [14]–[21] and detection using vision-based techniques [22]–[26].

For nonvision-based detection methods, a high-temperature superconductor (HTS) superconducting quantum interference device (SQUID) was used to detect single-wire breakage in aluminum transmission lines [14]. A periodic pattern was detected with wire breakage, while this pattern was not observed in the normal wire. Using electromagnetic-acoustic transducers (EMAT), [15] presented a nondestructive inspection system to diagnose the mechanical integrity of conductors. The system was trained to identify four conditions: 1) normal; 2) minor abnormal; 3) major abnormal; and 4) corrosion. Local and global wave-based methods using ultrasonic waves were presented for wire breakage detection in overhead transmission lines (OTLs) [16]. Both methods used a sending/receiving transducer to generate an ultrasonic wave in the cable. Defects could be detected through the variation of the reflected ultrasonic waves. In [17], an eddy current transducer (ECT) was developed to provide information to evaluate serious deterioration due to forest fires in aluminum-conductor steel-reinforced conductors. Reference [18] designed a variant of the ECT for transmission-line broken strand detection. The developed ECT had good performance on reliability and sensitivity. There were also nonvision-based detection methods based on guided-wave technology [19], infrared technology [20], and other technologies [21].

For vision-based methods, [22] proposed an algorithm based on straight-line and circle extraction for obstacle detection. The proposed method first removed the transmission lines and towers that appeared as straight lines in the image. By recognizing the different features of the circular or elliptic parts left in the image, insulators and counterweights could be detected. In [23], a robot system was set up to detect a spacer in quad-bundled conductors. The detailed process could be described in the following steps: digital image acquisition, conductor localization, and spacer detection. One or more disruptions of conductors were sufficient proof that a spacer was present. In [24], a geometric

model, which included the position relationship between the camera and the obstacle location center, was built by using the Hough transform for spacer and counterweight detection in a dual-split conductor. In [25], a multisensor system was constructed for conductor strand inspection. With two cameras and a mirror, all strands of the conductor could be inspected simultaneously. Furthermore, laser sensors were used to detect obstacles by measuring the variance of the conductor diameter. In [26], an obstacle detection approach based on contour view synthesis for the inspection robot was presented. A virtual contour image library was first built offline through the reconstruction of multiview contour images with regard to several types of obstacles. The detection could then be accomplished by extracting the obstacle contour and matching the extracted contour with the virtual contour images in the library.

Fig. 1. Broken strand maintenance carried out by workers and the robot.
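Several of the surveyed vision-based detectors, including [22] and [24], as well as the ROI selection step later in this paper, rely on the Hough transform to locate straight wire edges. A minimal voting sketch over a binary edge image (not the authors' implementation; the one-degree angular resolution and the return format are illustrative choices) might look as follows:

```python
import numpy as np

def hough_strongest_line(edge):
    """Vote in (theta, rho) space over all edge pixels and return the
    strongest straight line as (theta_deg, rho), where the line is
    x*cos(theta) + y*sin(theta) = rho."""
    ys, xs = np.nonzero(edge)
    diag = int(np.ceil(np.hypot(*edge.shape)))  # largest possible |rho|
    acc = np.zeros((180, 2 * diag + 1), dtype=np.int64)
    for t in range(180):
        th = np.deg2rad(t)
        rho = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int)
        np.add.at(acc[t], rho + diag, 1)  # each edge pixel casts one vote
    t_best, r_best = np.unravel_index(np.argmax(acc), acc.shape)
    return int(t_best), int(r_best) - diag
```

For a synthetic horizontal edge along image row y0, the accumulator peak lands at theta = 90 degrees and rho = y0; a wire boundary recovered this way can serve as the horizontal reference from which a region of interest is offset.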
Among all power-line malfunctions, the OGW broken strand is one of the most serious, since it can result in flashover or even the crash of a power grid system. The detection of the OGW broken strand is essential for the safety of power grid systems. Each of the nonvision-based and vision-based methods has its advantages in transmission-line malfunction detection. However, as the sensing unit of the inspection or maintenance robot, the vision sensor has advantages in broken strand detection for the following reasons: 1) with small volume and light weight, the vision sensor can be well integrated with the robot system, while some nonvision sensors, for example, HTS-SQUID and EMAT, are beyond the robot's load capability; 2) the vision sensor is not sensitive to the ambient temperature and to variations of the magnetic field, which may interfere with some nonvision sensors, for example, the infrared sensor and the ECT; and 3) the vision sensor facilitates the mechanical integration with robots. Most nonvision sensors (ECT, EMAT, ultrasonic sensors, etc.) used for detection have an installation requirement on the distance between the probe and the wire. This causes difficulties with the robot motion, because the sensors may interfere with the robot's moving parts or with the transmission line. In addition, the vision sensor can provide information on power-line fittings and the transmission corridor for inspection work.

In this paper, we present a vision-based OGW broken strand detection method for the application of a power-line inspection and maintenance robot. The proposed method consists of three steps: first, we obtain the region of interest (ROI) from the acquired image; second, we utilize the histogram of oriented gradients (HOG) method to extract the image gradient feature; in the third step, a hybrid classifier consisting of support vector machines (SVMs) is applied to the classification of the normal wire, broken wire, and obstacles. It should be noted that the proposed method can detect the OGW broken strand effectively, even though the pattern of wires with a broken strand varies greatly. The contribution of this paper is the proposal of a vision-based detection method that targets the broken strand malfunction occurring on OGW, which has not yet been effectively solved in the existing literature. The proposed detection method is highly accurate and invariant to photometric transformations. We have demonstrated the effectiveness of the proposed method through numerous experimental studies.

Extra-high voltage transmission lines mainly consist of OGW, towers, single, double, or quad-bundled conductors, and fittings. Conductors are installed for power transmission, while OGW, installed above the conductors, is mainly used to protect the power grid system from lightning strikes. According to facility installation requirements, several kinds of fittings (e.g., counterweights) are mounted on OGW and conductors. However, the huge current transferred to the ground by OGW during lightning strikes may result in local overheating and, thus, cause the strands to melt and break. Typically, the broken strands are located in the middle of the span, where they are far away from the tower. Since the dangling broken strand can be very close to the conductors, the huge potential difference between the broken strand and the conductors can form a voltage breakdown and flashover, which results in severe damage to the power grid system. Furthermore, the broken strand reduces the strength of OGW and causes other strands to break due to increased cable tension, which may even result in the rupture of OGW and of the communication optical cables inside OGW. This will cause communication signal interruption, damage to power-line equipment, and huge economic losses.

Due to the aforementioned reasons, regular inspection of the strands is quite important for the smooth running of a power grid system. Once a broken strand is detected, maintenance work needs to be carried out immediately. Traditionally, the broken strand maintenance work is performed by specialized workers, as shown in Fig. 1(a). However, most power grid systems are built in complex geographical environments (e.g., forests, marshes, and mountains), and OGWs are tens of meters above the ground in extra-high voltage transmission systems. These poor working conditions make the maintenance work hard to implement. Furthermore, the maintenance work usually needs to be performed in the middle of the span and under extra-high voltage live conditions, and these factors also increase the safety risk to the workers.

For these reasons, robots are of great importance to replace workers in performing the power-line inspection and maintenance work. Fig. 1(b) schematically shows the procedures of a robot performing inspection and maintenance work. After installation, the robot runs along OGW to perform the inspection and maintenance work, which consists of four parts: 1) crossing over the tower; 2) crossing obstacles such as counterweights; 3) detecting and 4) fixing the broken strand. In the following part, we present a vision-based method for the detection of the OGW broken strand malfunction. Interested readers should refer to our previous work for the development of a series of power-line inspection and maintenance robots, including the robot mechanical design based on the tasks of tower and counterweight crossing [27], the broken strand repair maintenance tool design [28], and the motion control strategy [5].

Fig. 2. Framework of the robot maintenance procedure.

Fig. 3. ROI selection process.

Fig. 4. (a) Gradients of a normal wire are mainly in the vertical and spiral lead-angle orientations, the principal orientations, as shown by the arrows. (b) The gradients of the counterweight obstacle are distributed irregularly, as shown by the arrows. (c) The gradients of a broken wire are mainly in the vertical and spiral lead-angle orientations, as shown by the arrows, but the gradient intensity in these two orientations is lower than in a normal wire due to the broken strand, whose gradient orientation is shown by the square.


Fig. 2 gives the framework of broken strand detection, which is implemented by a maintenance robot for broken strand repair. The detection method is vision based and can be described as follows. We prepared three classes of images, that is: 1) obstacles (the counterweight is selected as a common obstacle); 2) broken strand (the wire with a broken strand); and 3) normal wire. For these images, a special region of the image is selected as the ROI, and an image gradient feature is extracted in the ROI as training data for the classifiers. After a multiclassifier is constructed by training, the robot is installed on OGW to implement the maintenance task. Images of OGW captured by the robot are processed in the same way to obtain the image gradient feature. The extracted feature is put into the classifier as testing data and, thus, the class label of the captured image is finally assigned by the classifier.

A. ROI Selection

In the image captured by the robot, the target object that is useful for further processing lies in a certain region. The remaining regions are not as important for further processing and would bring complexity to feature extraction. Therefore, an ROI should be selected to remove the unnecessary part of the image and reduce the amount of computation.

Here, the ROI is picked as a rectangular region in three steps. First, we use the Hough transform to extract the straight lines of the OGW boundaries as the horizontal reference, and we choose pixels at the top of the image as the vertical reference. Second, a point with a fixed distance to the horizontal and vertical references is marked as point A. Finally, the rectangular region with a predefined length and width, with point A as the top-left vertex, is picked as the ROI, as shown in Fig. 3. In the ROI, OGW appears with a clear strand contour that can be used for image gradient extraction.

The position of the ROI is chosen for the following reasons: 1) due to the robot installation position with respect to OGW, OGW only appears in the center strip of the image, and the outside of the center strip can be removed; 2) the upper part of the image is closer to the camera and exhibits high resolution, so it is well suited to obtain a clear contour of the strands for further image gradient feature extraction; and 3) the background in the ROI is the robot's driving wheel component, which does not change with the motion of the robot.

B. Image Features Extracted by HOG

A broken wire possesses many features similar to a normal wire, for example, the gray intensity distribution. However, the broken strand affects the strand configuration of OGW. For a normal wire, all strands wind around the OGW axis as a helix. In this case, the image gradients extracted from the ROI are mainly distributed in two principal orientations: 1) the vertical orientation (OGW edges) and 2) the spiral lead-angle orientation of the helix (normal strands), as shown in Fig. 4(a). On the other hand, the broken strand has a different configuration from the normal strands, which leads to an increase of image gradients in other orientations, as shown in Fig. 4(c). Although the image gradient of the broken wire is also mainly in the vertical orientation and the spiral lead-angle orientation, the gradient intensity in these two principal orientations is lower than that of a normal wire due to the broken strand. The image gradient of the obstacle is not distributed in the aforementioned principal orientations, as shown in Fig. 4(b), because OGW is blocked by the counterweight obstacle. Based on this analysis, the normal wire, the broken wire, and obstacles have different image gradient distributions that can be extracted for classification.

Here, we choose HOG as the image gradient feature descriptor for the classification. HOG is a common feature descriptor used in computer vision for object detection. Given the hypothesis that local object appearance and shape can be characterized well by the distribution of local intensity gradient

Fig. 5. ROI before and after histogram equalization.

or edge directions, the object can be described even without precise knowledge of the corresponding gradient or edge positions [29]. Edges or gradient structures captured by an HOG are intrinsic characteristics of the local object shape, and the HOG descriptor is invariant to geometric and photometric transformations. These are the key advantages of the HOG descriptor compared with other descriptors. Typically, there are five steps to obtain the HOG descriptor of an image, and we describe these steps as follows.

1) Histogram Equalization: To reduce the computation of image feature extraction, color images are first converted to grayscale images. After that, histogram equalization is applied to the converted grayscale images to regulate the image gray values into a prescribed intensity scope, which can reduce the negative impact of the environmental lighting conditions. Fig. 5 shows the ROI before and after histogram equalization. Before histogram equalization, the image background has gray values similar to those of the strands. It can be seen that the gray intensity distribution of the ROI is regulated by effectively spreading out the most frequent intensity values through histogram equalization.

2) Gradient Computation: For each pixel (x, y) on the image, we calculate the pixel gradient by

G_x(x, y) = I(x + 1, y) - I(x - 1, y),  G_y(x, y) = I(x, y + 1) - I(x, y - 1)    (1)

where I(x, y) is the image gray value at this pixel, and G_x(x, y) and G_y(x, y) are, respectively, the horizontal and vertical gradient magnitudes. We then formulate the orientation angle and norm of the pixel gradient as follows:

θ(x, y) = arctan( G_y(x, y) / G_x(x, y) )    (2)

G(x, y) = sqrt( G_x(x, y)^2 + G_y(x, y)^2 ).    (3)

3) Orientation Binning: Next, we divide the image into many pieces called cells that contain several pixels. Cells, which cover a small spatial region, are defined as the standard unit for the local HOG orientation calculation. For each cell, a local 1-D histogram of gradient directions is calculated by accumulating the gradient magnitudes in each orientation. Here, the cells are square, and each cell contains 8 × 8 pixels. Since all image gradients have been calculated in the last step, the orientation and magnitude are known. The number of orientation bins is set to 9 in the histogram, that is, the gradient magnitude is accumulated into one bin for every 20°. With this setting of orientation bins, gradients in the vertical orientation, the spiral lead-angle orientation, and other orientations contribute to different orientation bins, so that different image features can be extracted among the normal wire, broken wire, and obstacles. Fig. 6 shows the division of cells and the local 1-D histogram.

Fig. 6. Orientation binning in an 8 × 8 pixel cell.

4) Descriptor Block Normalization: To increase the robustness of the HOG descriptor against local variations in illumination and foreground-background contrast, the gradient of each cell must be normalized locally within a spatially connected cell group, which is denoted as a block. A vector is then used to represent the normalized gradient information of all the cells within each block. Typically, the blocks overlap, so the local 1-D histogram of each cell is normalized with respect to different blocks and contributes several components to the final HOG descriptor vector. In this way, the HOG descriptor has a fine effect of normalization and good performance on detection. So far, there are four different block normalization methods, namely: 1) the L2-norm method; 2) the L2-hys method; 3) the L1-norm method; and 4) the L1-sqrt method, and the first three methods have equally good performance on local normalization [29], [30]. In this paper, we set 2 × 2 cells for each block and use the L2-norm method for block normalization.

Fig. 7 shows the process of descriptor block normalization: at first, orientation binning is accomplished for each cell in the block. Then, by combining all of the histograms, we can obtain a vector containing the gradient information of all the cells within the block. Finally, normalization is carried out to obtain the normalized block vector.

Fig. 7. Descriptor block normalization.

5) Image Features Extracted by an HOG: Finally, by concatenating all normalized block vectors, we can extract the HOG descriptor as a 1-D combined vector. The dimension of the HOG descriptor vector, which affects the image-processing speed, is related to the number of blocks in the image, the number of cells in a block, and the number of orientation bins. Fig. 8 shows the HOG descriptor vector of a normal wire image.

C. Classification by the SVM

1) Maximum Margin Classifier: The SVM is a supervised learning method that is usually used as a binary data classifier [31]. For a classification task, two data sets are required, that is,

Fig. 8. HOG descriptor vector for a normal wire image.

Fig. 9. Maximum margin separating hyperplane.

the training data set and the testing data set. The training data set consists of numerous instances, and each instance is represented by several attributes and an associated label showing the class that the instance belongs to. It should be noted that the instance class labels are predefined in the training data set, while the class label of each instance is unknown in the testing data set. The SVM classifier aims at assigning the respective class label to each instance of the testing data set.

The training data instances are given as (x_i, y_i), where i = 1, ..., n is the instance index and n is the total number of instances in the training data set. x_i is a p-dimensional real-valued vector expressing the instance attributes, and y_i ∈ {-1, +1} is the instance class label. The object of the linear classifier is to find a separating hyperplane that divides the training data instances into two classes according to their class labels. However, there is probably more than one separating hyperplane that can divide the training data. The SVM method is proposed to find the separating hyperplane with the maximum margin. Classification through a maximum margin hyperplane can maximize the stability and confidence of the classification and benefit the extended application of the classifiers [32].

In the linear classifier, the separating hyperplane can be expressed by the function f(x) = w^T x + b, where w and b are the normal vector and the intercept of the hyperplane. The classification function can be formulated as follows:

h(x) = sign( w^T x + b ).    (4)

Putting the instance attributes of the training data into the classification function, the function value should be positive for the data with class label y_i = +1, while the function value should be negative for the data with class label y_i = -1.

For each x_i, the formulation x_i = x_0 + γ_i w/||w|| can be established, where x_0 is the corresponding point of x_i on the hyperplane and γ_i is the distance from the point x_i to the hyperplane. It should be noted that x_0 is a point on the hyperplane, which means w^T x_0 + b = 0 holds. Therefore, we can obtain γ_i = y_i (w^T x_i + b) / ||w||. Here, the geometrical margin is defined as (5) to show the minimum distance from the data set to the hyperplane:

γ = min_i  y_i (w^T x_i + b) / ||w||.    (5)

As previously discussed, the SVM is a special classifier that finds the maximum margin hyperplane. After the definition of the geometrical margin, the problem can be described as max_{w,b} γ. Besides, the constraint y_i (w^T x_i + b) / ||w|| ≥ γ should be satisfied based on the definition of the geometrical margin.

Through equivalent transformation and simplification, the problem can be further formulated as

min_{w,b}  (1/2) ||w||^2
subject to  y_i (w^T x_i + b) ≥ 1,  i = 1, ..., n.    (6)

It should be noted that only a few training instances lie exactly at the distance 1/||w|| from the separating hyperplane; these instances are called support vectors. As shown in Fig. 9, the support vectors of the different classes are, respectively, located on two hyperplanes that are set off by a gap from the separating hyperplane.

2) Slack Variables: It is obvious that the support vectors have an important influence on forming the separating hyperplane. There may be some abnormal instances out of the normal range in the training data, which are denoted as outliers. If outliers become support vectors of a separating hyperplane, the separating hyperplane then has very poor performance on classification. To reduce the influence of outliers, a slack variable ξ_i is introduced into the constraint, which is changed into y_i (w^T x_i + b) ≥ 1 - ξ_i. This means that a support vector is allowed to deviate from the margin, and the deviation is limited to ξ_i. It should be noted that the deviation should be as small as possible; otherwise, any hyperplane could be treated as a separating hyperplane. Therefore, the optimization problem can be formulated as

min_{w,b,ξ}  (1/2) ||w||^2 + C Σ_{i=1}^{n} ξ_i
subject to  y_i (w^T x_i + b) ≥ 1 - ξ_i,  ξ_i ≥ 0,  i = 1, ..., n    (7)

where C is the penalty parameter and ξ_i is the slack variable of the ith instance, which is introduced to measure the degree of misclassification of the data.

Since the optimization problem (7) is a convex problem, the Karush-Kuhn-Tucker conditions must be satisfied at its optimum. By solving the optimization problem, we can obtain the maximum margin hyperplane parameters (w, b) and thus find the hyperplane. Furthermore, the optimized classification function for the testing data can be described as

f(x) = sign( Σ_{i=1}^{n} α_i y_i x_i^T x + b )    (8)

where the α_i are the Lagrange multipliers of the dual problem.

3) Nonlinearity Classification: Since the SVM is established based on the linear classification model, it cannot directly be

Fig. 10. Multiclassification for normal wire, broken wire, and counterweight.

Fig. 11. AApe-D robot and broken strand detection experiment.

used for nonlinear classification. In this case, the data need to be mapped from the input space into a feature space, and the SVM can then be applied to classification in the feature space. Due to the dual form of the optimization problem, the classification function can be formulated as

f(x) = sign( Σ_{i=1}^{n} α_i y_i φ(x_i)^T φ(x) + b )

where φ(·) is the mapping from the input space to the feature space.

With the kernel trick [33], the inner product in the feature space can be calculated directly through the kernel function K(x_i, x) = φ(x_i)^T φ(x). Here, the widely used Gaussian radial basis function is chosen as the kernel function:

K(x_i, x) = exp( -||x_i - x||^2 / (2σ^2) ).    (9)

The application of the kernel trick effectively avoids the calculation complexity in the feature space and makes nonlinear classification with an SVM practical.

4) Multiclassifier: Before the robot is applied to this specific maintenance task, numerous images of the three classes (the normal wire, the broken wire, and the obstacles) were taken in the laboratory, and the HOG descriptor vectors of these images were calculated as the training data for the SVMs. Since the SVM is a binary classifier, a multiclassifier needs to be constructed with several SVMs. Fig. 10 shows the structure of the constructed multiclassifier. Two SVMs are established and put into two layers, respectively. Since the HOG descriptor of the obstacles is quite different from those of the normal wire and the broken wire, we treat the obstacles alone as one class, and the normal wire and the broken wire together as the other class, to train an SVM, which is inserted into the first layer of the multiclassifier. The second SVM is trained to classify the normal wire and the broken wire and is inserted into the second layer of the multiclassifier. After the two-layer classification, the testing data can be assigned the proper labels.

IV. EXPERIMENT

A. Experimental Platform

The inspection and maintenance robots that we have developed [8] can be used for the experiments, and we selected AApe-D as the experimental platform. AApe-D, which consists of a mobile platform and specialized maintenance tools, is a maintenance robot for broken strand repair, as shown in Fig. 11. With the wheel structure, the robot can move continuously along OGW. The gripper wheels installed under the driving wheels can increase the adhesion capability of the driving wheels and guarantee the safety of the robot during the maintenance task. A passive revolute joint is installed on each arm to improve the obstacle-crossing capability. Through these mechanical designs, the mobile platform can run along OGW; cross obstacles, for example, counterweights and splicing sleeves; and carry repair tools to the broken strand. A pan-tilt-zoom camera is installed on the electrical box for image acquisition.

The specialized tools installed on the platform, the Looper and the Clamper, play different roles during the repair process. At first, the Looper is used to put the broken strand back into its original location on OGW, and a clamp is then installed at the malfunction location by the Clamper to prevent the broken strand from unraveling again.

The robot operator can operate AApe-D through a human-computer interface at the ground control station, where the information on the robot state and the environment state can be transferred by wireless data and image modems.

Fig. 12. ROI edge orientation extraction. (a) ROI. (b) Image edge. (c) Horizontal gradient magnitude. (d) Vertical gradient magnitude.

B. Experimental Result Analysis

Before the robot is installed on OGW for the detection, the multiclassifier is trained with three sets of data with known labels. After that, the HOG descriptor is calculated from the image captured by the robot on OGW. The horizontal and vertical gradient magnitudes are first calculated as shown in Fig. 12(c) and (d), where the color bar represents the gradient magnitude. Based on the calculated image gradient, the image feature of HOG can be

Fig. 13. HOG descriptor vector extracted from normal wire, broken wire, and obstacle images. (a) Normal wire. (b) Broken wire. (c) Obstacles.

The HOG features of obstacles are apparently different from those of the other two classes, because OGW is partially or totally blocked by obstacles. Although the gradients of both broken wire images and normal wire images are mainly distributed in the vertical orientation and the spiral lead-angle orientation, due to the configuration of OGW, the normalized gradient intensity in these two orientations is higher for normal wire images than for broken wire images. The reason is that the broken strand, whose image gradient lies in neither the vertical direction nor the strand's spiral lead-angle direction, alters the image gradient distribution. Based on the aforementioned analysis, the three classes of images have distinct image gradient features for classification. The source code of the proposed method is available at change/45784-broken-strand-detection [34], [35].

Through our laboratory experiments, we demonstrated the effectiveness of the proposed method. Testing 100 images of each environmental state shows that images of the different classes have characteristic gradient features and that the detection accuracy for each class is above 96%, as shown in Table I.

According to the detection result, the robot can apply a proper control strategy on OGW. If the detection result is a broken wire or an obstacle, the corresponding location can be estimated from the position of the ROI for robot motion planning. Given a detection result of normal wire or obstacle, the robot can run along OGW and cross the obstacle to approach the broken strand malfunction. Once the broken strand is detected, the broken strand repair maintenance is carried out with the specialized tools of the robot. The detection of broken strands and obstacles thus lays a foundation for autonomous robot control.

V. CONCLUSION

This paper presents a vision-based method for OGW broken strand detection, which facilitates the practical application of power-line maintenance robots. The proposed detection method is implemented in three steps, namely: 1) ROI selection; 2) acquisition of the HOG image features; and 3) application of a multiclassifier for classification. The effectiveness of the proposed vision-based broken strand detection method is verified and demonstrated by numerous experimental studies.

In our future work, we will enhance the robustness of the proposed method and design a specialized mechanism to reduce the effect of external factors, for example, illumination, in real-environment experiments. In addition, we will focus on autonomous robot control based on the detection result and on enlarging the detection range to other obstacles and malfunctions on OGW.

REFERENCES

[1] W. Li, Risk Assessment of Power Systems: Models, Methods, and Applications. Piscataway, NJ, USA: IEEE Press, 2005.
[2] T. A. Short, Electric Power Distribution Handbook. Boca Raton, FL, USA: CRC Press, 2004.
[3] S. Aoshima, T. Tsujimura, and T. Yabuta, "A wire mobile robot with multi-unit structure," in Proc. IEEE/RSJ Int. Workshop Intell. Robots Syst., 1989, pp. 414-421.
[4] J. Sawada, K. Kusumoto, Y. Maikawa, T. Munakata, and Y. Ishikawa, "A mobile robot for inspection of power transmission lines," IEEE Trans. Power Del., vol. 6, no. 1, pp. 309-315, Jan. 1991.
[5] K. Toussaint, S. Montambault, and N. Pouliot, "Transmission line maintenance robots capable of crossing obstacles: State-of-the-art review and challenges ahead," J. Field Robot., vol. 26, no. 5, pp. 477-499, 2009.
[6] J. Katrasnik, F. Pernus, and B. Likar, "A survey of mobile robots for distribution power line inspection," IEEE Trans. Power Del., vol. 25, no. 1, pp. 485-493, Jan. 2010.
[7] P. Debenest, M. Guarnieri, K. Takita, E. F. Fukushima, S. Hirose, K. Tamura, A. Kimura, H. Kubokawa, N. Iwama, and F. Shiga, "Expliner: Robot for inspection of transmission lines," in Proc. IEEE Int. Conf. Robot. Autom., 2008, pp. 3978-3984.
[8] H. G. Wang, F. Zhang, Y. Jiang, G. J. Liu, and X. J. Peng, "Development of an inspection robot for 500 kV EHV power transmission lines," in Proc. IEEE Int. Conf. Intell. Robots Syst., 2010, pp. 5107-5112.

[9] L. Tang, S. Fu, L. Fang, and H. Wang, "Obstacle-navigation strategy of a wire-suspend robot for power transmission lines," in Proc. IEEE Int. Conf. Robot. Biomet., 2004, pp. 82-87.
[10] S. Montambault and N. Pouliot, "The HQ LineROVer: Contributing to innovation in transmission line maintenance," in Proc. IEEE Int. Conf. Transm. Distrib. Construct., Oper. Live-Line Maint., 2003, pp. 33-40.
[11] M. F. M. Campos, G. A. S. Pereira, S. R. C. Vale, A. Q. Bracarense, G. A. Pinheiro, and M. P. Oliveira, "A mobile manipulator for installation and removal of aircraft warning spheres on aerial power transmission lines," in Proc. IEEE Int. Conf. Robot. Autom., 2002, pp. 3559-3564.
[12] N. Pouliot and S. Montambault, "Geometric design of the LineScout, a teleoperated robot for power line inspection and maintenance," in Proc. IEEE Int. Conf. Robot. Autom., 2008, pp. 3970-3977.
[13] N. Pouliot and S. Montambault, "LineScout technology: From inspection to robotic maintenance on live transmission power lines," in Proc. IEEE Int. Conf. Robot. Autom., 2009, pp. 1034-1040.
[14] A. Miyazaki, Y. Hatsukade, H. Matsuura, T. Maeda, A. Suzuki, and S. Tanaka, "Detection of wire element breakage in power transmission line using HTS-SQUID," Phys. C, vol. 469, no. 15-20, pp. 1643-1648, 2009.
[15] S. W. Lim and R. A. Shoureshi, "Advanced monitoring system for integrity assessment of electric power transmission lines," in Proc. Amer. Control Conf., 2006, pp. 14-16.
[16] H. Thomas, B. M. Beadle, H. Sprenger, and L. Gaul, "Wave-based defect detection and interwire friction modeling for overhead transmission lines," Archive Appl. Mechan., vol. 79, no. 6-7, pp. 517-528, 2009.
[17] S. D. Kim and M. M. Morcos, "Power engineering letters: An application of solenoid sensor for inspecting deterioration of ACSR conductors due to forest fires," IEEE Power Eng. Rev., vol. 21, no. 10, pp. 50-52, 2001.
[18] X. L. Jiang, Y. F. Xia, J. L. Hu, Z. J. Zhang, L. C. Shu, and C. S. Sun, "An S-transform and support vector machine (SVM)-based online method for diagnosing broken strands in transmission lines," Energies, vol. 4, no. 9, pp. 1278-1300, Aug. 2011.
[19] C. D. Hernandez-Salazar, A. Baltazar, R. Mijarez, and L. Solis, "Structural damage monitoring on overhead transmission lines using guided waves and signal processing," in Proc. AIP Conf., 2010, pp. 1721-1728.
[20] G. Wu, H. Cao, X. Xu, H. Xiao, S. Li, Q. Xu, B. Liu, Q. Wang, Z. Wang, and Y. Ma, "Design and application of inspection system in a self-governing mobile robot system for high voltage transmission line inspection," in Proc. IEEE Int. Conf. Power Energy Eng., 2009, pp. 1-4.
[21] J. F. Peters, S. Ramanna, and M. S. Szczuka, "Towards a line-crawling robot obstacle classification system: A rough set approach," in Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, LNAI 2639, 2003, pp. 303-307.
[22] S. Y. Fu, W. M. Li, Y. C. Zhang, Z. Z. Liang, Z. G. Hou, M. Tan, W. B. Ye, B. Lian, and Q. Zuo, "Structure-constrained obstacles recognition for power transmission line inspection robot," in Proc. IEEE Int. Conf. Intell. Robots Syst., 2006, pp. 3363-3368.
[23] W. H. Li, A. Tajbakhsh, C. Rathbone, and Y. Vashishtha, "Image processing to automate condition assessment of overhead line components," in Proc. IEEE Int. Conf. Appl. Robot. Power Ind., 2010.
[24] C. S. Hu, G. P. Wu, H. Cao, and X. H. Xiao, "Obstacle recognition and localization based on the monocular vision for double split transmission lines inspection robot," in Proc. IEEE Int. Conf. Image Signal Process., 2009, pp. 1-5.
[25] P. Debenest, M. Guarnieri, K. Takita, E. F. Fukushima, S. Hirose, K. Tamura, A. Kimura, H. Kubokawa, N. Iwama, and F. Shiga, "Sensor-arm robotic manipulator for preventive maintenance and inspection of high-voltage transmission lines," in Proc. IEEE Int. Conf. Intell. Robots Syst., 2008, pp. 1737-1744.
[26] G. Yao, Y. Liu, F. Dong, and B. Lei, "An obstacle detection approach of transmission lines based on contour view synthesis," in Proc. IEEE Int. Conf. Autom. Logist., 2010, pp. 19-24.
[27] H. G. Wang, Y. Jiang, A. H. Liu, L. J. Fang, and L. Ling, "Research of power transmission line maintenance robots in SIACAS," in Proc. IEEE Int. Conf. Intell. Robots Syst., 2010, pp. 1-7.
[28] Y. F. Song, H. G. Wang, Y. Jiang, and L. Ling, "AApe-D: A novel power transmission line maintenance robot for broken strand repair," in Proc. IEEE Int. Conf. Appl. Robot. Power Ind., 2012, pp. 117-122.
[29] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recogn., 2005, vol. 2, pp. 886-893.
[30] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis., vol. 60, no. 2, pp. 91-110, 2004.
[31] C. Cortes and V. Vapnik, "Support-vector networks," Mach. Learning, vol. 20, no. 3, pp. 273-297, 1995.
[32] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines. Cambridge, U.K.: Cambridge Univ. Press, 2000.
[33] B. Schölkopf and A. Smola, Learning With Kernels: Support Vector Machines, Regularization, Optimization and Beyond. Cambridge, MA, USA: MIT Press, 2001.
[34] Leo, "Histograms of oriented gradients," Aug. 2012. [Online]. Available: tograms-of-oriented-gradients
[35] C. C. Chang and C. J. Lin, "LIBSVM: A library for support vector machines," ACM Trans. Intell. Syst. Technol., vol. 2, pp. 27:1-27:27, 2011. [Online].

Yifeng Song received the B.Eng. degree in mechanical engineering from Huazhong University of Science and Technology, Wuhan, China, in 2007, and is currently pursuing the Ph.D. degree in mechatronic engineering at the State Key Laboratory of Robotics, Shenyang Institute of Automation and University of Chinese Academy of Sciences, Beijing, China.
His current research interests are power-line inspection and maintenance robots.

Hongguang Wang (M'11) received the B.Eng. degree in mechanical engineering from Zhengzhou Institute of Technology, Zhengzhou, China, in 1986, the M.Eng. degree in mechanical engineering from Northeastern University, Shenyang, China, in 1993, and the Ph.D. degree in mechatronic engineering from the Graduate School of the Chinese Academy of Sciences, Beijing, China, in 2008.
Currently, he is a Professor at the State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China. His current research interests include the analysis and synthesis of robot mechanisms, the mechanics of serial and parallel manipulators, as well as modular reconfigurable robots and autonomous mobile robots.

Jianwei Zhang (M'92) received the B.Eng. (Hons.) and M.Eng. degrees in computer science from the Department of Computer Science, Tsinghua University, Beijing, China, in 1986 and 1989, respectively, and the Ph.D. degree in computer science from the Institute of Real-Time Computer Systems and Robotics, Department of Computer Science, University of Karlsruhe, Karlsruhe, Germany, in 1994.
Currently, he is a Professor and Director of the Institute of Technical Aspects of Multimodal Systems, Department of Informatics, University of Hamburg, Hamburg, Germany. His research interests are multimodal information systems, machine learning, service and cognitive robots, human-computer communication, and networked audio-video systems.