
ISSN 2383-6318(Print) / ISSN 2383-6326(Online)



KIISE Transactions on Computing Practices
VOLUME 25, NUMBER 7, JULY 2019

A Decentralized Test Management Tool Based on Blockchain Technique ·············· Hyun-Ji Chu, Inhae Song, Byoungju Choi 321

Analyzing Impact of Bitcoin Features to Bitcoin Price via Machine Learning Techniques ···················· Seongwook Youn 329

A Self-Code Integrity Verification Technique for Improving the Security of uC/OS-II Operating System ·············· Seungjae Han, Gyoosik Kim, Seong-je Cho 335

Performance Analysis of Collaborative Mobile Edge Computation Offloading in 5G Ultra-Dense Networks ·············· Md Delowar Hossain, Eui-Nam Huh 344

Short Papers
An Effective Vision-based Self-Navigation System for Autonomous Indoor Vehicle ·············· Ngo Thien Thu, Md Abu Layek, Eui Nam Huh 351

Performance Comparison of MPI Intra-node Communication Primitives for Manycore System ·············· Joong-Yeon Cho, Hyun-Wook Jin 357

Analysis of Memory Overhead for Buffer Overflow Debugging Platform ·············· Youngho Choi, Jaeook Kwon, Seokjae Jeong, Hansub Park, Young Ik Eom 363



ISSN 2383-6318(Print) / ISSN 2383-6326(Online)
KIISE Transactions on Computing Practices, Vol. 25, No. 7, pp. 351-356, July 2019
https://doi.org/10.5626/KTCP.2019.25.7.351

An Effective Vision-based Self-Navigation System
for Autonomous Indoor Vehicle

Ngo Thien Thu†    Md Abu Layek†    Eui-Nam Huh††


Abstract  The advancements in smart cities have paved the way towards the development of smart buildings that aim to improve the residential environment and quality of life. In particular, autonomous indoor robots are considered a critical factor in the operation of smart buildings. In the present work, we have developed a self-navigation system for mobile robots that utilizes vision-based sensing and a hardware control mechanism. First, a sliding-window line detection method is used to find polynomial fits to the two lines of the hallway, and then a direction weight estimation algorithm predicts the direction of the robot with respect to the center of the two estimated lines. Results from real-robot experiments show that the proposed navigation system enables real-time obstacle avoidance for autonomous indoor robots.
Keywords: self-navigation, autonomous mobile robot, navigation system, robotics

․This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ICT Consilience Creative program (IITP-2019-2015-0-00742) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
․This paper is an extended version of "An Efficient Navigation Method for Self-Driving Robot in Hallway Environment", presented at the 45th KIISE Korea Software Congress.
† Nonmember: Dept. of Computer Science and Engineering, Kyung Hee University, thu.ngo@khu.ac.kr, layek@khu.ac.kr
†† Life Member: Professor, Dept. of Computer Science and Engineering, Kyung Hee University, johnhuh@khu.ac.kr (Corresponding author)
Received 29 March 2019; Revised 20 May 2019; Accepted 21 May 2019
Copyright Ⓒ 2019 KIISE: Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee, provided that copies are not made or distributed for commercial purposes and that copies bear this notice and the full citation on the first page. For any other use, prior permission must be obtained and a fee may be charged.




1. Introduction

With the proliferation of smart cities, modern technologies such as the Internet of Things, sensors, artificial intelligence, and robotic process automation are expected to provide various services, as they offer the potential for increased productivity and a positive impact on human living environments [1]. In the automation area, mobile service robots are designed to interact with people and are gradually entering daily life in many areas such as hospitality, healthcare, and warehousing. Recently, we have seen significant efforts in developing robots in both sensing and computing technologies [2]. With the continued improvement of computer vision and distributed cloud computing, the utilization of mobile service robots is increasing faster than ever.

One of the most critical tasks in mobile service robots is to design a good navigation system for a particular environment. However, the design can be complicated by several factors, such as the complexity of the surroundings or incomplete perceptual information. In this case, the navigation system is expected to adapt effectively to changes in the environment (obstacle avoidance, control ability, a safe distance to the target). To cope with these problems, vision-based navigation systems combine vision sensors on the robot with the structure of the environment to enhance performance. These systems can be divided into two categories: systems using prior knowledge (a map) and systems based on the conditions of the environment (mapless) [3,10-12]. In mapless navigation systems, there is no guidance from a prior strategy; the robot operates by analyzing the required information about its surroundings. Mapless navigation systems can be classified into various kinds of techniques, for example, optical flow, appearance-based methods, and object recognition based on feature tracking [4,5].

In this paper, we implement an effective mapless navigation system for an autonomous indoor robot/vehicle in hallway environments. The overall system consists of three main components: pre-processing, hallway line detection, and direction weight value estimation for robot navigation. According to the design, the robot is integrated into a real product as a running base with a control system. This paper focuses on the navigation implementation to enhance the obstacle avoidance task, combining the hardware control mechanism with the image processing algorithm. The key constraint is that the robot must be safe and effective with respect to humans and property.

The paper is organized as follows: Section 2 presents the overall design of the proposed system. Section 3 describes the mapless navigation system for an autonomous mobile robot. Section 4 shows the experimental results, followed by the conclusion in Section 5.

2. The Overall Navigation System

The system consists of three main components: Pre-processing, Hallway Line Detection, and the Direction Weight Estimation Algorithm (DWEA). An overview of the autonomous system is shown in Figure 1. The robot scans and analyzes the perceptual information of the surrounding area. With the combination of the PSD and sonar sensors and the direction weight estimation module, the robot can avoid obstacles and send commands to the ATMega128 board for the navigation process.

Fig. 1 The overview of the proposed method
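As a rough illustration of this flow, the sketch below wires the three components into one control loop; the helper functions (defined in the sketches that follow) and the serial port name are assumptions, not the paper's exact interface.

import cv2
import serial  # pyserial, for the link to the ATMega128 board

def navigation_loop():
    cap = cv2.VideoCapture(0)                    # on-robot camera
    board = serial.Serial("/dev/ttyUSB0", 9600)  # port and baud rate are assumptions
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        binary = preprocess(frame)                          # Section 3.1
        warped = birds_eye_view(binary)                     # Section 3.2
        left_fit, right_fit = detect_hallway_lines(warped)  # Section 3.2
        weight = estimate_direction_weight(warped, left_fit, right_fit)  # Section 3.3
        board.write(bytes([weight]))             # 1..9 steering hint for the board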
3. The Proposed Algorithm

3.1 Image Processing
The camera captures frames and records the area in front of the robot. We observed that the hallway line areas have an identical structure and dark color; however, their viewpoint and appearance vary in the captured frame. Therefore, the proposed pre-processing includes HSV color space conversion, ROI extraction, and image filtering, as shown in Figure 2. The upper half region of the frame is removed to reduce the computation time, because the system focuses on the corridor lines.

Fig. 2 The pre-processing pipeline
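A minimal sketch of this pipeline in OpenCV/Python follows; the threshold values for isolating the dark line pixels are assumptions that would be tuned for the actual hallway.

import cv2
import numpy as np

def preprocess(frame):
    """Pre-processing of Section 3.1: ROI extraction, HSV conversion, filtering."""
    h, w = frame.shape[:2]
    roi = frame[h // 2:, :]                     # drop the upper half of the frame
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)  # HSV color space conversion
    blurred = cv2.GaussianBlur(hsv, (5, 5), 0)  # image filtering
    # Keep dark pixels (low V), matching the dark hallway lines; bounds are assumed.
    binary = cv2.inRange(blurred, np.array([0, 0, 0]), np.array([180, 255, 60]))
    return binary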




3.2 Hallway Line Detection
The line detection is performed on the binary image which is the output of the pre-processing step. This part consists of various mathematical methods that intend to identify the two lines formed by the left and right walls of the hallway. Many researchers have been working on this topic, and most of them try to solve the problem using the state-of-the-art Hough Transform (HT) method. However, the disadvantage of the Hough Transform is that it is expensive when the parameters increase, and it is unable to identify the endpoints of the line segments [6,7]. In this section, we describe a new process for finding the two lines of the hallway and propose a new method to enhance the navigation process.

∙ Perspective Warping for getting a Bird's Eye view
The Bird's Eye View transformation [8] is the process of generating the top-view perspective of the image. The image has a perspective that causes the two lines of the hallway to appear to converge in the distance even though they are parallel to each other. Removing the perspective from the image makes it easier to detect the curvature of the hallway lines. Figure 3 shows an image with distorted lines and the desired image after removing the distortion using the perspective transformation.

Fig. 3 The distorted image (left) and the desired image without distortion (right)
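The warp itself reduces to a single homography in OpenCV. The sketch below assumes trapezoid corner points chosen by hand; in practice they would be calibrated for the camera's mounting on the robot.

import cv2
import numpy as np

def birds_eye_view(binary):
    """Remove perspective so the hallway lines become (near-)parallel."""
    h, w = binary.shape[:2]
    src = np.float32([[0.10 * w, h], [0.40 * w, 0.40 * h],   # trapezoid in the camera view
                      [0.60 * w, 0.40 * h], [0.90 * w, h]])  # (assumed corner points)
    dst = np.float32([[0.20 * w, h], [0.20 * w, 0],
                      [0.80 * w, 0], [0.80 * w, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(binary, M, (w, h))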

∙ Sliding Window Line Detection
In the binary image, the value of each pixel is either 1 or 0. To generate the base of the two lines of the hallway, we add up the pixels along each pixel column in the binary image. The two highest peaks in the resulting histogram become the starting locations for the base of each line. From each base, a sliding window is created to find the polynomial fit and follow the line up to the top of the frame.
3.3 Direction Weight Estimation (DWE) Algorithm
The robot direction can be predicted by a weight value which is calculated from the current location of the robot with respect to the center of the two detected lines. We map the weight value into the range from 1 to 9 for further processing in the ATMega128. If the weight value is greater than 5, the robot is positioned toward the right side of the hallway; the opposite holds when the weight value is less than 5. If the weight value is equal to 5, the robot is heading straight ahead. We assume that the camera is positioned at the center of the robot, i.e., its center is located at the center of the frame. The line center is computed as the mean value between the bases of the left and right lines, and the offset is the difference between the robot center and the line center. The details can be found in Algorithm 1.
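Algorithm 1 itself is not reproduced in this extraction; the sketch below is one reading of the description, evaluating both fits at the bottom of the frame and mapping the signed offset linearly onto the integers 1..9 (the linear mapping and the sign convention are assumptions).

import numpy as np

def estimate_direction_weight(warped, left_fit, right_fit):
    """Map the offset between frame center and line center to a weight in [1, 9]."""
    h, w = warped.shape
    y = h - 1                                  # evaluate at the base of the lines
    left_x = np.polyval(left_fit, y)
    right_x = np.polyval(right_fit, y)
    line_center = (left_x + right_x) / 2.0     # mean of the two line bases
    offset = (w / 2.0) - line_center           # robot (frame) center vs. line center
    weight = int(round(5 + 8 * offset / w))    # offset of +-w/2 maps to 9 / 1
    return max(1, min(9, weight))              # clamp; 5 means "go straight"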

4. Experiments

In order to verify the effectiveness of the proposed system, several experiments were performed in a hallway of our university. The whole system was implemented with OpenCV 4.0.1 and Python 3 in Visual Studio 2017 on a PC (Windows 8.1 64-bit, Intel Core i5, 16 GB RAM).

∙ Experiment 1: Offline Testing
The dataset was collected from photos taken at the Computer Engineering Building, Kyung Hee University. We placed the camera at three positions in the hallway; the scenario is illustrated in Figure 4, and the output images are shown in Figure 5. The experiments show that the system returned a weight value of exactly 5 for the center position, with small oscillations in the results for the left and right positions. We ran the system on different images in the video sequence; the results show that the average processing time for the first 50 frames is about 12 milliseconds. The testing performance is shown in Figure 6.

∙ Experiment 2: Go Live
The navigation system is integrated into a real robot to check its performance in a real-time environment. A sample image of the robot is shown in Figure 7. We performed the experiment as a running base for other works. The scenario is an experiment of robot movement performing obstacle avoidance while moving along the corridor under normal light conditions.

Fig. 4 An example scenario of robot positions in the DWE Algorithm: (a) left side (b) center side (c) right side

Fig. 5 (a) Input frame (b) Output binary image (c) Sliding window on unwarped image (d) Output image
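For reference, per-frame latency of the kind reported above can be measured with a loop like the one below (a sketch; the video file name is hypothetical, and the helper functions are the ones sketched in Section 3).

import time
import cv2

cap = cv2.VideoCapture("hallway_test.mp4")     # hypothetical offline test video
times = []
while len(times) < 50:                         # first 50 frames, as in Experiment 1
    ok, frame = cap.read()
    if not ok:
        break
    t0 = time.perf_counter()
    warped = birds_eye_view(preprocess(frame))
    left_fit, right_fit = detect_hallway_lines(warped)
    weight = estimate_direction_weight(warped, left_fit, right_fit)
    times.append((time.perf_counter() - t0) * 1000.0)
print("average per-frame time: %.1f ms" % (sum(times) / len(times)))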




Two objects are placed randomly in the hallway. The PSD and sonar sensors are combined to give the robot a more accurate sensing range for avoiding obstacles. Sonar sensors have an advantage in detecting small objects over long distances, while PSD sensors have a strong ability to perceive short-range objects [9]. Our observation is that the proposed navigation system can improve the efficiency of the obstacle avoidance task through the combination of the DWE algorithm and the hardware control mechanism. The weight estimation module helps the robot localize its position with respect to the left and right lines of the hallway. The generated weight value is sent to the ATMega128 board, and the control command is sent to the wheel mechanism; therefore, the robot can navigate in the left, right, or center direction.
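A sketch of how the weight and the two range sensors might be combined before commanding the board is shown below; the safe-distance threshold, the sensor-reading helpers, and the one-byte serial protocol are assumptions, not the paper's exact interface.

SAFE_DISTANCE_CM = 40      # assumed obstacle threshold

def drive_step(board, weight, psd_cm, sonar_cm):
    """Combine the vision weight with PSD (short-range) and sonar (long-range)
    readings [9]; steer toward the freer side when an obstacle is too close."""
    if min(psd_cm, sonar_cm) < SAFE_DISTANCE_CM:
        weight = 1 if weight >= 5 else 9       # override the vision direction
    board.write(bytes([weight]))               # ATMega128 maps 1..9 to wheel commands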
We deployed the whole system on a Raspberry Pi and, as the results show, the performance relies on the computing resources of this platform. Although a Raspberry Pi can be easily mounted on the robot, we have to take into account the constraints on its computing resources. However, the system can meet real-time performance when deployed on a GPU/CPU core. In future work, we are planning to incorporate the proposed navigation system into an IoT framework for mobile robots in indoor environments.

Fig. 6 Processing time for each frame in the video

Fig. 7 The real robot in our lab

5. Conclusions

Vision-based navigation is potentially the most advanced technique for providing a reliable mobile robot system. In this paper, we have introduced a new algorithm for improving the robot navigation process in hallway environments. The experiments show that a robot integrated with the DWE module can avoid obstacles well while moving along the corridor. The system can perform tasks in real time when tested on a PC; however, it tends to consume a lot of computation time when deployed on a Raspberry Pi due to the limited resources of this platform.

References

[1] Available: https://blogs.cisco.com/government/top-10-smartcity-trends-for-2018 (accessed on 12 May 2019)
[2] S. D. Pendleton, H. Andersen, X. Du, X. Shen, M. Meghjani, Y. H. Eng, D. Rus, and M. H. Ang, "Perception, Planning, Control, and Coordination for Autonomous Vehicles," Machines, Vol. 5, No. 1, pp. 1-54, Feb. 2017.
[3] B. Rahmani, "Review of Vision-Based Robot Navigation Method," International Journal of Robotics and Automation (IJRA), Vol. 4, No. 4, Dec. 2015.
[4] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 2, pp. 237-267, 2002.
[5] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual Navigation for Mobile Robots: A Survey," Journal of Intelligent and Robotic Systems, Vol. 53, pp. 263-296, 2008.
[6] Hough Transform. Available: https://en.wikipedia.org/wiki/Hough_transform (accessed on 12 May 2019)
[7] P. Mukhopadhyay and B. B. Chaudhuri, "A survey of Hough Transform," Pattern Recognition, Vol. 48, No. 3, pp. 993-1010, Mar. 2015.
[8] Available: https://www.mathworks.com/help/driving/ref/birdseyeview.transformimage.html (accessed on 12 May 2019)



[9] J. Wang and Y. Hou, "Research on Robot Obstacle Avoidance Method on PSD and Sonar Sensors," 2016 3rd International Conference on Information Science and Control Engineering (ICISCE), pp. 1071-1074, 2016.
[10] V. E. Semencha and T. N. Skripnik, "Convolution Neural Network Based Autonomous Navigation System for Mobile Robots," IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), pp. 331-334, 2019.
[11] G. Tsai and B. Kuipers, "Dynamic visual understanding of the local environment for an indoor navigating robot," IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4695-4701, 2012.
[12] M. S. Güzel, "Autonomous Vehicle Navigation Using Vision and Mapless Strategies: A Survey," Advances in Mechanical Engineering, Vol. 5, pp. 1-10, Jan. 2015.

Ngo Thien Thu received a BS degree in Management Information Systems from Viet Nam National University, Ho Chi Minh City, in 2010. She is currently pursuing a combined Ph.D. degree in the Department of Computer Science and Engineering, Kyung Hee University. Her research interests include video/image processing and video streaming in real-time environments.

Md. Abu Layek received his B.Sc. and M.Sc. degrees from the Information and Communication Engineering Department, Islamic University, Bangladesh, in 2004 and 2006, respectively. He is an Assistant Professor in the Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh. At present, he is pursuing his Ph.D. in Computer Science and Engineering at Kyung Hee University, Republic of Korea. His current research interests include cloud computing, virtual desktop infrastructure, the Internet of Things, and ubiquitous computing.

Eui-Nam Huh earned a BS degree from Busan National University, Korea, a Master's degree in Computer Science from the University of Texas, USA, in 1995, and a Ph.D. degree from Ohio University, USA, in 2002. He is a member of IEEE and of the Review Board of the National Research Foundation of Korea. He has also served the community for ICCSA, WPDRTS/IPDPS, APAN Sensor Network Group, ICUIMC, ICONI, APIC-IST, ICUFN, and SoICT in various types of chair roles. He is a vice-chairman of the Cloud/Bigdata Special Technical Group of TTA and an Editor of ITU-T SG13 Q17. He is now a Professor in the Department of Computer Science and Engineering, Kyung Hee University, South Korea. His research interests include cloud computing, the Internet of Things, the Future Internet, distributed real-time systems, mobile computing, big data, and security.
